text
stringlengths
1
2.26M
\section{Conventional DSR System} \label{sec:conventional_system} \subsection{Acoustic Beamforming} \label{sec:beamforming} Let us assume that a microphone array with $M$ sensors captures a sound wave propagating from a position and denote the frequency-domain snapshot as $\mbox{$\mathbf{X}$} (t,\omega_k) =[X_1 (t,\omega_k),\cdots,X_{M} (t,\omega_k)]^T$ for an angular frequency $\omega_k$ at frame $t$. With the complex weight vector of a array geometry type $g$ for source position $\mbox{$\mathbf{p}$}$ \vspace{-0.75em} \begin{equation} \mbox{$\mathbf{w}$}_g (t,\omega_k,\bp) = [ w_{g,1} (t,\omega_k,\bp), \cdots, w_{g,M} (t,\omega_k,\bp) ] , \label{eq:bfw} \vspace{-0.75em} \end{equation} the beamforming operation is formulated as \vspace{-0.75em} \begin{equation} Y_g (t,\omega_k,\bp) = \mbox{$\mathbf{w}$}_g^H (t,\omega_k,\bp) \mbox{$\mathbf{X}$} (t,\omega_k), \label{eq:bfo} \vspace{-0.75em} \end{equation} where $H$ is the Hermitian (conjugate transpose) operator. The complex vector multiplication~\erf{eq:bfo} can be also expressed as the real-valued matrix multiplication: \vspace{-0.75em} \begin{align} \begin{bmatrix} \mbox{$\ \operatorname{Re}$} (Y_g) \\ \mbox{$\ \operatorname{Im}$} (Y_g) \\ \end{bmatrix} &= \begin{bmatrix} \mbox{$\ \operatorname{Re}$} (w_{g,1}) & \mbox{$\ \operatorname{Im}$} (w_{g,1}) \\ -\mbox{$\ \operatorname{Im}$} (w_{g,1}) & \mbox{$\ \operatorname{Re}$} (w_{g,1}) \\ \vdots & \vdots \\ \mbox{$\ \operatorname{Re}$} (w_{g,M}) & \mbox{$\ \operatorname{Im}$} (w_{g,M}) \\ -\mbox{$\ \operatorname{Im}$} (w_{g,M}) & \mbox{$\ \operatorname{Re}$} (w_{g,M})\\ \end{bmatrix}^T \begin{bmatrix} \mbox{$\ \operatorname{Re}$} (X_{1}) \\ \mbox{$\ \operatorname{Im}$} (X_{1}) \\ \vdots \\ \mbox{$\ \operatorname{Re}$} (X_{M}) \\ \mbox{$\ \operatorname{Im}$} (X_{M}) \\ \end{bmatrix}, \label{eq:cat2} \vspace{-0.75em} \end{align} where $(t,\omega_k,\bp)$ is omitted for the sake of simplicity. It is clear from~\erf{eq:cat2} that beamforming can be implemented for a array configuration by generating $K$ sets of $2 \times 2M$ matrices where $K$ is the number of frequency bins. Thus, we can readily incorporate this beamforming framework into the DNN in either the complex or real-valued form. Notice that since our ASR task is classification of acoustic units, the real and imaginary parts can be treated as two real-valued feature inputs. In a similar manner, the hidden layer output can be treated as two separate entities. In that case, the DNN weights can be computed with the real-valued form of the back propagation algorithm~\cite{MinhuaICASSP2019}. A popular method in the field of ASR would be super-directive (SD) beamforming that uses the \emph{spherically isotropic noise} (diffuse) field~\cite{DocloM07,HimawanMS11}~\cite[S13.3.8]{Wolfel2009}. Let us first define the $(m,n)$-th component of the spherically isotropic noise coherence matrix for a array configuration~$g$ as \begin{equation} \mbox{$\Sigma_{\mathbf{N}}$}_{g,m,n} (\omega_k) = \text{sinc} \left( \omega_k d_{g,m,n} / c \right) \label{eq:NCM} \end {equation} where $d_{g,m,n}$ is the distance between the~$m$-th~and~$n$-th sensors for the array shape~$g$ and $c$ is speed of sound. This represents the spatial correlation coefficient between the $m$-th and $n$-th sensor inputs in the diffuse field. The weight vector of the SD beamformer for the array geometry $g$ can be expressed as \vspace{-0.75em} \begin{equation} \mbox{$\mathbf{w}$}^H_{\text{SD},g} = \left[ \mbox{$\mathbf{v}$}^H_g \mbox{$\bSigma_{\mathbf{N}_g}^{-1}$} \mbox{$\mathbf{v}$}_g \right]^{-1} \mbox{$\mathbf{v}$}^H_g \mbox{$\bSigma_{\mathbf{N}_g}^{-1}$} \label{eq:SD1} \vspace{-0.75em} \end {equation} where \( (\omega_k,\bp) \) are omitted and $\mbox{$\mathbf{v}$}_g$ represents the array manifold vector of the array geometry $g$ for time delay compensation. In order to control white noise gain, diagonal loading is normally adjusted~\cite[S13.3.8]{Wolfel2009}. Although speaker tracking has a potential to provide better performance~\cite[\S10]{Wolfel2009}, the simplest solution would be selecting a beamformer based on normalized energy from multiple instances with various look directions~\cite{HimawanMS11}. In our preliminary experiments, we found that competitive speech recognition accuracy was achievable by selecting a fixed beamformer with the highest total energy followed by trajectory smoothing over frames. Notice that highest-energy-based beamformer selection can be mimicked with a max-pooling layer as described in section \ref{sec:MCDNN}. \subsection{Acoustic Model with Signal Processing Front-End} \label{sec:baseline} As shown in figure~\ref{fig:dnn_baseline}, the baseline DSR system consists of audio signal processing, speech feature extraction and classification NN components. The audio front-end transforms a time-discrete signal into the frequency domain and selects the output from one of multiple beamformers based on the energy criterion. After that, the time-domain signal is reconstructed and fed into the feature extractor. The feature extraction step involves LFBE feature computation as well as causal and global mean-variance normalization~\cite{King2017}. The NN used here consists of multiple LSTM layers, affine transform and softmax layers. The network is trained with the normalized LFBE features in order to classify senones associated with the HMM state. In the conventional DSR system, the audio front-end can be separately tuned based on empirical knowledge. However, it may not be straightforward to jointly optimize the signal processing front-end and classification network~\cite{Heymann2018}, which will result in a suboptimal solution for the senone classification task. \begin{figure}[t] \addtolength{\belowcaptionskip}{-1.5em} \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=0.9\linewidth]{Baseline.pdf} \vspace{-1.0em} \caption{Conventional system} \label{fig:dnn_baseline} \end{minipage} \begin{minipage}[t]{0.05\linewidth} \includegraphics[width=\linewidth]{white.pdf} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=0.9\linewidth]{MCDNN.pdf} \vspace{-1.0em} \caption{Fully-learnable system} \label{fig:dnn_main} \end{minipage} \end{figure} \section{Conclusion} \label{sec:conclusion} We have proposed new spatial acoustic modeling methods. The ASR experiment results on the real far-field data have revealed that even when array geometry is mismatched to the training condition, the two-channel model can provide better recognition accuracy than the LFBE model with 7-channel beamforming. Furthermore, we have shown that training the MC DNN under the multiple array geometry conditions can improve robustness against the microphone placement mismatch. Moreover, we have demonstrated that our proposed method can provide a consistent improvement for multiple array configurations. We plan to combine multi-conditional training and unsupervised training~\cite{Hari2019,Mosner2019}. \section{ASR Experiment} \label{sec:ex1} We perform a series of the DSR experiments using over 1150 hours of unique speech utterances from our in-house dataset. The training and test data amount to approximately 1,100 and 50 hours respectively. The training data also contains the play back condition where music is being played with an internal loud speaker. The device-directed speech data from several thousand anonymized users was captured using 7 microphone circular array devices placed in real acoustic environments. The test data contains the real speech interactions between the users and devices under unconstrained conditions. Thus, the users may move while speaking to the device. Speakers in the test set were excluded from the training set. As a baseline beamforming method, we use robust SD beamforming with diagonal loading adjusted based on~\cite{DocloM07}. Therefore, the microphone array is well calibrated. The array geometry used here is an equi-spaced six-channel microphone circular array with a diameter of approximately 72 milli-meters (mm) and one microphone at the center. For SD beamforming, we used all the seven microphones. Multiple beamformers are built on the frequency domain toward different directions of interest and one with the maximum output energy is selected for the ASR input. It may be worth noting that conventional adaptive beamforming~\cite[S6,S7]{VanTrees2002} degraded recognition accuracy in our preliminary experiments due insufficient voice activity detection or speaker localization performance on the real data. Thus, we omit results of adaptive beamforming here. For the experiments with the MC DNN, we pick 2 or 4 microphones out of 7 sensors. As illustrated in figure~\ref{fig:AG}, we made three sets of training and test data with different microphone spacing, 73~mm, 63~mm and 36~mm, for two-channel experiments. The test datasets are split into the matched and mismatched array geometry conditions. In the mismatched geometry condition, the test array geometry is not seen in training. Each WER is calculated over the combined conditions. For the experiments with four-channel input, we created four sets of the training and test data with different relative microphone locations. In the four-channel experiment, we report the WER with respect to the number of sensor locations mismatched to the training array geometry. The number of look directions for the multi-channel layer is set to 12 in all the experiments. The baseline ASR system used a 64-dimensional LFBE feature with online causal mean subtraction~\cite{King2017}. For our MC ASR system, we used 127-dimensional complex DFT coefficients removing the direct and Nyquist frequency components (bin 0 and 128). The LFBE and FFT features were extracted every 10ms with a window size of 25ms and 12.5ms, respectively. Both features were normalized with the global mean and variances precomputed from the training data. The classification LSTM for both features has the same architecture, 5 LSTM layers with 768 cells followed by the affine transform with 3101 outputs. All the networks were trained with the cross-entropy objective using our DNN toolkit~\cite{Strom15}. The Adam optimizer was used in all the experiments. For building the DFT model, we initialize the classification layers with the LFBE model. Results of all the experiments are shown as relative word error rate reduction (WERR) with respect to the performance of the LFBE baseline system with a single array channel. The baseline system is powerful enough to achieve a single digit number in a high SNR condition. The larger WERR value indicates the bigger improvement in recognition accuracy. The LFBE LSTM model for the baseline system was trained and evaluated on the center microphone data. We also present the WERR relative to the LFBE with robust SD beamforming. Table~\ref{tab:Tab_devl} shows the relative WERRs of the LFBE LSTM with the conventional 7-channel beamformer, the elastic SF (ESF) network trained with the single and multiple array geometry data and weight-tied SF (WTSF) net trained under the multiple array geometry conditions. Each number enclosed in the parentheses indicates the WERR relative to the LFBE LSTM with 7-channel robust beamforming. Table~\ref{tab:Tab_devl} also shows how much recognition accuracy degrades with respect to the number of mismatched sensor locations indicated in the third column in table~\ref{tab:Tab_devl}. Here, the WERR results are split by estimated signal-to-noise ratio (SNR) of the utterances. The SNR was estimated by aligning the utterances to the transcriptions with an ASR model and subsequently calculating the accumulated power of speech and noise frames over an entire utterance. It is clear from table~\ref{tab:Tab_devl} that the recognition accuracy can be improved by multiple microphone systems, both conventional beamforming and fully learnable MC models. It is also clear from table~\ref{tab:Tab_devl} that the unified acoustic models with two channels outperform conventional beamforming with seven channels even if one sensor location is mismatched to the training condition. It is also apparent from table~\ref{tab:Tab_devl} that the use of 4 channels for the unified AM further improves recognition accuracy in the matched geometry condition but degrades performance in the mismatched array configuration condition. Moreover, we can see that the WTSF architecture trained under the multiple array geometry conditions provides slightly better recognition accuracy than the ESF. Notice that the CNN and max-pooling layers of the WTSF network can reduce the number of parameters compared to the fully connected ESF network architecture. Another advantage of multi-geometry spatial acoustic modeling is that multiple array configurations can be encoded in a single model. Figure~\ref{fig:ESF_wrt_AG} shows the relative WERRs of the WTSF networks trained with the single and multi-geometry data under all the SNR conditions. Here, all the models are trained with four-channel data. For generating the WERs of figure~\ref{fig:ESF_wrt_AG}, we build the single geometry WTSF network with the reference array configuration data only while training the multi-geometry model with four types of array geometry data so as to cover all the test array configurations. In figure~\ref{fig:ESF_wrt_AG}, the WERRs are plotted with respect to the dissimilarity measure from the reference array geometry; the dissimilarity index is calculated as the sum of the differences between relative sensor distances of reference and test arrays over four channels and described in the parentheses of the x-axis label. The x-axis label of figure~\ref{fig:ESF_wrt_AG} also shows the microphone index numbers used for each condition. It is clear from figure~\ref{fig:ESF_wrt_AG} that recognition accuracy of the single geometry model degrades as the array configuration of the test condition becomes more different from that of the training condition. It is also clear from figure~\ref{fig:ESF_wrt_AG} that the multi-geometry model can maintain the improvement for different array configurations. In fact, this is the new capability of the multi-geometry acoustic model in contrast to conventional multi-channel techniques. \section{Introduction} \label{sec:intro} A complete system for distant speech recognition (DSR) typically consists of distinct components such as a voice activity detector, speaker localizer, dereverberator, beamformer and acoustic model~\cite{Pearson96,Omolog2001,Wolfel2009,KumataniAYMRST12,KinoshitaDGHHKL16}. While it is tempting to isolate and optimize each component individually, experience has proven that such an approach cannot lead to optimal performance without joint optimization of multiple components~\cite{McDonough2008,Seltzer2008,VirtanenBook2012}. Conventional microphone array processing also requires meticulous microphone calibration to maintain signal enhancement performance~\cite[\S5.53]{Tashev2009}. The relative microphone placement mismatch between filter design and test conditions can degrade ASR accuracy~\cite{HimawanSM08}. Such a problem can be alleviated with self-calibration~\cite{HimawanSM08,McCowanLH08} or microphone selection\cite{WolfN14,Kumatani11channelselection,GuerreroTO18}. Reliable self-calibration typically requires a supervised signal such as time-stretched pulses~\cite{Habets2007} or accurate noise field assumption~\cite{McCowanLH08}. Accurate microphone calibration may not be necessary for DSR if we can build the acoustic model that encodes various relative microphone locations. It has been shown in~\cite{SainathASRU15,OchiaiICML17} that the dependency of specific microphone spacing can be reduced by training the deep neural network (DNN) with multi-channel (MC) input under multiple microphone spacing conditions in the unified manner. It is also straightforward to jointly optimize the unified MC DNN so as to achieve better discriminative performance of acoustic units from the MC signal~\cite{SainathASRU15,OchiaiICML17,Xiao16,MinhuaICASSP2019}. Moreover, the trained MC DNN can process streaming data in real time without the accumulation of signal statistics in contrast to batch processing methods such as maximum likelihood beamforming~\cite{Seltzer2004,Rauch2008}, source separation techniques~\cite{VirtanenBook2012,Bhiksha2010} and blind DNN clustering approaches. Another approach is the use of MC speech features such as the log energy-based features~\cite{SwietojanskiSPL14,Braun2018} or LFBE supplemented with the time delay feature~\cite{KimInterspeech16}. By doing so, the improvement with multiple sensors can be still maintained in the mismatched array geometry condition. However, the performance of those methods would be limited due to the lack of the proper sound wave propagation model~\cite{MinhuaICASSP2019}. As it will be clear in section~\ref{sec:MCDNN}, the DNN can subsume multiple beamformers with various array configurations. Moreover, the feature extraction components described in~\cite{Xiao16,SwietojanskiSPL14,Braun2018,KimInterspeech16} are not fully learnable. In this paper, we propose two MC network architectures that can model multiple array configurations. We initialize the MC input layer with beamformers' weights designed for multiple types of array geometry. This spatial filtering (SF) layer thus subsumes beamformers with various look directions and array configurations. It is implemented in the frequency domain for the sake of computational efficiency~\cite{Haykin2001}. The first network architecture proposed here combines the SF layer's output in a fully connected manner. In the second MC network, we combine the SF output of multiple look directions with the weights tied across all the frequencies followed by maximum energy selection. All the networks are optimized based on the ASR criterion in a stage-wise manner~\cite{MinhuaICASSP2019}. It is also worth noting that our method neither requires a bi-directional pass nor accumulation of signal statistics unlike DNN mask-based beamforming~\cite{OchiaiICML17,Heymann2018,Higuchi2018}. We demonstrate the effectiveness of the multi-geometry acoustic models through DSR experiments on the real-world far-field data spoken by thousands of real users, collected in various acoustic environments. The test data contains challenging conditions where speakers interact with the ASR system without any restriction under reverberant and noisy environments. This paper is organized as follows. In section~\ref{sec:conventional_system}, we review a relationship between beamforming and neural networks. In section~\ref{sec:MCDNN}, we describe our deep MC model architectures robust against the array geometry mismatch. In section~\ref{sec:ex1}, we analyze ASR results on the real-world data. Section~\ref{sec:conclusion} concludes this work. \section{Frequency Domain Multi-channel Network} \label{sec:MCDNN} \begin{figure}[t] \addtolength{\belowcaptionskip}{-1em} \begin{minipage}[t]{0.05\linewidth} \includegraphics[width=\linewidth]{white.pdf} \end{minipage} \begin{minipage}[t]{0.4\linewidth} \includegraphics[width=\linewidth]{MCDNN_BF3.pdf} \vspace{-0.8em} \centering \text{(a) Elastic SF} \label{fig:mcdnn1} \end{minipage} \begin{minipage}[t]{0.05\linewidth} \includegraphics[width=\linewidth]{white.pdf} \end{minipage} \begin{minipage}[t]{0.4\linewidth} \includegraphics[width=\linewidth]{MCDNN_BF4.pdf} \vspace{-0.8em} \centering \text{(b) Weight-tied SF} \label{fig:mcdnn3} \end{minipage} \caption{Multi-geometry spatial filtering (SF) network} \vspace{-1.5em} \label{fig:all_mcdnn} \end{figure} \begin{figure}[t] \addtolength{\belowcaptionskip}{-3em} \centering \includegraphics[width=0.75\linewidth]{WTSF.pdf} \vspace{-1.5em} \caption{Weight-tied SF output combination} \label{fig:WTSF_CNN} \vspace{-1.5em} \end{figure} Figure~\ref{fig:dnn_main} shows our whole DSR system with the fully-learnable neural network. As shown in figure~\ref{fig:dnn_main}, our DSR consists of 4 functional blocks, signal pre-processing, MC DNN, feature extraction (FE) DNN and classification LSTM. First, a block of each channel signal is transformed into the frequency domain through FFT. In the frequency domain, DFT coefficients are normalized with global mean and variance estimates. The normalized DFT features are concatenated and passed to the MC DNN that models different array geometry. Our FE DNN contains an affine transform initialized with mel-filter bank values, rectified linear unit (ReLU) and log component. Notice that the initial FE DNN generates the LFBE-like feature. The output of the FE DNN is then input to the same classification network architecture as the LFBE system, LSTM layers followed by affine transform and softmax layers. The DNN weights are trained in the stage-wise manner~\cite{MinhuaICASSP2019,Kumatani2017}; we first build the classification LSTM with the single channel LFBE feature, then train the cascade network of the FE and classification layers with the single-channel DFT feature, and finally perform joint optimization on the whole network with MC DFT input. In this work, we use training data captured with different array configurations. The proposed method can learn the spatial filters of different array geometry as well as feature extraction parameters solely from the observed data. This fully learnable network neither requires self microphone calibration, clean speech signal reconstruction nor perceptually-motivated filter banks~\cite{RichardSN13}. Figure~\ref{fig:all_mcdnn} shows new MC network architectures with multi-geometry affine transforms. The multi-geometry affine transforms correspond to beamformers with different look directions and array shapes. Figure~\ref{fig:all_mcdnn}~(a) depicts an elastic MC network architecture that combines the output of the SF layer with the fully connected network. This elastic MC DNN includes a block of the affine transforms initialized with beamformers' weights, signal power component, affine transform layer and ReLU. For initialization of the block affine transforms, we use SD beamformers' weights designed for various look directions and multiple array configurations. Let us denote the number of array geometry types as $G$ and the number of beamformer's look directions as $D$. The output power of the initial SF layer is expressed with $G \times D \times K$ blocks of frequency independent affine transforms as \vspace{-0.3em} \begin{eqnarray} \begin{bmatrix} Y_{1,1} (\omega_1) \\ \vdots \\ Y_{1,D} (\omega_1) \\ \vdots \\ Y_{g,d} (\omega_k) \\ \vdots \\ Y_{G,D} (\omega_K) \\ \end{bmatrix} = \text{pow} \left ( \begin{bmatrix} \mbox{$\mathbf{w}$}^H _{\text{SD,1}} (\omega_1, \mbox{$\mathbf{p}$}_1) \mbox{$\mathbf{X}$} (\omega_1) \\ \vdots \\ \mbox{$\mathbf{w}$}^H _{\text{SD,1}} (\omega_1, \mbox{$\mathbf{p}$}_D) \mbox{$\mathbf{X}$} (\omega_1) \\ \vdots \\ \mbox{$\mathbf{w}$}^H _{\text{SD,g}} (\omega_k, \mbox{$\mathbf{p}$}_d) \mbox{$\mathbf{X}$} (\omega_k) \\ \vdots \\ \mbox{$\mathbf{w}$}^H _{\text{SD,G}} (\omega_K, \mbox{$\mathbf{p}$}_D) \mbox{$\mathbf{X}$} (\omega_K) \\ \end{bmatrix} + \mbox{$\mathbf{b}$} \right), \label{eq:esf1} \vspace{-1.2em} \end{eqnarray} where $\text{pow}()$ is the sum of squares of real and imaginary values and $\mbox{$\mathbf{b}$}$ is a bias vector. As demonstrated in our prior work~\cite{MinhuaICASSP2019}, initializing the first layer with beamformer's weight leads to much more efficient optimization in comparison to random initialization. The output of the SF layer is combined with the fully connected weights. Accordingly, this could mix the different frequency components. Figure~\ref{fig:all_mcdnn}~(b) illustrates another MC network architecture proposed in this paper. The second MC network also connects the block of affine transforms associated with each array configuration independently. The weights of the block affine transforms are initialized with SD beamformers' weights in the same manner as the elastic SF network. We then apply the weight tied over all the frequencies in order to combine the multiple beamformers. Such a combination process is described in figure~\ref{fig:WTSF_CNN} where each element of the matrix is computed in the same manner as~\erf{eq:esf1}. As indicated in figure~\ref{fig:WTSF_CNN}, the SF layer output is convoluted with $1 \times D$ filters with $D$ width stride and one height stride. This 2D convolution process can avoid the permutation problem known in blind source separation, taking different look directions at different frequencies inconsistently. Finally, the SF layer output is selected with the max-pooling layer that corresponds to maximum energy selection. In contrast to the elastic SF network, this network can efficiently reduce the dimension with the max-pooling layer. We hypothesize that the SF layer combination has the similar effect with noise cancellation, subtracting one beamformer's output from another. This would be done with a large amount of training data rather than sample-by-sample adaptive way. Moreover, our network considers not only multiple look directions but also different array geometry. All the network parameters will be updated based on the cross entropy criterion in training. Both architectures maintain frequency independent processing at the input layer, which can reduce the number of parameters significantly. In this paper, the MC network architectures of (a) and (b) are referred as the multi-geometry elastic SF (ESF) and weight-tied SF (WTSF) network, respectively. The WTSF network has a stronger constraint than the ESF net since the same weights for combining spatial layer output are shared across all the frequencies. This weight-sharing structure maintains the consistent SF output combination over frequencies. However, it may lack of the flexibility such as smoothing over different frequencies.
\section{Introduction} One of the most promising ideas developed in Service Computing is the automatic generation of services' compositions~\cite{Papazoglou03service-orientedcomputing,SOCValery,Bartalos}. Multiple views and problems formulations have been proposed for this purpose~\cite{Rao04asurvey}. As a point of differentiation between these works, we consider the {\it functional structure} of the composition to built. By this term, we mean, the representation of the functioning of the composition, as a collaboration among small abstractions, that can each be realized by known services. Typically, oriented graphs, business processes and workflows have been used for such descriptions~\cite{Cardoso2004281,Weske2007,GoldmanNgoko}. Based on the functional structure, we distinguish between two dominant classes of services' compositions problems. In the first one, the general inputs of the composition problem consists of: (1) a basis of services whose behavior is described in the public interface; (2) a set of user constraints and goals, defining and framing the finality of the composition. One must infer from these data an interaction among services, meeting the constraints and goals~\cite{Bartalos,Sirin:2004:HPW:1741306.1741331,Jiang}. We put these formulations in the class of {\it structurally-unfixed}. Their particularity is that the functional structure of the composition is not an input data for the automation problem. It is the case in the second class of formulations that we refer to as the one of {\it structurally-fixed} problems. The challenge is reduced to a binding problem, in which concrete implementations must be associated with the abstractions of the functional structure such as to guarantee a minimal Quality of Service (QoS), while meeting some users level agreements (SLAs)~\cite{Alrifai,BenMokhtar,Yu,ZengMiddleware,Zheng,Ardagna,JISA,cpe3015}. This binding problem is also referred to as the service selection problem. It seems obvious that structurally-unfixed formulations include more automation in the design of services' compositions. Indeed, we can decompose the challenge in these cases in two: (1) find a functional structure that meets the constraints and goal; (2) solve a structurally-fixed problem. However, let us remark that in practice, it is hard to address structurally-unfixed problems without providing a formal description of the composition's behavior. The OWL-S language~\cite{Sirin:2004:HPW:1741306.1741331} and pre/post condition formalisms~\cite{Bartalos,Oh05acomparative} are some examples, utilized in this context. The existence of these additional inputs in practice tempers the high level of automation of structurally-unfixed formulations. In this work, we consider the service selection problem (structurally-fixed formulation). The functional structure in our work is given by a Hierarchical Services Graph (HSG)~\cite{GoldmanNgoko}. This modelling defines a composition as a graph with three layers: the machine, service and operations layers. The service composition logic is captured in the operations layer. The logic consists of BPMN~\cite{Weske2007} interactions among a set of operations. These operations are abstract in the sense that they can be implemented by different services located in the underneath layer. Given such a graph, we are interested in finding the {\it best} implementation for abstract operations while fulfilling the SLAs constraints. We restrict the SLAs definition to two QoS dimensions: the Service Response Time (SRT) and the Energy Consumption (EC). Although we use a particular representation of services' compositions, the core problem that we address is not new~\cite{Alrifai,BenMokhtar,Zheng,Ardagna,JISA}. But, our study has two main features. Firstly, we are interested in finding optimal solutions. This choice has a weakness: the NP-hardness of the service selection problem. However, we believe that by making an {\it intelligent search}, one can provide, within an acceptable runtime, exact solutions for {\it small or medium} services' compositions (around $20$ nodes for the service composition). It is important to notice that if we consider that the services compositions implement business processes or workflows, then there are many practical examples that will correspond to small or medium compositions. One can look for instance to the examples provided in~\cite{Omg,Freund}. The second feature of our work is that we adopt a view of the problem that has not been studied to the best of our knowledge. This is clearly stated by our main contribution, which consists of mapping the service selection problem on the Constraint Satisfaction Problem (CSP). This mapping opens a large potential in the resolution of the service selection problem. As we will see, it also captures another facet of service selection: the feasibility problem~\cite{Ardagna,JISA}. Among the existing techniques for solving CSP, we choose to investigate the backtracking~\cite{Baker95intelligentbacktracking}. We complete our contribution in proposing various backtracking-based algorithms for the service selection problem. The proposed variants are based on the notion of reduction order, introduced in our prior work~\cite{GoldmanNgoko} on QoS prediction. They are also inspired by two existing heuristics~\footnote{The referred heuristics provide an exact solution; but, their runtime can differ significantly from an instance to another} for the CSP: the max-degree and min-domain heuristics~\cite{Baker95intelligentbacktracking}. Finally, we did an experimental evaluation in which we demonstrate the runtime gain of the backtracking-based algorithms over two classical solutions for solving the service selection problem: exhaustive search and integer linear programming. The experiments also give us interesting insights about the size of the compositions on which we can expect results in real-time. The remainder of this paper is organized as follows. Section~\ref{Related} presents the related works. In Section~\ref{SSP}, we define the service selection problem and connect it to the CSP. Naive and backtracking algorithms for the problem are discussed in the sections~\ref{exhaustiveSearch} and~\ref{backtrackingSearch}. Section~\ref{ExperimentalEvaluation} gives an experimental evaluation of our work. In Section~\ref{Discussion}, we discuss about the potential in the development of service selection algorithms raised by our CSP mapping. We conclude in Section~\ref{Conclusion}. \section{Related work} \label{Related} We assumed that services' compositions problems can be structurally-unfixed or structurally-fixed. Our work will focus only on the latter case. However, interesting proposals for the former case can be found in the work of Seog Chan et al.~\cite{Oh05acomparative}, Sirin et al.~\cite{Sirin:2004:HPW:1741306.1741331} or in the survey of Rao and Su~\cite{Rao04asurvey}. A key idea that these works share is the usage of AI planning techniques for the resolution of the services' composition problem. Concomitantly, the Integer Linear Programming (ILP) is one of the most used techniques employed for tackling the service selection problems. One of the pioneer papers with this technique was done by Lee~\cite{Lee}. Though it was not its main purpose, his work demonstrated that SLAs can, in practice, be formulated as linear equations. This suggests that in many cases, the service selection problem can be solved by integer linear programming. The work of Lee focused only on two QoS dimensions: the price and the service response time. Similar modellings were proposed including other QoS dimensions like the price, duration, reputation, reliability and availability~\cite{ZengMiddleware,Yu,Ardagna}. In particular, the work of Zeng et al.~\cite{ZengMiddleware} shows that if we consider their natural formulation, constraints related to availability will result in non-linear equations. Then, they propose a method for transforming such equations into linear ones. In most papers based on linear programming, the services' composition is viewed as a collaboration among multiple services within a single business process. In practice however, collaborations among multiple processes exist. For these latter cases, the works of Ngoko et al.~\cite{JISA,mgc2012} stated how to use linear programming for the service selection problem. They focused on energy and service response time minimization. Many papers established in different contexts the NP-hardness of the service selection problem~\cite{Lee,Yu}. This means that exact solutions obtained by Integer Linear Programming (ILP) are computed in exponential runtime. For obtaining fast results, heuristics have been considered. Yu et al.~\cite{Yu,Yu:2004:SSA:1018413.1019049} proposed a branch and bound algorithm (BBLP) and a dynamic programming algorithm for service selection. These ideas are also discussed in~\cite{Lee}. In both cases the service selection problem is reduced to the multichoice knapsack problem. The branch and bound algorithm exploits the lagrangian relaxation of the ILP modelling for this problem. The dynamic programming solution adapts existing solutions for the multichoice knapsack problem. BBLP and dynamic programming improve in runtime the naive resolution of the service selection problem. In this work, we will also propose an exact resolution (as the branch and bound solution of Yu et al.) that improves the naive one. Ben Mokhtar et al.~\cite{BenMokhtar} proposed a two-phases heuristic for service selection. In the first phase, the heuristic classifies services regarding their {\it utility}. A search approach based on this classification completes the selection process in the second phase. The advantage of this approach is to propose near-exact solutions within {\it short} runtime. Ngoko et al.~\cite{JISA} and Zeng et al.~\cite{ZengMiddleware} proposed sub-optimal approaches and exact algorithms on special cases for the resolution of the service selection problem. As they stated, these solutions are efficient only on a small class of services' compositions. Yu et al.~\cite{Yu:2004:SSA:1018413.1019049} proposed {\it improvement heuristics} for finding near optimal solutions. The first step of the heuristic consists of finding a feasible solution. The solution is then improved by considering other potential combinations of services. As they showed, the proposed heuristic can improve the runtime required for {\it BBLP}. Let us however recall that solutions found are not necessarily optimal. Alrifai et al.~\cite{Alrifai} also proposed a heuristic for the service selection problem. The main idea is to decompose the global service selection problem onto subproblems that can be solved by local search. Doing so, they show that they can drastically improve the runtime of the heuristic proposed by Yu et al.~\cite{Yu}. Est\'evez-Ayres et al. ~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11} proposed a heuristic for finding good approximations for the service selection problem under a runtime boundary. One of the main ideas that they describe will be used in our work. It consists of making partial evaluation on a sub-composition of services. In our proposal, we go deeper in the idea; we focus on the ordering that can be used for the set of partial evaluations to be made. Genetic programming approaches were also proposed~\cite{Jaeger,Canfora,Cao,cpe3015} in viewing a services' composition as a string where each character is a service operation. Finally, let us remark that as suggested by Cardoso et al.~\cite{Cardoso2004281} and developed in other works~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11,Yu:2004:SSA:1018413.1019049}, heuristics for the service selection can be obtained from those solving the QoS prediction problem. We will come back on this point later. For now, let us point out that many QoS prediction algorithms as the SWR algorithm~\cite{Cardoso2004281} or other graph reduction algorithms~\cite{GoldmanNgoko,Zheng2} can serve for designing heuristics for the service selection problem. A large literature exists about the service selection problem. Existing works established connections between the service selection problem and the SAT problem~\cite{Oh05acomparative}, the multichoice Knapsack problem~\cite{Lee}, the multidimension multichoice Knapsack problem and the multiconstrained optimal path problem~\cite{Yu:2004:SSA:1018413.1019049}. However, we did not find any work that proposes to study this problem as a variant of the CSP. The solution that we propose is inspired by a CSP view of service selection. It also has some similarities with the work done by Est\'evez et al.~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11} the branch and bound algorithm of Yu et al.~\cite{Yu} and our prior work~\cite{JISA}. For circumscribing our originality, let us also notice that our prior work was based on mixed integer linear programming. Though interesting, this solution is tuned to a particular QoS modelling. For instance, it does not work with the probabilistic modelling of Hwang et al.~\cite{Hwang20075484}. As Est\'evez et al.~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11}, we propose to improve the runtime of the service selection algorithm by reducing the set of candidate solutions that are explored. We differ from their work by the usage of the backtracking technique. Similarly to Yu et al.~\cite{Yu}, we adopt a branching technique for reducing the set of candidate solutions during the resolution. But instead of using the lagrangian relaxation, we propose a novel estimation. Finally, let us observe that while most work focused on heuristics approaches for the service selection problem, we are interested in exact resolution. The NP-hardness of the service selection problem is the main weakness of this choice. However, we believe that in using an appropriate search algorithm, one can obtain in a short runtime optimal solutions for {\it small and medium services' compositions}. Our work will demonstrate this point in considering services' compositions implementing well-known business processes. Moreover our proposal can serve as a solid basis for the development of approximated solutions on the service selection problem. \section{The service selection problem} \label{SSP} \subsection{Structure of a services' composition} We model a services' composition as a Hierarchical Services Graph (HSG)~\cite{GoldmanNgoko,mgc2012,JISA}. In this representation, a services' composition is a three-layers graph, each (of the layers) encapsulating a particular abstraction of a service composition; these are the business processes, the services and the physical machines layers. An example of such a graph is given in Figure~\ref{ExampleHSG}. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.68\linewidth,height=2.1in]{./Figures/HSG.eps}} \caption{Example of Hierarchical Services Graph}\label{ExampleHSG} \end{figure} \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.64\linewidth,height=2.4in]{./Figures/Pattern.eps}} \caption{Set of generic subgraph patterns in the operations layer of a HSG. Each $G_i$ is again a subgraph obtained from these patterns or an operation}\label{flows} \end{figure} A HSG comprises three layers organized as follows. The first layer, which we will also refer to as the \textbf{operations graph}, describes the functioning of the services' compositions as a business process interaction among abstract operations. For the sake of simplicity, we will reduce these interactions to the ones obtained by composing the subgraphs patterns of Figure~\ref{flows}. Abstract operations are implemented in the services that are in the layer underneath. Finally the last layer states the machines where services are deployed. An operation can be implemented in multiple services. This means that given an abstract operation, we have at the services layer, many concrete operations that can implement its behavior. A same service can be deployed on various machines. This accounts for capturing the possible migrations that can occur during the execution of a services' composition. However, for the sake of simplicity, we will assume that each service is deployed on a unique machine. Finally, let us remark that the HSG modelling we consider corresponds to the implementation of a single business process. Representations for multiple business processes also exist~\cite{mgc2012,JISA}; but, this is out of the scope of our study. The service selection problem is based on predefined relationships between abstract operations and services. The problem consists in choosing the best concrete operations for abstract ones such as to minimize the service response time and the energy consumption. Below, we provide a formal definition. \subsection{Problem formulation} Here, we consider the formulation introduced in~\cite{JISA,cpe3015} {\bf Problem's inputs: } Let us consider a HSG whose set of abstract operations is $O$. For each operation $u \in O$, there is a set of concrete implementations $Co(u) = \{u_1, \dots, u_{m_u}\}$. For each concrete implementation $u_v$, we have the mean response time $S(u_v)$ and the energy consumption $E(u_v)$. Finally, we assume two upper bounds that are issued from SLAs constraints: the bound $MaxS$ for the service response time and $MaxE$ for energy consumption. Finally, we have a tuning parameter $\lambda \in [0,1]$ used for giving more priority to the SRT or the EC in the problem optimization goal.\\ {\bf Problem objective: } We are looking for an assignment of concrete operations for $O$ that fulfills the following constraints: \begin{enumerate} \item[$C_1$:] each operation must be associated with a unique concrete implementation; \item[$C_2$:] the QoS of the resulting composition must not exceed $MaxS$ in response time and $MaxE$ in energy consumption; \item[$C_3$:] if $S$ is the service response time of the resulting composition and $E$ its energy consumption, then the assignment must minimize the global penalty $\lambda S + (1-\lambda)E$. \end{enumerate} In this formulation, the constraint $C_2$ defines users' SLAs for the response time and energy consumption. The composition has a penalty defined by the constraint $C_3$. For completing this formulation, it is important to explain how $S$ and $E$ are computed given a binding of abstract services to concrete ones. We address this issue in the following subsection by associating an execution semantics to HSGs. \subsection{Execution semantics} We divide the semantics in two parts. The first one states how we represent the QoS of a concrete operation; the second one determines how we compute the QoS of a request that {\it traverses} multiple concrete operations. \subsubsection{QoS of a concrete operation.} We use a deterministic modelling for QoS operation. The mean of the QoS of a concrete operation in a given dimension (response time, energy consumption) is expressed as a real value. Though criticizable, this modelling has been considered in multiple other works~\cite{Cardoso2004281,ZengMiddleware,GoldmanNgoko}. Moreover, the conclusion of our current work can be extended to other modellings like the probabilistic one of Hwang et al.~\cite{Hwang20075484}. Given the QoS of each concrete operation, we will now show how to aggregate them in order to capture the mean QoS of a request that is processed in a HSG. \subsubsection{QoS of a subgraph.} We compute the QoS of a service composition depending on the structure of the operations graph (upper layer graph of the HSG). The idea is to aggregate the operation QoS, in considering all possible execution cases of a request processed in a HSG. The aggregation rules are depicted in Table~\ref{tabAggRules}. They state how to compute the response time and the energy consumption, expected from a request that is processed in a HSG whose structure is matched to the patterns of Figure~\ref{flows}. $E(P)$ refers to the energy consumption of a request processed in the subgraph $P$. $S(P)$ is its response time. In {\it exclusive choice}, $p_i$ gives the probability for a request to be routed towards the subgraph $P_i$. For the sake of simplification in {\it inclusive choice} (see Figure~\ref{flows}), we assume that the request can only be routed towards the subgraph $P_1$, $P_2$ or simultaneously, to both. Each routing occurrence, has a known probability $p_{or1}$, $p_{or2}$ and $p_{or||}$. We added this restriction on inclusive choices for the sake of simplicity. Our solutions can however be generalized to the case where we have more routing occurrences. Finally, for any loop subgraph, we assume that we have the mean number $m$ of times in which the requests loop on it. \begin{table}[htbp] \centering \begin{tabular}{|p{4cm}|p{4cm}|p{2cm}|} \hline \small \textbf{Sequence} & \small \textbf{Fork} & \small \textbf{Loop} \\\hline $S(P_1) + S(P_2)$ & $\max\{ S(P_1),\dots S(P_n) \}$ & $m.S(P_1)$ \\ $E(P_1) + E(P_2)$ & $ E(P_1)+ \dots+ E(P_n)$ & $ m.E(P_1)$ \\\hline \small \textbf{Exclusive choice} & \multicolumn{2}{c|}{\small \textbf{Inclusive choice}} \\\hline $\sum_{i=1}^n p_i.T(P_i)$ & \multicolumn{2}{c|}{$p_{or1}.T(P_1)+ p_{or2}.T(P_2)$ $ + p_{or||}.\max \{T(P_1), T(P_2)\}$} \\ $\sum_{i=1}^n p_i.E(P_i)$ & \multicolumn{2}{c|}{$p_{or1}.E(P_1)+ p_{or2}.E(P_2) + p_{or||}.(E(P_1)+ E(P_2))$} \\\hline \end{tabular} \caption{Aggregation rules on subgraphs patterns}\label{tabAggRules} \end{table} \normalsize In these formula, we have almost the same aggregation rules for energy consumption and response time. The difference between these two dimensions is on how we interpret the parallelism: from an energy viewpoint, all paths of execution, {\it even parallel}, will induce an energy consumption. For additional explanations about these formula, we refer the reader to~\cite{JISA}. From the Lee result~\cite{Lee}, it is easy to establish that the described service selection problem under our execution semantics can be reduced to a multi-choice knapsack problem; this proves its NP-hardness. Below, we will show that we can use the constraint satisfaction problem for solving the service selection problem. \subsection{Service selection as a constraint satisfaction problem} \label{Decomposition} The Constraint Satisfaction Problem (CSP) is a classical problem in artificial intelligence and combinatorial optimization. A CSP is defined as a tuple $(V, D, C)$ where: \begin{itemize} \item $V = \{ v_1,\dots, v_n \}$ is a set of variables; \item $D = \{ D(v_1),\dots, D(v_n)\}$ is the set of variables' domains; \item $C = \{C_1,\dots, C_m\}$ is the set of constraints; each $C_i$ imposes a restriction on the possible values that can be assigned to a subset of variables; \item There are two classical objectives in this problem. In the {\it one-solution} objective, we are looking for an assignment of values to variables that does not violate any constraint. In the {\it all-solutions} objective, we are looking for all assignments that do not violate the constraints. \end{itemize} Regarding the objective function, we will show that the {\it one-solution} objective captures the feasibility problem in service selection~\cite{Ardagna,JISA} while the {\it all-solutions} objective captures the service selection problem. We recall that in the feasibility problem problem, the interest is in finding an assignment of concrete services that meet the SLAs. Firstly, let us assume the {\it all-solutions} objective. Let us also assume that we have to solve the service selection problem for an arbitrary HSG. We propose to map the problem onto a CSP throughout the following rules: \begin{enumerate} \item we consider that variables correspond to the operations of the HSG; \item the domain of a variable is the set of possible concrete operations that implements the abstract one to which it refers to; \item the constraints of the problem are the SLAs of the service selection problem. This means that we are looking for all assignments $f \subseteq D(v_1)\times \dots \times D(v_n)$, such that $E(f) \leq MaxE$ and $S(f) \leq MaxS$; here $E(f)$ and $S(f)$ are the energy consumption and the response time. \end{enumerate} The resolution of this problem will return all candidate solutions for the service selection problem. Let us suppose that it gives us $\omega$ assignments $f_0, \dots, f_{\omega -1}$. For solving the service selection problem, we select the assignment $f_{opt}$ such that $\displaystyle opt = \arg \{ \min_{0\leq u \leq \omega-1} \lambda.S(f_{u}) + (1-\lambda).E(f_u) \}$. CSPs are often classified according to the constraints formulation~\cite{Baker95intelligentbacktracking}. In binary CSPs for instance, each constraint is defined as a set of pair values that cannot be associated with two specific variables; this can be generalized to $k$-ary CSPs where constraints are defined as tuples of $k$ values that cannot be associated with $k$ distinct variables. In nonlinear CSPs, constraints are formulated as nonlinear inequalities on variables. With the proposed mapping, the service selection problem is a nonlinear-like CSP. It is straightforward to notice that in applying the given mapping with the {\it one-solution} objective, we obtain the feasibility problem in service selection. The proposed mapping can be extended to many other formulations of the service selection problem. For instance one can include in it other SLAs constraints on reputation, price, or availability. One of its main benefits is to suggest that the service selection problem can be solved by adopting CSPs algorithms. For this, we need to provide an {\it evaluation algorithm} that states how given an assignment $f_u$, we compute $E(f_u)$ and $S(f_u)$. In the following text, we describe this algorithm and a first solution for the service selection problem \section{Evaluation algorithm and exhaustive search} \label{exhaustiveSearch} \subsection{Evaluation algorithm} \label{evaluationAlgorithm} We propose to use our prior QoS prediction algorithm~\cite{GoldmanNgoko}. We will recall below some key points of the evaluation algorithm. In the algorithm proposed in~\cite{GoldmanNgoko}, the QoS are computed with the graph reduction technique. We consider as input a HSG whose operations graph that is obtained by composing the patterns of Figure~\ref{flows}. We will say that such a graph is {\it decomposable}. The algorithm will successively seek in the operations graph, subgraphs whose structure is defined as in Figure~\ref{flows}, but with the $P_i$s, corresponding here to operations. We will use the term \textit{elementary subgraphs} for qualifying them. As soon as an elementary subgraph is found, it is reduced. This means that its QoS are computed and the subgraph is replaced by a single node with the same QoS. Then, the execution continues until the reduction of the operations graph reaches a single node. For optimizing the algorithm's runtime, a reduction order is computed at the beginning. The reduction order is a stack of subgraphs such that: (1) the top subgraph is elementary, (2) as soon as the top subgraph is reduced, the new one in the top is also elementary. The reduction is done according to this order. Goldman and Ngoko~\cite{GoldmanNgoko} showed that elementary subgraphs can be characterized by two frontier-nodes: a root and a leaf one. This fact eases the subgraphs' representation in the reduction order. In Figure~\ref{Reduction-w-order}, an illustration of the reduction process is provided. Initially, we have the graph of Figure~\ref{Reduction-w-order}(1). The first phase of the algorithm will generate the reduction order (or the reduction stack) for the graph. We represent it as a stack in which subgraphs are given by a root and a leaf node. The second phase begins with the unstacking of the top element in the order and then the reduction of the corresponding elementary subgraph. This leads to the graph of Figure~\ref{Reduction-w-order}(2). The algorithm continues in the same way until the reduction stack is empty. At this step, the operations graph will be reduced to a unique node. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.8\linewidth,height=1.7in]{./Figures/Reduction_with_order.eps}} \caption{Example of graph reduction.}\label{Reduction-w-order} \end{figure} As stated in the introduction, our resolution of the service selection problem will be based on the notion of reduction order. We will recall here some important details about its representation. The reduction order is made of pairs $(x,y)$ that each defines a subgraph to reduce. Regarding the composition of each pair, four cases can be distinguished: a) $x$ and $y$ are operations; in this case, the referred reduction is an elementary sequence with $x$ and $y$; in Figure~\ref{Reduction-w-order}(2) for example, we have the reduction $(B, CD)$; b) in the second case, $x$ is a split connector and $y$ is a join connector (for instance $(g_3, g_4)$ in Figure~\ref{Reduction-w-order}(1)); then, the reduction refers to the elementary split/join subgraphs; c) in the third case, $x$ is a split connector and $y$ is an operation; then, the reduction refers to the subgraphs whose root is $x$ and leaf is $y$; in Figure~\ref{Reduction-w-order}(1), we have the reduction $(g_1, F)$ that refers to the subgraph comprising $g_1, B, g_3, C, D, g_4, E, g_2, F$; d) in the last case, $x$ is an operation and $y$ a split connector; then, the reduction refers to the subgraph whose root is $x$ and leaf is {\it the leaf of $y$}. For instance, $(B, g_3)$ comprises $B, g_3, C, D, g_4$. Now that the evaluation algorithm has been detailed, we can derive a service selection algorithm by stating how we solve the CSP. One can envision at this stage to use a generic CSP solver. But, let us observe that in our CSP mapping (See Section~\ref{Decomposition}), we cannot easily map the description of the SLAs constraints ($E(f_0) \leq MaxE$ and $S(f_0) \leq MaxS$) to classical options used in CSP solver (e.g set of unauthorized values, linear equations). This is why in the sequel, we will consider a proper resolution. \subsection{Exhaustive search algorithm} We propose to consider the exhaustive search algorithm for the CSP~\cite{Baker95intelligentbacktracking,DBLP:journals/concurrency/Estevez-AyresGBD11}. Given a CSP $(V, D, C)$, the principle of this algorithm is to randomly generate assignments of values taken in $D$ to the variables $V$. Each time that an assignment $f$ is generated, one evaluates whether or not it fulfills the constraints $C$. If it is the case, we return $f$ as solution; otherwise, we generate another assignment. In using this algorithm in our mapping of the service selection problem (see Section~\ref{Decomposition}), we obtain the Algorithm~\ref{alg:Exhaustive} for the service selection problem. The proposed scheme is based on the following notations: \begin{itemize} \item $|f|$ is the number of abstract services. \item With each abstract operation, we associated a distinct integer number (the abstract services have the numbers: $0, \dots, |f|-1$). \item For defining assignments, we use the array $f$. $f[i]$ will denote the concrete operation associated with the abstract operation $i$. \item $E(f)$ and $S(f)$ are the energy and the service response time of the partial or concrete assignment made in $f$. \item The over-defined notion $Co(index)$ refers to the set of concrete services that can be assigned to the abstract operation $index$. \end{itemize} Though we deduced the exhaustive search algorithm from the know-how in CSP resolution, let us observe that the proposal can be found in other work~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11,Yu:2004:SSA:1018413.1019049}. The main difference between our solution and theirs is the evaluation algorithm. The exhaustive search proposes an exact solution for the service selection problem. However, considering the way we solve the CSP, this solution is not necessarily the best for the runtime viewpoint. In what follows, we propose a faster algorithm that includes more in depth the know-how in CSP's resolution. \scriptsize \begin{algorithm}[H] \begin{algorithmic}[1] \scriptsize \Function{Main}{} \State $OptPenalty = +\infty$; $index = 0$; \State Create an uninitialized array $f$ of values for abstract operations; \State Call exhaustive($f$, $H$, $OptPenalty$, $index$); \State Return the best assignment and $OptPenalty$; \EndFunction \Function{exhaustive}{$f$, $H$, $OptPenalty$, $index$} \If{$index = |f|$} \State Compute $E(f)$ and $S(f)$ from the evaluation algorithm with $H$ and $Q$; \If{ $S(f) \leq MaxS$ and $E(f) \leq MaxE$ } \If{$\lambda.S(f) + (1-\lambda).E(f) < OptPenalty$} \State Save $f$ as the best assignment; \State $OptPenalty = \lambda.S(f) + (1-\lambda).E(f)$; \EndIf \EndIf \State Return; \EndIf \For{ all concrete operations $u \in Co(index)$} \State $f[index] =$ $u$; \State Call exhaustive($f$, $H$, $OptPenalty$, $index+1$); \EndFor \EndFunction \normalsize \end{algorithmic} \caption{\scriptsize SS-Exh (Exhaustive search for service selection). \\ {\bf INPUT:} a HSG $H$ and a QoS matrix $Q$ giving the energy consumption and service response time of each concrete operation; \\ {\bf OUTPUT:} An assignment of concrete operations to abstract ones } \label{alg:Exhaustive} \end{algorithm} \normalsize \section{A Backtracking search for the Service Selection Problem} \label{backtrackingSearch} Our objective is to improve the runtime exhaustive search algorithm. Our conviction is that this algorithm has an amount of {\it useless work} that can be avoided. This section is organized in two parts. In the first one, we discuss about useless work in the exhaustive search. Then, we propose an algorithm for avoiding them. \subsection{Useless work in exhaustive search} Our prior work~\cite{JISA} highlights a critical situation that happens in the service selection problem: {\it the infeasibility problem}. Indeed, given a HSG $H$, it might be impossible to respect the constraints defined in the SLAs because for a sub-HSG $H' \subset H$, the service selection problem does not have any solution. As illustration, let us consider the service selection problem with the operations graph of Figure~\ref{noSolution}. In the SLAs constraints, if it is set that the service response time must be lower than $13$ms, then the infeasibility of the problem can be established in considering the possible assignments for the subgraph $H' = (g_1, g_2)$. The exhaustive search here is not always optimal regarding the amount of work. Indeed, a better search would have consisted of exploring the possible assignments that can be made for the abstract operations in $H'$ and then checking each time whether or not these assignments respect the SLAs. Doing so, the infeasibility could have been established in exploring only a part of the search space. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.9\linewidth,height=1.7in]{./Figures/uselessWork.eps}} \caption{Example of operations graph with related concrete operations. $D$ is implemented by $D_1$ and $D_2$ and $E$ is implemented by $E_1, E_2, E_3$}\label{noSolution} \end{figure} The second instance of useless work is similar to the first one. We suppose now that the problem is feasible; but, there are multiple assignments that do not respect the SLAs. If the constraints violation were already identified in a sub-HSG $H' \subset H$, a part of the useless work could have been avoided. As illustration, if we consider in Figure~\ref{noSolution} that the response time must be lower or equal to $14$ms, then there is only one assignment to the subgraph $(g_1, g_2)$ that can lead to a feasible solution. It is only this assignment that must be joined with the other possibilities for $A$ and $B$. The last situation of useless work is related to the quality of a partial assignment. Let us assume that we already have a correct assignment whose penalty is $p$. It might happen in the exhaustive search that we made an assignment to a sub-HSG $H'$ whose total penalty already exceed the value of $p$. In these cases, we must not try to {\it complete} this assignment (to all operations of $H$) since we already have a better one. Let us observe that this analysis is often done in branch and bound algorithms. In the context of service selection, a discussion can also be found in the work of Yu et al.~\cite{Yu}. Summarizing, we have useless work in the exhaustive search if we can find a sub-HSG for which multiple assignments are infeasible or already dominated by an existing solution. For optimizing these situations we propose to use a backtracking search that we discuss in the following. \subsection{Backtracking algorithms} We consider the CSP resolution with the backtracking technique applied with a static initial ordering. Given a CSP tuple $(V, D, C)$, the technique starts by defining an ordering of the variables $V$. Let us assume that the resulting order is $v_1, \dots, v_n$. Then, one successively assigns values to the variables according to the ordering and their domain definition. In the processing, we can reach a situation where values are assigned to $v_1, \dots, v_i$. Then, one checks if no constraint is violated by this partial assignment and if the assignment is not already dominated by another one. If it is not the case, one assigns a value to $v_{i+1}$. Otherwise, one will assign another value to $v_i$ (one backtracks). The backtracking technique might reduce the useless work that we have identified before. In exhaustive search, we evaluate all possible assignments to the variables. In backtracking, this is not the case. If for instance, there is no assignment for $v_1$ that satisfies the constraints, then backtracking will consider only $|D(v_1)|$ assignments instead of $|D(v_1)|\times \dots \times |D(v_n)|$ for exhaustive search. To demonstrate the gain expected from backtracking, we illustrate in Figure~\ref{backvsExh} the search spaces that we explore. This is the case where, for the graph of Figure~\ref{noSolution}, an SLA constraint states that the service response time must be lower than $13$ms. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.9\linewidth,height=1.7in]{./Figures/SearchSpace.eps}} \caption{Exhaustive search (in a) vs backtracking search (in b). The dark sub-trees correspond to assignments made to $A, B, F, G, H$. With backtracking, these assignments will not be explored.}\label{backvsExh} \end{figure} For applying the backtracking technique, we will now discuss two points. Firstly, we present the basic principles that we use for ordering variables. Then we discuss the implementations of the principles in a backtracking algorithm for service selection. \subsubsection{Ordering principles} The ordering of variables is important in backtracking. The literature~\cite{Baker95intelligentbacktracking} proposes multiple static orderings. We propose to consider two of the most popular: the min-domain and the max-degree ordering. In the min-domain ordering, the abstract operations whose concrete set of operations are smaller must be considered first in the assignments. In the max-degree ordering, the abstract operations that are the most connected to the other operations will be considered first. We map these orderings in the resolution of the service selection problem by the mean of two principles that we introduce further. Let us first consider the following definitions. \begin{definition}[Correct partial evaluation] Given a decomposable HSG $H$ whose abstract operations are bound to concrete one, we define a correct partial evaluation of QoS as the vector of QoS values (energy consumption, response time) of a decomposable subgraph of $H$. \end{definition} For instance, given the operations graph of Figure~\ref{noSolution}, if $D$ is assigned to $D_1$ and $E$ to $E_1$, then a correct partial evaluation is the vector $(22, 1.6)$ for the response time and energy consumption of the subgraph $(g_1, g_2)$. We only compute partial evaluations on decomposable graphs. Let us recall that such graph are obtained by composing the regular patterns that we consider in the semantics of the operations graph. The objective in making partial evaluations is twice: (1) the partial evaluations bounds are compared to the SLAs bounds to check if there is a violation; (2) these bounds are compared to the local optimum found for the service selection problem to see if it is not already dominated. It is important to notice that in the comparisons, the partial evaluations bounds must in some cases be weighed. In Figure~\ref{noSolution}, let us consider the QoS vector (SRT, EC) issued from the reduction of $(g_3, g_4)$. Since this subgraph is included in the xor subgraph $(g_5, g_6)$, we cannot directly compare the response time of this vector to the SLA bound on response time. For taking into account the semantics, we need to multiply this value with the probability for a request to be routed towards $g_3$. In our prior work~\cite{mgc2012}, for such situations, we introduced the notion of {\it reachability probabilities}. In a simplified manner, for each subgraph, this gives the probability for a request to be routed to it. We will use these probabilities for weighting the comparison of QoS vectors with SLAs bounds. The correct partial evaluations capture a facet of our backtracking algorithms: we will regularly evaluate sub-assignments of concrete services to abstract ones such as to detect whether or not we must continue in this exploration path. As stated before, the order of assignments of concrete services to abstract ones can have a great influence in the run. We introduce below the partial evaluation precision for characterizing the possible orderings. \begin{definition}[Partial Evaluation Precision] Given a decomposable HSG $H$ and an ordered list of abstract operations $L = [ u_1, \dots, u_k ]$, we define the precision of the partial evaluation of $L$ as the difference $\displaystyle pep(L) = |L| - neo(L)$. In this formula, $neo(L)$ is the maximal number of $L$' operations from which one can compute a correct partial evaluation of QoS from $H$. \end{definition} This definition relies on the fact that any partial assignment will not lead to a lower bound on SRT and EC that includes all abstract operations. At the beginning of the backtracking algorithm, we must define an ordered list $L$ of abstract operations. This list is such that $L[1]$ is the first abstract operation that will be assigned, then follows $L[2]$ and so one. At a moment in the algorithm, we could have assigned a concrete operation to $L[1], L[2], \dots, L[i]$; but it might happen that the assigned operations do not constitute a decomposable graph (See Section~\ref{evaluationAlgorithm} for decomposable graph). In these cases, a correct partial evaluation will be obtained only from a subset of the assignments ($neo(L)$). In Figure~\ref{noSolution} for instance if $L = [D, A]$ then $pep(L) = 2$. If $L = [D, E]$, then $pep(L) = 0$. Indeed, $[D, A]$ do not {\it shape} a decomposable graph. We can generalize the notion of precision. Let us assume that $L[1..i]$ defines the sublist having the first $i$ elements of $L$. \begin{definition}[Total Evaluation Precision] Given a decomposable HSG $H$ and an ordered list $L = [ u_1, \dots, u_k ]$ of its abstract operations, we define the precision of the total evaluation of $L$ as $\displaystyle \sigma(L) = \sum_{i = 2}^{|L|-1} pep(L[1..i])$. \end{definition} $\sigma(L)$ captures the distance between two numbers of operations: those to which a concrete service is assigned and those from which a correct evaluation can be made. The following result is straightforward. \begin{property} Let us consider a decomposable HSG for which all abstract operations are assigned to concrete ones. Let us also consider an ordered list $L = [ u_1, \dots, u_k ]$ of its abstract operations. Then, $pep(L[1..i]) \geq 0$, $pep(L[1..k]) = 0$ and $\displaystyle \sigma(L) \geq 0$. \end{property} Based on these definitions, we can then define the first principle that we will use for ordering abstract operations. \begin{principle}[Partial Evaluation of QoS First (PEQF)] Let us consider a decomposable HSG $H$ for which all abstract operations are assigned to concrete ones. The generated ordering list $L$ for $H$ must minimize the precision of the total evaluation of $L$. \end{principle} With this principle, our objective is to maintain, each time during the search, an updated correct partial evaluation that we can use for checking whether or not SLAs are violated. As one can remark, this is not the case with large values of $\sigma(L)$. In Figure~\ref{noSolution} for instance, with the ordering $L = [B, A, E, D]$, we have $\displaystyle \sigma(L) = 5$ ( $[B, A, E]$ do not describe a decomposable graph). In this case, the backtracking will not improve the exhaustive search. This is because we must wait for the assignment of a concrete operations to each abstract one for expecting a QoS evaluation. In choosing however the ordering $E, D, A, B$ we have $\displaystyle \sigma(L) = 0$. As one can observe, we can quickly have here a correct partial evaluation that can then be used for checking SLAs violation. Regarding the implementation of the PEQF principle, it is important to consider the following result. \begin{property} We can find a decomposable HSG $H'$ for which there exist two ordering lists $L_1$ and $L_2$ of abstract operations for which $\sigma(L_1) = \sigma(L_2) = 0$. \end{property} This is the case in Figure~\ref{noSolution} with the lists: $L_1 = [D, E, A, B]$ and $L_2 = [D, E, B, A]$. The question in these settings is to choose among the two lists. We adopt for this following principle. \begin{principle}[Min Domain First (MDF)] The ordering of abstract operations must consider in priority, the correct partial evaluations with the shortest number of concrete operations. A random ordering must be adopted if we have multiple options. \end{principle} This principle is inspired from the min-domain heuristic in constraint satisfaction. Let us indeed assume that $B$ has less concrete operations than $A$ (i.e. $|C_o(B)| < |C_o(A)|$). Then, the ordering to choose in Figure~\ref{noSolution} is $L_2 = [D, E, B, A]$. The objective is to detect quickly invalid assignments by considering small domains before. One can criticize the choice of min-domain because in CSP resolution, the max-degree heuristic also performs well for detecting invalid assignments. However, let us observe that the idea of max-degree (to start with the most connected variable) is partially included in the first principle (PEQF). Indeed, we will see that for finding quickly decomposable graphs, we must consider nested operations in priority. To summarize, we modelled our ordering goals within principles, below we consider their implementation for deriving backtracking algorithms. \subsubsection{Implementation of the PEQF principle} For implementing the PEQF principle, we propose to use the ordering of abstract operations suggested by the reduction order of the evaluation algorithm. For instance, in Figure~\ref{noSolution}, from the reduction order $(g_1, g_2); (g_1, B), (A, g_1)$, we deduce the possible ordering $E, D, B, A$. What is challenging is to derive systematically such an ordering. For this we will introduce two data structures. We will also manipulate the deepness concept, introduced in prior work~\cite{GoldmanNgoko}. Below, we recall its definition. \begin{definition}[Deepness~\cite{GoldmanNgoko}] Given an operation graph $G_o$, let us suppose that for a node $u$ (operation or connector), we have $n$ paths $Pt_1, \dots Pt_n$ leading to it. In each path $Pt_i$, we have $\alpha_i$ split connectors and $\beta_i$ join connectors. The deepness of $u$ is defined as $deep(u) = \underset{1 \leq i \leq n}{\max} \{\alpha_i - \beta_i\}$. \end{definition} For example in Figure~\ref{dataStructure}, $deep(A) = deep(g_1) = 0$, $deep(B) = deep(E) = 1$. The first data structure that we consider for the implementation of the PEQF principle is the {\it nested list for subgraphs' nodes (NeLS)}. For split/join subgraphs, the list gives the operation nodes whose deepness are equal to the one of the split node plus $1$. In Figure~\ref{noSolution}, the NeLS will have an entry $(g_1, g_2)$ pointing towards a list with the operations $D$ and $E$. This is because we have a unique split/join graph that comprises these operations. For the operations graph of Figure~\ref{dataStructure}, the NeLS has two entries. The first entry ($(g_3, g_4)$) points towards a list with $C, D$. The second entry ($(g_1, g_2)$) points towards $B, E$. Let us remark that we did not include $C$ here because $deep(C) = deep(g_1)+2$. Given a NeLS $h_s$, we will use the term $hs(x,y)$ for referring to the list that the entry $(x,y)$ points to. For instance, if $h_s$ is the name of the NeLS of Figure~\ref{dataStructure}, then $h_s(g_3, g_4)$ points towards $C$ and $D$. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.8\linewidth,height=2.5in]{./Figures/DataStructure.eps}} \caption{Data structures for the PEQF principle}\label{dataStructure} \end{figure} The second data structure is the {\it nested list for operations' ordering (NeLO)}. The main entries of a NeLO consist of a list of ordered operations. Each entry points towards a list of subgraphs, defined with their root and leafs. The idea is that once a value is assigned to the abstract operation in one entry, the pointed subgraphs can be reduced. In our algorithms, while the NeLO will be used for storing the ordering of abstract operations, we will use the NeLS for the generation of this order. In particular, {\it the NeLS will serve us for detecting when to assign a value to an abstract operation that is not included in the reduction order}. Figure~\ref{dataStructure} shows a NeLO related to an operations graph. This NeLO describes the ordering in which abstract operations must be considered in backtracking assignments. The first operation to which a concrete one must be assigned is $C$. This entry does not point towards any list. Therefore, after this assignment, no reduction is done. Then, we must consider $D$. The assignment of a concrete operation to $D$ implies that we can reduce the subgraph $(g_3, g_4)$. Then, we must continue with $B, E,$ and so on. Let us notice that it is easy to redefine the notion of {\it total evaluation precision} for computing the value of $\sigma(h_o)$ given the NeLO $h_o$. This is because the entries of a NeLO constitute an ordered list of abstract operations. With the defined data structures, we will now state how we implement the PEQF principle. We view the implementation of the PEQF principle as the computation of a NeLO for which the total precision is minimal. The computation of this NeLO is done from an operations graph and a NeLS. The process is the following. Firstly, from the operations graph, we generate a reduction order and a NeLS. Then, we pick the top element of the reduction order. This element corresponds to a pair $(x,y)$ defining the frontiers of a subgraph. We process this element for generating new entries in the NeLO to build. The rules of this processing are given in Figure~\ref{Cases}. Once $(x,y)$ is processed, we consider the next element of the reduction order. We continue in the same way until processing the last element of the reduction order. In Figure~\ref{dataStructureExample}, we illustrate the application of this process. \begin{figure}[!htbp] \begin{center} \begin{tabular}{|p{13cm}|} \hline \small [We have a pair $(x,y)$ in the reduction order and a NeLS denoted $h_s$]\\ \small case \#1 [$x$ and y are operations]: we create two new entries referring to $x$ and $y$ in the NeLO. We chain the last entry with a list pointing towards $(x,y)$; this is for stating that this subgraph can be reduced after assigning a value to $x$ and $y$. We then remove $x$ and $y$ from all lists of $h_s$.\\ \small case \#2 [$x$ is an operation and $y$ is a split connector]: we create a unique entry $x$ and chain it with a list towards $(x,y)$. We remove $x$ from all lists of the the $h_s$. \\ \small case \#3 [$x$ is a split connector and $y$ is an operation]: we create a unique entry $y$ and chain it towards $(x,y)$. We remove $y$ from all lists of the the $h_s$. \\ \small case \#4 [$x$ is a split connector and $y$ is a join connector]: If the list $h_s(x,y)$ is not null, then we create an entry in the NeLO for all elements of $h_s(x,y)$. We chain the list of the last element of the NeLO with $(x,y)$. Then, we delete $h_s(x,y)$. \\ \hline \end{tabular} \caption{Processing of an element $(x,y)$ for the generation of the operations hash table.} \label{Cases} \end{center} \end{figure} \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=1\linewidth,height=2.6in]{./Figures/PEQF.eps}} \caption{Computation of the ordering}\label{dataStructureExample} \end{figure} One can understand the role of the NeLS in this computation as follows. The objective in the NeLO is to have a list of abstract operations pointing towards reductions to be done. For obtaining partial evaluations quickly, we generate the NeLO based on the reduction order. However, the representation of this order might not include some operations. For instance, in Figure~\ref{dataStructureExample}, $C$ and $D$ do not appear in the reduction order. In the NeLO generation, we find the missing operations from the NeLS. In particular, when we have a subgraph reduction, we first explore the NeLS (see case \#3) for including the subgraph' operations in the NeLO. As one can remark in Figure~\ref{dataStructureExample}, $C$ and $D$ are referred to in the NeLS. There are two important observations. The first one is that given an element $(x,y)$ of the reduction order, in Figure~\ref{Cases}, we consider all cases regarding the type of $x$ and $y$~\footnote{The cases are described in Section~\ref{evaluationAlgorithm}}. The second observation is that the described generation process has a polynomial runtime in the number of nodes of the operations graph. More precisely, we have the following result. \begin{lemma} Giving an operations graph, a reduction order and a NeLS, the NeLO generation can be done in $O(n^2)$ where $n$ is the number of nodes of the operations graph. \end{lemma} \begin{proof} Firstly, let us observe that the generation process that we described loops on the number of elements of the reduction order. If we have $n$ nodes in the operation graph, then our prior work~\cite{GoldmanNgoko} guarantees that the number of elements of the reduction order is in $O(n)$. In the treatment of an element of the reduction order, we have the cases listed in Figure~\ref{Cases}. These cases are dominated by two instructions: the creation of NeLO entries and the deletion of NeLS elements. Since the number of elements of NeLO and NeLS are in $O(n)$, these two instructions are in $O(n)$. We have the proof in considering that we loop on elements of the reduction order. \end{proof} Regarding the PEQF principle, the quality of the process proposed for NeLO computations can be perceived throughout the following result. \begin{lemma} Let us consider an operations graph with $n$ abstract operations. If the maximal outgoing degree of a split connector in the graph is $2$ then, the generated NeLO $h_o$ is such that $\displaystyle \sigma(h_o) \leq \frac{n}{2}$. If the graph is a sequence, then $\displaystyle \sigma(h_o) = 0$. \end{lemma} \begin{proof} We obtain the result from an analysis of the process of Figure~\ref{Cases}. Firstly, let us assume that the graph is a sequence. The first element $(x,y)$ of the reduction order in this case refers to two operations. According to the process of Figure~\ref{Cases}, this will generate two entries in the NeLO such that the last entry points towards the reduction $(x,y)$. Consequently, $pep(h_o[1..2]) = 0$. According to the reduction order algorithm~\cite{GoldmanNgoko}, the second element $(x',x)$ of the reduction order will refer to two operations $x'$ and $x$ (already in the NeLO). Therefore, we will put an operation in the NeLO entry with a reduction to operate. This implies that $pep(h_o[1..2])+pep(h_o[1..3]) = 0$. The third element of the reduction order will have the form $(x",x')$. Consequently $pep(h_o[1..2])+pep(h_o[1..3])+ pep(h_o[1..4])= 0$. Generalizing, we will have for the resulting NeLO $h_o$, $\sigma(h_o) = 0$. Let us now assume that we have split connectors in the operation graph. Then, for the first element $(x,y)$ of the reduction order, we have two cases. Either we have two operations or we have a split connector ($x$) and a join ($y$). There are no other possibilities from the reduction algorithm. In the first case, we can easily guarantee from what precedes that $pep(h_o[1..2]) = 0$. In the second case, the process of Figure~\ref{Cases} states that we will add two operations in the NeLO entry such that the last operation will point towards the reduction $(x,y)$. Consequently, $pep(h_o[1..2]) = 0$. For the processing of the next element $(x', y')$ of the reduction order, we can have multiple cases. Either $x'$ or $y'$ is an operation, or, they correspond to connectors. In the former case, we can ensure that $pep(h_o[1..3]) = 0$; in the latter case, we can ensure that $pep(h_o[1..3]) \leq 1$ and $pep(h_o[1..4]) \leq 1$. Generalizing, $\displaystyle \sigma(h_o) \leq \frac{n}{2}$ \end{proof} An interesting question is the one of knowing whether or not better total precisions can be expected. The answer is no. For sequences graphs, the optimality of the result is guaranteed. For arbitrary structures, we have a lower bound. Indeed, let us consider a sequence of subgraphs of two elements. On such graphs, it is impossible to build an ordering $h_o$ such that $\sigma(h_o) < \frac{n}{2}-1$. This comes from the fact that for reducing an internal subgraph, we must assign a concrete operation to each abstract one. When assigning a value to the first concrete operation, an evaluation is not possible. It is only when making an assignment to the second operation that we can evaluate. Consequently, we can consider that the proposed implementation of the PEQF principle is optimal. In the following, we state how to implement the MDF principle. \subsubsection{Implementation of the MDF principle} The objective in the MDF principle is to start the assignments with abstract operations whose set of concrete ones are small. The applicability of this principle however can be in conflict with the implementation of the NeLO. It is the case in Figure~\ref{dataStructureExample} if $|Co(F)| > |Co(A)|$. Indeed, the evaluation of $(A, g_1)$ will concern a domain that is smaller than the evaluation of $(g_1, F)$. However the latter evaluation is done before in the NeLO. Therefore, how can we conciliate the two principles? For this, we consider the following result. \begin{lemma}[Free permutation of operations and subgraphs]\label{freePermutation} Let us consider a decomposable HSG $H$; \\ a) let us assume that in the operations graph, we have a sequence $(x, y)$ where $x$ and $y$ are either operations or decomposable subgraphs. The HSG $H'$ in which we reversed the subgraphs $x$ and $y$ (so we have the sequence $(y, x)$) has the same mean response time and energy consumption as $H$; \\ b) let us assume that we have a split connector $g$ in $H$. The $H'$ in which we reversed two branches of the split connector has the same mean response time and energy consumption as $H$. \end{lemma} \begin{proof} The results come from the commutativity of computations in the QoS aggregation rules (see Table~\ref{tabAggRules}). The response time of the graph $(x, y)$ will be $S(x)+S(y) = S(y) + S(x)$. The energy consumption will be $E(x)+E(y) = E(y) + E(x)$. Since $S(y) + S(x)$ and $E(y) + E(x)$ are the response time and energy consumption of $(y,x)$, we have the proof in the case of sequences. In the case of split connectors, we can establish the proof in the same way. For instance, given an elementary Fork with the operations $x, y$ its response time is $\max\{x, y\} = \max\{y, x\}$. \end{proof} The interest in this result is that it suggests a solution for applying the MDF principle without violating the PEQF principle. The idea is that the generation of the NeLO is based on a particular exploration of the operations node. Before this generation, one can use the reverse instructions of Lemma~\ref{freePermutation} for obtaining a topology of the operations graph, adjusted to partial evaluations for small domains in priority. As a simple illustrative example, let us reconsider the operations graph of Figure~\ref{dataStructureExample}. In the case where $|Co(F)| > |Co(A)|$ we can switch the nodes $A$ and $F$. As a result, the reduction order will be $(g_3,g_4); (B, g_3); (g_1,g_2); (g_1, A); (F,g_1)$. In it, the evaluation $(g_1, A)$ is now {\it before} $(F,g_1)$. For the implementation of the MDF principle, we propose to make a topological sorting of the initial operations graph that is done based on an extended NeLS (denoted NeLS+). The extensions concern the following points: a) we distinguish between branches of the elements of each split/join subgraph; b) we include split/join subgraphs in the list towards which a NeLS entry points; c) We assume that the root and leaf nodes of the operations graph are branches of a virtual graph $(g_0, g'0)$. \begin{algorithm}[H] \begin{algorithmic}[1] \scriptsize \Function{Main}{} \State Generate the reduction order and store it in $ORD$; \State Build a NeLS+ $hs^+$; \State Topological Sorting of $H$ according to $hs^+$. The result is $H'$; \State Build a NeLS $hs$ for $H'$; \State Generation of a NeLO $h_o$ from $H'$ and $hs$; \State $OptPenalty = +\infty$; $index = 0$; \State Create an uninitialized array $f$ of values for abstract operations; \State Call backtrack($f$, $H'$, $h_o$, $OptPenalty$, $index$); \State Return the best assignment and $OptPenalty$; \EndFunction \Function{backtrack}{$f$, $H'$, $h_o$, $index$} \If{$index = |f|$} \State Compute $E(f)$ and $S(f)$ from the evaluation algorithm with $H$ and $Q$; \If{ $S(f) \leq MaxS$ and $E(f) \leq MaxE$ } \If{$\lambda.S(f) + (1-\lambda).E(f) < OptPenalty$} \State Save $f$ as the best assignment; \State $OptPenalty = \lambda.S(f) + (1-\lambda).E(f)$; \EndIf \EndIf \State Return; \EndIf \For{ all concrete operations $u \in Co(h_o[index])$} \State $f[index] =$ $u$; \If{ $h_o[index]$ points towards some reductions} \State Update $E(f)$ and $S(f)$ by making the reductions; \State Get the reachabilities probabilities $pa$ of the last reduction; \If{ $pa.S(f) \leq MaxS$ and $pa.E(f) \leq MaxE$ } \If{ $\lambda.pa.S(f) + (1-\lambda)pa.E(f) < OptPenalty$ } \State Call backtrack($f$, $H$, $h_o$, $OptPenalty$, $index+1$); \EndIf \EndIf \Else \State Call backtrack($f$, $H$, $h_o$, $OptPenalty$, $index+1$); \EndIf \EndFor \EndFunction \normalsize \end{algorithmic} \caption{\scriptsize SS-b-PM (Backtracking search for service selection with the PEQF and MDF principle). \\ {\bf INPUT:} a HSG $H$ and a QoS matrix $Q$ giving the energy consumption and service response time of each concrete operation; \\ {\bf OUTPUT:} An assignment of concrete operations to abstract ones } \label{alg:Backtracking} \end{algorithm} Based on the lemma~\ref{freePermutation}, we will consider two main instructions: the {\it branch sorting} and the {\it sequence sorting}. In branch sorting, the objective is to revert the branches of a split/join subgraph such as to ensure that when computing the reduction order, in the first place, we will consider the branch with the smallest domain. Such an instruction is meaningful because the computation of the reduction order occurs through a depth first search algorithm in which the branches are explored according to a number assigned on them. In our current implementation, the branch with the greatest number is explored in priority. The sequence sorting considers a sequence of operations and subgraphs and revert them such as to ensure that the operations or subgraphs with smallest domain will be considered in priority. In the computation of the reduction ordering, let us observe that the last operations of each sequences are considered in priority. Therefore, the reverting must ensure that these operations have the smallest domain size. In Figure~\ref{transClosureExample} for example, the sequence sorting will reverse in the entry $(g_0, g'_0)$ the nodes $A$ and $F$. The topological sorting is done as follows. Initially, we sort the entries of the NeLS+ based on their deepness. The idea is to have the deeper subgraph at the top and the less deep one at the end. Then, we compute the total domain size to which correspond each entry. The domain size of an entry is the sum of domain size of the lists of elements to which it gives access. Let us remark that because of the initial sorting, we can simply evaluate the domain size starting from the top entry to the last one. In Figure~\ref{transClosureExample} for instance the domain size of $g_3$ is {\it dom-size($g_3,g_4$)} = $|Co(C)| + |Co(D)|$. The domain size of $(g_1, g_2)$ is $|Co(E)| + |Co(B)| + $ {\it dom-size($g_3,g_4$)}. Once we have the domain size of each entry, we make branches sorting first and sequence sorting next. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.75\linewidth,height=1.4in]{./Figures/TransitiveClosure.eps}} \caption{Example of NeLS+}\label{transClosureExample} \end{figure} Now that we stated how we implement the principles, in Algorithm~\ref{alg:Backtracking}, we give our general backtracking scheme. The proposed scheme is based on the following notations: \begin{itemize} \item We assume that $h_o[i]$ refers to the variable of $h_o$ in the $i^{th}$ entry. \item For defining assignments, we use the array $f$. $f[i]$ will comprise the concrete service associated with $h_o[i]$. \item The over-defined notion $Co(h_o[index])$ refers to the set of concrete services that can be assigned to $h_o[index]$. \end{itemize} We will refer to this global algorithm as {\it SS-b-PM}. We will also consider its variant {\it SS-b-P} where we do not apply the MDF principle. \section{Experimental evaluation} \label{ExperimentalEvaluation} Throughout the experimental evaluations, our objectives were the following: \begin{enumerate} \item to demonstrate that the backtracking based heuristics outperforms the exhaustive search; \item to demonstrate that the backtracking based heuristics outperforms integer linear programming; \item to compare the different heuristics. \end{enumerate} \subsection{Backtracking versus exhaustive search} For these experiments, we used $4$ types of operations graphs, each based on reference BPMN processes~\cite{Omg,Freund}. The structure of the processes is given in Figure~\ref{figProcess}. From each processes, we created $300$ service selection problems. Depending on the SLAs, we ranged instances in three classes of $100$ instances: simple, medium, hard. We chose these names because our experiments globally showed that in increasing the size of the bounds, $MaxS$ and $MaxE$, we increase the runtime of all algorithms. Intuitively, the reason is that with big bounds, there are more candidate solutions. Depending on the number of concrete implementations per abstract operations, we ranged our instances in $5$ classes of $60$ instances each. The setting of instances is resumed in Table~\ref{instanceSetting}. For each instance, we randomly draw the service response time of operation between $1,\dots, 1500$. Given a service response time $S$, we deduced the energy consumption from the formula $E = P.S$, where $P$ is a power consumption value randomly drawn between $100,\dots,150$. \begin{figure}[ht] \centering \fbox{ \subfloat[Shipment]{ \includegraphics[scale=0.30]{./Figures/shipment.eps} } \subfloat[Procurement]{ \includegraphics[scale=0.30]{./Figures/procurement.eps} } } \fbox{ \subfloat[Disbursement]{ \includegraphics[scale=0.30]{./Figures/Disbursement.eps} } \subfloat[Meal Options]{ \includegraphics[scale=0.30]{./Figures/MealOptions.eps} } } \caption{Process examples} \label{figProcess} \end{figure} \begin{table}[htbp] \centering \begin{tabular}{c|c|c} \hline {\bf Type of SLAs } & {\bf (MaxS; MaxE)} & {\bf Domain sizes} \\\hline \textit{Simple} & {$(3500; 7000)$} & \multirow{3}{*}{$4$, $6$, $8$, $10$, $12$} \\ \textit{Medium} & {$(3000; 5500)$} \\ \textit{Hard} & {$(2700; 4000)$} \\\hline \end{tabular} \caption{Instance settings} \label{instanceSetting} \end{table} \begin{figure}[ht] \centering \subfloat[Shipment]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/ShipmentSimple.eps} } \subfloat[Procurement]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/ProcurementSimple.eps} } \subfloat[Disbursement]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/DisbursementSimple.eps} } \subfloat[Meal Options]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/MealSimple.eps} } \caption{Runtime of the exhaustive search on {\it simple} instances} \label{Time-ss-Exh} \end{figure} In Figure~\ref{Time-ss-Exh}, we depict the mean runtime obtained from the exhaustive search algorithm ({\it ss-Exh}) on the class of {\it simple} instances. As one can notice, these runtime exponentially increase with the domain size which we recall, is the number of concrete services that can be assigned to an abstract operation. In the same example, the mean runtime {\bf in all experiments } was lower than $2$ seconds for the backtracking based algorithms. The same trends were also observed in considering instances of the {\it medium} and {\it hard} class. This demonstrates that there is a large class of instances on which the backtracking based algorithms outperforms the exhaustive search. For validating our prior intuition regarding the considerable amount of useless work in exhaustive search (see Section~\ref{backtrackingSearch}), we quantified and computed this work. We defined the useless work of an algorithm as a complete assignment that does not improve the current local solution. In Figure~\ref{UselessExh}, we depict the useless work observed in some instances of the disbursement process. As one can notice, this quantity was always greater in exhaustive search. The same trend was observed in the other instances. \begin{figure}[ht] \centering \subfloat[Disbursement Medium]{ \includegraphics[width=0.45\linewidth,height=1.4in]{./Figures/UsefulUseless.eps} } \subfloat[Disbursement Hard]{ \includegraphics[width=0.45\linewidth,height=1.4in]{./Figures/UsefulUseless2.eps} } \caption{Useless searches in exhaustive search and backtracking} \label{UselessExh} \end{figure} \subsection{Backtracking versus Integer Programming} We also compared the backtracking algorithms with integer linear programming. For this purpose, we used the integer modelling that we proposed in prior work~\cite{JISA}. This modelling was proposed for a more general class of services compositions. However, it supports our restricted setting. The integer model was run with the GLPK solver~\cite{GLPK}. In the search of the optimal solution, the solver internally computes a lagrangian relaxation for avoiding useless searches. Doing so, the run of our integer model is very close to the one of the BBLP algorithm. In the first experiments, we compared the solvers and our approaches on the previously defined instances. We did not however see any significant differences in performances. We increased the domain sizes to $140$; but no significant differences appeared. The mean runtime was around $2.5$ seconds in either approach. For exhibiting runtime differences, we considered two other processes: the motif network and the genelife2 workflow. Both were taken from the Pegasus database~\cite{Pegasus} and come with different sizes (small, medium large). For both, we chose the small size variants. An illustrative representation of the chosen processes is given in Figure~\ref{Workflow}. \begin{figure}[ht] \centering \subfloat[Motif]{ \includegraphics[width=0.35\linewidth,height=1.4in]{./Figures/motif.eps} } \subfloat[Genelife2]{ \includegraphics[width=0.35\linewidth,height=1.4in]{./Figures/gene2life.eps} } \caption{Pegasus workflows} \label{Workflow} \end{figure} On these two processes, we randomly generated $30$ instances using {\it simple} SLAs constraints and $30$ ones with {\it hard constraints}. The domain size of the instances were taken between $4$, $8$, $16$, $32$, $64$ \begin{figure}[ht] \centering \subfloat[Genelife2 Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/Genelife2Hard.eps} } \subfloat[Motif Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/MotifHard.eps} } \subfloat[Genelife2 Simple]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/Genelife2Simple.eps} } \subfloat[Motif Simple]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/MotifSimple.eps} } \caption{Runtime of the exhaustive search on {\it simple} instances} \label{Time-ss-Exh} \end{figure} The experimental results are depicted in Figure~\ref{Time-ss-Exh}. We noticed that the integer programming (MILP in the Figure) was dominated by the backtracking algorithms when we increase the domain size. In particular, for the motif network with a domain size equal to $32$, we were not able to get a solution from integer programming after $1$ week. We believe that differences between integer programming and backtracking were due to the more important number of operations of genelife2 and the motif network. Indeed, we finally had $15$ operations nodes for the motif network and $11$ operations nodes for the genelife2 workflow. In addition to the superiority of backtracking, these experiments also revealed that if we can expect real-time results with backtracking in the case where we have less than $9$ abstract operations, with more than $10$ abstract operations, the algorithm becomes time consuming. But it is important to remark that given few number of abstract operations, we can have quick solutions even if the number of concrete services is important (greater than $1400$ for instance). \subsection{What is the best backtracking algorithm?} The objective here was to determine what is the faster backtracking algorithm. On this point, our experiments did not reveal a clear trend. In Figure~\ref{Time-ss-Exh} for instance, one can notice that the algorithms are quite similar even if there are some runtime differences. We did additional experiments where instead of a fixed number of concrete services per abstract operations, we set a random number chosen each time between $3$ and a maximal domain size. In Figure~\ref{Time-ss-b}, we depict the results obtained on {\it hard } instances. As one can notice, they do not define a particular trend. We believe that the small variations that we observed state that depending on the distribution of energy consumption and service response time, a backtracking algorithm can detect earlier some partial assignments that must not be complete. \begin{figure}[ht] \centering \subfloat[Shipment Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/ShipmentHard.eps} } \subfloat[Disbursement Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/DisbursementHard.eps} } \subfloat[Procurement Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/ProcurementHard.eps} } \subfloat[Meal Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/MealHard.eps} } \caption{Runtime of the backtracking algorithms on {\it hard} instances} \label{Time-ss-b} \end{figure} Our experiments globally showed that the backtracking algorithms can outperform classical exact solutions for the service selection problem. But, we did not see any clear difference between the backtracking algorithms. \section{Discussion} \label{Discussion} In this paper, we proposed a novel approach for solving the service selection problem. The main idea here is to view this problem as a CSP. We now introduce a short discussion about the potential of this viewpoint. As already mentioned, our proposal can also be adapted for the resolution of the feasibility problem. In an algorithmic viewpoint, it suffices in {\it SS-b-PM} to add a control that will stop the execution when a first solution is found. The experiments demonstrated that backtracking can be envisioned for real time service compositions on small processes. In the case of large processes, we propose to modify theses algorithms for obtaining quick solutions that are not necessarily optimal. A classical and simple idea for this is to include a cutoff time. Cutoff time means that in the algorithm, we include a control for returning the best computed solution when the cutoff time is reached. The CSP mapping that we propose can also inspire other algorithms for the service selection problem. While our proposal only focuses on the backtracking technique, it might be interesting to consider other CSP resolution technique like the forward checking or the backjumping~\cite{Baker95intelligentbacktracking}. Moreover, we can also revisit our backtracking proposal to adapt it to other ordering techniques used in constraint satisfaction. We have for instance the max-domain, min-degree, max-domain/degree and the set of dynamic orderings. Finally there also exist many parallel algorithms for constraint satisfaction. The techniques employed for achieving parallelization can certainly be reused in the service selection problem. \section{Conclusion} \label{Conclusion} This paper proposes a novel approach for the resolution of the service selection problem. The main idea is to consider this problem as a particular case of the constraint satisfaction problem. We gave the theoretical mapping that supports this idea. We then derive from it two backtracking algorithms for service selection. We showed in the experiments that the proposed algorithms can outperform classical exact solutions for the service selection problem. However, for using them in real-time context, one must consider small services compositions. This result was expected due to the NP-hardness of the problem; however the techniques that we propose can drastically reduce the search space explored for finding the optimal services composition. For continuing this work, we envision two main directions. In the first one, we are interesting in developing a parallel algorithm for the service selection problem based on what is done in CSP parallelization. Our second direction consists of applying our algorithms for service selection on real services compositions. \section*{Acknowledgments} The experiments conducted in this work were conducted on the nodes B500 of the university of Paris 13 Magi Cluster and available at http://www.univ-paris13.fr/calcul/wiki/ \bibliographystyle{hplain}
\section{Introduction} In four-dimensional general relativity, there is a maximum charge and angular momentum that can be added to a black hole of given mass. In Einstein-Maxwell theory, these extremal black holes are characterized by having a degenerate horizon with zero Hawking temperature. In theories that also have (real) scalars fields exponentially coupled to the Maxwell field, such as supergravity or string theory, the extremal limit is either singular \cite{Garfinkle:1990qj}, or similar to Einstein-Maxwell due to an attractor mechanism \cite{Andrianopoli:2006ub}. In this paper, we show that there are black holes with a different type of extremal limit. In the theory we consider, black holes again have a maximum charge for given mass\footnote{For simplicity, we will restrict our attention to static black holes with no rotation.}, but the extremal black hole can have a nondegenerate horizon with nonzero Hawking temperature. Our theory will include a scalar field, but unlike some of the theories mentioned above, the usual Reissner-Nordstr\"om (RN) solution (describing static charged black holes in Einstein-Maxwell theory) remains a solution in the theory with scalar added. It has recently been shown that if a massless scalar field is appropriately coupled to $F^2\equiv F_{ab}F^{ab}$, RN can become unstable to forming scalar hair, \emph{i.e.}, static scalar fields outside the horizon \cite{Herdeiro:2018wub,Fernandes:2019rez}. This is because $F^2 < 0$ for an electrically charged black hole, and acts like a negative potential for the scalar near the horizon. When the charge is large enough, this destabilizes the scalar field and causes it to become nonzero. We add a massive, charged scalar field $\psi$ to Einstein-Maxwell with a simple $|\psi|^2 F^2$ coupling. As before, when the electric charge is large enough, RN becomes unstable and develops charged scalar hair. Our original motivation for exploring this model was that its solutions are asymptotically flat analogs of the asymptotically anti-de Sitter (AdS) solutions known as holographic superconductors \cite{Gubser:2008px,Hartnoll:2008vx,Hartnoll:2008kx}, which have been extensively studied. In AdS, the charged scalar condenses at low temperature without any explicit coupling between the scalar and Maxwell field. Without the cosmological constant, however, this does not happen and one needs to add a coupling like $|\psi|^2 F^2$. (There are also hairy black holes without this coupling if the charged scalar has an appropriate potential \cite{Herdeiro:2020xmb}, but they do not branch off from RN.) It was shown in \cite{Hartnoll:2020fhc} that the dynamics inside the horizon of a holographic superconductor is quite intricate. We study the dynamics inside the horizon of this asymptotically flat analog in a companion paper \cite{dhs}. Here we focus on the solutions outside the horizon, and look at their extremal limit. As seen before \cite{Garfinkle:1990qj,Herdeiro:2018wub}, the hairy black holes can exceed the usual extremal limit and have $Q^2 > M^2$. However, unlike previous examples, we find that for some range of parameters, the maximum charge solution for fixed mass is a nonsingular hairy black hole with nonzero Hawking temperature. So although it is ``extremal" in the sense of having maximum charge, it is not a familiar ``extremal black hole" with either zero temperature or a singular horizon. We will call this new type of extremal black hole a ``maximal warm hole". The existence of maximal warm holes raises puzzling questions about the endpoint of Hawking radiation. If a black hole continues to radiate neutral gravitons when it reaches its extremal limit, it would appear to create a naked singularity. Unlike the standard Planck mass naked singularity expected at the endpoint of the evaporation of a neutral black hole, this could create a naked singularity with large mass. We will argue that this does not occur. Our theory also contains charged solitons, and for completeness, we include a discussion of them. We find that they have a minimum mass, so they do \emph{not} exist arbitrarily close to Minkowski space. They also cannot be viewed as the limit of the hairy black holes as the black hole radius goes to zero. \section{Equations of motion} We start with the action \begin{equation}\label{eq:action} S= \int \mathrm{d}^{4}x \sqrt{-g}\left[R- F^2-4(\mathcal{D}_a\psi)(\mathcal{D}^a \psi)^\dagger-4 m^2 |\psi|^2-4 \alpha F^2 |\psi|^2\right]\,, \end{equation} where $\mathcal{D}=\nabla-i\,q\,A$ and $F=\mathrm{d}A$\,. This theory satisfies all the usual energy conditions if the coupling constant $\alpha$ is positive, which we will assume is the case. The equations of motion for this general action read \begin{subequations}\label{EOM:S} \begin{multline} R_{ab}-\frac{R}{2}g_{ab}=2\left(1+4 \alpha |\psi|^2\right)\left(F_{ac}F_b^{\phantom{b}c}-\frac{g_{ab}}{4}F^{cd}F_{cd}\right) \\ +2\left[(\mathcal{D}_a \psi) (\mathcal{D}_b \psi)^\dagger+(\mathcal{D}_a \psi)^\dagger(\mathcal{D}_b \psi) -g_{ab} (\mathcal{D}_c \psi)(\mathcal{D}^c \psi)^\dagger-g_{ab} m^2 |\psi|^2 \right]\,, \end{multline} \begin{equation} \nabla_a\left[\left(1+4 \alpha |\psi|^2\right)F^{ab}\right]=i\,q\, \left[(\mathcal{D}^b \psi)\psi^\dagger-(\mathcal{D}^b \psi)^\dagger \psi \right]\,, \end{equation} and \begin{equation} \mathcal{D}_a \mathcal{D}^a \psi-\alpha F^{cd}F_{cd} \psi-m^2 \psi=0\,. \label{eq:linearscalar} \end{equation} \end{subequations} In order to understand the static, spherical solutions to the above equations of motion, we use the following standard ansatz \begin{subequations} \label{eq:ansatzout} \begin{equation} \mathrm{d}s^2=-p(r)\,g(r)^2\,\mathrm{d}t^2+\frac{\mathrm{d}r^2}{p(r)}+r^2 (\mathrm{d}\theta^2 +\sin^2\theta\ \mathrm{d}\phi^2 ) \end{equation} For the scalar and Maxwell potential we take \begin{equation} A=\Phi(r)\,\mathrm{d}t\,,\qquad \psi=\psi^\dagger=\psi(r)\,. \end{equation} \end{subequations} The equations of motion restricted to our ansatz become \begin{subequations}\label{EOM} \begin{align} &\frac{g}{r^2}\left[\frac{r^2}{g}(1+4\,\alpha\, \psi^2)\Phi^\prime\right]^\prime-\frac{2\,q^2\,\psi^2}{p}\Phi=0\,, \\ &\frac{1}{r^2g}\left(r^2\,g\,p\,\psi^\prime\right)^\prime+\frac{2\,\alpha\,{\Phi^\prime}^2}{g^2}\psi+\left(\frac{q^2\,\Phi^2}{p\,g^2}-m^2\right)\psi=0\,, \\ &\frac{g^\prime}{g}-2\,r\,\left(\frac{q^2\Phi^2\psi^2}{p^2g^2}+{\psi^\prime}^2\right)=0\,, \\ &\frac{1}{r^2g}\left(r\,g\,p\,\right)^\prime-\frac{1}{r^2}+2\,m^2\psi^2+\frac{1+4\,\alpha\,\psi^2}{g^2}{\Phi^\prime}^2=0\,, \end{align} \end{subequations}% where ${}^\prime$ denotes a derivative with respect to $r$. Note that there are second order differential equations for $\Phi$ and $\psi$, but only first order equations for $g$ and $p$. The event horizon $r=r_+$ is the largest root of $p(r)$, and we will focus on the region outside the horizon, $r\geq r_+$. (The behavior inside the horizon is studied in \cite{dhs}.) For numerical convenience we work with a compact radial coordinate \begin{equation} z=\frac{r_+}{r}\in(0,1)\,, \end{equation} and change variables as \begin{equation} p(r)=\left(1-z\right)q_1(z)\,,\quad \Phi(r) = \left(1-z\right)q_2(z)\,,\quad \psi(r)=q_3(z)\quad \text{and}\quad g(r)^2=q_4(z)\,. \end{equation} (This imposes the gauge condition that $A_t = 0$ on the horizon.) We then solve for $q_1$, $q_2$, $q_3$ and $q_4$ subject to appropriate boundary conditions. At asymptotic infinity, located at $z=0$, we demand \begin{equation} q_1(0)=q_4(0)=1\,,\quad q_3(0)=0\,,\quad\text{and}\quad q_2(0)=\mu \end{equation} with $\mu$ being the electrostatic potential. The hairy black hole solutions depend on several parameters. In addition to the parameters in the action $\{m, q, \alpha\}$, black holes are characterized by their mass $M$ and charge $Q$. These turn out to be given by \begin{equation} M = \frac{r_+}{2}[1-\dot{q}_1(0)]\,, \qquad Q = r_+\,[\mu-\dot{q}_2(0)]\,, \end{equation} where $\dot{}$ denotes a derivative with respect to $z$. There is a scaling symmetry, so we will present our results using the four dimensionless quantities $\{q/m, \alpha, M\,m, Q\,m\}$. However, to find the solutions numerically, it is more convenient to use a slightly different set of dimensionless quantities: $\{q/m,\alpha, y_+, \mu\}$, where $y_+ \equiv m \, r_+$ controls directly the area of the black hole event horizon (located at $z=1$ in our compact coordinates). At the horizon, smoothness determines the behaviour of all functions, giving a Dirichlet boundary condition, and three Robin boundary conditions for $q_2$, $q_3$ and $q_4$. For concreteness we present the Dirichlet condition which takes the form \begin{equation} q_1(1)=1-2 y_+^2 q_3(1)^2-\frac{q_2(1)^2}{q_4(1)} \left[1+4\,\alpha\, q_3(1)^2\right]\,. \end{equation} The strategy is now clear: for each value of $\{q/m, \alpha, y_+, \mu\}$ we solve the resulting equations of motion as a boundary value problem with the above boundary conditions. We solve these via a standard relaxation method on a Gauss-Lobatto collocation grid (see \cite{Dias:2015nua} for a review of such numerical methods). At several points in the main text, we will refer to the entropy and temperature of the black holes. These are given by \begin{equation} m^2\,S=\pi\,y_+^2\quad\text{and}\quad \frac{T}{m}=\frac{q_1(1)\sqrt{q_4(1)}}{4\pi y_+}\,. \end{equation} It is a simple exercise to show that the mass $M$, charge $Q$, chemical potential $\mu$, entropy $S$ and Hawking temperature $T$ obey the first law of black hole mechanics \begin{equation} \mathrm{d}M= T\,\mathrm{d}S+\mu\,\mathrm{d}Q\,, \end{equation} which we check numerically throughout. All solutions in this manuscript satisfy this relation to at least the $10^{-4}\%$ level of confidence. Finally, we note that when the scalar field vanishes, \emph{i.e.} $\psi=0$, the only black hole is given by the familiar Reissner-Nordstr\"om (RN) solution for which \begin{equation} \label{RNsol} p(r)=p_{\mathrm{RN}}(r)\equiv\frac{(r-r_+)(r-r_-)}{r^2}\,,\quad g(r)=1\,,\quad\text{and}\quad \Phi(r)=\Phi_{\mathrm{RN}}(r)\equiv\left(1-\frac{r_+}{r}\right)\mu \end{equation} with $Q=\mu\,r_+$ and $r_{\pm}\equiv M\pm\sqrt{M^2-Q^2}$. The RN temperature is $T_{\mathrm{RN}}=\frac{r_+-r_-}{4\pi r_+^2}$ and, at extremality, one thus has $r_-=r_+=M=Q$ and $\mu=1$. Note that $r_-/r_+ = \mu^2$. \subsection{Asymptotic condition} There is another condition that must be satisfied in order to obtain hairy black holes. The scalar field will be bound to the black hole only if it falls off appropriately at infinity. In our gauge with $A_t (r_+) = 0$, and $A_t(r=\infty) =\mu $, this is only possible if \begin{equation}\label{condition} q^2 \mu^2 \le m^2\,. \end{equation} The necessity of this condition can be seen by considering the asymptotic behavior of the scalar field. If $ q^2 \mu^2< m^2$, the scalar field behaves at large radius like \begin{equation}\label{outpsiinf1} \psi =\frac{e^{- r \sqrt{m^2-q^2 \mu^2}}}{r^{1+\eta}}\left [b + \mathcal{O}(r^{-1}) \right ], \end{equation} for a constant $b$, where \begin{equation}\label{outpsiinf2} \eta \equiv \sqrt{m^2-q^2 \mu^2}\,M-\frac{\mu\,q^2\,(\mu M-Q)}{\sqrt{m^2-q^2 \mu^2}}\,. \end{equation} The exponential decay at large distance is characteristic of a bound state. If $ m^2=q^2 \mu^2$, the scalar field still decays exponentially like \begin{equation}\label{outpsiinf3} \psi = \frac{e^{-2 \sqrt{2} \,q \sqrt{\mu }\sqrt{Q-\mu M}\; \sqrt{r}}}{r^{3/4}}\left[b+\mathcal{O}(r^{-1/2})\right]. \end{equation} However, if $ q^2 \mu^2 > m^2$, the scalar field oscillates asymptotically indicating that the scalar field is not bound to the black hole. More importantly, such solutions would have infinite energy. \section{Linear instability}\label{sec:linear} The familiar RN metric with $\psi = 0$ is clearly always a solution to our equations of motion \eqref{EOM:S}. However, this solution can become unstable to forming scalar hair. This is because $F^2 < 0$ for an electrically charged black hole, so the last term in the action acts like a negative contribution to the scalar mass. This can become large enough near the horizon to dominate the $m^2$ term in the action. In this section we determine when this instability sets in using a linearized analysis. In particular, we will take Eq.~(\ref{eq:linearscalar}) and set the metric and gauge field to be those of the RN black hole \eqref{RNsol}. Furthermore, we will take the scalar field $\psi$ to be radially symmetric and Fourier expand in time as \begin{equation} \psi(t,r) = \tilde{\psi}(r)\,e^{-i\,\omega\,t}\,, \end{equation} which introduces the frequency $\omega$ of the perturbation and brings the scalar equation (\ref{eq:linearscalar}) to the following form \begin{equation} \frac{1}{r^2}\left[r^2 p_{\mathrm{RN}}(r)\tilde{\psi}^\prime(r)\right]^\prime+\left\{\frac{\left[\omega+q\,\Phi_{\mathrm{RN}}(r)\right]^2}{p_{\mathrm{RN}}(r)}-m^2+2\,\alpha\,{\Phi_{\mathrm{RN}}^\prime(r)}^2\right\}\tilde{\psi}(r)=0\,. \label{eq:linear} \end{equation} We would like to understand whether finite energy excitations, regular on the future event horizon of the RN black hole, exist for which $\mathrm{Im}\, \omega>0$, in which case we have a mode whose amplitude grows in time and the system develops an instability. Searching for such excitations amounts to studying a generalised eigenvalue problem in $\omega$, which we present in Appendix \ref{sec:Appendix}. Here we present a simple criterion for when RN is unstable, and compute the onset of the instability by looking for $\omega = 0$ modes. \subsection{The near horizon analysis}\label{sec:linearNH} Since the RN black hole has a maximum electric field at extremality, we expect that the minimum charge ratio $q/m$ and minimum $\alpha$ needed to herald an instability can be determined by analysing the extremal solution. The near horizon geometry of the extremal RN black hole takes the direct product form $\mathrm{AdS}_{2}\times S^2$ where $\mathrm{AdS}_{2}$ stands for 2-dimensional anti-de Sitter spacetime. This is best seen by first setting $r_-=r_+$, introducing new coordinates $(\tau,\rho)$ as \begin{equation}\label{NHcoord} t = \frac{r_+\,\tau}{\lambda}\,,\quad\text{and}\quad r=r_+(1+\lambda\,\rho) \end{equation} and taking the limit $\lambda\to0$. Once we do this, one obtains \begin{subequations}\label{NHsolution} \begin{equation} \mathrm{d}s^2_{\mathrm{AdS}_{2}\times S^2}= L^2_{\mathrm{AdS}_{2}}\left(-\rho^2\mathrm{d}\tau^2+\frac{\mathrm{d}\rho^2}{\rho^2}\right)+r_+^2\,\left(\mathrm{d}\theta^2+\sin^2\theta\,\mathrm{d}\phi^2\right) \end{equation} and \begin{equation} A_{\mathrm{AdS}_{2}\times S^2}=\mu_{\mathrm{AdS}_2}\,\rho\,\mathrm{d}\tau\,, \end{equation} \end{subequations where the first factor in the line element corresponds to the two-dimensional AdS$_2$ with $L_{\mathrm{AdS}_{2}}=r_+$ and $\mu_{\mathrm{AdS}_2}=r_+$. The near-horizon solution \eqref{NHsolution} solves \eqref{EOM} with $\psi=0$. It is a well know fact that \emph{neutral} massive scalar waves propagating on asymptotically AdS spacetimes possess a value for the mass squared below which AdS is unstable and negative energy solutions to the wave equation can be constructed. This is the so-called Breitenl\"ohner-Freedman (BF) bound \cite{Breitenlohner:1982bm,Breitenlohner:1982jf}. In particular, for a neutral massive scalar field in $\mathrm{AdS}_2$ this bound reads \begin{equation} m^2_{\mathrm{AdS}_2}L_{\mathrm{AdS}_{2}}^2\geq -\frac{1}{4}\,. \end{equation} However, a \emph{charged scalar} field not only gets contributions from bare mass terms in its equation of motion, but also from the gauge fields, since these can act as effective two-dimensional masses. It was first conjectured in \cite{Denef:2009tp}, and proved in certain cases in \cite{Dias:2010ma}, that the the \emph{full} extreme black hole is unstable with respect to charged perturbations if \begin{equation} m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2\equiv m^2_{\mathrm{AdS}_2}L_{\mathrm{AdS}_{2}}^2-q^2\mu_{\mathrm{AdS}_2}^2<-\frac{1}{4}\,. \end{equation} This is a sufficient, but not necessary condition in general. In the Appendix~\ref{sec:Appendix} we argue that for our case, this condition is also necessary (see, in particular, Sec.~\ref{sec:A3} and the discussion associated to Fig.~\ref{fig:extremalfrequency}). Note that an instability will only be physically acceptable if it is possible to keep $m^2$ positive from the perspective of the asymptotic flat ends, and yet have $m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2<-1/4$ in the near horizon $\mathrm{AdS}_{2}\times S^2$ region. It remains to compute $m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2$ in our particular theory. This is a rather standard procedure and we refer the reader to \cite{Dias:2010ma} for details\footnote{In short, we apply the coordinate transformation \eqref{NHcoord} to the linearized scalar equation \eqref{eq:linear}, set $\omega = \lambda \tilde{\omega}$ and keep only the leading terms in the $\lambda\to 0$ expansion while keeping $\tilde{\omega}$ fixed. Then, one compares the resulting equation to that of a charged, massive scalar living on a rigid AdS$_2$ with mass $m^2_{\mathrm{AdS}_2}$, charge $q$ and frequency $\tilde{\omega}$. From this, we can reconstruct $m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2$.}. In our case we find that the $\mathrm{AdS}_2$ BF bound is violated when \begin{eqnarray} \label{BFviolation} && m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2+\frac{1}{4}=\frac{1}{4}+(m^2-q^2)L^2_{\mathrm{AdS}_2}-2\alpha<0 \nonumber\\ && \qquad\qquad\qquad\qquad\qquad \Rightarrow \alpha>\frac{1}{2}\left[\frac{1}{4}+(m^2-q^2)L^2_{\mathrm{AdS}_2}\right]\,. \label{eq:bound} \end{eqnarray} When the background RN black hole is extremal, \emph{i.e.} when $\mu=1$, the bound state condition given in Eq.~(\ref{condition}) simplifies to $m>|q|$, so that the term on the right hand side of the above inequality is always positive. This is essentially the reason why we need the new coupling $\alpha$ if we want to make the RN black hole unstable. \subsection{The onset of hairy black holes}\label{sec:linearOnset} When \eqref{BFviolation} is satisfied, the extremal RN black hole is unstable, so the onset of the instability starts at some $Q<M$. This onset can be found by searching for static, finite energy perturbations, so we set $\omega = 0$ in \eqref{eq:linear}. Typically, the onset occurs when $q^2\mu^2 < m^2$. In this case we require that $\psi$ fall off as in \eqref{outpsiinf1} and \eqref{outpsiinf2}. It is convenient not to work directly with $\psi$, but instead define a new function $\hat{\psi}$ through the relation \begin{equation} \psi \equiv e^{-\sqrt{m^2-q^2\mu^2}\,r}\left(\frac{r_+}{r}\right)^{1+\eta}\hat{\psi}\,. \label{eq:off} \end{equation} Numerically, it is hard to work with infinite domains so we introduce a compact coordinate $y$ given by \begin{equation} r=\frac{r_+}{1-y}\,, \label{eq:ycoord} \end{equation} with the horizon located at $y=0$ and asymptotic infinity at $y=1$. The boundary conditions for $\hat{\psi}$ are then found by demanding $\hat{\psi}$ to have a regular Taylor expansion at $y=0$ and $y=1$. This procedure yields rather cumbersome Robin boundary conditions at $y=0$ and $y=1$ which we do not present here. If we now fix $\alpha$, $q/m$, and $m \, r_+$, the equation for $\hat{\psi}$ is a generalized eigenvalue equation in $\mu$. By computing these eigenvalues, we determine a curve in the space of RN black holes that marks the onset of the scalar hair. This is how the blue curve was generated in Fig.~\ref{fig:phasediag}. For $q^2 > m^2$, modes with $q^2\mu^2 = m^2$ can also branch off from RN. These are the beginning of the solutions that we discuss in the next section. To find them, we require that $\psi$ satisfy \eqref{outpsiinf3} asymptotically, and set \begin{equation} \psi = e^{-2 \sqrt{2}\,q \sqrt{\mu } \sqrt{Q-\mu M}\; \sqrt{r}}\left(\frac{r_+}{r}\right)^{3/4}\hat{\psi}\,. \end{equation} It is again convenient to introduce a compact coordinate \begin{equation} r = \frac{r_+}{y^4}\,, \end{equation} so that the higher order terms in $r^{-1/4}$ appearing in the expansion \eqref{outpsiinf3} now become integer powers of $y$. The boundary conditions for $\hat{\psi}$ can then be found by assuming that $\hat{\psi}$ has a regular Taylor series at $y=0$ (asymptotic infinity) and $y=1$ (black hole event horizon). They again turn out to be Robin boundary conditions. For fixed $\alpha$ and $q/m$, we regard the equation for $\hat{\psi}$ as an eigenvalue equation for $m \, r_+$, and solve for these eigenvalues. This is how the onset line was generated in Fig.~\ref{fig:qm}. In the Appendix~\ref{sec:Appendix} we show that these $\omega=0$ modes indeed mark a transition between stable and unstable perturbations (see, in particular, Sec.~\ref{sec:A2} and the discussions associated to Figs.~\ref{fig:example}-\ref{fig:3D}). \section{Maximal warm holes} We now discuss the full nonlinear solutions, and start with the case $q/m =1$.\footnote{From now on we assume charges are positive, but our results remain valid if $q$ and $Q$ are replaced by their absolute value.} So the condition \eqref{condition} is satisfied for $\mu \le 1$. A phase diagram of these solutions is shown in Fig.~\ref{fig:phasediag}, for coupling $\alpha =1$. The green region below the horizontal dashed line with $Q-M=0$ describes the standard RN solutions. The blue line denotes the onset of the scalar instability in RN and thus the merger between the RN and the hairy black holes. The latter exist in the brown shaded region, and the red line denotes the curve $\mu = 1$ which represents the largest charge on a black hole of mass $Mm \gtrsim 0.8$. Notice that the vertical axis is proportional to $Q-M$, so when this is positive, the hairy black holes exceed the usual extremal limit $Q=M$. It is not surprising that one can create black holes with $Q>M$ by adding matter with $q=m$, since one can also do this with neutral matter. The point is simply that the equation of motion (2.2b) with $q=0$ implies that the conserved charge is $\oint (1+4\alpha|\psi|^2)\star F$. So the electric charge $Q_{\mathcal{H}}$ on the black hole, defined as \begin{equation} Q_{\mathcal{H}} \equiv \frac{1}{4\pi} \oint_{\mathcal{B}}\star F\,, \end{equation} where $\mathcal{B}$ is the bifurcating Killing surface, will be less than the total charge $Q$ measured at infinity. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=0.85\textwidth]{phase_diagram.pdf} \caption{The phase diagram of solutions with $q/m =1$ and $\alpha=1$. Hairy black holes exist in the brown shaded region. The blue line denotes the onset of the scalar instability, and the red line denotes the curve with $\mu =1$. Note that these black holes can slightly exceed the usual extremal bound $Q=M$. } \label{fig:phasediag} \end{figure} As mentioned in the introduction, the extremal limit of a black hole with scalar hair is often singular, with vanishing horizon area. This is true for the black holes along the left boundary of the phase diagram. However despite having the largest charge for given mass, the black holes along the red line with $\mu =1 $ are nonsingular ($S\neq 0$), and remarkably have nonzero Hawking temperature. This is shown in Fig.~\ref{fig:constmu} which shows various physical properties of the $\mu =1$ black holes including their entropy $S = A/4$, temperature $T$, $F^2$ on the horizon, and charge on the black hole $Q_{\mathcal{H}}$. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1\textwidth]{constant_mu_1.pdf} \caption{Physical properties of the maximal warm holes along the red line in Fig.~\ref{fig:phasediag}. The plots show the entropy $S$, temperature $T$, $F^2$ on the horizon, and charge on the black hole, $Q_{\mathcal{H}}$, as a function of black hole mass. Note that despite having the maximum charge for a given mass, these black holes have nonzero Hawking temperature! The implications for Hawking evaporation are discussed in section \ref{sec:hawkingeva}.} \label{fig:constmu} \end{figure} The reason these black holes exist can be understood as follows. As one increases their charge (for fixed mass), the region near the horizon behaves as a typical black hole with scalar hair and wants to become singular. However, if the mass is large enough, before one reaches a singular horizon, the asymptotic condition \eqref{condition} is saturated. Since one cannot support scalar hair if this bound is violated, and there are no black holes without hair having $Q>M$, the extremal black hole has $T>0$. This is a new kind of extremal black hole that we are calling a maximal warm hole. Increasing the coupling $\alpha$ increases the charge that these maximal warm holes can carry. But it also increases the minimum mass required for the extremal black hole to be nonsingular. Both of these effects are shown in Fig.~\ref{fig:alpha} which shows the maximal warm holes in theories with $q=m$ and different couplings $\alpha$. These curves all have $\mu = 1$ and generalize the red curve in Fig.~\ref{fig:phasediag} to larger $\alpha$. The physical properties of these black holes are qualitatively similar to Fig.~\ref{fig:constmu}. In particular, they are all nonsingular with nonzero Hawking temperature. For example, the properties of the black holes when $\alpha = 100$ are shown in Fig.~\ref{fig:tenalpha}. Notice that increasing $\alpha$ increases the extremal temperature only slightly (top-right panel), but greatly decreases the fraction of the charge $Q_{\mathcal{H}}$ that is carried by the black hole (bottom-right panel). Most of the charge is now in the scalar hair, which is not surprising since we have increased the scalar instability. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=0.85\textwidth]{changing_alpha} \caption{Maximal warm holes in theories with $q=m$ and different couplings $\alpha$. These are all nonsingular ($S> 0$) black holes with maximum charge and nonzero $T$. As they approach the solution with minimum mass, $S\to 0$ and $T\to 0$.} \label{fig:alpha} \end{figure} \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=\textwidth]{alpha_100} \caption{Physical properties of the maximal warm holes with $q=m$ and $\alpha = 100$. Comparing with the $\alpha=1$ case of Fig.~\ref{fig:constmu}, we see that increasing $\alpha$ increases the extremal temperature only slightly (top-right panel), but decreases substantially the fraction of the charge carried by the black hole (bottom-right panel).} \label{fig:tenalpha} \end{figure} Next we return to $\alpha = 1$, and consider the effects of changing $q/m$. The existence of maximal warm holes turns out to be very sensitive to this parameter. The smooth black holes with maximum charge for given mass again have the maximum possible potential difference $\mu$ allowed by \eqref{condition}. They are shown in Fig.~\ref{fig:qm}, and all have $T > 0$ (except the leftmost point that approaches $S\to 0$ and $T\to 0$). \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1\textwidth]{changing_q_m} \caption{Black holes with $q\mu = m$ as a function of $q/m$, with $\alpha=1$. When $Q>M$, these are maximal warm holes. The green shaded region denotes RN black holes, and the bottom blue curve denotes the onset of their instability when $q\mu = m$. For masses outside the range of the maximal warm holes, the extremal black hole is singular.} \label{fig:qm} \end{figure} This figure has several interesting features. First, black holes with $q/m > 1$ scalar hair only exist when the black hole is small enough. This can be understood as follows. If we increase $q/m > 1$, the maximum value of $\mu$ is reduced to $\mu \le m/q < 1$. Since the maximum allowed $\mu$ is reduced, the maximum electric field on the horizon is also reduced. But for the RN black hole to become unstable, we need a large enough electric field. Since the electric field increases as one decreases the size of the black hole, only small black holes can have this kind of hair. Second, the mass where maximal warm holes become singular rapidly decreases to zero as $q/m$ increases, and for $q/m \gtrsim 1.1$, maximal warm holes can have arbitrarily small mass. This is also easy to understand: increasing $q/m$ decreases the maximum allowed $\mu$, so this maximum is reached sooner, before the horizon becomes singular. Third, the maximum charge the hairy black hole can carry also decreases as $q/m$ increases, and for $q/m \gtrsim 1.3$, it falls below $Q = M$. At this point, the maximum charge black hole is the usual RN solution with no scalar hair. However, when they exist, the hairy black holes always have larger entropy than a RN solution with the same $M$ and $Q$. As one increases $Q$ for fixed $M$, the RN solution becomes unstable as before, but if one continues to increase $Q$, one reaches a point where the hair no longer exists and the solution returns to RN. Next we consider decreasing $q / m < 1$. This increases the maximum allowed $\mu$, making it easier for the horizon to become singular before reaching this limit. So the minimum mass required for a maximal warm hole increases, as shown in Fig.~\ref{fig:qm} for the case $q/m=0.994$. There is also a maximum mass, but unlike the case $q/m > 1$, it is not because they no longer satisfy $Q > M$. Instead, it is because a solution with $m = q\mu$ requires $Q > \mu M$; see \eqref{outpsiinf3}. This constraint was not an issue when $\mu \le 1$, but since we have increased $\mu$, this constraint is violated for large $M$ and the extremal limit again becomes singular. The finite range of masses for which the black hole has a smooth extremal limit rapidly shrinks as we decrease $q/m$ and vanishes completely for $q/m \lesssim 0.99$. When the maximal warm holes only exist for large enough masses (as in the top three curves of Fig.~\ref{fig:qm}), the singular extremal black holes lie along curves that extend from the maximal warm hole with smallest mass to $Q=M=0$. For $q/m<1$, they also extend from the maximal warm hole with largest mass to arbitrarily large $M$. For smaller scalar field charges, \emph{i.e.} $q/m \lesssim 0.99$, there are no nonsingular extremal black holes. In a phase diagram like Fig.~\ref{fig:qm}, hairy black holes with this scalar charge are bounded from above by a single curve that describes singular extremal black holes that extends from $Q=M=0$ to arbitrarily large $Mm$. We might then ask what happens \emph{e.g.} to a nonextremal hairy black hole family with fixed $Mm$ as it approaches the singular extremal curve. The evolution of the physical properties of such a black hole with $q/m = 1/2$ and $Mm =1$ as it approaches extremality are shown in Fig.~\ref{fig:halfq2}. (Other choices of mass $Mm$ and small $q/m$ are similar.) One sees that both the black hole entropy and temperature go to zero in the extremal limit (largest $(Q-M)m$ solution). The charge on the black hole also vanishes in this limit, since any residual charge would produce a diverging Maxwell field increasing the scalar instability. Note that even though the condition \eqref{condition} allows $\mu \le 2$ in this case, the solution becomes singular when $\mu$ is only slightly larger than one. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1\textwidth]{halfq2.pdf} \caption{Physical properties of black holes approaching extremality, with $q = m/2$, $\alpha = 1$, and $Mm = 1$. This is a representative example of $q/m \lesssim 0.99$ solutions where the maximum charge hairy black holes always approach a singular extremal solution.} \label{fig:halfq2} \end{figure} As illustrated in Fig.~\ref{fig:qm}, the only case which allows maximal warm holes to have arbitrarily large mass is the original one we studied with $q=m$ (see Fig.~\ref{fig:phasediag} for $\alpha=1$). The reason for this is that, from \eqref{condition}, the maximum allowed $\mu$ is then $\mu = 1$ which is just the potential for an extremal RN black hole of any mass. This has two consequences. First, since $q=m$ is on the threshold of charged superradiance for extremal RN, any extra source of instability (such as the scalar-Maxwell coupling we added) will cause the scalar field to condense (see also Eq.~(\ref{eq:bound})). One does not need the electric field to be ``large enough" in this case. Second, once the black hole has $Q> M$, the second constraint ($Q > \mu M$) that follows from \eqref{outpsiinf3} is satisfied for all $M$. So there is no upper limit on the mass. \subsection{Hawking evaporation\label{sec:hawkingeva}} Typically, if a theory does not have particles with $q > m$, a near extremal black hole will Hawking radiate neutral massless particles such as gravitons and become extremal. Since an extremal RN black hole has zero Hawking temperature, it is a stable endpoint for this process. For some dilatonic black holes with singular extremal limits, the Hawking temperature does not go to zero at extremality \cite{Garfinkle:1990qj}. But in those cases, it has been shown that evaporation stops because the effective potential in the scalar wave equation does not vanish on the horizon as usual in the extremal limit \cite{Holzhey:1991bx}. Since the horizon is at $r_\star = -\infty$ in the usual ``tortoise" coordinate in which the wave equation is simple, this produces an infinite potential barrier allowing no particles to escape. Since maximal warm holes are smooth black holes with maximal charge and nonzero temperature, we need to find another scenario for the endpoint of their Hawking evaporation. We will not perform a complete analysis including the potentials outside the horizon. Instead, we give a simple plausible explanation for why these black holes will \emph{not} form naked singularities, despite the fact that they have maximal charge and nonzero temperature. Consider first the case $\alpha =1$ and $q=m$. Since the temperature of the hairy black holes is low, charged particles are only created by the Schwinger mechanism with a rate proportional to $e^{-\pi m^2/qE}$, while neutral photons and gravitons are produced thermally. Since the photons acquire a mass inside the charged condensate, they will be surpressed compared to gravitons. Nevertheless, since charged particle emission appears exponentially suppressed, one might expect that in the late stages of Hawking evaporation, the black hole will lose mass but not charge. Comparing the scales on the horizontal and vertical axes in Fig.~\ref{fig:phasediag} this would correspond to an essentially vertical line in the figure. So if $Mm$ is large enough, Hawking evaporation would appear to end on the red line. But since these black holes have nonzero temperature, they would appear to keep radiating. This is the puzzle we want to resolve. The resolution is that the rate of charged particle production is not actually exponentially suppressed, since all the factors in the Schwinger exponent are order one: we have assumed $q = m$, and Fig.~\ref{fig:constmu} shows that $E/m \sim O(1)$. In contrast, the temperature is $T \sim 10^{-3}$ so the rate of thermal radiation would be proportional to $T^4 \sim 10^{-12}$ and is highly suppressed. Thus the late stages of Hawking radiation will be dominated by the production of $q = m $ particles which should keep $Q-M$ approximately constant. As a result, Hawking radiation causes the black hole to evolve along a horizontal line in Fig.~\ref{fig:phasediag}, rather than a vertical line. This ends in a singular solution as expected. The physical quantities evolve as shown in Fig.~\ref{fig:evap}. Note that the charge on the black hole goes to zero linearly with the mass, as expected from the production of $q=m$ particles. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1\textwidth]{evaporation} \caption{Physical properties of the hairy black hole with $q=m$ and $\alpha = 1$ are shown along a line of constant $(Q-M)m = 5\times 10^{-3}$. In the late stages of Hawking evaporation, the black hole is expected to approximately follow such a line with decreasing $M$.} \label{fig:evap} \end{figure} Now suppose $q \ne m$. If we increase $q/m$ above 1.1, we have seen (see discussion of Fig.~\ref{fig:qm}) that there are no singular extremal black holes. But Hawking radiation of these hairy black holes is likely to again be dominated by charged particle emission which will decrease the black hole charge more than its mass. So the black hole will evolve away from extremality. On the other hand, if we decrease $q/m$ below $.99$ even charged particle emission will increase $Q-M$, so evaporating black holes will always follow an essentially vertical line in a phase diagram like Fig.~\ref{fig:phasediag}. But we have seen (Fig.~\ref{fig:qm}) that in this case the maximal warm holes disappear and all extremal limits are singular. Thus when $\alpha \approx 1$, the natural endpoint of Hawking evaporation is either a singular extremal solution or possibly a neutral black hole that evaporates completely. The physics of the singular endpoint will of course require a complete quantum theory of gravity. However, the story changes when we increase $\alpha$, since this decreases the electric field on the horizon and increases the black hole temperature. Eventually (certainly before $\alpha = 100$) the electric field becomes too small to create charged particles, and Hawking radiation is dominated by thermal gravitons. Thus we are again faced with the question of what happens when these black holes evaporate past extremality. Since the black hole is evaporating but not loosing charge, the horizon area will shrink and the potential difference $\mu$ between the horizon and infinity should increase. But $\mu$ was already at the maximum value that allows static scalar hair. So the evaporation past extremality will cause the scalar hair to become unbound and start radiating to infinity. At this point there are a couple possible outcomes depending on how much scalar field is radiated away. If the scalar field only radiates enough to recover $\mu = 1$, the evolution will essentially follow the $\mu = 1$ curve as $M$ decreases. At the other extreme, all the hair could classically radiate away leaving a RN black hole. (This option is only possible if the resulting black hole is classically stable.) Finally, it is possible that a fraction of the hair is radiated leaving a hairy black hole with $\mu < 1$. We will leave it to future investigations to determine which of these possibilities the black hole actually follows. But notice that in no case does the black hole immediately turn into a naked singularity. \section{Solitons} Unlike analogous theories with neutral scalars \cite{Herdeiro:2019oqp}, the theory we are considering also admits soliton solutions, \emph{i.e.} regular horizonless solutions. For completeness we describe them in this section. We will see that their mass and charge satisfy $Q^2 < M^2$ so they coexist with RN black holes. But unlike other systems with scalar condensation, these solitons are not the zero horizon radius limit of the hairy black holes studied in the previous section. Since solitons have no horizon, we have to change our ansatz (\ref{eq:ansatzout}) which was tailored to enforce a zero of $p(r)$. In this section, we thus consider the gravitational ansatz \begin{equation} \mathrm{d}s^2=-f(r)\,\mathrm{d}t^2+\frac{\mathrm{d}r^2}{g(r)}+r^2(\mathrm{d}\theta^2 +\sin^2\theta\ \mathrm{d}\phi^2 )\,, \end{equation} for the soliton with $r\in(0,+\infty)$. As before, for the Maxwell and scalar fields we take \begin{equation} A=\Phi(r)\,\mathrm{d}t \quad \text{and}\quad \psi = \psi^\dagger = \psi(r)\quad \,. \end{equation} The equations of motion read \begin{subequations} \begin{align} & \frac{1}{r^2}\sqrt{\frac{g}{f}}\left(\sqrt{f\,g}\,r^2\psi^\prime\right)^\prime+\left(\frac{q^2\,\Phi^2+2\,\alpha\,g\,{\Phi^\prime}^2}{f}-m^2\right)\psi=0\,,\label{eq:firstsoliton} \\ & \frac{1}{r^2}\sqrt{\frac{g}{f}}\left[\sqrt{\frac{g}{f}}\left(1+4\,\alpha\,\psi^2\right)r^2\Phi^\prime\right]^\prime-\frac{2\,q^2\,\psi^2}{f}\Phi=0\,,\label{eq:secondsoliton} \\ & \frac{1}{r^2}\left(r\,g\right)^\prime-\frac{1}{r^2}+\frac{g}{f}(1+4\,\alpha\,\psi^2){\Phi^\prime}^2+2\,q^2\,\psi^2\frac{\Phi^2}{f}+2m^2\psi^2+2\,g\,{\psi^\prime}^2=0\,,\label{eq:thirdsoliton} \\ &\frac{g}{r^2\,f}\left(r\,f\right)^\prime-\frac{1}{r^2}+\frac{g}{f}(1+4\,\alpha\,\psi^2){\Phi^\prime}^2-2\,q^2\,\psi^2\frac{\Phi^2}{f}+2m^2\psi^2-2\,g\,{\psi^\prime}^2=0\,.\label{eq:lastsoliton} \end{align} \end{subequations} We can now use \eqref{eq:lastsoliton} to express $g$ as a function of $f$, $\psi$, $\Phi$ and their first derivatives: \begin{equation} g = \frac{2 r^2 \psi^2\left(q^2\Phi^2-m^2f\right)+f}{\left(r\,f\right)^\prime+(1+4\,\alpha\,\psi^2)r^2{\Phi^\prime}^2-2 r^2\,f\,{\psi^\prime}^2}\,. \end{equation} This expression for $g$ can now be plugged in (\ref{eq:firstsoliton})-(\ref{eq:lastsoliton}) to reduce the problem to studying a system of three second order coupled ordinary differential equations for $f$, $\Phi$ and $\psi$. At the spacetime origin, located at $r=0$, we impose regularity, which amounts to requiring \begin{equation} f^\prime(0)=\psi^\prime(0)=\Phi^\prime(0)=0\,. \end{equation} (Note in particular that these conditions imply $g(0)-1=g^\prime(0)=0$, as required.) At the asymptotic boundary we demand \begin{equation} \lim_{r\to+\infty}\psi = 0\,,\quad\lim_{r\to+\infty}f=1\quad \text{and}\quad \lim_{r\to+\infty}\Phi=\mu\,. \end{equation} We now introduce a compact radial coordinate $y$ defined as \begin{equation} y=\frac{m\,r}{1+m\,r} \end{equation} so that $y\in(0,1)$ with $y=0$ being the regular center and $y=1$ the spatial infinity. The moduli space of solutions is then three-dimensional depending on $\{\mu,\alpha,q/m\}$ or alternatively $\{m M,\alpha,q/m\}$. However, as we shall shortly see, these parameters do not uniquely parametrize a soliton. Therefore, we will use instead the value of the scalar field at the origin, $\psi_0\equiv\psi(0)$, to move along the moduli space and determine $\{m M,m\,Q\}$ at fixed $\{\alpha,q/m\}$. It turns out that $\psi_0$ is one-to-one with the soliton solutions, at fixed $\{\alpha,q/m\}$. In Fig.~\ref{fig:solitons} we plot the chemical potential $\mu$ as a function of $\psi_0$ for fixed $\alpha=1$ and for several values of $q/m$. The behaviour at large $\psi_0$ is consistent with the following functional form \begin{equation} \mu = \mu_{\infty}+\hat{\mu}_{\infty}e^{-\alpha\,\psi_0^2}\sin\left(\Omega_{\infty}\,\psi_0^2+\gamma_{\infty}\right)\,. \end{equation} For instance, for $q/m=1$ we find $\mu_{\infty}\approx0.9736$, $\hat{\mu}_{\infty}\approx0.0274$, $\Omega_{\infty}\approx 4.061$ and $\gamma_{\infty}\approx-2.70745$. The above asymptotic expression was inspired by the work developed in \cite{Bhattacharyya:2010yg}, where a class of supersymmetric solitonic solutions was studied in great detail. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1.0\textwidth]{wiggles_soliton} \caption{Chemical potential $\mu$ of the solitons as a function of $\psi_0$ at fixed $\alpha=1$. The legend shows curves with different values of $q/m$.} \label{fig:solitons} \end{figure} Each of the oscillations in Fig.~\ref{fig:solitons} is mapped into characteristic swallowtail curves in the corresponding phase diagram of Fig.~\ref{fig:moduli_solitons}. This Fig.~\ref{fig:moduli_solitons} has $\alpha=1$ and serves to illustrate that the properties of solitonic solutions in this theory are somehow intricate. For any value of $0<|q|/m<1$ we find that solitons only exist in a window of masses $ M\in(M_{\min},M_{\max})$. For each $q/m$ curve, $M_{\min}$ in Fig.~\ref{fig:moduli_solitons} corresponds to approach $\psi_0\to 0$ in Fig.~\ref{fig:solitons}. As we decrease $|q|/m$ towards zero, $M_{\min}$ appears to approach $0$ and the curve becomes increasingly steep (see for instance the curve with $|q|/m=5\times 10^{-3}$ in Fig.~\ref{fig:moduli_solitons}). On the other hand, for each $q/m$, $M_{\max}$ in Fig.~\ref{fig:moduli_solitons} corresponds to the first minimum in the corresponding curve of Fig.~\ref{fig:solitons}. The solution with $q=m$ is special. In this case when we let $\psi_0\to0$ we approach $M\to+\infty$ and the line $Q-M\to 0$ from below. But there is still a minimum value of the mass $M_{\min}$, which is given by the corresponding minimum in Fig.~\ref{fig:solitons}. Finally, we also found solitons with $|q|/m>1$. In this case, solitons again exist in a window $M\in(M_{\min}, M_{\max})$, with the window shrinking as we increase $|q|/m$ and disappearing altogether at a critical value of $|q|=q_c$. For $\alpha=1$, we find that $q_c\simeq 1.05\,m$. For $|q|>m$, we also find that $\psi_0$ never approaches zero, and is instead cut off by a value $\psi_0^c$ at which point the solution becomes singular since the Kretschmann curvature scalar at the origin grows unbounded. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1.0\textwidth]{solitons_moduli_space} \caption{Moduli space of solitonic solutions for fixed $\alpha=1$, and for several values of $q/m$ labelled on the left. The green region indicates where RN black holes exist. } \label{fig:moduli_solitons} \end{figure} By comparing the mass and charge of the solitons in Fig.~\ref{fig:moduli_solitons} with the mass and charge of the hairy black holes, one finds that they do not overlap. So the solitons cannot be viewed as the limit of a hairy black holes as $r_+\rightarrow 0$. \section {Discussion} We have shown that just by adding a simple coupling between a charged scalar field and a Maxwell field, one can change some basic properties of four-dimensional, asymptotically flat, extremal black holes. In particular, for a range of parameters the black hole with maximum charge (for given mass) has a smooth horizon with nonzero Hawking temperature. We have called these objects maximal warm holes. The existence of maximal warm holes raises a number of questions. We have (partially) addressed perhaps the most obvious one concerning the endpoint of Hawking evaporation. But in addition to gaining a more complete understanding of this process, there are a number of other questions which we leave for future investigation. These include the following: \begin{enumerate} \item What characterizes the class of theories in which maximal warm holes occur? \item Do maximal warm holes have implications for black hole physics besides Hawking radiation? \item Can maximal warm holes exist in more than four spacetime dimensions? \item Are there asymptotically anti-de Sitter examples of maximal warm holes? If so, what are the implications for the AdS/CFT correspondence? They do not exist in the simplest models of holographic superconductors \cite{Horowitz:2009ij}, but they might exist in theories with additional interactions. \item Are there asymptotically de Sitter examples of maximal warm holes? This seems unlikely since one would need to ensure that there is no flux across both the cosmological and event horizons. \item How does the addition of rotation affect maximal warm holes? It is known that Kerr black holes can develop massive scalar hair near extremality even without additional interactions \cite{Herdeiro:2014goa,Herdeiro:2015gia}. Can extremal neutral black holes have nonzero temperature? \end{enumerate} \noindent We hope to report on some of these questions in the future. \subsection*{Acknowledgments} O.J.C.D. acknowledges financial support from the STFC Grants ST/P000711/1 and ST/T000775/1. The work of G.~H. was supported in part by NSF Grant PHY-2107939. J.~E.~S has been partially supported by STFC consolidated grants ST/P000681/1, ST/T000694/1. The numerical component of this study was partially carried out using the computational facilities of the Fawcett High Performance Computing system at the Faculty of Mathematics, University of Cambridge, funded by STFC consolidated grants ST/P000681/1, ST/T000694/1 and ST/P000673/1. The authors further acknowledge the use of the IRIDIS High Performance Computing Facility, and associated support services at the University of Southampton, in the completion of this work.
\section{Introduction} \input{Parts/Introduction11} \section{Data}\label{Data} \input{Parts/Data11} \section{Our Stochastic Block Models}\label{Model} \input{Parts/Model11} \section{Computations}\label{Computation} \input{Parts/Computation11} \section{Results}\label{Results} \input{Parts/Results11} \section{Conclusions and Discussion}\label{Discussion} \input{Parts/Discussion11} \section*{Appendix}\label{Appendix} \input{Parts/Appendix11} \subsection{Preliminary Data Analysis} Our data show clear patterns of bicycle-sharing usage by time of day and day of the week, including heavier use during commuting hours. In Figure \ref{fig:byhour}, we illustrate usage patterns by plotting the number of trips by starting hour for each city. In New York City and San Francisco, activity spikes during weekday morning and evening commuting hours, whereas weekend trips peak in early afternoon. Similar patterns were observed previously for bicycle-sharing systems in New York and many other cities \cite{austwick2013structure, etienne2014velib, zhu18multi, xie2018examining, aqil14tp19}. By contrast, in Los Angeles, the number of trips by hour has a mid-day peak on weekdays that is nearly as strong as the morning peak. In New York City and Los Angeles, about one quarter of the trips occur on weekends, but only about eight percent of the trips in San Francisco occur then. This suggests that the San Francisco network covers more commercial areas and fewer residential areas. For individual stations, the morning and evening peaks for in-degree and out-degree are often unbalanced: one direction has a stronger morning peak, and the opposite direction has a stronger evening peak. This is a key motivation for our time-dependent identification of stations into ``home" and ``work" types. \begin{figure}[H] \flushleft \includegraphics[scale=.15]{IMG/trip_count_by_hour.png} \caption{Total trips by hour for weekdays, weekends, and overall. Hour $0$ designates midnight. } \label{fig:byhour} \end{figure} To further explore the imbalance between morning and evening activity in each network, we calculate the singular-value decomposition (SVD) of the matrices of in-degree and out-degree for each station by hour. To be explicit, entry $i,j$ of the matrix of in-degrees is equal to the total number of trips that arrive at station $i$ in hour $j$, and the we constructed the matrix of out-degrees analogously for departing trips. We show results for New York City in Figure \ref{fig:ny_svd} and for Los Angeles (see Figure \ref{fig:la_svd}) and San Francisco (see Figure \ref{fig:sf_svd}) in the appendix. The first two principle components either strengthen both observed peaks or weaken one peak while strengthening the other. The first two singular vectors explain at least $88 \%$ and as much as $97 \%$ percent of the variation in the corresponding matrix, supporting the importance of peak morning and evening commutes for classifying stations. Another characteristic of our data that we incorporated into the design of our models is the strong positive Pearson correlation coefficient between the total (summed over all time periods) in-degree and out-degree of each station: $0.99$ in New York, $0.98$ in San Francisco, and $0.91$ in Los Angeles. This is an intrinsic feature of docked bicycle-sharing systems, because a bicycle must be returned to a station before a new trip with it can begin. However, the use of trucks to redistribute bicycles in a system can loosen this requirement. \begin{figure}[H] \includegraphics[scale=.15]{IMG/ny_svd.png} \caption{The first two singular vectors from the New York City bicycle-sharing network. }\label{fig:ny_svd} \end{figure} \subsection{Inference using the TDMM-SBM} Let $\mathbf{\omega}=\{\wt{ght}\}$ be the $K\times K\times T$ array that represents the inter-block connectivity parameters, and let $\mathbf{C}=\{C_{ig}\}$ be the matrix that represents the collection of node-strength parameters. We estimate the model parameters using a two-step gradient descent.\footnote{Although we are maximizing a function and thus technically performing gradient ascent, we refer to this class of method by its more common monicker of ``gradient descent''.} First, we move in the direction of the gradient with respect to $\mathbf{\omega}$ and update the inter-block connectivity parameters. Second, we move along the direction of the gradient with respect to $\mathbf{C}$ and update the node-strength parameters. In the description of our algorithm, we let $\omega^{(n)}$ and $\mathbf{C}^{(n)}$, respectively, denote the $n^{\text{th}}$ update of the inter-block connectivity and node-strength parameters. We initialize the algorithm with random values $\mathbf{\omega}^{(0)}$ and $\mathbf{C}^{(0)}$ with components distributed according to $\exp(X)$, where $X$ is a Gaussian random variable with mean $0$ and variance $1$. (That is, we draw random values from a log-normal distribution.) We denote the mean activity along edge $(i,j)$ with initial parameters $\mathbf{\omega}^{(0)}$ and $\mathbf{C}^{(0)}$ by $\mu_{ijt}^{(0)}$. We scale the parameters so that the TDMM-SBM at the starting point of the optimization has the same mean number of trips as the data. Specifically, we multiply the inter-block connectivities $\omega^{(0)}_{ght}$ by $\left(\sum_{t}\omega^{(0)}_{ght}\right)^{-1}\left(\sum_{i,j,t} \At{ijt}\right)/K^2$ and normalize $\mathbf{C}^{0}$ to satisfy the constraint $\sum_i C_{ig}^{(0)}=1$ for each block $g$. This results in $\sum_{ijt} \mu_{ijt}^{(0)}=\sum_{i,j,t} \sum_{g,h} C_{ig}^{(0)}\wt{ght}^{(0)} C_{jh}^{(0)}=\sum_{g,h,t} \wt{ght}^{(0)}=\sum_{g,h}\left(\sum_{i,j,t} \At{ijt}\right)/K^2=\sum_{i,j,t} \At{ijt}$, the total number of edges in the network. Without this scaling, the initial parameters would have very small magnitudes, such that the mean total number of trips from the TDMM-SBM with these initial parameters is much smaller than the total number of trips in the data. Therefore, it is very likely that the randomly chosen initial parameters will have very small log-likelihoods relative to the MLE log likelihood. Early gradient-descent steps might then dramatically increase the magnitude of the parameters while affecting the relative sizes of individual parameters in unpredictable ways. To ensure that our estimated parameters are nonnegative, we use the following change of variables: $\exp(\tilde{\omega}^{(n)})=\omega^{(n)}$ and $\exp(\tilde{\mathbf{C}}^{(n)})=\mathbf{C}^{(n)}$. We can then write the gradient descent as \begin{align*} \tilde{\mathbf{\omega}}^{(n+1)} &= \tilde{\mathbf{\omega}}^{(n)}+\eta^{(n)}\nabla_{\mathbf{\omega}} \ell(\mathbf{C}^{ (n)},\mathbf{\omega}^{ (n)})\exp(\tilde{\mathbf{\omega}}^{ (n)})\,, \\ \tilde{\mathbf{C}}^{(n+1)} &= \tilde{\mathbf{C}}^{(n)}+h^{(n)}\nabla_{\mathbf{C}}\ell(\mathbf{C}^{(n)},\mathbf{\omega}^{(n+1)})\exp(\tilde{\mathbf{C}}^{(n)})\,, \end{align*} where $h^{(n)}$ and $\eta^{(n)}$ are small positive numbers. From the definitions of $\tilde{C}^{(n)}$ and $\tilde{\omega}^{(n)}$, we write \begin{align*} \mathbf{\omega}^{(n+1)} &= \mathbf{\omega}^{(n)}\exp(\eta^{(n)}\nabla_{\mathbf{\omega}} \ell(\mathbf{C}^{(n)},\mathbf{\omega}^{(n)}) \mathbf{\omega}^{(n)})\\ \mathbf{C}^{(n+1)} &= \mathbf{C}^{(n)}\exp(h^{(n)}\nabla_{\mathbf{C}} \ell(\mathbf{C}^{(n)},\mathbf{\omega}^{(n+1)}) \mathbf{C}^{(n)})\,. \end{align*} We take the exponential of a vector to be the result of applying the exponential to each component of the vector. Let $h^{(0)}=\eta^{(0)}=\Delta>0$ be the fixed initial step size. For our application, we choose $\Delta =10^{-4}$. We generate two candidate updates for $\mathbf{\omega}^{(n+1)}$ for the first step in our algorithm using $h^{(n+1)}=1.2\,h^{(n)}$ and $h^{(n+1)}=.8\,h^{(n)}$, and we choose the one that gives a $\mathbf{\omega}^{(n+1)}$ that yields the larger value of $\ell(\mathbf{C}^{(n)},\mathbf{\omega}^{(n+1)})$. Similarly, we choose the one of $\eta^{(n+1)}=1.2\,\eta^{(n)}$ or $\eta^{(n+1)}=.8\,\eta^{(n)}$ that gives the $\mathbf{C}^{(n+1)}$ with the larger $\ell(\mathbf{C}^{(n+1)},\mathbf{\omega}^{(n+1)})$. We compute the gradient of the log-likelihood function $\ell$ using the chain rule. Recall that we compute the log-likelihood in two parts. One part is the computation of the mean $\mt{ijt}=\sum_{g,h}^K C_{ig}\omega_{gh}^t C_{jh}$ of the number of trips from node $i$ to node $j$ at time $t$. We then insert the expression for the mean into the function $\ell=\sum_t\sum_{i,j}\left(\At{ijt}\log(\mt{ijt})-\mt{ijt}\right)$. We compute the derivative of $\ell$ with respect to $\mt{ijt}$ to obtain \begin{align*} \frac{\partial \ell}{\partial \mt{ijt}} = \frac{\At{ijt}}{\mt{ijt}} - 1\,. \end{align*} The derivative of $\mt{ijt}$ with respect to $C_{kg}$ is \begin{align*} \frac{\partial \mt{ijt}}{\partial C_{kg}} =& \delta_{ki}\sum_{h} \wt{ght} C_{jh} + \delta_{kj} \sum_{h}C_{ih} \wt{hgt} \,, \end{align*} and the derivative of $\mt{ijt}$ with respect to $\omega_{ght}$ is \begin{align*} \frac{\partial \mt{ijt}}{\partial\wt{ght}}=C_{ig}C_{jh}\,. \end{align*} Here $\delta_{ab}$ is the kronecker delta (i.e. $\delta_{ab}=1$ if $a=b$ and $\delta_{ab}=0$ if $a\not=b$). Using the above calculations, we see that the derivatives of $\ell$ with respect to $C_{kg}$ and $\omega_{ght}$ are \begin{align*} \frac{\partial \ell} {\partial C_{kg}} &=\sum_{t=0}^{23}\sum_{i,j\in\mathcal{N}}\frac{\partial \ell}{\partial \mt{ijt}}\frac{\partial \mt{ijt}}{\partial C_{kg}}\\ &= \sum_{t=0}^{23} \left(\sum_{j\in\mathcal{N}}\left(\frac{\At{kjt}}{\mt{kjt}}-1\right) \sum_{h}\wt{ght}C_{jh}+ \sum_{i\in\mathcal{N}}\left(\frac{\At{ikt}}{\mt{ikt}} -1\right) \sum_{h}C_{ih}\wt{hgt}\right) \,, \\ \frac{\partial\ell}{\partial \wt{ght}} &= \sum_{i,j\in\mathcal{N}}\frac{\partial \ell}{\partial \mt{ijt}}\frac{\partial \mt{ijt}}{\partial \wt{ght}} \\&= \sum_{i,j\in\mathcal{N}} \left(\frac{\At{ijt}}{ \mt{ijt}}-1\right) C_{ig} C_{jh}\,. \end{align*} We run the gradient descent until four significant digits of the base-10 floating-point representation of the log-likelihood \eqref{logroll} does not change for 600 steps in a row. For the networks that we examine, this usually takes between 600 and 5000 iterations, with models with more blocks generally needing more iterations to reach this stopping criterion. Because of the non-convexity of the log-likelihood function \eqref{logroll}, we are not guaranteed to reach a global optimum. Most of the time, our method converges to an interesting local optimum (which may also be a global optimum), revealing the existence of functional roles (see Section \ref{Results}). Our results produce recognizable inter-block connectivity parameters $\omega_{ght}$ (e.g., home--work commute patterns and leisure-usage patterns) and the parameters $C_{ig}$ indicate known spatial divisions of the stations (e.g., residential versus commercial districts). In some cases, however, our algorithm converges to an uninteresting local optimum; one example is when the block-assignment parameters $C_{ig}$ for each station appear as if they are assigned independently at random. To improve our results, we run our algorithm repeatedly (specifically, $10$ times for each network) and store the estimate with the largest likelihood. {We compare the parameters that we obtain from gradient descent versus those that we obtain by running a Hamiltonian Monte Carlo (HMC) sampling method in a Bayesian framework with weak priors (implemented in \textsf{Stan} \cite{stan2017jss, rstan2018}). The log-likelihoods that result from our gradient-descent method are as good or better than those that we obtain with an HMC method. The HMC method is more computationally and memory intensive than our gradient-descent method, although it may be preferable in applications in which one has meaningful prior information about parameters.} Improving our optimization method and investigating trade-offs between accuracy and efficiency are worthwhile topics for future work. For instance, it will likely be beneficial to adapt optimization methods for related time-dependent SBMs \cite{xing2010dmsbm, yang2011dsbm, ho2011dmsbm, xu2014dsbm, matias2017semi} to the optimization of our model. The supplementary material has our \textsf{Python} implementation of our gradient-descent method, as well as code for our inference in \textsf{R} using \textsf{Stan}. \subsection{Inference using the TDD-SBM} To fit our TDD-SBM, we use a Kernighan--Lin-type (KL-type) algorithm \cite{kernighanlin70} that we base on the one in \cite{karrer2011stochastic}. Given a number $K$ of blocks, we initialize the algorithm by assigning each node to a block uniformly at random, so each node has a probability of $\frac{1}{K}$ of being assigned to a given block. The algorithm then calculates the best possible block reassignment for any single node with respect to the associated change in log-likelihood (either the largest increase or the smallest decrease). Subsequently, we make the best reassignment for a different node, again chosen uniformly at random, with respect to change in log-likelihood. The algorithm cycles through all nodes; a key feature of the algorithm is that a node that has been reassigned cannot move again until all other nodes have moved. One set of sequential reassignment of all nodes constitutes one step of the algorithm. The algorithm then searches all of the states (with respect to block membership of nodes) that have occurred during the step, and it selects the state with the maximum log-likelihood of any during the step. This state is the starting point for the next step of the algorithm. A single run of the algorithm is completed when a step does not increase the log-likelihood beyond a preset tolerance value near $0$. (In practice, we use $1 \times 10^{-4}$.) To find block assignments that are as good as possible, we do many runs of the algorithm for each network. {In our examples, we use $50$ runs per network. We initialize each run randomly, as described above. Another key feature of the algorithm is that changes in the block membership of nodes affect only the terms of the objective function that involve the origin and destination blocks of the change. (We see in \eqref{tdd-sbm-objective} that the objective is a sum over block-pair terms over $T$ time slices.) Consequently, we do not need to recalculate the full objective function at each step. We implement our KL-type algorithm for TDD-SBM in \textsf{R} using \pkg{Rcpp} \cite{r18, eddelbuettel11rcpp}. The back-end calculations are in \textsf{c++} for speed, and we return results in \textsf{R} to enable visualization and other analyses. Our implementation can also estimate time-independent SBMs, including directed and/or degree-corrected ones. This facilitates comparison of the results of inference from time-dependent and time-independent SBMs. See \url{https://github.com/jcarlen/sbm} for our \textsf{R} package \textbf{\texttt{sbmt}} for parameter estimation for the TDD-SBM. We include code (which uses the \textbf{\texttt{sbmt}} package) in supplementary material to replicate our examples in Section \ref{Results}. \subsection{Time-Dependent Mixed-Membership Stochastic Block Model (TDMM-SBM)}\label{TDMM-SBM} We now describe the framework for our mixed-membership SBM. Let $i,j \in \mathcal{N}$ (with $|\mathcal{N}| = N$) be nodes, which represent bicycle stations; let $g,h \in \mathcal{K}$ (with $|\mathcal{K}| = K$) be blocks. Our data is a three-dimensional array of size $(N,N,T)$, where $T$ is the number of time slices (i.e., time layers). We consider hourly groupings of the trips based on their starting times. The quantity $\At{ijt}$ is the observed number of trips from station $i$ to station $j$ with starting time greater than or equal to $t$ and less than $t+1$. Let $\tilde{A}_{ij}=\sum_{t=0}^{23}\At{ijt}$ denote the weights of the associated time-aggregated matrix. Our network is a directed multilayer network, so we count each trip that both starts and ends at a node $i$ during hour $t$ (i.e., self-edges) exactly once in $A_{iit}$. For each node $i$, there is a length-$K$ vector of real numbers $C_{ig}\in [0,1]$. These numbers represent the mixed-membership block assignment of each node. The block-assignment parameter $C_{ig}$ indicates the ``strength'' of node $i$ in block $g$. For each ordered pair $g,h$ of blocks and each time $t\in \{0,1,\ldots,23\}$ (where $t = 0$ represents the hour that starts at midnight), there is a parameter {$\wt{ght}$, which we call the ``inter-block connectivity'' parameter or ``block-to-block'' parameter,} that represents the directed activity from block $g$ to block $h$ during hour $t$. Note that $\wt{ght}$ need not be equal to $\wt{hgt}$ if the network is directed; this captures any asymmetries in the number of trips with respect to reversing origins and destinations. We also define the notation $\tilde{\omega}_{gh}=\sum_{t=0}^{23} \wt{ght}$ for the time-aggregated matrix. For each pair of nodes, $i$ and $j$, we assume that the number of trips that depart from $i$ and arrive at $j$ at time $t$ is Poisson-distributed with mean $\mt{ijt} = \sum_{g,h} C_{ig}\wt{ght} C_{jh}$. Our use of the Poisson distribution follows \cite{karrer2011stochastic} and \cite{peixoto2017bayesian}, facilitates computation, and is standard for modeling count data (although overdispersion is a concern). For identifiability, we apply the constraint $\sum_i C_{ig}=1$ for all $g$. This does not constrain the set of possible models in terms of realizable mean edge activities $\mu_{ijt}$. Consider a model with unconstrained parameters $\omega_{ght}$ and $C_{ig}$. {The model with parameters $\omega'_{ght}$ and $C'_{ig}$ such that $C'_{ig}=\frac{C_{ig}}{\sum_{j} C_{jg}}$ and $\omega'_{ght}=\omega_{ght}\left(\sum_{j} C_{jg}\right)\left(\sum_{j}C_{jh}\right)$ is an equivalent model, because the means of the distributions of edge weights are equal to the those of the model with unconstrained parameters. That is, $\mu'_{ijt}=\sum_{g,h}C'_{ig}\omega'_{ght}C'_{jh}=\sum_{g,h}C_{ig}\omega_{ght}C_{jh}=\mu_{ijt}$. Because $\sum_{i} C_{ig}=1$, we can think of $C_{ig}$ as the proportion of the total activity of block $g$ from the activity of node $i$; the expected total number of trips at node $i$ is $\sum_g C_{ig}\sum_{h,t} (\omega_{ght}+\omega_{hgt})$. In this light, $\sum_g C_{ig}$ is a measure of the activity of node $i$ in which we do not weight each $C_{ig}$ term by the total activity of the corresponding block. We can interpret $C_{ig}$ relative to $C_{ih}$ as how strongly block $g$ is associated with node $i$ relative to how strongly block $h$ is associated with node $i$. We use these quantities when visualizing the TDMM-SBM of our data, because they help ensure that we do not overlook blocks with important usage patterns but relatively lower activity. The parameter $C_{ig}$ is analogous to the degree-correction parameter for SBMs that was introduced in \cite{karrer2011stochastic}, but we apply it to mixed block membership. We elaborate on this connection in Subsection \ref{TDD-SBM}, where we introduce a model that specifies that nodes have only one block. We now compute the likelihood function that we will optimize to obtain the maximum-likelihood estimate (MLE). We assume conditional independence between hourly numbers of trips along each edge, given model parameters, so the likelihood of the data is \begin{align}\label{llik_TDMM-SBM} L(G;\mathbf{\omega},\mathbf{C})= \prod_{t=0}^{23}\prod_{i,j\in\mathcal{N}} \frac{(\mt{ijt})^{\At{ijt}}}{\At{ijt}!}\exp\left(-\mt{ijt}\right)\,, \end{align} where $\mathbf{\omega}$ and $\mathbf{C}$ give the model parameters (i.e., $\mathbf{\omega}=\{\omega _{ght}\}$ and $\mathbf{C}=\{C_{ig}\}$). Note that $\mt{ijt}=\sum_{g,h} C_{ig}\wt{ght} C_{jh}$ is a function of these parameters, the set $\mathcal{N}$ of nodes in the network is fixed and pre-determined, and the number $K$ of blocks is also fixed and pre-determined. The unnormalized log-likelihood is \begin{align} \label{logroll} \ell(G;\mathbf{\omega},\mathbf{C})= \sum_{t=0}^{23}\sum_{i,j\in\mathcal{N}} \left[\At{ijt}\log\left({\mt{ijt}}\right) - \mt{ijt}\right]\,, \end{align} although note that we omit the addition of the constant $-\sum_{i,j,t}\log\left(A_{ijt}!\right)$, because it does not affect maximum of the function. \subsection{Time-Dependent Discrete Stochastic Block Model (TDD-SBM)} \label{TDD-SBM} We derive a single-membership SBM from our mixed-membership SBM by making the extra assumption that, for each node $i\in \mathcal{N}$, we have that $C_{ig}>0$ for only one block $g \in \mathcal{K}$. (We also call this the ``discrete version'' of our model.) For our single-membership SBM, we introduce some new notation to aid our description and be consistent with notation in \cite{karrer2011stochastic,zhu2013oriented}. For a given node $i$, the block $g$ for which $C_{ig}>0$ is the block $g_i$ that includes node $i$. Therefore, we use a single parameter $\theta_i=C_{ig_i}$ for each node $i$ to indicate both the strength of $i$ in block $g$ and the membership of node $i$ in block $g$. {\hyperlink{degcorrected}{We will show} that this term is a multilayer extension of the degree-correction term of \cite{karrer2011stochastic}. The mean of the Poisson distribution of the value of an edge from node $i$ to node $j$ at time $t$ is $\theta_i\theta_j\omega_{g_ig_jt}=C_{ig_i}\wt{g_{i},g_{j}t} C_{jg_{j}}$. We retain the sum constraints of our mixed-membership model, such that $\sum_{i\in g}\theta_i = 1$ for all $g$. We compute optimal values for the parameters $\mathbf{\omega}$ and $\mathbf{\theta}=\{\theta_i\}_{i\in\mathcal{N}}$. As in the TDMM-SBM, take $\mathcal{N}$ and $K$ to be fixed and pre-determined. Again dropping the constant term $-\sum_{i,j,t}\log\left(A_{ijt}!\right)$, the log-likelihood of our single-membership SBM is \begin{align*} \ell(G;\mathbf{\omega}, \mathbf{\theta}) = \sum_{t}\sum_{g,h}\sum_{i\in g, j\in h} \left[\At{ijt}\log\theta_i+\At{jit}\log \theta_j+ \At{ijt}\log \wt{ght} -\theta_i\theta_j \wt{ght}\right] \,. \end{align*} We find explicit formulas for the MLEs of $\theta_i$ and $\wt{ght}$. In the following calculations, removal of $t$ from the subscript of a parameter and addition of a tilde designates a sum over all $t$. Specifically, we define $\tilde{A}_{ij}=\sum_{t=0}^{23} \At{ijt}$ and $\tilde{\omega}_{gh}=\sum_{t=0}^{23} \wt{ght}$. We differentiate $\ell$ with respect to $\wt{ght}$ to yield \begin{align*} \frac{\partial}{\partial \wt{ght}} \ell= \frac{\sum_{i \in g, j \in h}\At{ijt}}{\wt{ght}}-1\,, \end{align*} where we have used the block-wise sum constraints on $\theta_i$. Therefore, the MLE for $w_{ght}$ is \begin{align*} \hat{\omega}_{ght}=m_{ght}\,, \end{align*} where $m_{ght}$ is the sum of weights of edges from nodes in block $g$ to nodes in block $h$ during hour $t$. That is, $m_{ght} = \sum_{i\in g,j\in h}\At{ijt}$. We then differentiate $\ell$ with respect to $\theta_i$ to obtain \begin{align*} \frac{\partial}{\partial\theta_i}\ell=\frac{\sum_j \tilde{A}_{ij}+\sum_{j} \tilde{A}_{ji}}{\theta_i} -\sum_{h}\tilde{\omega}_{g_i h}-\sum_{h}\tilde{\omega}_{h g_i}\,. \end{align*} At $\hat{\omega}_{ght}$, the MLE for $\theta_i$ is \begin{align*} \hat{\theta_i}=\frac{\sum_j \tilde{A}_{ij}+\tilde{A}_{ji}}{\sum_{g}\tilde{m}_{gh}+\tilde{m}_{hg}} =\frac{k_i}{\kappa_{g_i}}\,, \end{align*} where $k_i=\sum_j \left(\tilde{A}_{ij}+\tilde{A}_{ji}\right)$ is the sum of the in-degree and out-degree of $i$ over all time periods, $\tilde{m}_{gh}=\sum_{t=0}^{23} m_{ght}$, and $\kappa_{g}=\sum_{h} \left(\tilde{m}_{gh}+\tilde{m}_{hg}\right)$ is the sum of the in-degrees and out-degrees of all nodes in block $g$ over all time periods. The term $2\tilde{m}_{gg}$ in the equation for $\kappa_g$ implies that we count each intra-block edge twice: once for emanating from $g$ and once for arriving at $g$. Similarly, $k_i$ includes the term $2\tilde{A_{ii}}$, so we count self-edges twice in this term. Our computation demonstrates that the MLE of the strength of $i$ in block $g$ is the relative proportion of the strength of node $i$ to the total activity of block $g$. \hypertarget{degcorrected}{The} parameter $\hat{C}_{ig_i} =\hat{\theta}_i$ in the TDD-SBM for modeling directed, multilayer networks is analogous to the degree-correction parameter in the degree-corrected SBM \cite{karrer2011stochastic} for undirected networks with one layer. Indeed, the MLE for the degree correction parameter in the latter model is the proportion of number of edges connected to a node to the number of edges connected to its assigned block. Another similarity between a degree-corrected SBM and our TDD-SBM is that in the MLE of the TDD-SBM, the sum over time of the expectation of the degree of a node $i$ is equal to the degree of node $i$ from the observed data. That is, $\sum_{t}\sum_{j}\left(\mu_{ijt}+\mu_{jit}\right)$, the sum of the mean weights of edges connected to $i$, is equal to $k_i$, the degree of node $i$ in the observed data. (See Section \ref{appendix:node_deg} of the appendix for the proof.) For our mixed-membership SBM, we are not aware of such a precise relationship between the data and the expected value of model statistics, although there does appear to be a positive correlation between the time-aggregated node degrees and the sum of the mixed-membership parameters ($\sum_g C_{ig}$ for all $i$). We now calculate the MLE of the unnormalized log-likelihood of the TDD-SBM. We obtain \begin{align*} \sum_{t}\left[\sum_{i,j} \left(\At{ijt}\log \left(\frac{k_i}{\kappa_{g_i}}\right)+\At{jit}\log\left(\frac{k_j}{\kappa_{g_j}}\right)\right)+\sum_{g,h} m_{ght}\log m_{ght} - \sum_{g,h} m_{ght}\right] \\ = \sum_i k_i \log k_i - \sum_i k_i\log {\kappa_{g_i}}+\sum_t\sum_{g,h} m_{ght}\log m_{ght} -\tilde{m}\,, \end{align*} where $\tilde{m}$ is the total number of edges in the network. By a similar calculation as one in \cite{karrer2011stochastic}, we obtain \begin{align*} \sum_{i} k_i\log \kappa_{g_i}&=\sum_t\sum_{g} \sum_{i\in g} k_{it} \log\kappa_g\\ &=\sum_t \sum_{g}\sum_{i\in g}\left( k_{\text{in},it} \log\kappa_g + k_{\text{out},it} \log\kappa_g\right)\\ &=\sum_t\sum_g\kappa_{\text{out},gt}\log\kappa_g+\sum_t\sum_h\kappa_{\text{in},ht}\log \kappa_h\\ &=\sum_t\sum_g\sum_h m_{ght}\log\kappa_g+\sum_t\sum_g\sum_h m_{ght}\log \kappa_h\\ &=\sum_{t}\sum_{g,h} m_{ght}\log \kappa_g\kappa_h\,, \end{align*} where $k_{\text{in},it}$ and $k_{\text{out},it}$ are the respective in-degrees and out-degrees of node $i$ during hour $t$, the quantity $\kappa_{\text{in},gt}=\sum_{i\in g} k_{\text{in},it}$ is the number of edges going into $g$ during hour $t$, and $\kappa_{\text{out},gt}=\sum_{i\in g} k_{\text{out},it}$ is the number of edges that emanate from $g$ during hour $t$. Including only the terms that depend on block assignments yields the following objective function: \begin{align}\label{tdd-sbm-objective} \sum_t\sum_{g,h} m_{ght}\log\left(\frac{m_{ght}}{\kappa_g\kappa_h}\right)\,. \end{align} Unlike the directed SBM in \cite{zhu2013oriented}, we do not have two strength parameters (representing an in-degree strength $\theta_i^{\text{in}}$ and an out-degree strength $\theta_i^{\text{out}}$) for each station. Nevertheless, our model still captures the directed nature of the data. We can see this by conceptualizing the estimated means as an approximation to the number of trips by hour in both directions at the same time. By representing the $48$-dimensional vectors as $2\times 24$ matrices, we see that \begin{align*} \theta_i\theta_j\begin{bmatrix}\wt{g_ig_j0}&\wt{g_ig_j1}&\ldots &\wt{g_ig_j23}\\ \wt{g_jg_i0}&\wt{g_jg_i1}&\ldots &\wt{g_jg_i23} \end{bmatrix} \hspace{1cm}\mbox{approximates}\hspace{1cm} \begin{bmatrix}\At{ij0}&\At{ij1}&\ldots &\At{ij23}\\ \At{ji0}&\At{ji1}&\ldots &\At{ji23} \end{bmatrix}\,. \end{align*} This perspective also holds for our mixed-membership SBM, for which \begin{align*} \sum_{g,h}C_{ig}C_{jh}\begin{bmatrix}\wt{gh0}&\wt{gh1}&\ldots &\wt{gh23}\\ \wt{hg0}&\wt{hg1}&\ldots &\wt{hg23} \end{bmatrix} \hspace{1cm}\mbox{approximates}\hspace{1cm} \begin{bmatrix}\At{ij0}&\At{ij1}&\ldots &\At{ij23}\\ \At{ji0}&\At{ji1}&\ldots &\At{ji23} \end{bmatrix}\,. \end{align*} The validity of this matrix representation depends on there being a large correlation between the time-aggregated in-degrees and out-degrees of nodes. This is related to the fact that in a given 24-hour period, the number of trips from one station to another is predictive of the number of trips in the opposite direction. This matrix representation is related to the fact that the $24$-hour time activities for trips between two stations in one direction are predictive of the activities in the opposite direction. The latter observation, in turn, is related to the axiom of human mobility that for each current of travel, there is a counter current \cite{barbosa2018human}. We observe (as did \cite{zhang2018equivalence}), from the above matrix expressions, that maximizing the log-likelihood of both SBMs is equivalent to a form of nonnegative matrix factorization with $K^2$ $48$-dimensional basis columns. In a sense, our model is neither an extension of the usual undirected degree-corrected SBM nor one of the usual directed degree-corrected SBMs. Instead, our model's single-layer network analog is a degree-corrected SBM with parameters $\theta_i$ and $\tilde{\omega}_{gh}$, except that the $\tilde{\omega}_{gh}$ are not constrained to be symmetric. \subsection{Singular vectors for Los Angeles and San Francisco} We show our singular vectors for the bicycle-sharing networks for downtown Los Angeles in Figure~\ref{fig:la_svd} and for San Francisco in Figure~\ref{fig:sf_svd}. \begin{figure}[H] \includegraphics[scale=.15]{IMG/la_svd} \caption{The first two singular vectors of data for the downtown Los Angeles bicycle-sharing network. }\label{fig:la_svd} \end{figure} \begin{figure}[H] \includegraphics[scale=.15]{IMG/sf_svd} \caption{The first two singular vectors of the data for the San Francisco bicycle-sharing network. }\label{fig:sf_svd} \end{figure} \subsection{Proof that expected node degrees are the same as node degrees in the data generated from our TDD-SBM \label{appendix:node_deg}} Suppose that we are given a network that is generated by our time-dependent discrete-membership stochastic block model (TDD-SBM). We prove that the expected value of the total degree (i.e., the sum of the in-degree and the out-degree) of a node is the same as the total degree of the node in the observed data. Let $X_{ijt}$ for each node pair $i,j$ and time $t \in \{0,1,\ldots,23\}$ be random edge weights distributed according to the TDD-SBM and inferred from the data $A_{ijt}$. Recall that $g$ denotes block $g$ in the SBM and that $\kappa_g$ is the sum of the in-degrees and out-degrees of all nodes in block $g$ over all time periods. For node $i$, we show that the mean degree of $i$ is equal to the degree of $i$ in the data. That is, $\mathbf{E}\left(\sum_{j}\sum_{t=0}^{23} X_{ijt}\right)=k_i=\sum_j \sum_{t=0}^{23} A_{ijt}$. We have \begin{align*} \mathbf{E}\left(\sum_{t=0}^{23} \sum_{j} \left(X_{ijt}+X_{jit}\right)\right)&=\frac{k_i}{\kappa_{g_i}}\sum_t \sum_h \sum_{j\in h}\frac{k_j}{\kappa_{h}}\left(m_{g_iht} + m_{hg_it}\right)\\ &= \frac{k_i}{\kappa_{g_i}}\sum_{h}\left(m_{g_iht} + m_{hg_it}\right)\\ &= k_i\,. \end{align*} We are not aware of a relationship between the expected degrees and degrees of the observed data for our mixed-membership stochastic block model. \section*{Acknowledgements} We thank Brian Karrer and Mark Newman for allowing us to use and share their code for degree-corrected stochastic block models from \cite{karrer2011stochastic}. We thank Susan Handy's lab at UC Davis for useful discussions on contextualizing our work for transportation researchers and planners, David Kempe for introducing us to the work of Rajmonda Caceres \cite{caceres2013temporal} on choosing a temporal scale for time-dependent networks, and Michelle Feng and others in the networks journal club at UCLA for helpful comments. CM and SSC thank NSF (DMS-1351860) for funding, and SSC also thanks NIGMS (R01 GM126556) and an NIH Ruth L. Kirschstein National Research Service Award (T32-GM008185) for funding. SW thanks NSF (CCF-1422795), ONR (N000141110719, N000141210834), DOD (W81XWH-15-1-0147), Intel STC-Visual Computing Grant (20112360), and Adobe Inc. for funding. \break \subsection{Downtown Los Angeles} In Figure \ref{fig:LA_mixed_discrete}, we show the mixed-membership (TDMM-SBM) and discrete (TDD-SBM) block assignments of two-block models of the downtown Los Angeles system. For the TDMM-SBM, the we scale the size of a given node $i$ in our plots based on $\sum_gC_{ig}$. We refer to these sums as ``C total'' values. These values correlate strongly with node degree (specifically, the sum of in-degree and out-degree), which is evident in the similarity of node sizes in the left and right panels of Figure \ref{fig:LA_mixed_discrete}. For both models, we observe that home and work blocks are interspersed geographically. (We will soon describe our method for determining the block labels in Figure \ref{fig:LA_mixed_discrete}.) The TDMM-SBM result reveals a group of stations (which we color in gray) in the left panel of Figure \ref{fig:LA_mixed_discrete} are neither strongly home-identified nor strongly work-identified; instead, they possess a roughly even mixture of the two types. For this network, the TDD-SBM output is very similar to what we obtain from a discretization of the TDMM-SBM output (which we discretize by assigning each node $i$ to the block with the maximum value of its $C_{ig}$ parameter), but this is not true for all of our bicycle-sharing networks. \begin{figure}[H] \center \includegraphics[scale=.18]{IMG/LA_mixed_discrete-small.jpg} \caption{Downtown Los Angeles bicycle stations classified using (left) a two-block TDMM-SBM and (right) a two-block TDD-SBM. The sizes of the nodes take continuous values. In the left panel, we scale their area based on the value of $\sum_gC_{ig}$; in the right panel, we scale them based on the sum of the in-degree and out-degree (divided by the maximum value of that sum). }\label{fig:LA_mixed_discrete} \end{figure} Our model does not yield ``home" and ``work'' labels for each block on its own, so we use the time-dependent block-to-block parameter estimates $\hat{\omega}_{ght}$ to assign these labels. We assign the labels heuristically under the assumption that the ``home" block is the origin of many trips to the work block in the morning and the ``work" block is the origin of many trips to the home block in the evening. Figure \ref{fig:la_omega}, which shows $\hat{\omega}_{ght}$ for each possible value of $g$ and $h$, with the hour $t$ on the horizontal axis, supports our labeling. Based on our labeling, we observe a clear peak in home-to-work traffic in the mornings and work-to-home traffic in the evenings. We make similar ``home" and ``work" assignments for San Francisco and New York City. In Los Angeles, the traffic in the work block peaks in the middle of the day. This perhaps represents lunchtime errands, leisure activity, or tourist activity, as there are many tourist attractions in the downtown area. The traffic in the home block has a mild evening peak and has by far the least activity overall. \begin{figure}[H] \center \includegraphics[scale=.13]{IMG/la_omega.png} \caption{Estimated time-dependent block-to-block parameters $\hat{\omega}_{ght}$ for the two-block TDMM-SBM and two-block TDD-SBM for downtown Los Angeles.} \label{fig:la_omega} \end{figure} To further validate our block labels, we use the zoning map for downtown Los Angeles from the Los Angeles Department of City Planning \cite{la_zoning}.\footnote{Permission for use of these proprietary data is granted by the City of Los Angeles Department of City Planning. Copyright $\copyright$ 2015 City of Los Angeles. All Rights Reserved.} Zoning ordinances determine the allowable uses of city land. They distinguish land that is available for commercial uses, industrial uses, residential uses, park districts, and others. In the background of Figure \ref{fig:LA_Zoning}, we show a simplified version of the underlying zoning map (with a grouping of similar designations). The industrial areas house a mixture of manufacturing and commercial uses. Public facilities include government buildings, public schools, parking under freeways, and police and fire stations \cite{la_zoning_dict}. In downtown Los Angeles, manufacturing and industrial areas are split cleanly from residential areas, whereas commercial and residential areas are intermixed across the bicycle-sharing system's coverage area. \begin{figure}[H]\center \includegraphics[scale=.15]{IMG/la_continuous_zones} \caption{Mixed-membership (TDMM-SBM) assignments of Los Angeles bicycle-share stations overlaid on a simplified LA zoning map. Industrial blocks include manufacturing and commercial areas. As in Figure \ref{fig:LA_mixed_discrete}, we scale the area of nodes to the value of $\sum_gC_{ig}.$ }\label{fig:LA_Zoning} \end{figure} Figure \ref{fig:LA_Zoning} illustrates that most stations that are strongly home-identified are in or near zones for pure residential use or mixed residential and commercial use. We find that many stations that are not predominantly home-identified or work-identified align with mixed-use commercial/residential zones. The discrete-role plot (see the right panel of Figure \ref{fig:LA_mixed_discrete}) has a stripe of ``home" stations that cut diagonally through the ``work" stations. In Figure \ref{fig:LA_Zoning}, we see that this aligns roughly with areas that are zoned for purely residential use. By contrast, industrial and public facility zones tend to host stations that are mostly work-identified (although some of the most strongly work-identified stations are in mixed-used areas). One station that seems to deviate from the overall pattern is the heavily-trafficked station at Union Station. Although it is adjacent to a public facility zone with many government buildings, it is also strongly home-identified. Although this may seem surprising on its surface, this is consistent with other home-identified stations, because Union Station is a major transit hub for the Los Angeles metropolitan area. Accordingly, many morning trips originate there, as commuters transition from other forms of transportation, and many evening trips conclude there --- an activity pattern that is sensibly associated with home-identified stations. Such idiosyncrasies of transit hubs also arise in our results for San Francisco and New York City. \subsection{San Francisco} In Figure \ref{fig:sf_cont_discrete}, we compare the two-block TDMM-SBM and two-block TDD-SBM for San Francisco. As we saw for Los Angeles, the San Francisco blocks are interspersed geographically, and stations vary from strongly home-identified ones to strongly work-identified ones. The most strongly home-identified station is a major transit hub, the San Francisco Caltrain Station on 4th Street. \begin{figure}[H] \includegraphics[scale=.18]{IMG/SF_mixed_discrete.png} \caption{San Francisco bicycle stations classified using (left) a two-block TDMM-SBM and (right) a two-block TDD-SBM. The sizes of the nodes take continuous values. In the left panel, we scale their area based on the value of $\sum_gC_{ig}$; in the right panel, we scale them based on the sum of the in-degree and out-degree (divided by the maximum value of that sum). }\label{fig:sf_cont_discrete} \end{figure} In Figure \ref{fig:sf_omega}, we show the estimated traffic between the ``home" and ``work" blocks for the TDMM-SBM and TDD-SBM. As in downtown Los Angeles, we observe inter-block commuting. However, unlike in downtown LA, using the discrete model (i.e., TDD-SBM), we observe intra-block morning and evening peaks in both the home and work blocks. This may be due to last-mile commuting, such as using bicycle-sharing facilities to get to or from a train station. Recognizing last-mile usage is important for integrating bicycle sharing with nearby public transportation. One possible reason that we do not observe a similar phenomenon in downtown LA is that San Franciscans are more likely than Angelenos (i.e., inhabitants of Los Angeles) to use public transportation \cite{pubtranLAT2015}. The intra-block morning and evening peaks may also arise from the intermixing of commercial and residential uses of land, such that some travel within blocks may also constitute commuting. \begin{figure}[H] \center \includegraphics[scale=.13]{IMG/sf_omega.png} \caption{Estimated time-dependent block-to-block parameters $\hat{\omega}_{ght}$ for the two-block TDMM-SBM and two-block TDD-SBM for San Francisco.} \label{fig:sf_omega} \end{figure} Before presenting our results for New York City, we briefly compare our results from Los Angeles and San Francisco to results for a time-independent SBM fit to these networks, where we have aggregated the data over all time periods. To do this, we calculate the adjacency matrix $\tilde{A}_{ij}=\sum_{t=0}^{23}\At{ijt}$ a time-aggregated network. (The time-independent SBM also has two blocks and is both directed and degree-corrected.) For downtown Los Angeles, we observe a clear geographically-based division in the results of the time-independent SBM. For San Francisco, however, the differences between the blocks of the time-dependent SBM and time-independent SBM are less noticeable, although they are still present. This confirms that our time-dependent SBMs are detecting behavior that is not evident in the time-aggregated data. \begin{figure}[H] \center \includegraphics[scale=.17]{IMG/la_sf_static_discrete-small.jpg} \caption{Estimated blocks of discrete, directed, degree-corrected, time-independent SBM for time-aggregated bicycle-sharing data from Los Angeles and San Francisco. }\label{fig:static_discrete} \end{figure} \subsection{New York City} In Figure \ref{fig:NY_mixed_discrete}, we compare our results from a three-block TDMM-SBM and a three-block TDD-SBM for New York City. In initial calculations, we found that a two-block TDD-SBM divides the network along the East River into a Manhattan block and Brooklyn block and that the two-block TDMM-SBM divides the network slightly farther north in Lower Manhattan. This suggests a possible limitation to the size of networks for which our time-dependent SBMs can recover functional blocks, as opposed to geographically-based blocks. We will explore this hypothesis further by examining the output of our time-dependent SBMs for the entire New York City bicycle-sharing network and subsequently modeling a subset of the New York City network. \begin{figure}[H] \includegraphics[scale=.17]{IMG/NY_mixed_discrete-small.jpg} \caption{New York City bicycle stations classified using (left) a three-block TDMM-SBM and (right) a three-block TDD-SBM. The sizes of the nodes take continuous values. In the left panel, we scale their area based on the value of $\sum_gC_{ig}$; in the right panel, we scale them based on the sum of the in-degree and out-degree (divided by the maximum value of that sum). }\label{fig:NY_mixed_discrete} \end{figure} In Figure \ref{fig:ny_omega}, we compare estimated inter-block traffic, as captured by the values of $\hat{\omega}_{ght}$, for the three-block TDMM-SBM and three-block TDD-SBM. We observe prominently that all intra-block traffic has two peaks and much higher hourly trip counts than inter-block traffic. The double peaks are reminiscent of the overall system activity in Figure \ref{fig:byhour}. This may be due in part to last-mile commuting, as we also suspected in San Francisco. However, for a system that is this large, the double peaks and minimal inter-block traffic suggests that it is useful (and important) to consider each block as its own ecosystem. We also find strong similarity between results from our TDD-SBM and a three-block time-independent SBM for time-aggregated data for New York City (not shown), providing further evidence that our time-dependent SBMs are not capturing time-dependent roles for New York City. Consequently, we choose the labels of these blocks based on the primary borough and zone type of each block's stations, as indicated in the underlying zoning map for this part of New York City \cite{ny_zoning} in Figure \ref{fig:ny_discrete_zones}. \begin{figure}[H] \includegraphics[scale=.11]{IMG/ny_omega.png} \caption{Estimated time-dependent block-to-block parameters $\hat{\omega}_{ght}$ for the three-block TDMM-SBM and three-block TDD-SBM for New York City. We use ``M" to signify Manhattan and ``BK" to signify Brooklyn.} \label{fig:ny_omega} \end{figure} \begin{figure}[H] \center \includegraphics[scale=.14]{IMG/ny_discrete_zones.png} \caption{TDD-SBM station roles versus the coverage-area zoning map of New York City.}\label{fig:ny_discrete_zones} \end{figure} In Figure \ref{fig:ny_discrete_zones}, we illustrate that there is general overlap, although it is far from perfect, between (1) the Upper Manhattan (``home") block and residential areas or parks and between (2) the Lower Manhattan (``work'') block and commercial or manufacturing areas. All stations in Brooklyn are in the third block, which contains mostly residential areas. These observations motivate our block labels in this figure, Figure \ref{fig:NY_mixed_discrete}, and Figure \ref{fig:ny_omega}. (Although Figure \ref{fig:ny_discrete_zones} shows only TDD-SBM-estimated blocks, the same reasoning motivates our labels for the three-block TDMM-SBM.) No block has exclusively commercial or residential areas, reinforcing our conclusion that these blocks represent primarily geographic divisions (with most of the traffic occurring within blocks), as opposed to functionally similar groups of stations. We examined several time-dependent SBMs for New York City with more than three blocks to try to discover functional blocks, but we found that the blocks were still geographically based. In some cases, TDMM-SBM with larger numbers of blocks were able to find functional divisions within smaller geographic areas (subdividing the blocks in Figure \ref{fig:NY_mixed_discrete}), but neither our discrete model nor our mixed model detected system-wide ``home'' or ``work'' blocks. See our supplementary material for our code to fit and visualize time-dependent SBMs of the New York City network with a number of blocks other than three. \subsubsection{Manhattan} \begin{figure}[H] \center \includegraphics[scale=.13]{IMG/NY_HM_mixed_discrete.png} \caption{Comparison of estimated blocks from (left) a five-block TDMM-SBM and (right) a five-block TDD-SBM of the Manhattan (home) block (i.e., the Manhattan subnetwork) of the New York City network (see Figure \ref{fig:ny_discrete_zones}). In the role labels of the TDD-SBM, we use ``W'' to represent west and ``E'' to represent east.} \label{fig:ny_plot_hm} \end{figure} To examine the New York City bicycle-sharing network on a smaller scale, we fit models to the subset of stations and trips within the Manhattan (home) block of the three-block TDD-SBM that we identified above (see Figure \ref{fig:ny_discrete_zones}); we refer to this subnetwork as the ``Manhattan subnetwork''. This subnetwork includes 256,840 trips and 166 stations. In Figure \ref{fig:ny_plot_hm}, we present our results for a five-block TDD-SBM and TDD-SBM applied to the Manhattan subnetwork. The area without stations in the middle of each panel of Figure \ref{fig:ny_plot_hm} is Central Park, which has stations on its perimeter but not in its interior. The estimated blocks of the five-block TDMM-SBM and TDD-SBM (see Figure \ref{fig:ny_plot_hm}) of the Manhattan subnetwork outline similar subregions. The mixed-membership block assignments also illustrate how the subregions transition into each other. The models return block-membership parameters that capture the residential and commercial sections of the area much better than the three-block TDD-SBM and TDMM-SBM of the full New York City network; one can see this by comparing the five-block subnetwork results with the underlying zoning map for the area in Figure \ref{fig:ny_discrete_zones}. The stations in residential zones generally have larger block-membership parameters for ``home" blocks than for ``work" blocks, and the opposite is true for stations in commercial zones. We label the five detected blocks as (clockwise from top left) ``home (west)'', ``park'', ``home (east)'', ``work'', and ``mixed''. We base these labels on the land usage of the underlying areas and the time-dependent block-to-block activity parameters ($\hat{\omega}_{ght}$) that we show in Figure \ref{fig:ny_hm_omega}. We highlight the appearance of the ``park'' block, which we have not observed in previous models and has distinctive behavior. The park block is similar to a residential block in terms of its spike in morning traffic to the work block and its spike in evening traffic from the work block, but it has distinct intra-block activity that peaks in the afternoon. The intra-block activity resembles weekend activity in the New York City bicycle-sharing system as a whole (see Figure \ref{fig:byhour}); this reflects leisure use of the bikes. Bicycles near Central Park (which also places them near several major museums) are likely to be used by tourists and other non-commuters during the day for leisure or travel to nearby attractions. In Figure \ref{fig:ny_hm_omega}, we show the values of the block-to-block parameters $\hat{\omega}_{ght}$ for the five-block TDD-SBM and TDMM-SBM. Our estimates of $\hat{\omega}_{ght}$ for these models illustrate important differences in the behavior of different blocks that we can observe only with a time-dependent model.\footnote{We obtain similar block identifications for this subnetwork using a discrete, directed, degree-corrected, time-independent SBM as we do from a TDD-SBM with the same number of blocks. We do not show the time-independent SBM results, but they can be produced using the code in our supplementary material.} We see some overlap in the time-dependent behavior of blocks, evidencing potential overfitting. For example, the home (east), home (west), and mixed block have similar traffic with blocks other than their own. However, models of this subnetwork with fewer than five blocks do not cleanly distinguish the ``park" block of stations from other residential stations. \begin{figure}[H] \center \includegraphics[scale=.11]{IMG/ny_hm_omega.png} \caption{Estimated time-dependent block-to-block parameters $\hat{\omega}_{ght}$ for (left) a TDMM-SBM with five blocks and (right) a TDD-SBM with five blocks of the Manhattan subnetwork of the New York City bicycle-sharing network. } \label{fig:ny_hm_omega} \end{figure} One reason that our time-dependent SBMs of the Manhattan subnetwork of New York City bicycle-sharing network perform better (with respect to detecting functionally meaningful blocks) than any models that we applied to the entire system is the dependence of station-to-station trip counts on the distance between stations. Although our SBMs correct for the overall activity of each station, they do not normalize expected edge values by the distance between stations. In a small geographic area, such as the coverage areas of the Los Angeles and San Francisco networks, this is a reasonable choice, as all stations are within ``biking distance" of each other. However, when examining a system as large as New York City's, the lack of distance correction weakens the functional groupings that we obtain with our time-dependent SBMs. Intra-block trips dwarf inter-block trips (see Figure \ref{fig:ny_omega}), and it seems more reasonable to construe each block as its own ecosystem. \subsection{Model Selection} Although statistically rigorous model selection is outside the scope of our paper, we briefly compare the number of parameters in our mixed-membership and discrete SBMs. This is valuable for considering model-selection criteria, such as the Akaike information criterion (AIC) and Bayesian information criterion (BIC), that penalize a model based on its number of parameters. For a network with $N$ nodes, $K$ blocks, and $T$ time slices, the number of parameters for the TDMM-SBM is \begin{align}\label{first} K \times N - K + T \times K^2\,, \end{align} and the number of parameters for the TDD-SBM is \begin{align}\label{second} 2 \times N - K + T \times K^2\,. \end{align} The first term of \eqref{first} comes from the fact each node in our mixed-membership model has $K$ parameters ($C_{ig}$, with $g \in \{1,\ldots,K\}$) that express the strength of membership in each block. By contrast, each node in our discrete-membership model has one parameter for block membership and one degree-correction parameter. Therefore, given a value of $N$, the first term in \eqref{first} increases linearly with the number of blocks, whereas the corresponding term in \eqref{second} is fixed. Otherwise, formulas \eqref{first} and \eqref{second} are equivalent. The $-K$ term in each formula arises from identifiability constraints for each model. As we described in Section \ref{Model}), these constraints are $\sum_i C_{ig}=1$ for all $g$ for the mixed-membership model and $\sum_{i\in g}\theta_i = 1$ for all $g$ for the discrete model. The last term in each formula is the total number of $\omega_{ght}$ terms in the model (see Section \ref{TDMM-SBM}). In Table \ref{ny_hm_llik_table}, we show the unnormalized log-likelihood and number of parameters ($N_p$) for TDMM-SBM and TDD-SMB of the Manhattan subnetwork (which has $N = 166$ nodes) with two, three, four, and five blocks. \begin{table}[ht] \centering \begin{tabular}{rrrrrr} & \textbf{TDMM-SBM} & & \textbf{TDD-SBM} & \\ \hline Number of blocks & $N_p$ & log-likelihood & $N_p$ & log-likelihood \\ \hline 2 & 426 & $-260625$ & 426 & $-270809$ \\ 3 & 711 & $-235162$ & 545 & $-254779$ \\ 4 & 1044 & $-212295$ & 712 & $-236198$ \\ 5 & 1425 & $-198489$ & 927 & $-222539$ \\ 6 & 1854 & $-189670$ & 1190 & $-216468$ \\ \hline \end{tabular} \caption{Comparison of log-likelihood and number of parameters in models of the Manhattan subnetwork, which has $N = 166$ nodes.} \label{ny_hm_llik_table} \end{table} In this example, TDMM-SBM outperforms TDD-SBM with respect to log-likelihood when the two models have the same number of parameters. This result makes sense because of the additional constraint of the TDD-SBM that stations must belong to exactly one block. \begin{figure}[H] \center \includegraphics[scale=.3]{IMG/LAAIC.png} \caption{Akaike information criterion for maximum likelihood TDMM-SBM with 2--10 blocks for the Los Angeles bicycle-sharing network. } \label{fig:AICvsblocks} \end{figure} Calculating AIC, which is given by \cite{akaike1974new} \begin{align*} \text{AIC}=(2\times N_p) - (2\times \text{log-likelihood}) \,, \end{align*} for TDMM-SBM with $2$--$10$ blocks for the Los Angeles bicycle-sharing network selects the TDMM-SBM with the largest number of blocks. {The AIC is a cost function for comparing the relative quality of statistical models; one construes the model with a smaller AIC as the ``better'' model. The AIC takes into account both the likelihood and how description complexity of a model is in its two summands. The negative log-likelihood is smaller for models with higher likelihood, and one measures the complexity of a model based on its number of parameters. From this perspective, a model with a smaller AIC is better at capturing the trends of a data set while avoiding overfitting.} In Figure \ref{fig:AICvsblocks}, {we see that AIC decreases as we increase the number of blocks for $2$--$10$ blocks. However, the graphs of the MLE values of $\omega_{ght}$ for TDMM-SBM with $7$ or more blocks on the Los Angeles network are no more informative than models with fewer blocks.} We make this observation for TDMM-SBM with large numbers of blocks using the LA network rather than the Manhattan network, because computing models on data with fewer stations takes less time; and we are confident that the noisy and/or redundant information from using $7$ or more blocks on the LA network arise from overfitting. Although our calculations above are straightforward, choosing appropriate model-selection criteria deserves serious consideration \cite{yan2016bayesian,yan2014model}. We leave such an investigation for future work.
\section*{Abstract} We consider a model of two tunnel-coupled one-dimensional Bose gases with hard-wall boundary conditions. Bosonizing the model and retaining only the most relevant interactions leads to a decoupled theory consisting of a quantum sine-Gordon model and a free boson, describing respectively the antisymmetric and symmetric combinations of the phase fields. We go beyond this description by retaining the perturbation with the next smallest scaling dimension. This perturbation carries conformal spin and couples the two sectors. We carry out a detailed investigation of the effects of this coupling on the non-equilibrium dynamics of the model. We focus in particular on the role played by spatial inhomogeneities in the initial state in a quantum quench setup. \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} \label{sec:introduction} The study of one-dimensional quantum many-body systems out of equilibrium has seen great progress in the past decades. Long-standing questions concerning the equilibration of observables, spreading of correlations and entanglement, and the emergence of statistical mechanics from microscopics have been successfully tackled using a range of innovative theoretical ideas \cite{Rigol2007,CalabreseCardy2007,Polkovnikov2011,EFreview,Vidmar2016,DAlessio2016,Gogolin2016,CalabreseCardy2016,CalabreseRev2018}, whilst spectacular advances in the ability to realize archetypical one-dimensional quantum many-body sytems using cold atoms \cite{Ketterle2001,Greiner2001,Kinoshita2004,Schumm2005,Hofferberth2007} have made it possible to test many of these theoretical developments using tabletop experiments \cite{Trotzky2012,Cheneau2012,Gring2012,Langen2013,Langen2015,Kaufman2016}. However, such experimental engineering of quantum many-body Hamiltonians relies on certain assumptions to make the experiments map onto a model of physical interest. These assumptions often include having a low energy density, at which an effective low-energy theory holds, and translational invariance, which can generally simplify the problem and specifically play an important role in the integrability of the low-energy theory. When studying non-equilibrium problems in finite quantum many-body systems, these two assumptions are sometimes brought into question. We here study a situation where both the successes and challenges described above are clearly present: we consider pairs of tunnel-coupled, elongated Bose gases, as realized in the Vienna experiments \cite{Schumm2005,Hofferberth2007,Gring2012,Kuhnert2013,Smith2013,Langen2013,Langen2015,Schweigler2017,Pigneur2017}. An interesting feature of these experiments is that in certain limits, density measurements after matter-wave interference \cite{Ketterle1997,Schumm2005} correspond to projective von Neumann measurements of the relative phase field \cite{Nieuwkerk2018}. This allows for the reconstruction of full distribution functions of quantum mechanical observables \cite{Kuhnert2013,Smith2013,Schweigler2017}, which is of considerable theoretical interest \cite{cd-07,Imambekov2007,lp-08,ia-13,sk-13,e-13,k-14,sp-17,CoEG17,nr-17,hb-17,lddz-15,bpc-18,Groha18,Collura20} in general. In the case at hand, situations without tunnel-coupling can be modelled by a two-component Luttinger liquid \cite{Haldane1981,Bistritzer2007}. This description in terms of a quadratic quantum critical model has yielded theoretical results for the full fluctuation statistics of the relative phase field \cite{Imambekov2007,Kitagawa2010,Kitagawa2011} which show a satisfying match with experimental results \cite{Gring2012,Langen2015}. Our interest lies in the effect of a finite tunnel barrier between the gases \cite{Albiez2005,Gati2006,Levy2007,Hofferberth2007}. This introduces a relevant perturbation and at sufficiently low energies leads to a decoupled theory of a Luttinger liquid describing the symmetric combination of Bose gas phases (``symmetric sector'') and a sine-Gordon model \cite{Gritsev2007} describing the relative phase (``antisymmetric sector''). The sine-Gordon model is of great theoretical importance as it is an exactly solvable, Lorentz invariant quantum field theory that exhibits a rich range of physical phenomena like dynamical mass generation and topological excitations and moreover has important applications to electronic degrees of freedom in solids \cite{EsslerKonik2005}. Its behaviour out of equilibrium has received a lot of attention in the past decade. To be able to study dynamics, the very weakly interacting limit is amenable to a simple harmonic approximation \cite{Iucci2010,Foini2015,Foini2017}, while the free fermion point can also be used to obtain exact results \cite{Iucci2010}. Integrability-based methods were used in Refs.~\cite{Bertini2014,Schuricht2017,Horvath2017,Horvath2018a} to study quenches from ``integrable'' initial states, whereas semiclassical methods \cite{Kormos2016,Pascu2017} were applied to the study of the time-dependence of one and two-point functions as well as the probability distribution of the phase. The truncated conformal space approach\cite{James2018} was employed in Ref.~\cite{Kukuljan2018} to analyse the time evolution of two and four-point functions after a quantum quench. A first litmus test for the experimental realization of the sine-Gordon model using split Bose gas experiments was performed in an equilibrium situation: high order equilibirum correlation functions extracted from projective phase measurements in the classical limit have been found to agree well with classical field simulations \cite{Schweigler2017}. A quantum many-body treatment of some such correlation functions is available as well, with possible generalizations to non-equilibrium situations \cite{Sotiriadis2018}. For non-equilibrium initial conditions, however, experimental studies \cite{Pigneur2017,Pigneur2019thesis,schweigler2019correlations} have shown puzzling behaviour: when preparing two elongated Bose gases with an initial phase difference, applying a tunnel-coupling between them sets Josephson oscillations of density and phase in motion. These oscillations show a rapid damping, accompanied by a narrowing of the distribution function of the phase. To date, no satisfying theoretical explanation of this damping is known \cite{Pigneur2018}. The damping seems incompatable with a description in terms of a translationally invariant sine-Gordon model, which fails to provide a mechanism for the observed strong and rapid damping in both a self-consistent harmonic treatment \cite{Nieuwkerk2018b} and in a combination of truncated Wigner and truncated conformal space approaches \cite{Horvath2018}. In this work, we go beyond previous studies in two important ways: \begin{enumerate} \item{} We take into account the next most relevant perturbation at low energies. This perturbation induces an interaction between the symmetric and antisymmetric sectors. \item{} We drop the assumption of translational invariance. To this end we place the model in a hard-wall box geometry and consider inhomogeneous initial conditions. \end{enumerate} Our strategy is to treat the resulting two-component model in the self-consistent time-dependent approximation (SCTDHA) as described in \cite{Nieuwkerk2018b}. We consider the dynamics after initializing the system in a state in which the sectors are uncorrelated and observe how the new coupling term causes correlations between the two sectors to develop over time. In addition to this, energy starts to oscillate between the sectors. Depending on the initial density profile imprinted on the gas, Josephson oscillations of density and phase are affected by the presence of the additional term, showing modulations of the amplitude that differ from the ones observed in the SCTDHA treatment of isolated sine-Gordon dynamics \cite{Nieuwkerk2018b}. This paper is organized as follows. In Sec. \ref{sec:tunnel_coupled_bose_gases_in_a_hard_wall_box}, we introduce the low-energy effective theory in a box geometry, the additional interaction term and the observable relevant for experiment. We also establish some notational conventions. In Sec. \ref{sec:self_consistent_time_dependent_harmonia_approxiation}, we recapitulate the self-constistent time-dependent harmonic approximation as well as the framework to compute observables and some important distribution functions. In Sec. \ref{sec:results_for_experimentally_relevant_initial_states}, we apply our formalism to an initial state which is commonly used in the literature, and present results on energy flow and growth of correlations between the sectors, along with the effect on Josephson oscillations, due to the additional interaction term. Sec. \ref{sec:conclusion} summarizes our conclusions and discusses questions for further study. \section{Tunnel-coupled bose gases in a hard-wall box} \label{sec:tunnel_coupled_bose_gases_in_a_hard_wall_box} An appropriate model for the experiments carried out by the Vienna group is an interacting Bose gas confined in three-dimensional space by a tight harmonic potential in the $z$-direction, a double-well potential $V_\perp(y)$ in the $y$-direction and a shallow harmonic potential in the $x$-direction. We will refer to the $x$-direction as \textit{longitudinal}, and to the remaining directions as \textit{transverse}. To simplify the problem, we take the longitudinal potential to be an infinite square well \begin{align} V_{||}(x) = \begin{cases} 0 &\text{ if } 0 <x <L\ ,\\ \infty &\text{ otherwise.} \end{cases} \end{align} Just like a shallow harmonic potential this breaks translational invariance in the longitudinal direction, but has the advantage to be considerably simpler to analyze. Our starting point is thus the following Hamiltonian \begin{align} H_{\mathrm{3d}}=\int dx\ dy\ dz\ \bigg\{\Psi^\dagger(x,y,z)\left[-\frac{\nabla^2}{2m}+ V_{||}(x) + V_\perp(y)+\frac{m\omega_{z}^2}{2}z^2 \right]\Psi(x,y,z)\notag \\ +c\big(\Psi^\dagger(x,y,z)\big)^2\big(\Psi(x,y,z)\big)^2\bigg\}\ , \end{align} where $\Psi(x,y,z)$ are complex Bose fields obeying the usual bosonic commutation relations. \subsection{Low-energy effective theory} \label{sub:low_energy_effective_theory} In situations where the transverse potentials are sufficiently tight, the dynamics in the $y$- and $z$-directions can be integrated out, in a way analogous to Ref. \cite{Olshanii1998}. Details of this procedure will be reported elsewhere \cite{TBP}. Projecting to the lowest two states of the transverse potential, and taking appropriate linear combinations of these, we obtain a Hamiltonian for two species of bosons, $\Psi_{1,2}$, which are approximately localized in wells $1$ and $2$: \begin{align} H_{\mathrm{1d}} = \int_{0}^{L} dx\,\Bigg[ \sum_{j=1,2} \frac{1}{2m} \partial_{x} \Psi^{\dagger}_{j}(x) \partial_{x} \Psi_{j}(x) &+ \sum_{j,k,l,m=1,2} \Gamma_{jklm}\,\Psi^{\dagger}_{j}(x)\Psi^{\dagger}_{k}(x)\Psi^{\vphantom{\dagger}}_{l}(x)\Psi^{\vphantom{\dagger}}_{m}(x) \notag\\ &- \left( T_{\perp} \Psi^{\dagger}_{1}(x)\Psi_{2}(x) + \mathrm{h.c.} \right) \Bigg] . \label{eq:micr_1d_ham} \end{align} Here the Bose fields $\Psi(x)$ have commutation relations $\left[ \Psi_{i}^{\vphantom{\dagger}}(x), \Psi_{j}^{\dagger}(x^{\prime}) \right] = \delta_{i,j} \delta(x-x^{\prime})$. The two Bose gases are coupled by a tunnelling term as well as contact interactions. The corresponding coupling constants $\Gamma_{jklm}$ follow from the details of the low-energy projection \cite{TBP}. For our purposes, we will assume the diagonal elements to be equal to the usual Lieb-Liniger interaction constant, $\Gamma_{jjjj} = g \, \forall \, j$. Hard-wall boundary conditions are imposed by restricting our problem to states $\ket{\Phi}$ where the density at the boundary has a vanishing eigenvalue: \begin{align} \Psi^{\dagger}_{j}(L)\Psi^{\vphantom{\dagger}}_{j}(L)\ket{\Phi} = \Psi^{\dagger}_{j}(0)\Psi^{\vphantom{\dagger}}_{j}(0)\ket{\Phi} = 0. \label{eq:hard_wall_cond} \end{align} The one-dimensional model \fr{eq:micr_1d_ham} gives an accurate description of the full theory $H_{\rm 3d}$ at energies that are small compared to the energy $E_{\perp,2}$ of the second excited state of the transverse confining potential. In the actual experiments this is a large energy scale. The physics of interest occurs at energies that are small compared to $v/\xi\ll E_{\perp,2}$, where $\xi$ is the coherence length and $v$ the speed of sound. This enables us to make a second low-energy projection by employing bosonization \cite{Haldane1981} \begin{align} \Psi^{\dagger}_{j}(x) \sim \sqrt{\rho_{0} + \partial_{x} \theta_{j}/\pi} \, e^{-i \phi_{j}(x)} \sum_{m=-\infty}^{\infty} B_{m} e^{2 m i (x \pi \rho_{0}+ \theta_{j})} \ . \label{eq:bosonization_identity} \end{align} This provides a low-energy description of (\ref{eq:micr_1d_ham}) in terms of phase fields $\phi_{j}$ and $\theta_{j}$ with a cutoff length scale set by the coherence length of the gases, which for weak interactions is given by $\xi = \pi/m v$ (the sound velocity $v$ is defined below). The hard-wall condition is encoded in the boundary conditions of the $\theta$-fields in a way that is described in Sec. \ref{sub:mode_expansions_for_H0}. Let us first consider the case where interactions and tunnelling between the two gases are absent, meaning that both $T_{\perp}$ and the non-diagonal elements of $\Gamma$ are zero. This leaves us with two Lieb-Liniger models in a hard-wall box, with interaction strength $g$. Under the mapping (\ref{eq:bosonization_identity}), the low-energy physics of this model maps to a pair of Luttinger liquids \begin{align} H_{j} &= \frac{v}{2 \pi} \int_{0}^{L} dx \left[ \frac{1}{K} \left( \partial_{x} \theta_{j}(x) \right)^2 + K \left( \partial_{x} \phi_{j}(x) \right)^2 \right] , \;\;\;\; j=s,a. \label{eq:Luttinger_Liquids} \end{align} Here we have defined (anti)symmetric combinations of the phase fields by \begin{equation} \phi_{s/a} = \phi_{1} \pm \phi_{2}\ ,\quad \partial_{x}\theta_{s/a} = \frac{\partial_{x}\theta_{1} \pm \partial_{x}\theta_{2}}{2}\ . \end{equation} These fields are compact $\phi = \phi + 2 \pi$, $\theta = \theta + \pi$ and fulfil commutation relations \begin{align} \left[ \partial_{x} \theta_{j}(x), \phi_{l}(y) \right] = i \pi \delta_{j,l} \delta(x - y). \end{align} This implies that the canonically conjugate fields to $\phi_{s/a}$ are given by \begin{align} \Pi_{j}(x) \equiv \frac{\partial_{x} \theta_{j}(x)}{\pi}\ . \end{align} For weak interactions, the sound velocity $v$ and Luttinger parameter $K$ are related to the parameters in the Lieb-Liniger model in a simple way \cite{Cazalilla2004} \begin{align} v &= \frac{\rho_{0}}{m} \sqrt{\gamma} \left( 1 - \frac{\sqrt{\gamma}}{2 \pi}\right)^{1/2}, \quad K = \frac{\pi}{2\sqrt{\gamma}} \left( 1 - \frac{\sqrt{\gamma}}{2 \pi}\right)^{-1/2}. \label{eq:micr_pars} \end{align} Here $\gamma = m g/\rho_{0}$ is the dimensionless interaction strength and $\rho_{0}$ the average density of each of the two Bose gases. In the next step we take into account the tunnelling term in \fr{eq:micr_1d_ham} as well as ``off-diagonal'' interaction terms proportional to $\Gamma_{ijkl}$ with not all indices being equal. These introduce relevant perturbations (in the renormalization group sense) with respect to the critical Hamiltonian (\ref{eq:Luttinger_Liquids}). Inserting the bosonization identity (\ref{eq:bosonization_identity}) and assuming $\Gamma$ to be real, permutation symmetric and symmetric under $ 1 \leftrightarrow 2$, we find that the perturbations with the lowest scaling dimensions can be written in the form \begin{align} H_{\perp} = - 2t_{\perp} \int_{0}^{L} dx \, \left[\rho_{0} + \sigma \Pi_{s}(x) \right] \cos \phi_{a}(x) \ , \label{eq:mixing_hamiltonian} \end{align} where $t_\perp$ and $\sigma$ depend on the microscopic parameters in \fr{eq:micr_1d_ham}. Importantly, the two terms in (\ref{eq:mixing_hamiltonian}) get generated independently and we therefore will treat $t_\perp$ and $\sigma$ as independent phenomenological parameters in the following. The Hamiltonian $H_s+H_a+H_\perp$ should be viewed as the result of integrating out high energy degrees of freedom in a renormalization group sense. As $t_\perp$ grows much faster than $t_\perp\sigma$ under the renormalization group it would be unphysical to consider very large values of $\sigma$. We have therefore restricted the numerical analyses reported below to the range $0\leq\sigma\leq 2$. In addition to \fr{eq:mixing_hamiltonian} there are other perturbations with higher scaling dimensions. Their systematic derivation as well as an analysis of their effects will be presented elsewhere \cite{TBP}. In the case $\sigma=0$ the full low-energy theory decouples into symmetric and antisymmetric sectors $H=H_s+H'_a$, where $H'_a$ is the Hamiltonian of a quantum sine-Gordon model \cite{Gritsev2007} \begin{equation} H'_a=\frac{v}{2 \pi} \int_{0}^{L} dx \left[ \frac{1}{K} \left( \partial_{x} \theta_{a}(x) \right)^2 + K \left( \partial_{x} \phi_{a}(x) \right)^2 \right] - 2t_{\perp}\rho_{0}\int_{0}^{L} dx \, \cos \phi_{a}(x). \end{equation} The non-equilibrium dynamics of this model was analyzed for the translationally invariant case in the framework of a SCTDHA in our recent work \cite{Nieuwkerk2018b}. The additional $\sigma$-term in (\ref{eq:mixing_hamiltonian}) couples the sine-Gordon model to the Luttinger liquid Hamiltonian $H_{s}$. In the following we extend the analysis \cite{Nieuwkerk2018b} to \begin{align} H = H_{a} + H_{s}+H_{\perp}. \label{eq:full_hamiltonian} \end{align} \subsection{Time-of-flight measurements} \label{sub:time_of_flight_measurements} In the Vienna experiments \cite{Schumm2005,Hofferberth2007,Hofferberth2008,Gring2012,Langen2013,Langen2015,Schaff2015,Pigneur2017,Schweigler2017} measurements are performed by turning off the trapping potential at some time $t_{0}$, letting the gas expand freely and imaging the three-dimensional boson density after a time-of-flight $t_{1}$. The outcome of each such ``single-shot'' measurement is determined by the eigenvalues $e^{\frac{i}{2} \varphi_{a,s}(x,t)}$ of the bosonic vertex operators $e^{\frac{i}{2}\phi_{a,s}(x,t_{0})}$ \cite{Imambekov2007,Nieuwkerk2018}. As shown in \cite{Nieuwkerk2018}, the result of a single measurement of the boson density after a time-of-flight $t_{1}$ in the regime relevant for the Vienna experiments can be well approximated by \begin{align} &\varrho_{\mathrm{tof}}(x,\vec{r},t_{1},t_0) \simeq \rho_{0} \Big| f(\vec{r},t_{1})\Big|^{2} \times \notag \\ & \Big|\int dx^{\prime}\, G(x-x^{\prime},t_{1})\Big[ e^{i \frac{m}{2t_{1}}\vec{r} \cdot \vec{d}} e^{\frac{i}{2}\left( \varphi_{s}(x^{\prime},t_0)+ \varphi_{a}(x^{\prime},t_0) \right)} + e^{-i \frac{m}{2t_{1}}\vec{r} \cdot \vec{d}} e^{\frac{i}{2}\left( \varphi_{s}(x^{\prime},t_0) - \varphi_{a}(x^{\prime},t_0) \right) } \Big] \Big|^2 . \label{eq:density_tof_bos_longitudinal_approx} \end{align} Here $\vec{d}$ is the distance between the minima of the double well, $x,x'$ and $\vec{r} = (y,z)$ respectively denote longitudinal and transverse coordinates, and $G(x,t)$ is the Green's function for a free particle \begin{align} G(x,t) = \sqrt{\frac{m}{2 \pi i t \gamma}} \exp \left( i \frac{m}{2t \gamma} x^{2} \right). \label{eq:free_GF} \end{align} The function $f(\vec{r},t)$ is an overall envelope whose precise from follows from the details of the trapping potential. By measuring $\varrho_{\mathrm{tof}}$, the system collapses to a simultaneous eigenstate of all $e^{\frac{i}{2}\phi_{a,s}(x,t)}$. The outcome of such measurements can be simulated if one has access to distribution functions of the corresponding eigenvalues $e^{\frac{i}{2}\varphi_{a,s}(x,t)}$. Such distribution functions will be computed in Sec. \ref{sub:full_distribution_functions}. In principle, the observable (\ref{eq:density_tof_bos_longitudinal_approx}) also contains small contributions from the density fields $\Pi_{a,s}(x)$ \cite{Nieuwkerk2018}. In order to treat these, the above description of a projective measurement has to be preceded by a diagonalization of the full observable, which now contains noncommuting fields. We do not pursue this further here because these effects are expected to be small in the regime where our low-energy approximation applies. Experiments typically report results related to the quantity \begin{align} R(x_{0},\vec{r},t_{1},t_0) &= \int_{x_{0}-\ell}^{x_{0}+\ell} dx \, \varrho_{\mathrm{tof}}(x,\vec{r},t_{1},t_0)\nonumber\\ &=\rho_0\Big| f(\vec{r},t_{1})\Big|^2\int^{x_0+\ell}_{x_0-\ell}dx \left[|g_+(x)|^2+|g_-(x)|^2+2{\rm Re}\Big(g_+(x)g_-^*(x)e^{i\frac{m\vec{r}\cdot\vec{d}}{t_1}} \Big)\right] , \label{eq:density_tof_bos_longitudinal_approx_integrated} \end{align} where we have defined \begin{equation} g_\pm(x)=\int dx^{\prime}\, G(x-x^{\prime},t_{1}) e^{\frac{i}{2}\left( \varphi_{s}(x^{\prime},t_0)\pm \varphi_{a}(x^{\prime},t_0) \right)}\ . \label{gpm} \end{equation} \subsection{Mode expansions for the two-component Luttinger liquid} \label{sub:mode_expansions_for_H0} The free boson Hamiltonians $H_{a,s}$ are diagonalized by the mode expansions (see e.g. \cite{Cazalilla2004}) \begin{align} \theta_{j}(x) &= \theta_{j,0} + \frac{\pi x}{L} \delta N_{j} + i \sum_{q>0} \left( \frac{\pi K}{qL} \right)^{1/2} \sin{q x} \left( b_{j,q}^{\vphantom{\dagger}} -b^{\dagger}_{j,q} \right), \label{eq:mass_quench_theta_modes} \\ \phi_{j}(x) &= \phi_{j,0} + \sum_{q>0} \left( \frac{\pi}{qKL} \right)^{1/2} \cos{q x} \left( b_{j,q}^{\vphantom{\dagger}} +b^{\dagger}_{j,q} \right), \end{align} where $q = \frac{\pi n}{L}$, $n \in \mathbb{Z}$, $\left[ b^{\vphantom{\dagger}}_{q},b^{\dagger}_{k} \right] = \delta^{\vphantom{\dagger}}_{q,k}$ and $\left[ \delta N_{j}, \phi_{l,0} \right] = i \delta_{j,l} $. The zero modes $\delta N_{j}$ have integer eigenvalues. The Hamiltonians then take the form \begin{align} H_{j} = \frac{v \pi}{2 L K}\delta N^{2}_{j}+ \sum_{q>0} v q \,b_{j,q}^{\dagger} b_{j,q}^{\vphantom{\dagger}}, \;\;\; j=a,s. \label{eq:ham_modes} \end{align} Going back to Eq. (\ref{eq:bosonization_identity}), we see that the hard-wall condition (\ref{eq:hard_wall_cond}) is guaranteed by choosing the c-number $\theta_{0}$ such that \begin{align} \theta(0) = \theta_{0} \notin \mathbb{Z}. \end{align} It turns out to be useful in what follows to rewrite the mode-expansions in the form \begin{align} \phi_{l}(x,t) &=\sum_{\nu}u^{(l)}_{\nu}(x) \left( b_{\nu}^{\vphantom{\dagger}}(t) + b_{\nu}^{\dagger}(t) \right),\label{eq:phi_vec_form_full} \\ \partial_{x} \theta_{l}(x,t)/\pi &= \sum_{\nu}w^{(l)}_{\nu}(x) \left( b_{\nu}^{\vphantom{\dagger}}(t) - b_{\nu}^{\dagger}(t) \right)\ ,\quad l=a,s\ . \label{eq:Pi_vec_form_full} \end{align} Here we have introduced a multi-index $\nu=(l,q)$ that runs over all positive momenta $q\geq 0$ and the two sectors $l=a,s$ and we have defined \begin{align} u^{(l)}_{(j,q)}(x) &= \delta_{j,l}\begin{cases} \left( \frac{\pi}{q K L} \right)^{1/2} \cos qx , &\text{ if }q \neq 0\ ,\\ \frac{1}{2}\sqrt{\frac{1}{K}}&\text{ if }q = 0\ , \end{cases} \label{eq:u_vect}\\ w^{(l)}_{(j,q)}(x) &= \delta_{j,l}\begin{cases} i \left( \frac{q K}{\pi L} \right)^{1/2} \cos qx, &\text{ if }q \neq 0\ ,\\ \frac{i}{L} \sqrt{K}&\text{ if }q = 0\ , \end{cases}\\ b_{j,0} &= \sqrt{K} \phi_{j,0} - \frac{i}{2} \sqrt{\frac{1}{K}}\delta N_{j}\ . \end{align} \section{Self-consistent time-dependent harmonic approxiation} \label{sec:self_consistent_time_dependent_harmonia_approxiation} Our aim is to determine the non-equilibrium evolution after a \emph{quantum quench}: the system is prepared in a density matrix $\rho(0)$ that does not commute with the Hamiltonian (\ref{eq:full_hamiltonian}). We moreover take the density matrix to be Gaussian for simplicity. The ensuing time evolution is described in the Schr\"odinger picture via the time evolving density matrix \begin{align} \rho(t) = e^{-iH t} \rho(0) e^{iH t}. \label{eq:time evolved_density_matrix} \end{align} As our Hamiltonian of interest \fr{eq:full_hamiltonian} is not solvable we resort to an analysis by means of a SCTDHA \cite{Boyanovsky1998,Sotiriadis2010,Nieuwkerk2018b,Lerose19,Collura20}. Below we generalize the analysis of \cite{Nieuwkerk2018b} to include the nonlinear interaction between the symmetric and antisymmetric sectors. The SCTDHA amounts to replacing the exact time evolution operator with \begin{align} e^{-iH t}\longrightarrow U_{\rm SCH}(t)=Te^{-i \int_{0}^{t} H_{\mathrm{SCH}}(\tau) d \tau}\ , \label{eq:repl_time_evol} \end{align} where \begin{align} H_{\mathrm{SCH}}(t)= H_a &+ H_s + \int dx \bigg[ f(x,t) + \phi_{a}(x) g^{(1)}(x,t) \notag\\ &+ \Pi_{s}(x) g^{(2)}(x,t) + \phi_{a}^{2}(x) h^{(1)}(x,t) + \phi_{a}(x)\Pi_{s}(x) h^{(2)}(x,t)\bigg]. \label{eq:replacement_generic} \end{align} Here the functions $g^{(1,2)}(x,t)$ and $h^{(1,2)}(x,t)$ are determined self-consistently. In order to derive \fr{eq:replacement_generic} we decompose the fields into their space and time dependent expectation values and their fluctuations \begin{align} \phi_{l}(x,t) &= \braket{\phi_{l}(x,t)} + \chi_{l}(x,t),\\ \Pi_{l}(x,t) &= \braket{\Pi_{l}(x,t)} + \pi_{l}(x,t)\ ,\quad l=a,s. \end{align} Substituting this decomposition into the interaction part of the Hamiltonian (\ref{eq:mixing_hamiltonian}) gives \begin{align} H_{\perp} = - 2t_{\perp} \int_{0}^{L} dx \, \left[ \rho_{0} + \sigma \braket{\Pi_{s}} + \sigma \pi_{s} \right] \left[ \cos \braket{\phi_{a}} \cos \chi_{a} - \sin \braket{\phi_{a}} \sin \chi_{a} \right] . \end{align} In the next step we expand the Hamiltonian to quadratic order in fluctuations following \cite{Nieuwkerk2018b}, which gives \begin{align} H_{\perp}\approx - 2t_{\perp} \int dx \, \Bigg[ &\left(\rho_{0} + \sigma \,\pi_{s} - \frac{1}{2} \left( \rho_{0} + \sigma \,\left< \Pi_{s} \right> \right) \chi_{a}^{2} - \sigma \left< \chi_{a} \,\pi_{s} \right> \chi_{a} \right) \cos \left< \phi_{a} \right> \\ - &\left( \left( \rho_{0} + \sigma \left( \pi_{s} + \braket{\Pi_{s}} \right) \right) \chi_{a} - \frac{\sigma}{2} \left< \chi_{a} \,\pi_{s} \right> \chi_{a}^{2} \right) \sin \left< \phi_{a} \right> \Bigg] e^{-\frac{1}{2} \left< \chi_{a}^{2} \right> } + {\rm const} \notag \end{align} After re-expressing this in terms of the original fields $\phi_{a}$ and $\Pi_{s}$, we arrive at Eq. (\ref{eq:replacement_generic}), where the functions $h^{(j)}(x,t)$ and $g^{(j)}(x,t)$ are determined self-consistently by \begin{align} h^{(1)}(x,t) &= \mathrm{Re} \overline{F}(x,t)/2, \nonumber\\ h^{(2)}(x,t) &= \sigma \mathrm{Im} F(x,t),\nonumber\\ g^{(1)}(x,t) &= \mathrm{Im} \overline{F}(x,t) - 2 \braket{\phi_{a}(x,t)} h^{(1)}(x,t) - \braket{\Pi_{s}(x,t)} h^{(2)}(x,t), \nonumber\\ g^{(2)}(x,t) &= -\sigma \mathrm{Re} F(x,t) - \braket{\phi_{a}(x,t)} h^{(2)}(x,t). \label{eq:h_g_def4} \end{align} Here we have defined two functions \begin{align} F(x,t)&= 2t_{\perp} {\rm Tr}\bigg[U_{\rm SCH}(t)\rho(0)U^\dagger_{\rm SCH}(t) e^{i\phi_{a}(x)}\bigg], \nonumber\\ \overline{F}(x,t) &= 2t_{\perp}{\rm Tr}\bigg[U_{\rm SCH}(t)\rho(0)U^\dagger_{\rm SCH}(t) e^{i\phi_{a}(x)} \left(\rho_{0} + \sigma \Pi_{s}(x) \right)\bigg]. \label{eq:def_Fs} \end{align} One subtlety associated with the SCTDHA concerns the zero mode $\phi_{a,0}$. The spectrum of $\phi_{a,0}$ originally reflected the compact nature of the phase field $\phi_a(x)=\phi_a(x)+2\pi$. The latter feature is lost in the SCTDHA, where fluctuations are assumed to be small but the fields themselves take arbitrary real values. \subsection{Gaussian initial states} \label{sub:initial_state} In order to investigate the effects of the $\sigma$-term that couples the symmetric and antisymmetric sectors we want to start from a factorized state and study how correlations develop over time. An important requirement is related to our use of the SCTDHA: its accuracy strongly depends on the initial state obeying Wick's theorem. These two considerations lead us to consider the same class of initial states previously used in the literature \cite{Bistritzer2007,Kitagawa2010,Kitagawa2011} \begin{align} \rho(0) = \rho_a(0)\otimes \rho_{s}(0)\ , \end{align} where $\rho_a(0)=|V, r, \varphi\rangle_{a}{}_a\langle V,r,\varphi|$ is a Gaussian pure state \begin{align} \ket{V, r, \varphi}_{a} = \mathcal{N} \exp \left(\sum_{pq} V_p \left( \mathrm{sech \,} r^T \right)_{pq} b^{\dagger}_{a,q} +\sum_{p,q,k} \frac{1}{2} b^{\dagger}_{a,p} \left( \tanh r \right)_{pq} e^{i \varphi_{q,k}} b^{\dagger}_{a,k} \right)|0\rangle_a . \label{eq:general_gaussian} \end{align} It is useful to define new annihilation operators $\alpha_{a,k}$ satisfying \begin{align} \alpha_{a,k}\ket{V, r, \varphi}_{a}=0\ , \end{align} which are related to the $b$-operators via the canonical transformation \begin{align} b_{a,q} = \sum_{k}( \cosh r )_{qk} \left[ \alpha_{a,k} + V_{k} \right] + \left(\sinh r e^{i \varphi}\right)_{qk}\left[ \alpha^{\dagger}_{a,k} + V^{*}_{k} \right]. \label{eq:a_as_alphas} \end{align} In previous works it has been assumed that the symmetric sector is initialized in a thermal state \cite{Kitagawa2011}. We will follow this assumption, but in order to study the effects of spatial inhomogeneity we take our initial state to be given by a ``displaced'' thermal density matrix \begin{align} \rho_{s} = D(R)\frac{e^{- \beta H_{s}}}{\mathrm{Tr}\,e^{- \beta H_{s}}} D^{\dagger}(R)\ , \end{align} where the displacement operators are defined via \begin{align} D^{\dagger}(R) b_{j,k} D(R) &= b_{j,k} + R_{j,k}\ , \quad j=a,s \ . \end{align} This suggests the definition of displaced annihilation operators $\alpha_{s,k}$ via a constant shift \begin{align} b_{s,k} = \alpha_{s,k} + R_{s,k}\ , \label{eq:def_alpha_s} \end{align} so that \begin{align} \braket{\alpha_{s,k}} = 0 \end{align} on the initial state. Since $\rho_{s}(0)$ satisfies Wick's theorem, it is then completely fixed by the vector $R_{s,k}$ along with connected two-point functions of the fields. Using the mode expansion of $H_{s}$ from Eq. (\ref{eq:ham_modes}) we simply find bosonic occupation numbers for $q>0$, \begin{align} \left< b^{\dagger}_{s,q} b^{\vphantom{\dagger}}_{s,k} \right>_{c} = \frac{\delta_{q,k}}{e^{\beta v q}-1} \equiv n_{(s,q)}, \end{align} the anomalous expectation values $\braket{b_{s,q} b_{s,q^{\prime}}}_{c}$ being zero. For the zero mode, the only expectation values on $\rho_{s}(0)$ that we will need are \begin{align} \braket{\delta N_{s}^{2}}_{c} = \frac{\sum_{n} e^{- \beta \frac{v \pi}{2 K L} n^{2}} n^{2}}{\sum_{n} e^{- \beta \frac{v \pi}{2 K L} n^{2}}}, \quad \braket{\delta N_{s}} = 0, \end{align} where the second identity implies $\mathrm{Im}R_{s,0}(0)=0$. As will become clear in the next section, expectation values involving the field $\phi_{s,0}$ will never be required for the computation of physical quantities. \subsection{Equations of motion} \label{sub:equations_of_motion} The SCTDHA allows for a closed-form expression of the equations of motion. We will work in the Heisenberg picture from here onwards. The SCTDHA guarantees that time evolving annihilation operators can always be written as \begin{align} b_{\nu}(t) = R_{\nu}(t) + S_{\nu \mu}^{\vphantom{\dagger}}(t) \alpha_{\mu}^{\vphantom{\dagger}} + T_{\nu \mu}^{*}(t) \alpha_{\mu}^{\dagger} \label{eq:annihilation_operator_generic} \end{align} where $\alpha_{\mu}$ are a set of bosonic creation and annihilation operators. We choose these to be given by \begin{equation} \alpha_{\nu}=\begin{cases} \alpha_{a,k} & \text{if } \nu=(a,k)\\ \alpha_{s,k} & \text{if } \nu=(s,k) \end{cases}, \end{equation} where the $\alpha_{a,k}$ are defined in \fr{eq:a_as_alphas} and the $\alpha_{s,k}$ in (\ref{eq:def_alpha_s}). For \fr{eq:annihilation_operator_generic} to be a canonical transformation we require \begin{align} S S^{\dagger} - T^{*} T^{T} = \mathbbm{1}\ ,\qquad S T^{\dagger} - T^{*} S^{T} = 0\ . \label{eq:Canonical_Condition_matrix} \end{align} The initial condition on $R, S$ and $T$ are given by \begin{align} R_{\mu}(0) &= \begin{cases} \sum_{q}(\cosh r)_{pq} \, V_q + (\sinh r e^{i \varphi})_{pq} \, V^{*}_q& \text{if }\mu=(a,p),\\ 0 & \text{else,} \end{cases} \notag\\ S_{\nu,\mu}(0) &= \begin{cases} ( \cosh r )_{pq} & \text{if } \nu=(a,p),\ \mu=(a,q),\\ \delta_{pq}& \text{if } \nu=(s,p),\ \mu=(s,q),\\ 0 & \text{else, } \end{cases} \label{eq:IC_RST}\\ T^*_{\nu,\mu}(0) &= \begin{cases} ( \sinh r e^{i\varphi})_{pq} & \text{if } \nu=(a,p),\ \mu=(a,q),\\ 0 & \text{else.} \end{cases}\notag \end{align} We note that the $\alpha_{\mu}$'s satisfy Wick's theorem on the initial state, along with $\left< \alpha_{\mu} \right> =0$ for all $\mu$. The time evolution of any operator is then encoded in the time-dependence of the tensors $R, S$ and $T$, which we will now determine. To this end, we write the SCTDHA Hamiltonian in the generic form \begin{align} H_{\mathrm{SCH}}(t) = b^{\dagger}_{\nu} A_{\nu\mu}^{\vphantom{\dagger}}(t) b^{\vphantom{\dagger}}_{\mu} &+ \frac{1}{2} \left( b_{\nu}^{\dagger} B_{\nu\mu}^{\dagger}(t) b_{\mu}^{\dagger} + b_{\nu} B_{\nu\mu}(t) b_{\mu} \right) \notag\\ &+ C(t)+ D^{\vphantom{\dagger}}_{\nu}(t) \left( b^{\vphantom{\dagger}}_{\nu} + b_{\nu}^{\dagger} \right) + E^{\vphantom{\dagger}}_{\nu}(t) \left( b^{\vphantom{\dagger}}_{\nu} - b_{\nu}^{\dagger} \right). \label{eq:generia_quadratic_ham} \end{align} The matrices $A,B$ and vectors $D,E$ depend on the self-consistency functions $g^{(1,2)}$ and $h^{(1,2)}$, cf. Eqs. (\ref{eq:h_g_def4}), and are given in Appendix \ref{sec:tensors_occurring_in_HSCH}. Inserting the expansion (\ref{eq:annihilation_operator_generic}) into the Heisenberg equation of motion \begin{align} i \frac{d}{dt} b_\nu(t) = U_{\rm SCH}(t)\left[ b_\nu, H_{\mathrm{SCH}}(t)\right]U^\dagger_{\rm SCH}(t) \label{eq:Heisenberg_EOM} \end{align} yields a system of coupled, first order differential equations \begin{align} i\dot{R}_{\nu}(t) &= A_{\nu\mu}(t)R_{\mu}(t) + B^{\dagger}_{\nu\mu}(t) R_{\mu}^{*}(t) + D_{\nu}(t) - E_{\nu}(t) \notag\\ i\dot{S}_{\nu\mu}(t) &= A_{\nu\lambda}(t)S_{\lambda\mu}(t) + B^{\dagger}_{\nu\lambda}(t) T_{\lambda\mu}(t) \label{eq:ODEs}\\ -i\dot{T}_{\nu\mu}(t) &= A^{*}_{\nu\lambda}(t)T_{\lambda\mu}(t) + B^{T}_{\nu\lambda}(t) S_{\lambda\mu}(t). \notag \end{align} This system of ODE's is \textit{nonlinear}: as a result of the self-consistency functions (\ref{eq:h_g_def4}) on which the tensors $A,B,D$ and $E$ depend, these tensors are themselves functions of $R, S$ and $T$, which therefore enter the system (\ref{eq:ODEs}) in nonlinear combinations. To simplify some of the following equations we introduce linear combinations \begin{align} Q^{\vphantom{\dagger}}_{\nu \mu}(t) = S^{\vphantom{\dagger}}_{\nu \mu}(t) + T^{\vphantom{\dagger}}_{\nu \mu}(t), \;\;\;\; \overline{Q}^{\vphantom{\dagger}}_{\nu \mu}(t) = S^{\vphantom{\dagger}}_{\nu \mu}(t) - T^{\vphantom{\dagger}}_{\nu \mu}(t)\ . \label{eq:def_Q} \end{align} In terms of these functions mode expansions of the time evolved fields take the form \begin{align} \phi_{a}(x,t) &= \sum_{\nu} u^{(a)}_{\nu}(x) \left( 2 \mathrm{Re} R^{\vphantom{*}}_{\nu}(t) + \sum_{\mu} \left[ Q^{\vphantom{\dagger}}_{\nu\mu}(t) \alpha_{\mu}^{\vphantom{\dagger}} + Q^{*}_{\nu \mu}(t) \alpha_{\mu}^{\dagger} \right] \right) \label{eq:phi_as_alphas}\ ,\\ \Pi_{l}(x,t) &= \sum_{\nu} w^{(l)}_{\nu}(x) \left( 2i \mathrm{Im} R^{\vphantom{*}}_{\nu}(t) + \sum_{\mu}\left[ \overline{Q}_{\nu \mu}(t) \alpha_{\mu}^{\vphantom{\dagger}} - \overline{Q}^{*}_{\nu \mu}(t) \alpha_{\mu}^{\dagger} \right] \right). \label{eq:Pi_as_alphas} \end{align} The functions (\ref{eq:def_Fs}) can then be computed using Wick's theorem for the $\alpha$-operators, based on the above expressions. This closes the system of ODE's (\ref{eq:ODEs}). The zero mode in the symmetric sector $\phi_{s,0}$ reflects the compact nature of the phase field $\phi_s$ and therefore needs to be treated separately from the finite momentum modes. We therefore define a field \begin{align} \widetilde{\phi_{s}}(x) \equiv \phi_{s}(x) - \phi_{s,0}, \end{align} which time evolves as \begin{align} \widetilde{\phi_{s}}(x,t) &= \sum_{\nu\neq (s,0)} u^{(s)}_{\nu}(x) \left( 2 \mathrm{Re} R^{\vphantom{*}}_{\nu}(t) + \sum_{\mu} \left[ Q^{\vphantom{\dagger}}_{\nu\mu}(t) \alpha_{\mu}^{\vphantom{\dagger}} + Q^{*}_{\nu \mu}(t) \alpha_{\mu}^{\dagger} \right] \right). \end{align} Importantly the zero mode $\phi_{s,0}$ does not get generated under Heisenberg time evolution of other fields. This is easily checked by inspection of the Hamiltonian (\ref{eq:full_hamiltonian}) which is seen to not involve $\phi_{s,0}$. This in turn implies that the zero mode cannot appear on the rhs of the Heisenberg equation of motion (\ref{eq:Heisenberg_EOM}). Since we can express the zero mode at $t=0$ as \begin{align} \phi_{s,0} = \left( \alpha_{(s,0)}^{\vphantom{\dagger}} + \alpha_{(s,0)}^{\dagger} \right)/\sqrt{4K}, \end{align} we conclude that this linear combination of $\alpha$-operators does not appear in the sums over modes in (\ref{eq:phi_as_alphas},\ref{eq:Pi_as_alphas}) except in the expansion for $\phi_{s}(x,t)$, where it occurs in the term with $\nu = (s,0)$. This directly leads to \begin{align} \mathrm{Re}\, Q_{\nu,(s,0)}(t) &= 0 \;\;\; \forall \;\nu \neq (s,0) ,\qquad \mathrm{Im}\, \overline{Q}_{\nu,(s,0)}(t) = 0 \;\;\; \forall \; \nu. \end{align} \subsection{Self-consistent expectation values} \label{sub:self_consistent_expectation_values} \subsubsection{One-point functions} As all relevant one-point functions of $\alpha_{\nu}$ and $\delta N_{s}$ are zero we have \begin{align} \left< \widetilde{\phi_{s}}(x,t) \right> &= 2\sum_{\nu \neq (s,0)} u^{(s)}_{\nu}(x) \mathrm{Re} R^{\vphantom{*}}_{\nu}(t) \label{eq:phi_s_1pt}\ , \\ \left< \phi_{a}(x,t) \right> &= 2\sum_{\nu} u^{(a)}_{\nu}(x) \mathrm{Re} R^{\vphantom{*}}_{\nu}(t) \label{eq:phi_a_1pt}\ ,\\ \left< \Pi_{l}(x,t) \right> &= 2i \sum_{\nu} w^{(l)}_{\nu}(x) \mathrm{Im} R^{\vphantom{*}}_{\nu}(t). \label{eq:Pi_1pt} \end{align} \subsubsection{Two-point functions} Comparing the definitions from Section \ref{sub:initial_state} to the initial conditions (\ref{eq:IC_RST}), we find that for any $\nu, \mu \neq (s,0)$, \begin{align} {\mathfrak g}_{\nu,\mu}=\left< \alpha_{\nu}^{\dagger} \alpha_{\mu}^{\vphantom{\dagger}} \right> &= \left< \alpha_{\nu}\alpha_{\mu}^{\dag} \right> -\delta_{\nu,\mu}= \delta_{\nu,\mu}\begin{cases} 0 & \text{if } \nu\in\{(a,q),(s,0)\}\\ n_{(s,q)} & \text{if } \nu\in\{(s,q)|q\neq 0\} \end{cases}. \end{align} If we define $P^{(s)}_{0}$ to be the projector on the symmetric zero modes, along with its complement $\tilde{\mathbbm{1}} = \mathbbm{1} - P^{(s)}_{0}$, we then find the following connected two-point functions \begin{align} \left< \phi_{j}(x,t)\phi_{l}(y,t) \right>_{c} &= u^{(j)}(x) \bigg(2{\rm Re}(Q^{*} \mathfrak{g} Q^{T})+ Q \tilde{\mathbbm{1}} Q^{\dagger} + \frac{\braket{\delta N^{2}_{s0}}}{K} \mathrm{Im}Q P^{(s)}_{0} \mathrm{Im}Q^{T} \bigg) u^{(l)}(y)\ , \label{eq:phi_2pt} \nonumber\\ \left< \phi_{j}(x,t)\Pi_{l}(y,t) \right>_{c} &= -u^{(j)}(x) \bigg(2i{\rm Im}(Q \mathfrak{g}\overline{Q}^{\dagger})+ Q \tilde{\mathbbm{1}} \overline{Q}^{\dagger} +i \frac{\braket{\delta N^{2}_{s0}}}{K} \mathrm{Im}Q P^{(s)}_{0} \mathrm{Re}\overline{Q}^{T} \bigg) w^{(l)}(y)\ . \end{align} In the above, indices on all matrices and vectors have been suppressed for conciseness. If we want to consider the field $\widetilde{\phi_{s}}$ instead of $\phi_{s}$, we need leave out the symmetric zero mode term. This leads, for instance, to \begin{align} \left< \widetilde{\phi_{s}}(x,t)\Pi_{l}(y,t) \right>_{c} &= u^{(j)}(x) \left(P^{(s)}_{0}-\mathbbm{1} \right) \times \notag \\ &\times\bigg(2i{\rm Im}(Q \mathfrak{g}\overline{Q}^{\dagger})+ Q \tilde{\mathbbm{1}} \overline{Q}^{\dagger} +i \frac{\braket{\delta N^{2}_{s0}}}{K} \mathrm{Im}Q P^{(s)}_{0} \mathrm{Re}\overline{Q}^{T} \bigg)w^{(l)}(y)\ , \end{align} and analogous modifications for $\left< \widetilde{\phi_{s}}(x,t)\widetilde{\phi_{s}}(y,t) \right>_{c}$ and $\left< \widetilde{\phi_{s}}(x,t)\phi_{a}(y,t) \right>_{c}$. \subsection{Full distribution functions} \label{sub:full_distribution_functions} Individual measurement outcomes in interference experiments of interest \cite{Pigneur2017} are fully determined by the eigenvalues $\varphi_{a}$ and $\widetilde{\varphi_{s}}$ of the phase fields $\phi_{a}$ and $\widetilde{\phi_{s}}$ \cite{Nieuwkerk2018}, cf. Eq. (\ref{eq:density_tof_bos_longitudinal_approx}). To model the outcomes of such measurements we therefore require the time-dependent distribution functions for $\varphi_{a}$ and $\widetilde{\varphi_{s}}$. These can be determined in the framework of the SCTDHA \cite{Nieuwkerk2018b,Collura20}. For the case at hand, we first expand the eigenvalues of the phase fields as Fourier series, \begin{align} \widetilde{\varphi_{s}}(x,t) &= \sum_{\mu \neq (s,0)} u^{(s)}_{\mu}(x) f_{\mu,t}\ ,\qquad \varphi_{a}(x,t) = \sum_{\mu} u^{(a)}_{\mu}(x)f_{\mu,t}\ . \label{eq:phase_evals} \end{align} Here we have again used our multi-index notations $\mu = (j,q)$, where $j=a,s$ labels the sector and $q$ the momentum. Each measurement selects a particular set of Fourier coefficients and we denote the averages over many measurements by \begin{equation} \overline{ f_{\mu,t}}\ ,\quad \overline{ f_{\mu,t}\ f_{\nu,t}}\quad \text{etc}. \end{equation} The mean values for the Fourier coefficients can be read off from the one-point functions calculated earlier, \emph{cf.} Eqs. (\ref{eq:phi_s_1pt},\ref{eq:phi_a_1pt}) \begin{align} \overline{{f}_{\mu,t}} = 2\mathrm{Re}\, R_{\mu}(t)\ . \label{eq:fs_mean} \end{align} The object of interest is then the time-dependent joint probability distribution $P$ of Fourier coefficients $\{\mathfrak{f}_{\mu} \}$. Within the SCTDHA all cumulants of $\phi_{a,s}$ other than the variance vanish, so that this probability distribution is Gaussian \begin{align} P(\{\mathfrak{f}_{\mu} \},t) = \frac{1}{(2 \pi)^{N/2}} \frac{1}{\sqrt{\det M(t)}} \mathrm{exp} \left( - \frac{1}{2} \sum_{\mu,\nu}\left( \mathfrak{f}_{\mu} - \overline{f_{\mu,t}} \right) M^{-1}_{\mu \nu}(t)\left( \mathfrak{f}_{\nu} - \overline{f_{\nu,t}} \right) \right)\ . \label{eq:prob_distr_fs} \end{align} Here $N$ is the total number of Fourier modes retained in \fr{eq:phase_evals}. Noting that \begin{align} \left< \phi_{j}(x,t)\phi_{l}(y,t) \right>_{c} = u^{(j)}_{\mu}(x) \big( \overline{f_{\mu,t}f_{\nu,t}}-\overline{f_{\mu,t}}\ \overline{f_{\nu,t}}\big) u^{(l)}_{\nu}(y), \;\;\;\; j,l \in \{a,s\} \end{align} and comparing to Eq. (\ref{eq:phi_2pt}), we can directly read off the covariance matrix as well: \begin{align} M(t) = 2{\rm Re}(Q \mathfrak{g} Q^{\dagger}) + QQ^{\dagger} + \frac{\braket{\delta N^{2}_{s0}}}{K} \mathrm{Im}Q P^{(s)}_{0} \mathrm{Im}Q^{T}. \label{eq:fs_cov_mat} \end{align} Having obtained a time-dependent probability distribution for the coefficients $\{f_{\mu,t} \}$, we can directly model experiments: we draw coefficients $\{f_{\mu,t} \}$ from the distribution (\ref{eq:prob_distr_fs}), reconstruct the corresponding eigenvalues (\ref{eq:phase_evals}), and insert these in the time-of-flight density (\ref{eq:density_tof_bos_longitudinal_approx}) to compute the measured density profile. We note that in the notations used above the set $\{\mathfrak{f}_{\mu} \}$ contains the non-physical Fourier coefficient $\mathfrak{f}_{(s,0)}$. This quantity does not enter the observable (\ref{eq:density_tof_bos_longitudinal_approx}), and can simply be discarded, whenever a set of coefficients is drawn from $P \left( \{\mathfrak{f}_{\mu} \}, t \right) $. By repeating the above procedure for modelling a measurement many times over we can reconstruct the full distribution function of any observable that depends only on the phase fields $\phi_{a,s}$. In what follows, we will focus on the ``interference term'' in the spatially integrated density after time-of-flight $R_{\mathrm{tof}}(x_{0},\vec{r},t_{1},t_0)$ defined in (\ref{eq:density_tof_bos_longitudinal_approx_integrated}). The eigenvalues of this observable are proportional to \begin{align} \mathcal{I}_{\ell}\left( \{f_{\mu} \},x_{0},t_{0},t_{1} \right) = \frac{1}{\ell} &\int_{x_{0}-\ell/2}^{x_{0}+\ell/2} dx\, g_+(x)g_-^*(x) \label{eq:interference_term} \end{align} where $g_\pm(x)$ are defined in \fr{gpm} and are related to the coefficients $f_{\mu}$ via (\ref{eq:phase_evals}). Motivated by the experimental data analyses of Refs~\cite{Kuhnert2013,Smith2013,Pigneur2018} we parametrize the interference term (\ref{eq:interference_term}) as \begin{align} \mathcal{I}_{\ell}\left( \{f_{\mu} \},x_0,t_0,t_1 \right) = C_\ell(x_0,t_0,t_1,\{f_{\mu} \}) e^{i \Phi_\ell(x_0,t_0,t_1,\{f_{\mu} \})}\ . \label{eq:interference_eval_parametr} \end{align} By drawing many sets $\{\mathfrak{f}_{\mu} \}$ of coefficients from the distribution function $P \left(\{\mathfrak{f}_{\mu} \}, t \right)$ and plotting the resulting values of $\Phi_{\ell}$ or $C_{\ell}$ in a normalized histogram, we converge to probability distributions $P_{\Phi_{\ell},C_{\ell}}$ for these quantities. These distribution functions can formally be written as \begin{align} P_{\Phi_{\ell}}(\alpha,t_{0},t_{1}) &= \left( \prod_{\mu}\int d \mathfrak{f}_{\mu} \right) \delta \left( \alpha - \mathrm{Arg} \, \mathcal{I}_{\ell}\left( \{f_{\mu} \},x_{0},t_{0},t_{1} \right) \right) P \left(\{\mathfrak{f}_{\mu} \}, t_{0} \right)\ , \\ P_{C_{\ell}}(\gamma,t_{0},t_{1}) &= \left( \prod_{\mu}\int d \mathfrak{f}_{\mu} \right) \delta \left( \gamma - \mathrm{Abs} \, \mathcal{I}_{\ell}\left( \{f_{\mu} \},x_{0},t_{0},t_{1} \right) \right) P \left(\{\mathfrak{f}_{\mu} \}, t_{0} \right)\ . \end{align} \section{Results for experimentally relevant initial states} \label{sec:results_for_experimentally_relevant_initial_states} \subsection{Choice of initial state} \label{sub:specific_initial_state} We now specialize to an initial state that is often used in the literature, see e.g. \cite{Bistritzer2007,Kitagawa2010,Kitagawa2011}. In these works, a quasi-classical argument is used to conjecture how the state of a pair of elongated Bose gases follows from the splitting process of a single gas. It is reasoned that when splitting a gas, each particle has an equal probability to end up in well $1$ or in well $2$. The relative particle number resulting from this poisson process is thus a stochastic variable with mean zero and variance proportional to the particle density. Assuming short-range correlations, one arrives at \begin{align} \left< \Pi_{a}(x,0) \Pi_{a}(y,0) \right>_{c} = \frac{\eta \rho_{0}}{2} \delta_{\xi}(x-y), \label{eq:dens_two_pt_init_Kitagawa} \end{align} with $\eta$ a phenomenological parameter which we will set to $1$. Following \cite{Kitagawa2011}, the delta function above is understood as a flat sum over plane waves running up to momentum $\pi/\xi$. To reproduce this initial two-point function, it suffices to use the initial state (\ref{eq:general_gaussian}), with $r$ a real and diagonal matrix and $\varphi = 0$. The resulting initial condition on $\overline{Q}$, \begin{align} \overline{Q}(0)_{(a,j)(a,k)} = \delta_{jk} e^{-r_{jj}}\ , \end{align} then leads to \begin{align} \left< \Pi_{a}(x,0)\Pi_{a}(y,0) \right>_{c} = \frac{K}{L^{2}} e^{-2r_{00}} + \sum_{j>0} \frac{qK}{\pi L} \cos \left( q_{j} x \right) \cos \left( q_{j} y \right) e^{-2 r_{jj}}\ . \label{eq:phi_tpt_generic_IC} \end{align} Comparing Eqs. (\ref{eq:dens_two_pt_init_Kitagawa}) and (\ref{eq:IC_RST}), we can thus read off \begin{align} e^{-2 r_{jk}} = \delta_{jk}\begin{cases} \frac{L \eta \rho_{0}}{2K} &\text{ if } q=0\ ,\\ \frac{\pi \eta \rho_{0}}{q K} &\text{ if } q>0\ , \end{cases} \end{align} for the antisymmetric sector. For the symmetric sector, we again follow Ref. \cite{Kitagawa2011}: the above quasiclassical splitting argument applies to the relative degrees of freedom, leaving the symmetric combinations of densities and phases unaltered. In \cite{Kitagawa2011}, the symmetric sector is therefore taken to be in a finite temperature equilibrium state. We adhere to this conjecture here and use the thermal density matrix described in Section \ref{sub:initial_state}, thereby fixing the initial conditions for both $T$ and $S$ in conjunction with the above discussion. Finally, the initial conditions for $R$ can be used to enforce various initial profiles on the density and phase fields in both sectors, which we will explore in Sec. \ref{sub:time_evolution} below. \subsection{Experimental parameters} \label{sub:experimental_parameters} We fix the parameters for our plots by following Ref.~\cite{Kuhnert2013}: the one-dimensional density is taken to be $\rho_{0} = 45 \, \mu \mathrm{m}^{-1}$, the healing length is $\xi= \hbar\pi / mv = \pi \times 0.42 \,\mu \mathrm{m}$, the sound velocity is given by $v \approx 1.738 \cdot 10^{-3} \, \mathrm{m}/\mathrm{s}$ and the Luttinger parameter in our conventions is $K \approx 28$. We take the one-dimensional box size as large as we can achieve for a given value of the cutoff length scale, which amounts to $L = 80 \,\mu \mathrm{m}$. This is comparable to the size reported in \cite{Kuhnert2013}. We work at a temperature of $5 \,\mathrm{nK}$ throughout. In all figures, time is measured in units of the \textit{traversal time} \cite{EFreview}, $t_{\mathrm{tr}}=L/2v$, which is the time it takes for a light cone to reach the edge of the system from the centre of the box. We have chosen the value of the phenomenological tunnel coupling strength $t_{\perp}$ by considering a trade-off: we would like to maximize the Josephson frequency in order to follow as many density-phase oscillations as possible, whilst keeping the gap $\Delta$ of the model's dispersion relation no larger than a small fraction of the energy cutoff in the Luttinger liquid. The latter is equal to $\epsilon_{c} = v \pi / \xi$, with $\xi$ the cutoff length scale. We have aimed for the ratio of the gap to the cutoff to be no larger than $\Delta/\epsilon_{c} = 0.125$, which we can guarantee by taking $t_{\perp} = 15 \,\mathrm{Hz}$ for the above parameters. The only exception to the above is Fig. \ref{fig:BC_compare}, where we take $t_{\perp} \approx 1.17\,\mathrm{Hz}$ following Ref. \cite{Nieuwkerk2018b}, to enable a comparison with the case of periodic boundary conditions as presented there. \subsection{Time evolution} \label{sub:time_evolution} We now consider time evolution under the SCTDHA Hamiltonian (\ref{eq:replacement_generic}), with the initial condition described in Sec. \ref{sub:specific_initial_state}. Throughout, we choose $R(0)$ such that \begin{align} \left< \phi_{a}(x,0) \right> = 0.2, \;\;\;\;\left< \Pi_{a}(x,0) \right> = 0. \label{eq:profiles_flat_a} \end{align} The one-point functions $\big< \widetilde{\phi_{s}}(x,0) \big>$ and $\left< \Pi_{s}(x,0) \right>$ will be given different spatial profiles, to investigate the effects of broken translational invariance. \subsubsection{\secfix{No coupling between symmetric and antisymmetric sectors ($\sigma=0$)}} We will start from the situation where \begin{align} \left< \widetilde{\phi_{s}}(x,0) \right> = 0 = \left< \Pi_{s}(x,0) \right>\ . \label{eq:profiles_flat} \end{align} and $\sigma = 0$. This will serve as our benchmark, as it most closely resembles the translationally invariant scenario described in \cite{Nieuwkerk2018b} in which the (anti)symmetric sectors remain uncorrelated. It is characterized by Josephson oscillations between density and phase, see Fig. \ref{fig:benchmark_plots}(a), with a phase variance that initially grows, and then shows oscillating behavior, see Fig. \ref{fig:benchmark_plots}(b). \begin{figure}[htbp!] \centering (a)\includegraphics[width=0.46\textwidth]{Phase_midpoint0L.pdf} (b)\includegraphics[width=0.46\textwidth]{Phase_var0L.pdf} \caption{(a) Josephson oscillations of relative density (arbitrary units) and phase (radians) at the centre of the gas, $x_{0} = L/2$. (b) initial growth and oscillations of the variance of the relative phase. The initial phase and density profiles profiles are chosen according to Eqs. (\ref{eq:profiles_flat_a},\ref{eq:profiles_flat}) and coupling between the sectors is absent in these pictures, meaning $\sigma = 0$.} \label{fig:benchmark_plots} \end{figure} \begin{figure}[htbp!] \centering (a)\includegraphics[width=0.46\textwidth]{Phase_midpoint_compBox.pdf} (b)\includegraphics[width=0.46\textwidth]{Phase_var_compBox.pdf} \caption{Comparison between results for box boundary conditions (blue) and periodic boundary conditions (red). The curves are in perfect agreement until the traversal time $t_{\mathrm{tr}}=L/2v$, after which deviations occur. (a) Josephson oscillations of phase (radians) at the centre of the gas, $x_{0} = L/2$. (b) initial growth and subsequent oscillations in the variance of the relative phase.} \label{fig:BC_compare} \end{figure} To connect with our previous work \cite{Nieuwkerk2018b} we include a comparison between results from that paper, where periodic boundary conditions were used, and the results derived for a box geometry in the present paper. Fig. \ref{fig:BC_compare} shows that the two geometries give extremely similar results in the centre of the trap for times below the traversal time, whereas deviations do occur after this time. It should also be noted that in \cite{Nieuwkerk2018b} and Fig. \ref{fig:BC_compare}, results are presented for smaller tunnel couplings ($t_{\perp} \approx 1.17\, \mathrm{Hz}$) than in the rest of this paper. The reason for choosing these values in \cite{Nieuwkerk2018b} was that for a relatively shallow field potential, the inharmonicity of the cosine in the sine-Gordon model manifests itself more strongly, making deviations from the purely quadratic theory more apparent. For the purposes of this paper, however, it is more interesting to look at relatively large tunnel-couplings ($t_{\perp} = 15\, \mathrm{Hz}$, see Sec. \ref{sub:experimental_parameters}), as this enhances the coupling between the sectors in which we are interested. \subsubsection{\secfix{Finite coupling between sectors ($\sigma>0$) and homogeneous initial conditions}} We next investigate different values of the coupling constant $\sigma$, and the resulting mixing between the sectors. Fig. \ref{fig:flat_profile_sigmas} shows results for $\sigma=0,1/2,1,3/2,2$, starting from completely flat profiles, as in Eqs. (\ref{eq:profiles_flat_a}), (\ref{eq:profiles_flat}). When increasing $\sigma$, the phase oscillations remain essentially unchanged. A stronger effect is visible in the covariance between $\phi_{a}$ and $\widetilde{\phi_{s}}$, however. To quantify this, we define \begin{align} C(x,t) \equiv \frac{\big< \widetilde{\phi_{s}}(x,t) \phi_{a}(x,t) \big>_{c} }{\sqrt{\big< \widetilde{\phi_{s}}(x,t) \widetilde{\phi_{s}}(x,t) \big>_{c} \big< \phi_{a}(x,t) \phi_{a}(x,t) \big>_{c}}}\ . \label{eq:covariance} \end{align} As can be seen in Fig. \ref{fig:flat_profile_sigmas}(b), the covariance $C(x,t)$ increases to appreciable values as $\sigma$ is increased. We also note that for larger values of $\sigma$, the variance of the relative phase increases somewhat for times below the traversal time, see Fig. \ref{fig:var_phi}. \begin{figure}[ht] \centering (a)\includegraphics[width=0.46\textwidth]{Phase0L.pdf} (b)\includegraphics[width=0.46\textwidth]{Phase_as0L.pdf}\\ \caption{(a) time evolution of the phase in the antisymmetric sector at the box centre $x_{0}=L/2$. Curves are displayed for different values of $\sigma$, with a flat intial density profile $\langle\Pi_s(x)\rangle=0$. A change of $\sigma$ has no appreciable effect on this observable. (b) a somewhat stronger effect is the development of correlations between $\phi_{a,s}$, where the normalized covariance from Eq. (\ref{eq:covariance}) is displayed, for $x_{0}=L/2$.} \label{fig:flat_profile_sigmas} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{Phase_var_comp.pdf} \caption{Variance of the relative phase, for $\sigma = 0$ (blue) and $\sigma= 2$ (red). A slight increase in the variance is visible for the larger value of $\sigma$ for times below $t_{\mathrm{tr}}=L/2v$} \label{fig:var_phi} \end{figure} It is also instructive to consider the energy flow between different terms in the Hamiltonian. To this end we define the following quantities \begin{align} e_{a,0}(t) &= \frac{\braket{H_{a}}}{L}\ , \quad e_{a, \perp}(t) = - \frac{2 t_{\perp} \rho_{0}}{L} \int_{0}^{L} dx \, \braket{\cos \phi_{a}(x)}\ ,\quad e_{sG}(t) = e_{a,0}(t) + e_{a, \perp}(t)\ , \nonumber\\ e_{\rm int}(t) &= - \frac{2 T_{\perp} \sigma}{L} \int_{0}^{L} dx \, \braket{\Pi_{s}(x) \cos \phi_{a}(x)}, \quad e_{s}(t) = e_{\mathrm{int}}(t) + \braket{H_{s}(t)}/L\ .\label{eq:energies4} \end{align} We note that the total energy density, which is given by $e_{sG}(t) + e_{\rm int}(t)+\langle H_s\rangle/L$, is independent of time, as required for a closed quantum system. Since we are interested in the time dependence of the various energy densities we subtract their values in the initial state and consider \begin{align} \Delta e_{j}(t) \equiv e_{j}(t) - e_{j}(0)\ . \end{align} To quantify the effects of the $\sigma$-coupling on the flow of energy from and to the sine-Gordon model we show $\Delta e_{\rm SG}(t)$ in Fig. \ref{fig:E_flat}. To ascertain which fraction of the energy change is due to the kinetic and interaction parts of the sine-Gordon model we also show $\Delta e_a(t)$ and $\Delta e_{\perp,a}(t)$ in Fig. \ref{fig:E_flat}(a). We observe that the change in $\Delta e_{\rm SG}(t)$ is very small, as significantly larger changes in $\Delta e_a(t)$ and $\Delta e_{\perp,a}(t)$ largely compensate each other. In Fig. \ref{fig:E_flat}(b) we show how much of the energy from the sine-Gordon model $\Delta e_{\rm SG}(t)$ ends up in the new interaction term $e_{\mathrm{int}}(t)$ and how much goes to $\braket{H_{s}(t)}/L$. \begin{figure}[ht] \centering (a)\includegraphics[height=0.37\textwidth]{E_compare0_sigma2L.pdf} (b)\includegraphics[height=0.37\textwidth]{E_compare_int_flat_prof_sigma2L.pdf} \caption{Energy flow between the different terms in Eqs. (\ref{eq:energies4}), as a ratio with the reference scale $e_{r} = \braket{H_{s}(0)}/L$.} \label{fig:E_flat} \end{figure} \subsubsection{\secfix{Finite coupling between sectors ($\sigma>0$) and inhomogeneous initial conditions}} As a next step, we investigate the effect of initial density profiles $\braket{\Pi_{s}(x)}$ that are spatially inhomogeneous. These profiles will evolve in time as is shown in Fig. \ref{fig:Pi_profiles} (a,b). \begin{figure}[ht] \centering (a)\includegraphics[width=0.45\textwidth]{profiles1.pdf} (b)\includegraphics[width=0.45\textwidth]{profiles2.pdf} \caption{Examples of the time evolution of the density profile for $\sigma=0$. The initial profile in (a) is symmetric around the origin, while the one in (b) is not.} \label{fig:Pi_profiles} \end{figure} The profiles $\braket{\phi_{a}(x)}$ and $\braket{\Pi_{a}(x)}$ are strongly affected by the strength of the $\sigma$-coupling to the inhomogeneous profile $\braket{\Pi_{s}(x)}$ and develop inhomogeneities as a consequence. This is illustrated in Figs. \ref{fig:profiles_non_flat}(a,b) and has repercussions for the Josephson oscillations. \begin{figure}[ht] \centering (a)\includegraphics[width=0.45\textwidth]{profiles_phi_a.pdf} (b)\includegraphics[width=0.45\textwidth]{profiles_pi_a.pdf} \caption{(a) The time and position dependence of $\braket{\phi_{a}(x)}$ corresponding to the same initial condition as Fig.~\ref{fig:Pi_profiles}(a) with $\sigma=2$. We see that the initially flat profile develops inhomogeneities due to the sector coupling. (b) the same as panel (a), but showing $\braket{\Pi_{a}(x)}$.} \label{fig:profiles_non_flat} \end{figure} The latter now displays spatial variations, which are caused by an effective Josephson frequency that has become $\sigma$- and position-dependent due to the presence of the space-dependent $\Pi_{s}(x)$-field in the interaction term. This local and $\sigma$-dependent Josephson frequency is illustrated in Fig. \ref{fig:phi_midpoint_nonflat}. \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{Phase1L.pdf} \quad \includegraphics[width=0.48\textwidth]{Phase2L.pdf} \caption{Time dependence of the relative phase in the centre of the box for the same initial conditions as Fig.~\ref{fig:Pi_profiles}(a) and (b), respectively.} \label{fig:phi_midpoint_nonflat} \end{figure} The spatial average of the phase, which is equal to the zero mode $\phi_{a0}$, does not show any $\sigma$-dependence in its Josephson frequency, see Figs. \ref{fig:ZM_cov_nonflat1},\ref{fig:ZM_cov_nonflat2}. In this case, however, a $\sigma$-dependent modulation in the amplitudes is visible: the Josephson oscillations at different points in the box move out of phase due to the spatially varying Josephson frequency mentioned above. This leads to a decrease in the spatial average. \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{Phase_ZM1L.pdf} \quad \includegraphics[width=0.48\textwidth]{Phase_as1L.pdf} \caption{Time dependence of the relative phase and the covariance $C(x_0,t)$ \fr{eq:covariance} in the centre of the box for the profiles shown in panel (a) of Fig. \ref{fig:Pi_profiles}.} \label{fig:ZM_cov_nonflat1} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=0.48\textwidth]{Phase_ZM2L.pdf} \quad \includegraphics[width=0.48\textwidth]{Phase_as2L.pdf} \caption{Time dependence of the relative phase and the covariance $C(x_0,t)$ \fr{eq:covariance} in the centre of the box for the profiles shown in panel (b) of Fig. \ref{fig:Pi_profiles}.} \label{fig:ZM_cov_nonflat2} \end{figure} \begin{figure}[ht!] \centering \includegraphics[height=0.38\textwidth]{E_compare1_sigma1L.pdf} \includegraphics[height=0.38\textwidth]{E_compare1_sigma2L.pdf} \caption{Energy flow between different terms in Eqs. (\ref{eq:energies4}), as a ratio with the reference scale $E_{r} = \braket{H_{s}(0)}$. Results are shown for the density profile from Fig. \ref{fig:Pi_profiles}(a), with $\sigma = 1$ (left panel) and $\sigma = 2$ (right panel).} \label{fig:var_energy_flows_compare} \end{figure} For an inhomogeneous profile of $\braket{\Pi_{s}(x,0)}$, the covariance grows in time, in resemblance with the homogeneous case. This happens to an extent that is roughly proportional to $\sigma$. The same can be said of the energy flow between the (anti)symmetric sectors, as shown in Fig. \ref{fig:var_energy_flows_compare}. We see that the effects of the sector coupling term become stronger when we increase $\sigma$, but in the window of applicability of our bosonization based approach the effects remain small. \subsubsection{Distribution functions of the density after time of flight} As described in Sec. \ref{sub:full_distribution_functions}, our formalism allows the construction of distribution functions for the measured density after time-of-flight expansion. As a proof of principle we present such distribution functions in Fig. \ref{fig:FDFs}, for the observables $\Phi_{\ell}$ and $C_{\ell}$ defined in Eq. (\ref{eq:interference_eval_parametr}). \begin{figure}[htbp!] \centering (a)\includegraphics[width=0.36\textwidth]{FDFRun2p2_alpha.pdf} \includegraphics[width=0.096\textwidth]{Colourbar.png} (b)\includegraphics[width=0.36\textwidth]{FDFRunAbs2p2_gamma.pdf} \caption{Distribution functions $P_{\Phi_{\ell}}(\alpha,t,t_{1})$ (a) and $P_{C_{\ell}}(\gamma,t,t_{1})$ for the observables $\Phi_{\ell}$ and $C_{\ell}$ defined in Eq. (\ref{eq:interference_eval_parametr}). We choose a time of flight $t_{1} = 15\, \mathrm{ms}$ and integration length $\ell = 20 \, \mu \mathrm{m}$. The density profile used for these plots is homogeneous, with $\sigma = 1$.} \label{fig:FDFs} \end{figure} \section{Conclusion} \label{sec:conclusion} We have extended the theory for non-equilibrium dynamics in pairs of elongated, tunnel-coupled Bose gases using a self-consistent time-dependent harmonic approximation (SCTDHA). In contrast to earlier works, we have studied the effect of a relevant perturbation which couples the (anti)symmetric sectors describing (anti)symmetric combinations of the two Bose gas phases. On top of this, we have dropped the assumtion of translational invariance by placing the system in a box and by imposing inhomogeneous initial density profiles. Starting from an initial state in which these sectors are uncorrelated, the coupling of the sectors under time evolution leads to a number of new but weak effects. First of all we observe the development of correlations between the sectors over time. This effect is present for all initial states we have considered, but the covariance between the sectors never reaches more than a few percent of the geometric mean of the variances. Second, the spreading of correlations is accompanied by a small transfer of energy between the sectors. And finally, the presence of the coupling term makes the dynamics in the antisymmetric sector susceptible to the breaking of translational invariance in the symmetric sector. The well-known Josephson oscillations of relative density and phase are modulated when taking an inhomogeneous initial density profile of the symmetric sector. This shows that the role of the trapping potential, which creates strong inhomogeneities, may play a more important role in experiment than was previously assumed. However, the model presented here does not capture the puzzling damping phenomenon observed recently \cite{Pigneur2017,schweigler2019correlations,Pigneur2019thesis}. This is not surprising given that our box potential is very different from the quadratic potential used in experiment. In future experiments, however, the box potential is likely to be used, which adds to the relevance of the calculations presented here. We conclude that \textit{(i)} the new term coupling the (anti)symmetric sectors leads to very weak effects. This means that the simulation of a sine-Gordon model using the setup described in this paper should not be severely hampered by the presence of this term. \textit{(ii)} we have shown that it is possible to treat states with broken translational invariance in the SCTDHA formalism as presented in \cite{Nieuwkerk2018b}. Combined with the sector coupling, we find that inhomogeneities in the density can have weak but nontrivial effects on the amplitude of Josephson oscillations. This means that the trapping potential is likely to have an effect on the dynamics probed in experiment. In a forthcoming paper, we will present a study of the projected Hamiltonian (\ref{eq:micr_1d_ham}) in a microscopic analysis that takes a quadratic longitudinal potential into account. It would be interesting to combine such a microscopic approach with low-energy effective field theory calculations in the presence of such a quadratic trapping potential. However, the calculations using the box potential presented here may gain additional relevance when more experiments using a box potential, such as Refs. \cite{Rauer2018Box,Tajik2019}, are performed. \section*{Acknowledgements} We are grateful to J\"{o}rg Schmiedmayer, Thomas Schweigler and Marine Pigneur for stimulating discussions and to the Erwin Schr\"odinger International Institute for Mathematics and Physics for hospitality and support during the programme on \emph{Quantum Paths}. This work was supported by the EPSRC under grant EP/S020527/1 and YDvN is supported by the Merton College Buckee Scholarship and the VSB, Muller and Prins Bernhard Foundations.
\section{Introduction} The solar wind expansion from the Sun is highly non-adiabatic, partly noticed by proton temperatures falling off much more slowly than is expected for a freely expanding ideal gas \citep[e.g.,][]{P1958,R1995}. Throughout its radial expansion, the solar wind develops a strongly turbulent regime \citep{B2005} that can be characterized by proton density, velocity, temperature, and magnetic field fluctuations \citep{M2011}. Furthermore, large-scale magnetohydrodynamic (MHD) turbulence serves as a reservoir of energy that cascades down to the smallest scales {\citep[e.g.,][]{P1998a,P1998b}} where it can be dissipated by kinetic effects while it heats the plasma {\citep[e.g.,][]{L1998,Sa2009,A2009,H2020a}}. In the MHD inertial range, where the energy is transferred without dissipation through different spatial and temporal scales \citep[e.g.,][]{F1995}, the solar wind exhibits a constant energy cascade rate as a function of such scales \citep{SV2007,Co2015,H2017a,B2020,A2021}, in which the magnetic spectrum presents a -5/3 slope {\citep[e.g.,][]{M1982,L1998,M2021,H2021}}. The presence of a magnetic guide field ${\bf B}_0$ induces several types of anisotropy in solar wind turbulence on MHD and kinetic {dissipation} scales \citep[see][]{Ho2012}. In particular, the energy transfer between scales depends strongly on the direction of the magnetic guide field, splitting the energy cascade according to the parallel and the perpendicular directions with respect to ${\bf B}_0$. Several observational results have shown that the solar wind fluctuations at 1 astronomical unit (au) at the largest MHD spatial scales are a combination of field-aligned (or slab) and perpendicular (or 2D) wavevectors \citep[see][]{Ma1990,Da2005}. \citet{Da2005} used five years of ACE data from near-Earth orbit to investigate the correlation anisotropy of solar wind MHD scale fluctuations and showed that the nature of the anisotropy differs in fast and slow solar winds. In particular, fast winds are more dominated by fluctuations with wavevectors almost parallel to the local magnetic field, while slow solar winds, which appear to be more fully evolved turbulence, are more dominated by quasi-perpendicular fluctuation wavevectors. \citet{Ad2021} studied anisotropic turbulence in the slow and fast solar wind as a function of the angle between the mean solar wind speed and the mean magnetic field and as a function of the heliocentric distance. Using Solar Orbiter measurements, the authors compared the observed results with the solar wind and with nearly incompressible (NI) MHD turbulence transport model equations \citep{Z1993}, and found agreement between the theoretical and observed results in the slow and fast winds as a function of the heliocentric distance. Typically, there are two types of fluctuation anisotropy that are recurrently observed in the solar wind, spectral and variance anisotropy \citep[see][]{O2015}. On the one hand, if the components of the fluctuating magnetic (or {velocity}) field have unequal {average} energies, then the field is said to exhibit variance or component anisotropy \citep{Ma2005,W2011}. On the other hand, when the energy distribution at a given spatial ($\ell$) or temporal ($\tau$) scale is not isotropic, the field exhibits spectral or wavevector anisotropy \citep{M1981,Sh1983,G1995,O2015}. In the present paper we focus our attention on two particular features of anisotropic turbulence, the variance anisotropy ratio and the ratio of fluctuation to mean field for the velocity and the magnetic fields, respectively. The investigation of these anisotropy ratios, the energy cascade rate in the MHD scales, the isotropic and anisotropic models, and their connection with the solar wind temperature are the main objectives of the present paper. Using exact relations in fully developed turbulence, it is possible to obtain expressions for the energy cascade rate. Assuming spatial homogeneity and full isotropy, an exact relation for incompressible MHD turbulence can be derived \citep{P1998b,P1998a}. This exact relation provides a precise computation of the amount of energy per unit time and volume $\varepsilon_\text{I}$ (or heating rate) as a function of the velocity and magnetic correlation functions. The MHD exact relation and its connection with the nonlinear energy cascade rate has been numerically and observationally validated for both incompressible and compressible MHD turbulence \citep{WEY2007,M1999,G1997,C2009b,St2009,S2010,B2016c,H2017a,H2017b,A2018b,A2019}, has been generalized to include sub-ion scale effects \citep{A2018,A2019b,H2018,F2019,F2021a}{, and has been extended to include constant velocity shear effects \citep{W2009,W2010}}. Estimations of the isotropic energy cascade rate in the inertial range of solar wind turbulence have been previously computed at 1 au \citep[see][]{M2008,Co2015,B2016c,H2017a} and more recently at small and large heliocentric distances \citep[see][]{B2020,A2021}. Assuming a 2D and slab cylindrical symmetric geometry, where the perpendicular cascade rate is considered to depend only on the perpendicular scale and the parallel cascade depends on the parallel direction, \citet{Mac2008} derived a relation for homogeneous incompressible anisotropic MHD turbulence. In particular, they derived expressions for the correlation functions that are applicable to both parallel and perpendicular cascades. Using seven years of solar wind observations from the ACE spacecraft at 1 au, \citet{Mac2008} found a {region with} linear scaling of the energy flux, as is expected for the MHD inertial range. In addition, they found that both fast and slow solar winds exhibit an active energy cascade over an inertial range, with an energy cascade rate in the parallel direction consistently lower than in the perpendicular direction. \citet{St2009} investigated the convergence of third-order structure functions to compute cascade rates in the solar wind using ACE observation at 1 au covering the years from 1998 to 2007. The authors found that a minimum of one year of data is normally required to get good convergence and statistically significant results. They also compared the computed energy cascade rates with previously determined rates of proton heating at 1 au, as determined from the radial gradient of the proton temperature. \citet{S2010} investigated ACE observations of large cross-helicity states using isotropic and anisotropic expression for the energy cascade rate. In contrast to intervals with small helicity values, large helicity states demonstrate a significant back-transfer of energy from small to large scales. In the present paper, using a large Parker Solar Probe (PSP) data set (more than 5000 hours in the solar wind), we extend the current state of knowledge of solar wind turbulence in the inner heliosphere by computing the energy cascade rate using both the anisotropic and isotropic relations for fully developed turbulence. Using magnetic field and plasma \ADD{moment} observations between $\sim0.2$ au and $\sim0.8$ au, we investigate how the energy cascade rate is affected not only by the heliocentric distance, but also by the presence of a guide magnetic guide and the consequence anisotropy. The study is structured as follows. In Sections \ref{sec:model} and \ref{sec:exact} we present the theoretical incompressible MHD model and a brief description of the anisotropic and isotropic exact relations, respectively. In Section \ref{sec:obs} we briefly describe the PSP observation data set and the conditions that each turbulent event must fulfill. In Section \ref{sec:res} we present the main results of our analysis. Finally, the discussion and conclusions are developed in Section \ref{sec:dis}. \section{The incompressible MHD model}\label{sec:model} The three-dimensional (3D) incompressible MHD equations are the momentum equation for the velocity field {\bf u} (in which the Lorentz force is included), the induction equation for the magnetic field {\bf B}, and the solenoid condition for both fields. These equations can be written as \begin{align}\label{1} &\frac{\partial \textbf{u}}{\partial t} = -{\bf u}\cdot\boldsymbol\nabla{\bf u} + {\bf B}\cdot\boldsymbol\nabla{\bf B} - \frac{1}{\rho_0}\boldsymbol\nabla(P+P_M) + \textbf{f}_k + \textbf{d}_k , \\ \label{2} &\frac{\partial {\bf B}}{\partial t} = - {\bf u}\cdot\boldsymbol\nabla{\bf B} + {\bf B}\cdot\boldsymbol\nabla{\bf u} + \textbf{f}_m + \textbf{d}_m , \\ \label{3} &\boldsymbol\nabla\cdot{\bf u} = 0, \\ \label{4} &\boldsymbol\nabla\cdot{\bf B}= 0, \end{align} where the magnetic field is in Alfv\'en velocity units (i.e., the real magnetic field is $\textbf{B}\sqrt{4\pi\rho_0}$, where $\rho_0$ is the mean mass density and $\mu$ is the magnetic permeability of the plasma) and $P_M$ is the magnetic pressure. Finally, \textbf{f}$_{k,m}$ are respectively a mechanical and the curl of the electromotive large-scale forcings, and $\textbf{d}_{k,m}$ are respectively the small-scale kinetic and magnetic dissipation terms \citep{A2016b,F2021b}. \section{The exact relation in MHD turbulence}\label{sec:exact} Using Eqs.~\eqref{1}--\eqref{4} and following the usual assumptions for fully developed homogeneous turbulence (i.e., infinite kinetic and magnetic Reynolds numbers and a steady state with a balance between forcing and dissipation) \citep[see, e.g.,][]{A2017b}, an exact relation for incompressible anisotropic MHD turbulence can be obtained as \citep[e.g.,][]{G2018} \begin{align}\label{exactlaw0} -4\varepsilon&= \rho_0\boldsymbol\nabla_\ell\cdot{\bf F}, \end{align} where {\bf F} is the incompressible energy flux \begin{align}\label{flux} {\bf F} &= \rho_0\langle (\delta{\bf u}\cdot\delta{\bf u}+\delta{\bf B}\cdot\delta{\bf B})\delta{\bf u} - (\delta{\bf u}\cdot\delta{\bf B}+\delta{\bf B}\cdot\delta{\bf u})\delta{\bf B}\rangle, \end{align} and $\varepsilon$ is the total energy cascade rate per unit volume. Fields are evaluated at position $\textbf{x}$ or $\textbf{x}'=\textbf{x}+\boldsymbol\ell$; in the latter case a prime is added to the field. The angular bracket $\langle\cdot\rangle$ denotes an ensemble average \citep{Ba1953}, which is taken here as a time average assuming ergodicity. Finally, we introduced the usual increment definition: $\delta\alpha\equiv\alpha'-\alpha$. It is worth noting that we do not have access to multi-spacecraft measurements, and therefore it is necessary to assume some sort of symmetry to integrate Eq.~\eqref{exactlaw0} and be able to compute the energy cascade rate $\varepsilon$ \citep[see][]{St2011}. In particular, we work with two models for the energy cascade rate, an isotropic form for $\varepsilon_\text{I}$ \citep{P1998a,P1998b}, and the anisotropic expressions $\varepsilon_\perp$ and $\varepsilon_\parallel$ respectively for the perpendicular and parallel cascade rates \citep{Mac2008}. \subsection{The isotropic energy cascade rate}\label{sec:iso} Assuming the Taylor hypothesis (i.e., $\ell\equiv\tau U_0$, where $U_0$ is the mean plasma flow speed and $\ell=|\boldsymbol\ell|$ is the longitudinal distance) and full isotropy, Eq.~\eqref{exactlaw0} can be integrated and expressed as a function of time lags $\tau$. While Eq.~\eqref{exactlaw0} includes increments in all the spatial directions, the isotropic cascade only includes increments in the longitudinal direction $\ell$ (for single-spacecraft measurements, in the plasma velocity direction $\hat{\bf U}_0$). Therefore, the isotropic energy cascade rate \citep{P1998a,P1998b} can be {evaluated using only increments in the longitudinal direction} as \begin{align}\label{law_iso} \varepsilon_\text{I} &= \rho_0\langle [(\delta{\bf u}\cdot\delta{\bf u}+\delta{\bf B}\cdot\delta{\bf B})\delta{u}_\ell - (\delta{\bf u}\cdot\delta{\bf B}+\delta{\bf B}\cdot\delta{\bf u})\delta{B}_{\ell}]/(-4\tau U_0/3)\rangle, \end{align} where $u_\ell={\bf u}\cdot{\bf \hat U}_0$ and $B_{\ell}={\bf B}\cdot{\bf \hat U}_0$. In particular, the total isotropic energy cascade rate $\varepsilon_\text{I}$ can be expressed as a function of two components: $\varepsilon_1$ proportional to $\delta u_\ell$, and $\varepsilon_2$ proportional to $\delta B_{\ell}$. \begin{figure*} \begin{center} \includegraphics[width=0.75\textwidth]{histograms.jpeg} \end{center} \caption{Occurrence rates for the proton density, and the proton and Alfv\'en velocity absolute mean values (top) and fluctuations (bottom).} \label{fig:histo} \end{figure*} \begin{figure} \centering \includegraphics[width=.95\hsize]{kde1.jpeg} \caption{Bivariant KDE for the mean ((a) and (b)) and fluctuating ((c) and (d)) {velocity} and magnetic field absolute values as a function of the heliocentric distance.}\label{fig:kde1} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\hsize]{kde2.jpeg} \caption{Bivariant KDE for the {normalized fluctuation} ((a) and (b)) and variance anisotropy ((c) and (d)) ratios respectively for the {velocity} and magnetic fields as a function of the heliocentric distance. The dotted lines in (c) and (d) correspond to the isotropic (kinetic or magnetic) energy distribution.}\label{fig:kde2} \end{figure} \subsection{The 2D and slab energy cascade rates \citep{Mac2008}}\label{sec:2d1d} As we discuss in the Introduction, observational results have shown that {an important part of the energy power can be confined to the parallel and perpendicular directions with respect to the magnetic guide field \citep[e.g.,][]{Sh1983,M2004,Da2005,Ou2013}}. Therefore, here we present the hybrid formulation (i.e., 1D plus 2D) that can address the parallel and perpendicular fluctuations, temporal increments, and energy cascade rates \citep[see][]{Mac2008,St2009}. To find expressions for the perpendicular and parallel cascade rates, we use the magnetic field {aligned} basis \citep[e.g.,][]{B1996}, where the velocity and magnetic field observations are properly rotated to leave parallel magnetic fluctuations in one direction. Then, in this particular basis, the ${\hat e}_3$ versor is along the magnetic guide field direction and the unit vectors are \begin{align} {\bf \hat e}_3 &\equiv {\bf \hat e}_{B}, \\ {\bf \hat e}_2 &\equiv {\bf \hat e}_3\times{\bf \hat e}_1, \\ {\bf \hat e}_1 &\equiv \frac{{\bf \hat e}_U\times{\bf \hat e}_{B}}{|{\bf \hat e}_U\times{\bf \hat e}_B|}, \end{align} where ${\bf \hat e}_{B} = \langle {\bf B}\rangle / \langle | {\bf B} |\rangle$ and ${\bf \hat e}_U = \langle {\bf u} \rangle / \langle |{\bf u}| \rangle$. Assuming that we have cylindrical symmetry and that the energy flux \eqref{flux} is perpendicular to the mean magnetic field (and depends only on $\ell_\perp$), an expression for the perpendicular energy cascade rate can be found as \begin{align}\label{law_perp} \varepsilon_\perp &= \rho_0\langle [(\delta{\bf u}\cdot\delta{\bf u}+\delta{\bf B}\cdot\delta{\bf B})\delta{u}_2 - (\delta{\bf u}\cdot\delta{\bf B}+\delta{\bf B}\cdot\delta{\bf u})\delta{B}_{2}]/(-2 \tau U_0\sin\theta_{BV})\rangle, \end{align} where $u_2={\bf u}\cdot{\bf \hat e}_2$, $B_{2}={\bf B}\cdot{\bf \hat e}_2$ and $\theta_{BV}$ is the angle between ${\bf \hat e}_{B}$ and ${\bf \hat e}_{U}$. On the other hand, still assuming that we have cylindrical symmetry but that the energy flux \eqref{flux} is parallel to the mean magnetic field and depends only on the parallel direction $\ell_\parallel$, an expression for the parallel cascade rate can be found as \begin{align}\label{law_para} \varepsilon_\parallel &= \rho_0\langle [(\delta{\bf u}\cdot\delta{\bf u}+\delta{\bf B}\cdot\delta{\bf B})\delta{u}_3 - (\delta{\bf u}\cdot\delta{\bf B}+\delta{\bf B}\cdot\delta{\bf u})\delta{B}_{3}]/(-4 \tau U_0\cos\theta_{BV})\rangle, \end{align} where $u_3={\bf u}\cdot{\bf \hat e}_3$ and $B_{3}= {\bf B} \cdot{\bf \hat e}_3$. Finally, the total hybrid energy cascade rate in this model is $\varepsilon_\text{H}=\varepsilon_\perp/2+\varepsilon_\parallel/4$. In the present paper we are interested in computing $\varepsilon_\text{I}$, $\varepsilon_\perp$, and $\varepsilon_\parallel$, which are fully defined by velocity and magnetic field increments that can be estimated from single in situ measurements. \section{Observations and selection criteria}\label{sec:obs} We used a data set of PSP observations \citep{Fo2016,K2016,Ba2016,Ka2019,B2019,C2020} covering the period between October 10, 2018, and December 31, 2020. This large data set includes the first six PSP perihelia. We used the magnetic field and the proton moments from the FIELDS and SPC experiments, respectively. The spurious data (i.e., high artificial peaks) in the SPC moments \citep[see][]{K2016} were removed using a linear interpolation \citep[see][]{B2020,P2020} and the data set was re-sampled to 0.873 s time resolution. In order to analyze the solar wind turbulence on MHD scales, the data set was divided into a series of samples of equal duration of 60 minutes. This time duration ensures several correlation times of the turbulent fluctuations at heliocentric distances of less than 1 au \citep[see][]{P2020,H2017a}. As in previous studies \citep[e.g.,][]{A2020,A2021}, we avoided intervals that contained significant disturbances or large-scale gradients (e.g., coronal mass ejection or interplanetary shocks) or rapid flips in the Sun’s magnetic field that reversed direction (i.e., magnetic switchbacks). We further considered only intervals that did not show large fluctuations of the energy cascade rate over the MHD scales; typically, we retained events with $\text{std}(\varepsilon_\text{I})/\text{mean}(|\varepsilon_\text{I}|)<1$ (where std is the standard deviation). \section{Results}\label{sec:res} \subsection{Occurrence rates} Figure \ref{fig:histo} shows the occurrence rates for the number density, velocity, and magnetic field absolute mean and fluctuation values for all the events in our data set. In particular, we separated the velocity and magnetic fields in terms of its mean and fluctuation values as \begin{align}\label{meanv} {\bf u}({\bf x},t) &= {\bf U}_0 + {\bf v}({\bf x},t), \\ \label{meanb} {\bf B}({\bf x},t) &= {\bf B}_0 + {\bf b}({\bf x},t), \end{align} where ${\bf U}_0=\langle{\bf u}({\bf x},t)\rangle$, ${\bf B}_0=\langle{\bf B}({\bf x},t)\rangle$ and $\langle\cdots\rangle$ denotes a time averaging operator, which in the present paper is the global mean (i.e., a one hour average). It is worth noting that most of the cases studied in the present paper correspond to slow solar wind (i.e., $|{\bf U}_0| \lesssim 500$ km s$^{-1}$). {Since we want to estimate the incompressible energy cascade rates to ensure the incompressibility approximation, we keep only the cases where $\langle |\Delta n| / n \rangle < 15 \%$ (where $\Delta n \equiv n - \langle n \rangle$). In other words, we use the full velocity fields in the incompressible MHD exact relation in those events where the velocity fluctuations have only weak compressible effects. However, we are estimating the incompressible energy cascade rate since velocity fluctuations may still contain a small compressible component.} This leaves us with a data set of $\sim$ 5200 events of one-hour duration each. Figure \ref{fig:kde1} shows the bivariant kernel density estimation (KDE) for the mean and fluctuating velocity and the magnetic fields as a function of the heliocentric distance. A bivariant KDE produces a continuous probability density surface in two dimensions \citep[see][]{W2021}, where brighter regions correspond to regions with more analyzed events. It is worth noting that while the mean velocity field values do not present a statistical dependence with the heliocentric distance, the magnetic guide field and both magnetic and {velocity} fluctuation values strongly decrease as we move away from the Sun. In particular, as we approach the Sun, the magnetic and kinetic fluctuation levels increase to the same order ($\sim$ 50 -- 70 km s$^{-1}$). We return to this point in Section \ref{sec:epsi} when we analyze the isotropic cascade rate. \begin{figure} \centering \includegraphics[width=.45\hsize]{epsi_left.jpeg}\includegraphics[width=.45\hsize]{epsi_right.jpeg} \caption{Cascade rate component $\langle|\varepsilon_2|\rangle$ as a function of the component $\langle|\varepsilon_1|\rangle$. In panel (a) the color bar is the total cascade $\langle|\varepsilon_\text{I}|\rangle,$ and in panel (b) it is the heliocentric distance $\langle r \rangle$.}\label{fig:epsi} \end{figure} \subsection{Variance anisotropy and normalized fluctuation ratios} As we discussed in the Introduction, there are two types of fluctuation anisotropy that are typically observed in the solar wind: spectral anisotropy and variance anisotropy. To quantify them we consider the velocity and magnetic fields in terms of mean values plus fluctuations around these means (see Eqs.~\eqref{meanv} and \eqref{meanb}). On the one hand, if the components of the field have unequal energies (e.g., in {Cartesian} coordinates, departures from $\langle b_x^2 \rangle = \langle b_y^2\rangle= \langle b_z^2\rangle$ for the magnetic field), the field exhibit variance anisotropy {\citep[e.g.,][]{B1971,T2012}}. To quantify this variance anisotropy, we consider the {velocity} and magnetic anisotropy ratios \citep[see][]{O2015} as \begin{align} A_{v} &= \frac{v_\perp^2}{v_\parallel^2},\\ A_{b} &= \frac{b_\perp^2}{b_\parallel^2}, \end{align} where we employ the magnetic field coordinate system \citep[see][]{B1996}. Variance anisotropy is {scale \citep[e.g.,][]{M2012} and plasma $\beta$ dependent \citep[e.g.,][]{Ou2016}}; however, in the present paper we focus our attention on their values for the largest MHD scales (i.e., one hour mean values). On the other hand, generally speaking, when the energy distribution at a given time scale $\tau$ is not isotropic, we speak of spectral anisotropy. In particular, spectral anisotropy is usually associated with energy cascades that are also anisotropic \citep{O2015,Ho2012}. Moreover, for incompressible MHD turbulence, numerical and observational evidence shows that strong (or even moderate) mean magnetic fields give rise to a suppression of the energy cascade in the parallel direction, and the perpendicular energy cascade is thus much stronger than the parallel cascade {\citep{Sh1983,O1994,C2000,M2001,Mac2008,St2009,Ou2011,O2013,M2012,A2018b}}. {Therefore, in the present paper we consider the ratio of the fluctuation fields and the mean as indicative of spectral anisotropy at MHD scales for both $\bf u$ and $\bf B$; in other words, the ratios $\langle |v| \rangle / |{\bf U}_0|$ and $\langle |b| \rangle / |{\bf B}_0|$ are the normalized fluctuation ratios for the {velocity} and magnetic fields, respectively.} Figure \ref{fig:kde2} show the bivariant KDE for {the normalized fluctuation ratios} and variance anisotropy ratios respectively for the {velocity} and magnetic fields as a function of the heliocentric distance. The dotted lines in Figures \ref{fig:kde2} (c) and (d) correspond to the isotropic ({velocity} or magnetic) energy distribution. While the {normalized velocity fluctuation} ratios show a dependence on the heliocentric distance $r$ (with a very low amplitude), the magnetic {fluctuation ratios do} not show a clear dependence. However, the magnetic fluctuations are much larger than their means, while the {velocity} fluctuations are small when they are compared with their means. The variance anisotropy ratios, both {velocity} and magnetic, do not exhibit a dependence with respect to the heliocentric distance. Moreover, for the velocity field most of the cases remain around 2, suggesting that the kinetic energy distribution is approximately isotropic on MHD scales, and for the magnetic field most of the events reported here show high anisotropy ratios (i.e., $2 \leq A_b$). \subsection{The incompressible energy cascade rate}\label{sec:epsi} To compute the right-hand side of Eqs.~\eqref{law_iso}, \eqref{law_perp}, and \eqref{law_para}, we constructed temporal correlation functions of the different turbulent fields at different time lags $\tau$ in the interval [1,3600] s, which covers the MHD inertial range \citep{H2017a} {at heliocentric distances between $\sim$0.2 and $\sim$0.8 au}. Once we have the energy cascade rates as a function of the time increments, we average them on the large timescales (i.e., for $\tau\in[1000,3000]$ s) to obtain representative values for the cascades in the largest MHD scales. \begin{figure} \centering \includegraphics[width=.45\hsize]{epsp_left.jpeg}\includegraphics[width=.45\hsize]{epsp_right.jpeg} \caption{ Cascade rate component $\langle|\varepsilon_\perp|\rangle$ as a function of the component $\langle|\varepsilon_\parallel|\rangle$. In panel (a) the color bar is the total cascade $\langle|\varepsilon_\text{H}|\rangle$ {(where $\varepsilon_\text{H}=\varepsilon_\perp/2+\varepsilon_\parallel/4$ is the total hybrid cascade rate),} and in panel (b) it is the heliocentric distance $\langle r \rangle$.}\label{fig:epsp} \end{figure} As we discuss in Section \ref{sec:iso}, the total isotropic energy cascade rate can be written as a function of two components, \begin{align} \varepsilon_\text{I} &= \varepsilon_1+\varepsilon_2 ,\\ \label{eps1} \varepsilon_1 &= \rho_0\langle (\delta{\bf u}\cdot\delta{\bf u}+\delta{\bf B}\cdot\delta{\bf B})\delta{u}_\ell /(-4 \tau U_0/3)\rangle, \\\label{eps2} \varepsilon_2 &= - \rho_0\langle (\delta{\bf u}\cdot\delta{\bf B}+\delta{\bf B}\cdot\delta{\bf u})\delta{B}_{\ell}/(-4\tau U_0/3)\rangle, \end{align} where we can relate the first component $\varepsilon_1$ to the total energy (kinetic plus magnetic) and the second component $\varepsilon_2$ to the cross-helicity (i.e., ${\bf u}\cdot{\bf B}$) in the plasma. This interpretation comes directly from Eqs.~\eqref{eps1} and \eqref{eps2}. Figure \ref{fig:epsi} shows the mean absolute value $\langle |\varepsilon_2| \rangle$ as a function of $\langle |\varepsilon_1| \rangle$. The color bar corresponds in panel (a) to the mean total energy cascade rate absolute value $\langle |\varepsilon_\text{I}| \rangle$ and in panel (b) to the heliocentric distance $r$. As a reference, we plot a gray dashed line with slope equal to 1. As we expected, there is a strong correlation between the cascade rate amplitude and the heliocentric distance to the Sun: the closer PSP is to the Sun, the stronger the isotropic energy cascade rate is. In particular, the strongest cases correspond to {approximately} equal cross-helicity and energy components {(i.e., $\langle |\varepsilon_1| \rangle \approx \langle |\varepsilon_2| \rangle$)}. Figure \ref{fig:epsp} shows the mean absolute value $\langle |\varepsilon_\perp| \rangle$ as a function of $\langle |\varepsilon_\parallel| \rangle$ in the same format as in Figure \ref{fig:epsi}. As in Figure \ref{fig:epsi}, as we move far away from the Sun, both components decrease their amplitudes. Moreover, we observe a clear trend of obtaining more perpendicular than parallel energy cascade values as we approach the Sun (slope larger than one in Figure \ref{fig:epsp} (b)). \begin{figure} \centering \includegraphics[width=.49\hsize]{rate_temp_1.jpeg}\includegraphics[width=.49\hsize]{rate_temp_2.jpeg} \caption{Cascade rate component $\langle\varepsilon_2\rangle$ as a function of the component $\langle\varepsilon_1\rangle,$ and the perpendicular component $\langle\varepsilon_\perp\rangle$ as a function of the parallel component $\langle\varepsilon_\parallel\rangle$. In panels (a) and (b) the color bar is the temperature.}\label{fig:temp1} \end{figure} \begin{figure} \centering \includegraphics[width=.95\hsize]{rate_temp_3.jpeg} \caption{ \ADD{For a given temperature bin the following averages are shown:} (a) Components and total isotropic energy cascades rates; (b) components and total anisotropic energy cascades rates; (c) fluctuation in kinetic and magnetic energies; and (d) normalized cross-helicity and normalized residual energy as a function of the temperature.}\label{fig:temp2} \end{figure} \subsection{The isotropic, perpendicular, and parallel cascade rates and their relation with the temperature} Figure \ref{fig:temp1} shows the mean absolute value $\langle |\varepsilon_2| \rangle$ as a function of $\langle |\varepsilon_1| \rangle$ and the mean absolute value $\langle |\varepsilon_\perp| \rangle$ as a function of $\langle |\varepsilon_\parallel| \rangle$. In both panels, the color bar corresponds to the proton temperature, and as a reference a gray dashed line indicates a slope equal to one. In comparison with Figures \ref{fig:epsi} and \ref{fig:epsp}, we note the clear (and expected) correlation between the heliocentric distance and the temperature: as $r$ increases, the temperature decreases. In the case of the anisotropic cascade rates, we also observed that the hottest events mainly correspond to those where the perpendicular cascade is dominant with respect to the parallel cascade in the MHD range. Typically, in MHD it defines the normalized cross-helicity $\sigma_c = \langle {\bf v}\cdot{\bf b}\rangle/(E_k+E_m)$ and the normalized residual energy $\sigma_r = (\langle {\bf v}^2 \rangle - \langle {\bf b}^2 \rangle)/(\langle {\bf v}^2 \rangle + \langle {\bf b}^2 \rangle)$, where $E_k\equiv\langle {\bf v}^2 \rangle/2$ is the incompressible kinetic energy and $E_m\equiv\langle {\bf b}^2 \rangle/2$ is the magnetic energy. While the cross-helicity measures the level of Alfv\'enicity of a particular event, the residual energy quantifies the relative energy in kinetic and magnetic fluctuations. By definition, the parameters $\sigma_c$ and $\sigma_r$ range between -1 and 1. For simplicity we drop the “normalized” prefix, assuming the understanding that these imply the normalized versions $\sigma_c$ and $\sigma_r$. Figure \ref{fig:temp2} shows the average of different variables as a function of the temperature. In particular, we group events according to the temperature values and then bin average them. The error bars correspond to the standard deviation divided by the square root of the number of samples in each group. Then, for a given temperature, we averaged (a) the isotropy and (b) anisotropy energy cascade rates (total and components), (c) the incompressible kinetic and magnetic fluctuation energies, and (d) the cross-helicity and residual energy. Figure \ref{fig:temp2} (a) and (b) show in a compact form the results analyzed in Figure \ref{fig:temp1}: as the isotropic (or anisotropic) energy cascade rate increases, the temperature increases in the plasma. In particular, for the isotropic cascade the events with the highest temperatures correspond to {$\langle|\varepsilon_1|\rangle\approx\langle|\varepsilon_2|\rangle$}, while for the anisotropic cascade these events correspond to $\langle|\varepsilon_\perp|\rangle>\langle|\varepsilon_\parallel|\rangle$. Interestingly, in these hottest events the kinetic and magnetic fluctuation energies become approximately equal. Moreover, these events {seems to be} Alfv\'enic events since $\sigma_c\shortrightarrow1$. \section{Discussion and conclusions}\label{sec:dis} In this paper we analyzed a large PSP solar wind data set of $\sim$ 5200 events, covering observations from October 2018 to December 2020. Our statistical results show a clear correlation between the incompressible energy cascade rate, heliocentric distance, and plasma temperature in the inner heliosphere. In particular, for both isotropic and anisotropic rates, as we decrease the heliocentric distance, the energy cascade rates increase by several orders of magnitude. We covered heliocentric distance from $\sim0.8$ au to $ \sim0.1$ au, obtaining energy cascade rates from $\sim1\times10^{-19}$ J m$^{-2}$s$^{-1}$ to $\sim1\times10^{-12}$ J m$^{-2}$s$^{-1}$. Recently, \citet{B2020} estimated the isotropic energy cascade rate for the first PSP perihelion. The authors found that $\varepsilon_\text{I}$ at $\sim$ 0.17 au is about 100 times higher than the average value at 1 au. In agreement with this finding and previous statistical results \citep[see][]{Mac2008,A2021}, we found an amplification of $\varepsilon_\text{I}$ and $\varepsilon_\text{H}$ as we approach the Sun. This amplification as we decrease the heliocentric distance is due to the increase in the {velocity} and magnetic fluctuation amplitudes (see Figure \ref{fig:kde1}) and the mean solar wind density value. In contrast with previous results \citep{O2015}, we do not observe a clear dependence of the spectral and variance anisotropy ratios on the heliocentric distance in the inner heliosphere. \citet{O2015} reported a review of solar wind anisotropy with different anisotropy ratios $A_v$ and $A_b$ from slow and fast solar wind at different heliocentric distances. \citet{B1999} computed $A_v$ and $A_b$ for three events at 0.3, 0.7, and 0.9. The authors found that the magnetic fluctuation variance ratio slightly increases with heliocentric distance, while the {velocity} ratio remains constant. On the other hand, using Helios 1 observations from 0.3 au to 1 au, \citet{Mac2010} showed that the magnetic variance anisotropy scales with both proton beta and the amplitude of fluctuation power spectrum with no dependence on the heliocentric distance. In agreement with \citet{Mac2010}, our statistical results do not show any apparent increase in $A_b$ (or $A_v$) with respect to the heliocentric distance. Moreover, we observe that most of the cases exhibit $A_b>A_v$ \citep[see][]{B1999} {in agreement with previous observational \citep{O2015} and numerical \citep{Ou2016} results}. Using the isotropic assumption \citep{P1998a,P1998b} and the slab and 2D assumption \citep{Mac2008}, we computed the {incompressible} energy cascade rate components from both models {using PSP solar wind observations.} For the isotropic model, in the cases near the Sun (i.e., the largest cascade values or hottest events) the energy and cross-helicity components (see Eqs.~\eqref{1} and \eqref{2}) are approximately equal. On the contrary, for the anisotropic model, in the same events the dominant component is the perpendicular one. At 1 au, using ACE solar wind observations from 1998 to 2005, \citet{Mac2008} reported different cascade values for different types of solar wind. The authors found that fast and slow solar winds both exhibit an active cascade rate over the inertial range, and that the energy flux in the parallel cascade is consistently smaller than in the perpendicular cascade. Beyond the fact that we are exploring different heliocentric distances at different correlation times {(an independent event lasts two days or tens of correlation lengths at 1 au for MacBride et al., while we consider an event to last one hour or approximately four correlation lengths),} we observed the same trend: for a large majority of the cases the perpendicular cascade is much larger than the parallel one. This statistical result is totally consistent with a dominant 2D cascade and/or geometry in slow solar wind turbulence on MHD scales \citep[e.g.,][]{Sh1983,M1996,Da2005,W2012,O2013,A2017a,B2021,Z2021,B2021}. Moreover, the NI MHD model \citep[e.g.,][]{Z1993,Z2021} predicts that the energy-containing range in the slow solar wind is a superposition of a majority quasi-2D component and a minority slab component. Using the NI model, PSP observations, and Solar Orbiter observations, \citet{Z2021} and \citet{Ad2021} show that both the slow and fast solar winds are not typically aligned with the large-scale magnetic field, and therefore the quasi-2D fluctuations are visible to the PSP spacecraft, in agreement with our findings here. We found a robust correlation between the temperature, the heliocentric distance, and the isotropic and anisotropic energy cascade rates: as we approach the Sun, the temperature and cascade rates both increase. The temperature rise is clearly related to the most Alfvénic events ($\sigma_c\shortrightarrow1$) in an imbalanced and magnetic fluctuation dominant regime {($E_m>E_k$ or $\sigma_r<0$)}. Using a NI MHD model, \citet{Z2021} predicted arbitrary values of the (normalized) residual energy with a tendency to evolve toward negative values in magnetic energy dominated regimes. The authors also analyze PSP slow solar wind observations showing that the normalized residual energy becomes increasingly negative with increasing heliocentric distance (i.e., it becomes magnetic energy-dominated with distance). In the present paper we confirm these predictions, exploring not only the heliocentric distance dependence, but also the amplification of the cascade and the local temperature. While we do not observe that $\sigma_r$ becomes increasingly negative with increasing heliocentric distance, we do observe a constant and negative value for $\sigma_r$ as we approach the Sun. In addition, these observations of $\sigma_c$ and $\sigma_r$ are consistent with the dominant 2D structures over the minority slab component {\citep{B2008,B2011,Ou2016}}. Finally, some aspects of this work require improvement. On the one hand, we did not take into account possible compressibility under various closures \citep{S2021b,S2021a}, which may be relevant even in the usual incompressible solar wind \citep{B2016c,H2017a,A2017a,A2021}. On the other hand, we did not include the sub-ion scales energy cascade physics \citep{A2018,A2019b,H2018,F2021a}, which are closely related to the solar wind heating problem {\citep[e.g.,][]{Ma2020,M2021}}. These issues are planned for upcoming works. \section{Acknowledgements} N.A. acknowledge financial support from CNRS/CONICET Laboratoire International Associé (LIA) MAGNETO. N.A. acknowledges financial support from the following grants: PICT 2018 1095 and UBACyT 20020190200035BA. We thank the NASA Parker Solar Probe SWEAP team led by J. Kasper and FIELDS team led by S. D. Bale for use of data. {N.A. thanks M. Brodiano for fruitful discussions about the data set.} \bibliographystyle{aa}
\section{introduction} Particles with a macroscopic decay length, ranging from a few {centimeters} to several hundred meters and beyond, can be classified as long-lived particles (LLPs) at the large hadron collider (LHC). Such LLPs are endemic in new physics models beyond the standard model (SM); see e.g.\ \cite{Lee:2018pag, Alimena:2019zri} for recent reviews. A number of new detectors at the LHC have been recently proposed to search for LLPs, which can be collectively referred to as lifetime frontier detectors. These include the detectors that are placed in the forward region: FACET~\cite{FACET:talk1}, FASER~\cite{Feng:2017uoz, Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm}, {FASER2} \cite{ Ariga:2019ufm, Kling:2021fwx}, AL3X~\cite{Gligorov:2018vkc}, and {MoEDAL-MAPP} \cite{Staelens:2019gzt}; the detectors that are placed in the central region: MATHUSLA \cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd, Lubatti:2019vkf, Alpigiani:2020tva}, CODEX-b~\cite{Gligorov:2017nwh, Aielli:2019ivi}, ANUBIS~\cite{Bauer:2019vqk}; and the precision timing detectors that are to be installed at ATLAS, CMS, and LHCb to mitigate the pileup backgrounds in the coming HL-LHC phase: CMS-MTD~\cite{CMStiming}, ATLAS-HGTD~\cite{Allaire:2018bof}, LHCb-TORCH~\cite{LHCb:2017MTD, LHCb:2008vvz}. A plethora of LLPs can be studied in the newly proposed lifetime frontier detectors \cite{Curtin:2017izq, Evans:2017lvd, Feng:2017vli, Kling:2018wct, Helo:2018qej, Liu:2018wte, Feng:2018pew, Cerri:2018rkm, Curtin:2018ees, Berlin:2018jbm, Dercks:2018eua, Dercks:2018wum, Flowers:2019gvj, Kim:2019oyh, Mason:2019okp, Boiarska:2019vid, No:2019gvl, Krovi:2019hdl, Jodlowski:2019ycu, Du:2019mlc, Hirsch:2020klk, Yuan:2020eeu, Liu:2020vur, Dreiner:2020qbi, DeVries:2020jbs, Bertuzzo:2020rzo, Cottin:2021lzz, Cheung:2021utb, Guo:2021vpb, Bhattacherjee:2021rml, Mitsou:2021tti}. One well-motivated new physics particle is the dark photon (denoted by $A'_\mu$) which can naturally arise in kinetic mixing model \cite{Holdom:1985ag} \cite{Foot:1991kb} and in Stueckelberg models \cite{Kors:2005uz} \cite{Feldman:2006ce} \cite{Feldman:2006wb} \cite{Feldman:2007wj} \cite{Feldman:2009wv} \cite{Du:2019mlc}. The interaction between the dark photon $A'_\mu$ and the SM fermion $f$ can be parameterized as \begin{equation} e \, \epsilon \, Q_f \, A'_\mu \bar f \gamma^\mu f. \label{eq:epsilon} \end{equation} Long-lived dark photons (LLDPs) have a small $\epsilon$ coupling, which, however, leads to a suppressed collider signal. Recently, a new dark photon model is proposed in Ref.\ \cite{Du:2019mlc} where the dark photon is produced at colliders by the hidden fermion radiation so that the collider signal no longer suffers from the small $\epsilon$ parameter. For that reason, the LLDP signal at the LHC in this new dark photon model can be significantly enhanced.\footnote{See \cite{Buschmann:2015awa, Arguelles:2016ney, Kim:2019oyh, Krovi:2019hdl} for other dark photon models with a sizeable LLDP signal.} Thus, we will refer to the dark photon models, where the dark photon interacts with the SM sector only via the interaction Lagrangian in Eq.\ \eqref{eq:epsilon}, as the ``minimal'' dark photon models, to be distinguished from the dark photon models proposed in Ref.\ \cite{Du:2019mlc}. In this paper, we investigate the capability of various lifetime frontier detectors in probing the parameter space of LLDPs both in the minimal dark photon model and in the newly proposed dark photon model \cite{Du:2019mlc}. We carry out detailed analysis for detectors: the far forward detector, FACET and FASER, the central transverse detector, MATHUSLA, and the precision timing detector, CMS-MTD. We compute the expected limits from these detectors. We find that the parameter space probed by FACET and MATHUSLA are significantly enlarged by the hidden fermion radiation in the new dark photon model, as compared the minimal dark photon model. We also find that the LLDP signal at the newly proposed far detector FACET is significantly larger than FASER, owing to a larger decay volume and a shorter distance to the interaction point of the FACET detector. The rest of the paper is organized as follows. We briefly review the dark photon model that has an enhanced LLDP signal in section~\ref{sec:model}. A mini-overview on lifetime-frontier detectors is given in section~\ref{sec: detector-review}. We discuss three main DP production channels in section~\ref{sec:DP_production}. We analyze the signal events in different lifetime-frontier detectors in section~\ref{sec:simu-and-considerations}. Given in section~\ref{sec:result} are the {sensitivities} to the parameter space from four different detectors: FACET, FASER(2), MATHUSLA, and CMS-MTD. A semi-analytic comparison between far detectors is given in section~\ref{sec:facet-faser-comparison}. We summarize our findings in section~\ref{sec:summary}. \section{The model and its parameter space} \label{sec:model} In this analysis, we consider the dark photon model that has been proposed recently to enhance the (suppressed) long-lived dark photon signal at the LHC \cite{Du:2019mlc}. In this model, the standard model is extended by a hidden sector that consists of two abelian gauge groups $U(1)_F$ and $U(1)_W$ with corresponding gauge bosons $X_\mu$ and $C_\mu$ respectively, and one Dirac fermion $\psi$ charged under both gauge groups \cite{Du:2019mlc}. The gauge boson mass terms {(due to the Stueckelberg mechanism \cite{Kors:2005uz, Feldman:2006ce, Feldman:2006wb, Feldman:2007wj, Feldman:2009wv, Du:2019mlc})} and the gauge interaction terms in the hidden sector are given by \begin{eqnarray} \label{eq:lagrangian} {\cal L} = - \frac{1}{2} ( \partial_\mu \sigma_1 + m_{1}\epsilon_1 B_{\mu} + m_{1} X_{\mu} )^2 - \frac{1}{2} ( \partial_\mu \sigma_2 + m_{2}\epsilon_2 B_{\mu} + m_{2} C_{\mu} )^2 + g_F \bar \psi \gamma^\mu \psi X_{\mu} + g_W \bar \psi \gamma^\mu \psi C_{\mu}, \end{eqnarray} where $B_\mu$ is the hypercharge boson in the SM, $\sigma_1$ and $\sigma_2$ are the axion fields in the Stueckelberg mechanism, $g_F$ and $g_W$ are the gauge coupling constants, and $m_1$, $m_2$, $m_{1}\epsilon_1$, and $m_{2}\epsilon_2$ are mass terms in the Stueckelberg mechanism with $\epsilon_{1,2}$ being (small) dimensionless numbers. The 2 by 2 neutral gauge boson mass matrix {in the SM} is extended to a 4 by 4 mass matrix due to the fact that the two new gauge bosons, $X_\mu$ and $C_\mu$, have mixed mass terms with the SM hypercharge boson $B_\mu$; the new neutral gauge boson mass matrix in the basis of $V= ( C,X, B, A^3)$ is given by \cite{Du:2019mlc} \begin{equation} M^2 = \begin{pmatrix} m_{2}^2 & 0 & m_{2}^2 \epsilon_2 & 0\cr 0 & m_{1}^2 & m_{1}^2 \epsilon_1 & 0 \cr m_{2}^2 \epsilon_2 & m_{1}^2 \epsilon_1 & \sum\limits_{i=1}^2 m_{i}^2 \epsilon_i^2 + {g'^2 v^2 \over 4} & - {g'g v^2 \over 4} \cr 0 & 0 & - {g'g v^2 \over 4} & {g^2 v^2 \over 4} \end{pmatrix} \label{eq:massmatrix} \end{equation} where $A^3$ is the third component of the $SU(2)_L$ gauge bosons, $g$ and $g'$ are gauge couplings for the SM $SU(2)_L$ and $U(1)_Y$ gauge groups respectively, and $v = 246$ GeV is the vacuum expectation value of the SM Higgs boson. Diagonalization of the mass matrix (via an orthogonal transformation ${\cal O}$) leads to the mass eigenstates $E= ( Z', A', Z, A)$ with $E_i={\cal O}_{ji} V_{j}$ where $A$ is the SM photon, $Z$ is the SM $Z$ boson, $A'$ is the dark photon, and $Z'$ is the new heavy boson. The interaction Lagrangian between the mass eigenstates of the neutral gauge {bosons} and the fermions is given by \cite{Du:2019mlc} \begin{equation} \label{eq:coupling} \left[ \bar f \gamma_\mu (v^f_i - \gamma_5 a^f_i) f + v^\psi_i \bar \psi \gamma_\mu \psi \right] E^\mu_i \end{equation} where $f$ is the SM fermion. The small coupling $v_4^\psi$ between the hidden fermion $\psi$ and the SM photon can be rewritten as $v_4^\psi \equiv e \delta$ where $\delta$ is usually referred to as ``millicharge''. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.4\textwidth]{./figures/limit_mchi_eps2_v5} \caption{ The upper bound on $\epsilon_2$ as a function of $m_\psi$. The other parameters are $m_1 = 3$ GeV, $m_2 = 700$ GeV, $g_F = 1.5$, $g_W = 1.0$, and $\epsilon_1 \ll \epsilon_2$. Here $\epsilon_2 \simeq (-g'/g_W) \delta$ where $\delta$ is the millicharge of $\psi$. The limits include the constraints on millicharged particles (shaded light gray) \cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx}, the electroweak precision measurements for the $Z$ mass shift (dashed red) \cite{Du:2019mlc}, the $Z$ invisible decay (dashed green)~\cite{ALEPH:2005ab}, the di-lepton high mass resonance search at ATLAS ({dashdotted} blue) ~\cite{ATLAS:2019vcr}, and the {monojet} search at ATLAS (solid black) \cite{Aaboud:2017phn}. } \label{fig:limit-mchi-eps2} \end{centering} \end{figure} Fig.~\ref{fig:limit-mchi-eps2} shows various experimental constraints on the model, including the {constraints} from millicharged particle searches \cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx}, the electroweak precision measurements for the $Z$ mass shift \cite{Du:2019mlc}, the $Z$ invisible decay \cite{ALEPH:2005ab}, the di-lepton high mass resonance search at ATLAS \cite{ATLAS:2019vcr}, and the {monojet} search at ATLAS \cite{Aaboud:2017phn}. Here we choose $m_1 = 3$ GeV, $m_2 = 700$ GeV, $g_F = 1.5$, $g_W = 1$, and $\epsilon_1 \ll \epsilon_2$. Throughout this analysis we use $m_2 = 700$ GeV, $g_F = 1.5$, and $g_W = 1$ as the default values for these three parameters as in {Ref.\ \cite{Du:2019mlc}}; in the {parameter} space of interest, we have $m_1 \simeq$ GeV $\ll m_2$ so that the dark photon mass $m_{A'} \simeq m_1$, and the heavy $Z'$ boson has a mass $m_{Z'} \simeq m_2$. For {the hidden fermion mass} $m_\psi \gtrsim 3$ GeV the electroweak constraint on the $Z$ mass shift gives the most stringent limit, $\epsilon_2 \lesssim 0.036$, whereas for the mass range $0.3$ GeV $\lesssim m_\psi \lesssim 3$ GeV, the leading constraints come from the recent ArgoNeuT data \cite{Acciarri:2019jly} and the milliQan demonstrator data \cite{Ball:2020dnx}. We note that the mass fraction of the millicharged DM is constrained to be $ \lesssim 0.4\%$ by the CMB data \cite{Boddy:2018wzy, dePutter:2018xte, Kovetz:2018zan}, which is satisfied in the parameter space of interest of our model \cite{Du:2019mlc}. \section{A mini-overview on lifetime-frontier detectors} \label{sec: detector-review} A number of new lifetime-frontier detectors have been proposed recently at the LHC, which can be used to search for LLPs. Table \ref{tab:detectors} shows the angular coverage, location, size, and expected running time of these new detectors. We classify the detectors into three categories: forward detectors, central transverse detectors, and precision timing detectors. The forward detectors include FACET \cite{FACET:talk1}, FASER \cite{Feng:2017uoz, Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm}, {FASER2} \cite{ Ariga:2019ufm, Kling:2021fwx}, AL3X \cite{Gligorov:2018vkc}, and {MoEDAL-MAPP} \cite{Staelens:2019gzt}. The central transverse detectors include CODEX-b \cite{Gligorov:2017nwh, Aielli:2019ivi}, MATHUSLA \cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd, Lubatti:2019vkf, Alpigiani:2020tva}, ANUBIS \cite{Bauer:2019vqk}. The precision timing detectors include CMS-MTD \cite{CMStiming}, ATLAS-HGTD \cite{Allaire:2018bof}, and LHCb-TORCH~\cite{LHCb:2017MTD, LHCb:2008vvz}. Below we provide a mini-overview of the new lifetime frontier detectors. \begin{table*}[htbp] \begin{tabular}{|l|c|c|c|c|c|} \hline Detector & \multicolumn{1}{|p{1.5cm}|}{\centering $\eta$} & \multicolumn{1}{|p{4cm}|}{\centering Distance from IP (m)} & \multicolumn{1}{|p{3cm}|}{\centering Decay volume (m$^3$)} & \multicolumn{1}{|p{2cm}|}{\centering LHC runs} \\ \hline \hline FACET~\cite{FACET:talk1} & $[6, 7.2]$ & $100$ ({upstream}) & $12.3$ & run 4 (2027) \\ \hline FASER~\cite{Feng:2017uoz, Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm} & $> 9$ & $480$ (downstream) & $0.047$ & run 3 (2022) \\ \hline {FASER2} \cite{ Ariga:2019ufm, Kling:2021fwx} & $> 6.87$ & $480$ (downstream) & $15.7$ & HL-LHC \\ \hline AL3X~\cite{Gligorov:2018vkc} & $[0.9, 3.7]$ & $5.25$ ({upstream}) & $915.2$ & run 5 ({2032}) \\ \hline {MoEDAL-MAPP} \cite{Staelens:2019gzt} & $\sim 3.1$ & $55$ ({upstream}) & $\sim 150$ & run 3 (2022) \\ \hline \hline CODEX-b~\cite{Gligorov:2017nwh, Aielli:2019ivi} & $[0.14, 0.55]$ & $26$ (transverse) & $10^3$ & run 4 (2027) \\ \hline MATHUSLA \cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd, Lubatti:2019vkf, Alpigiani:2020tva} & $[0.64, 1.43]$ & $60$ (transverse) & $2.5 \times 10^5$ & HL-LHC \\ \hline ANUBIS~\cite{Bauer:2019vqk} & [0.06, 0.21] & $24$ (transverse) & $\sim 1.3 \times 10^4$ & HL-LHC \\ \hline \hline CMS-MTD~\cite{CMStiming} & $[-3, 3]$ & {$1.17$ (barrel), $3.04$ (endcaps)} & 25.4 & HL-LHC \\ \hline ATLAS-HGTD~\cite{Allaire:2018bof} & $[2.4, 4]$ & $ 3.5$ (endcaps) & $8.7$ & HL-LHC \\ \hline LHCb-TORCH~\cite{LHCb:2017MTD, LHCb:2008vvz} & $[1.6, 4.9]$ & $9.5 $ {(beam direction)} & -- & HL-LHC \\ \hline \end{tabular} \caption{Proposed detectors for long-lived particles searches at the LHC. The first column shows the detector name, the second column shows the pseudorapidity coverage, the third column shows the distance from {interaction point (IP)} to the near side of the detector and the location (to far side of the detector, for FASER), the fourth column shows the decay volume of the detector, and the last column shows the starting time of {data-taking}. The first {five} detectors are located at the forward region of the {corresponding} IP; the middle three detectors are located at the far central {transverse} region of the {corresponding} IP; the last three detectors are the precision timing detectors to be installed at CMS, ATLAS and LHCb respectively to control the HL-LHC pile-up {background}. The HL-LHC is expected to start {data-taking} in 2027 (run 4) \cite{LHCtime}. Here ``upstream'' (``downstream'') means that the detector is located in the clockwise (anti-clockwise) direction of the {corresponding} IP, {viewed from above}.} \label{tab:detectors} \end{table*} \subsection{Forward detectors} FASER (the ForwArd Search ExpeRiment), is located at $480$ m downstream of the ATLAS detector along the beam axis \cite{Feng:2017uoz, Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm}. FASER has a cylindrical decay volume {of length} ${L} = 1.5$ m and radius $R = 10$ cm. FASER has been installed at {the} TI12 tunnel at {the} LHC and {is expected} to collect data during LHC Run 3 (2022) \cite{FASER:2019dxq}. The upgrade version, FASER 2, with {a decay volume of length} ${L = 5}$ m and radius $R = 1$ m is proposed to be installed during the HL-LHC run (2026-35) \cite{ Ariga:2019ufm, Kling:2021fwx}. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.5\textwidth]{figures/FACET-new-layout.pdf} \caption{Schematic layout of the proposed FACET detector (side view) \cite{albrow}.} \label{fig:facet-layout1} \end{centering} \end{figure} FACET (Forward-Aperture CMS ExTension) is a new lifetime frontier detector which is proposed to be installed $\sim$100 m upstream of the CMS detector along the beam axis \cite{FACET:talk1}. FACET {is proposed to} be built based on the CMS Phase 2 Upgrade concept, combining silicon tracker, timing detector, HGCAL-type EM/HAD calorimeter, and GEM-type muon system in a compact design \cite{FACET:talk1, FACET:talk2}; the latest design of the FACET detector is shown in Fig.\ \ref{fig:facet-layout1}. The decay volume of the FACET experiment is an enlarged LHC quality vacuum beam pipe which is 18 m long and has a radius of 50 cm \cite{FACET:LOI, FACET:talk1, FACET:talk2}. The FACET detector is shielded by about {35}-50 m of steel (in the Q1-Q3 quadrupoles and D1 dipole) in front of it \cite{FACET:LOI}; additional shielding materials are placed before the decay volume, as shown in Fig.\ \ref{fig:facet-layout1}. The FACET detector, surrounding the LHC beam pipe which has a radius $R = 18$ cm, is placed behind the decay volume. As a new proposed far forward detector, FACET has some {merits}. The {35}-50 m steel shielding before FACET, corresponding to $200-300$ nuclear interaction lengths, is comparable to the shielding material for FASER, which is $\sim 100$ m of concrete/rock, corresponding to $\sim 240$ nuclear interaction lengths. FACET will benefit from the high quality LHC vacuum pipe as the decay volume \cite{FACET:talk1, FACET:talk2}. FACET plans to have both the EM and HAD calorimeters \cite{FACET:LOI}, whereas FASER has only EM calorimeter \cite{Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm}. This allows FACET to have a better detection efficiency for the hadronic decays of the DP, especially for the neutral hadronic decays. AL3X (A Laboratory for Long-Lived eXotics) is {an on-axis cylindrical detector} which has been proposed to be installed at ALICE experiment during the LHC Run 5 \cite{Gligorov:2018vkc}. The detector will make use of the existing ALICE time projection chamber and the L3 electromagnet. It is also envisioned to move the ALICE detector by 11.25 m downstream from its current location, providing space for a spherical shell segment of tungsten to shield the detector from the IP. The AL3X detector is then expected to be located $5.25$ m away from the IP along the beam axis, with a 12 m long cylindrical decay volume of a 0.85 m inner radius and a 5 m outer radius. The MoEDAL-MAPP {detector} is the MAPP (Apparatus for the detection of Penetrating Particles) detector at MoEDAL (Monopole and Exotics Detector at the LHC) \cite{Staelens:2019gzt}, which is proposed to be installed at the UGCI gallery near the LHCb experiment (IP8) in future LHC runs. MoEDAL-MAPP is $55$ m from IP8 and with an angle of $5^{\circ}$ away from the beam line, with a fiducial volume of $\sim$150 m$^3$ \cite{Staelens:2019gzt}. \subsection{Central detectors} CODEX-b (Compact Detector for Exotics at LHCb) has been proposed to be constructed at the LHCb cavern \cite{Gligorov:2017nwh, Aielli:2019ivi}. The decay volume is designed to be $10\, \rm{m} \times 10 \,\rm{m} \times 10\, \rm{m}$. It is located $\sim$5 m in the z axis (beam direction) and $\sim$26 m in the transverse direction away from the LHCb IP, with a pseudorapidity coverage of $0.14 <\eta< 0.55$. The demonstrator detector, CODEX-$\beta$ (about $2\, \rm{m} \times 2 \,\rm{m} \times 2\, \rm{m}$) has been developed for the LHC Run 3 \cite{Aielli:2019ivi}. MATHUSLA (MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles) is a new proposed experiment near the ATLAS or CMS {IP} \cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd, Lubatti:2019vkf, Alpigiani:2020tva}. It is proposed to {be} placed $\sim$68 m downstream from the IP and $\sim$60 m above the LHC beam axis with a decay volume of $100\, \rm{m} \times 100\, \rm{m} \times 25\, \rm{m}$ \cite{Alpigiani:2020tva}. MATHUSLA was previously proposed to be installed at $\sim$100 m downstream from the IP and $\sim$100 m above the LHC beam axis with a decay volume of $200\, \rm{m} \times 200\, \rm{m} \times 20\, \rm{m}$ \cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd, Lubatti:2019vkf}. In this analysis, we adopt the parameters from the recent proposal \cite{Alpigiani:2020tva}. ANUBIS (AN Underground Belayed In-Shaft search experiment) \cite{Bauer:2019vqk} is a {new proposed} experiment taking advantage of the 18 m diameter, 56 m long PX14 installation shaft of the ATLAS experiment. The proposed detector consists of four tracking stations which have the same cross section area of 230 m$^2$ and are 18.5 m apart from each other. \subsection{Precision timing detectors} To mitigate the high pile-up background at the HL-LHC, various precision timing detectors will be installed at CMS \cite{CMStiming}, ATLAS \cite{Allaire:2018bof, Allaire:2019ioj,Garcia:2020wxj}, and LHCb \cite{LHCb:2017MTD}, which can be used for LLP searches \cite{Liu:2018wte, Mason:2019okp,Kang:2019ukr, Cerri:2018rkm, Du:2019mlc, Liu:2020vur, Cheung:2021utb}. The CMS-MTD {detector} consists of the precision minimum ionizing particle (MIP) timing detector with a timing resolution of 30 picoseconds \cite{CMStiming}. The timing layers will be installed between the inner trackers and the electromagnetic calorimeter for the barrel and {endcap} regions. The timing detector in the barrel region has a length of 6.08 m along the beam axis direction and a transverse distance of $1.17$ m away from the beam. The timing detectors in the {endcap} regions have a pseudorapidity coverage of $1.6 < | \eta | < 3.0$ and are located $\sim 3.0$ m from the IP. The decay volume of LLPs at CMS-MTD is $\sim 25.4$ ${\rm m}^3$ if one demands that the LLPs decay before arriving {at} the timing layers and the decay vertex has a transverse distance of $0.2\, {\rm m} < L_T < 1.17\, {\rm m}$ from the beam axis \cite{Liu:2018wte, Liu:2020vur}. The HGTD (High Granularity Timing Detector) has been proposed to be installed in front of the ATLAS endcap and forward calorimeters at $z = \pm 3.5$ m from the IP during the ATLAS Phase-II upgrade \cite{Allaire:2018bof, Allaire:2019ioj, Garcia:2020wxj}. The ATLAS-HGTD can cover the pseudorapidity range of $2.4 < | \eta | < 4.0$, and is expected to have a time resolution of 35 ps (70 ps) per hit at the start (end) of HL-LHC \cite{Garcia:2020wxj}. The decay volume of ATLAS-HGTD is $\sim 8.7 \,\rm{m}^3$, if LLPs are required to decay before arriving {at} the timing detector and the decay vertex has a transverse distance of $0.12\, {\rm m} < L_T < 0.64\, {\rm m}$ \cite{Garcia:2020wxj}. The TORCH (Time Of {internally} Reflected CHerenkov light) detector has been proposed to be installed at the next upgrade of LHCb \cite{LHCb:2017MTD}. The TORCH will be located at $z\sim 9.5$ m from the LHCb IP with {the} angular acceptance of 1.6$<\eta<$4.9. The precision of each track in the TORCH system is 15 ps \cite{LHCb:2017MTD}. \section{The dark photon production} \label{sec:DP_production} In our model, there are three main processes to produce dark photon $A'$ at the LHC: rare meson decays (hereafter MD), coherent proton bremsstrahlung (hereafter PB), and hidden sector radiation (hereafter HR); the corresponding Feynman diagrams are shown in Fig.\ (\ref{fig:feyndiag}). The MD and PB processes are common for the dark photon models, because dark photons are produced via interactions between the dark photon and charged particles in the SM in these two processes. The HR process is new in our model \cite{Du:2019mlc}, which is mediated by the interaction between the dark photon and the hidden sector particle $\psi$.\footnote{Here we do not consider the dark photon direct production channel which consists of the following processes $q\bar{q} \to A'$, $q\bar q \to g A'$, $q g\to q A'$ and $\bar{q} g \to \bar{q} A'$, because they suffer from large PDF uncertainties for sub-GeV $A'$ and are suppressed by $\epsilon_1$ which is much smaller than $\epsilon_2$ in the HR process.} \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.28\textwidth]{./figures/feyn-meson-decay} \includegraphics[width=0.3\textwidth]{./figures/feyn-proton-brems} \includegraphics[width=0.26\textwidth]{./figures/feyn-diag-v2} \caption{ Feynman diagrams for the dark photon production at the LHC: from meson decays (left), from the proton bremsstrahlung (middle), and from the hidden fermion radiation (right). } \label{fig:feyndiag} \end{centering} \end{figure} \subsection{Meson decays} \label{subsec:MD} Dark photons can be produced in the $m \to \gamma + A'$ process, where $m$ denotes a light meson, as shown in the left diagram in Fig.\ \ref{fig:feyndiag}; the branching ratio can be computed via \cite{Batell:2009di} \begin{equation} {\rm BR} \left (m \to A' + \gamma \right) = 2\, \epsilon^{2} \left( 1-\frac{M_{A'}^2}{M_{m}^2} \right)^{3} {\rm BR}\left(m \to \gamma \gamma \right), \label{eq:brm} \end{equation} {where $\epsilon$ is the coupling constant given in Eq.\ \eqref{eq:epsilon}.} In the parameter space of interest of our model, one has $\epsilon \approx (0.27/e)\, \epsilon_1$ for $m_1 \lesssim 30$ GeV. For light mesons, one has ${\rm BR}(\pi^{0} \rightarrow \gamma \gamma ) \simeq 0.99$ and ${\rm BR}(\eta \rightarrow \gamma \gamma) \simeq 0.39$ \cite{Zyla:2020zbs}. Since light mesons can be copiously produced in the forward direction at high energy $pp$ collisions, (for example, the production cross section of $\pi^0$ ($\eta$) in each hemisphere at the LHC is $1.6 \times 10^{12}$ pb ($1.7 \times 10^{11}$ pb) \cite{Ariga:2018uku}), dark photon from rare meson decays can be a leading dark photon production mode at the LHC if the decay is kinematically allowed \cite{Feng:2017uoz}. We neglect the $m \to A' A'$ process because we have $\epsilon \ll 1$ for the LLDP. In our analysis, we generate the {four-momentum spectrum} for the $\pi^0/\eta$ mesons using the EPOS-LHC \cite{Pierog:2015} model in CRMC \cite{Ulrich:crmc} with $10^5$ simulation events of $pp$ inelastic collision at the LHC with $\sqrt{s}=13$ TeV. We then boost the momentum of the dark photon (which is isotropically distributed in the $\pi^0/\eta$ rest frame) to the lab frame, by using the meson momentum. Our simulations are found to be consistent with FORESEE \cite{Kling:2021fwx}. {We also simulate the heavy mesons $D^0$, $B^0$ and $J/\psi$ using PYTHIA 8 \cite{Sjostrand:2014zea}. We found that the DP production cross section due to decays of these heavy mesons is about five orders of magnitude smaller than the light mesons ($\pi^0$ and $\eta$). Therefore we neglect the contribution from heavy meson decays in our analysis.} \subsection{Proton bremsstrahlung} \label{subsection:proton_bremss} Proton bremsstrahlung process is another major production mode of light dark photons in high energy $pp$ collisions; the Feynman diagram is shown as the middle diagram in Fig.\ \ref{fig:feyndiag}. The dark photon signal arising from the proton bremsstrahlung process can be computed by the Fermi-Weizsacker-Williams (FWW) method \cite{Fermi:1924, Williams:1934, Weizsacker:1934}, in which {the} proton is treated as a coherent object; the total number of the dark photon produced is given by \cite{Feng:2017uoz} \begin{eqnarray} N_{A'}^{\rm PB} &=& {\cal L} \, {|F_{1}\left(m_{A^{\prime}}^{2}\right)|^{2}} \int d z \, d p_{T}^{2} \,\sigma_{p p}\left(s^{\prime}\right) w\left(z, p_{T}^{2}\right) \Theta\left(\Lambda_{\mathrm{QCD}}^{2}-q_{\rm{min}}^{2}\right), \label{eq:proton-brem} \end{eqnarray} where $N_{A'}^{\rm PB}$ is the number of dark photon events from the PB process, ${\cal L}$ is the integrated luminosity, $F_1$ is the form factor function, $z=p^L_{A'}/p_p$ with $p^L_{A'}$ being the longitudinal momentum of the dark photon and $p_p$ the proton beam momentum, $p_T$ is the transverse momentum of the dark photon, $\sigma_{pp}(s^{\prime})$ is the inelastic cross section \cite{Tanabashi:2018oca} with $s' = 2 m_p (E_p -E_{A'})$ {in the rest frame of one of the colliding protons}, $w\left(z, p_{T}^{2}\right)$ is the splitting function, $\Lambda_{\mathrm{QCD}} \simeq 0.25$ GeV is the QCD scale, and $q$ is the momentum carried by the virtual photon in the middle diagram in Fig. \ref{fig:feyndiag}. The splitting function $w\left(z, p_{T}^{2}\right)$ in Eq.~\eqref{eq:proton-brem} is given by~\cite{Brunner:2014, Kim:1973, Tsai:1977} \begin{eqnarray} \begin{aligned} w\left(z, p_{T}^{2}\right)& \simeq & \frac{\epsilon^{2} \alpha}{2 \pi H}\left\{\frac{1+(1-z)^{2}}{z}-2 z(1-z)\right. \left(\frac{2 m_{p}^{2}+m_{A^{\prime}}^{2}}{H}-z^{2} \frac{2 m_{p}^{4}}{H^{2}}\right) \\ && + \,\; 2 z(1-z)\left(z+(1-z)^{2}\right) \frac{m_{p}^{2} m_{A^{\prime}}^{2}}{H^{2}} \left. + 2 z(1-z)^{2} \frac{m_{A^{\prime}}^{4}}{H^{2}}\right\}, \end{aligned} \end{eqnarray} where $H=p_{T}^{2}+(1-z) m_{A^{\prime}}^{2}+z^{2} m_{p}^{2}$. To guarantee the validity of the FWW approximation, the Heaviside function $\Theta$ is imposed in Eq.~\eqref{eq:proton-brem} with the minimal virtuality of the photon cloud around the beam proton given by \cite{Kim:1973, Tsai:1977} \begin{equation} \left|q_{\min }^{2}\right| \approx \frac{1}{4 E_{p}^{2} z^{2}(1-z)^{2}}\left[p_{T}^{2}+(1-z) m_{A^{\prime}}^{2}+z^{2} m_{p}^{2}\right]^{2}. \end{equation} The form factor $F_1(p_{A'}^2)$ in Eq.~\eqref{eq:proton-brem} is given by~\cite{Feng:2017uoz, Faessler:2009tn} \begin{equation} F_1(p_{A'}^2) = \sum_{V =\rho \, \rho' \, \rho'' \, \omega \, \omega' \, \omega''} \frac{f_V m_V^2}{m_V^2 - p_A'^2 - i m_V \Gamma_V}, \label{eq:PB_formfactor} \end{equation} where $m_V$ ($\Gamma_V$) is the mass (decay width) of the vector meson, $f_{\rho} = 0.616$, $f_{\rho'} = 0.223$, $f_{\rho''} = -0.339$, $f_{\omega} = 1.011$, $f_{\omega'} = -0.881$, and $f_{\omega''} = 0.369$. \subsection{Hidden radiation} Dark photons can also be produced via hidden fermion radiations in the HR process, as shown in the third diagram of Fig.\ \ref{fig:feyndiag}. Within certain parameter space of the models in {Ref.\ \cite{Du:2019mlc}}, the HR process can be more important than the MD and PB processes. For the models considered in this analysis, dark photons in the HR process are produced at the LHC in the radiation process of the hidden sector fermion $\psi$, which are pair-produced at the LHC via the $q \bar q \to \gamma^{*}/Z/Z' \to \bar \psi \psi$ process, as shown in the third diagram in Fig.~\ref{fig:feyndiag}. In the MD and PB processes, the dark photon production cross section is suppressed by the small $\epsilon$ parameter (given in Eq.\ \eqref{eq:epsilon}) needed for the long lifetime of the dark photon. In the HR process, however, the LHC production of $\psi$ is not controlled by $\epsilon$ so that the LHC cross section {of} $\psi$ can be {sizable} even for heavy $\psi$. \begin{figure}[htbp] \includegraphics[width=0.4\textwidth]{figures/xsec_new4} \caption{ The contributions to the $pp \rightarrow \psi\bar{\psi}$ cross section at the LHC from three different mediators: $\gamma$ (blue-dashed), $Z$ (black-dotted), $Z'$ (red-{{dashdotted}}). {The total cross section (green-solid) taking into account all contributions (including the $A'$ contribution and the interference terms) is also shown.} We use $\epsilon_{1}=6\times 10^{-7}$, $\epsilon_{2}= {0.005}$, and $m_{A'} = 0.4$ GeV. The gray shaded region indicates the parameter space excluded by the millicharge constraints \cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx}. We use {NNPDF23LO} \cite{Ball:2012cx} which is the default PDF in MADGRAPH 5. } \label{fig:prop-xsec-psipair} \end{figure} To obtain the contribution from the HR process, we use FeynRules \cite{Alloul:2013bka} to produce the UFO file for our model, which is then passed into MADGRAPH 5 \cite{Alwall:2014hca} to generate the $p p \to \psi \bar{\psi}$ events at the LHC. We further use PYTHIA 8 \cite{Sjostrand:2014zea, Carloni:2010tw, Carloni:2011kk} to simulate the dark radiation process of the $\psi$ particle to obtain the dark photons. Fig.~\ref{fig:prop-xsec-psipair} shows the contributions to the $p p \to \psi \bar\psi$ cross section at the LHC from three different mediators (photon, $Z$, and $Z'$), where the interference effects have been neglected.\footnote{We neglect the process mediated by the dark photon since it is suppressed by the small $\epsilon$ parameter needed for LLDP so that it is several orders of magnitude smaller than the other three mediators in our analysis. } We use MADGRAPH 5 \cite{Alwall:2014hca} to compute the cross sections, where we have fixed $m_{A'} = 0.4$ GeV, $\epsilon_1 = 6 \times 10^{-7}$, and $\epsilon_2 = {0.005}$. For $m_{\psi} \lesssim 8$ GeV the dominant contribution to the $\psi \bar \psi$ pair-production cross section comes from the s-channel photon process; for higher $\psi$ mass, the contributions from $Z$ and $Z'$ exchanges become more important. \subsection{Comparison of the three DP production channels} In Fig.~\ref{fig:compare-distribution}, we compare the three dark photon production channels at the LHC, both in the $4\pi$ solid angle and in the very forward region. The very forward region is defined by the dark photon pseudorapidity $\eta_{A'} > 6$.\footnote{The angular acceptance of FACET is $6< \eta <7.2$ \cite{FACET:talk1}, and the angular acceptance of FASER is $\eta > 9$ \cite{Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm}.} We choose $\epsilon_{1} = 10^{-6}$ and $\epsilon_2 = {0.005}$ for both figures in Fig.~\ref{fig:compare-distribution}. The dark photon cross section in the HR process is calculated by $\sigma_{A'}^{\rm HR} = \bar{n}_{A'} \sigma_{p p \to \psi \bar\psi}$ where $\bar{n}_{A'}$ is the expected number of dark photons per $\psi \bar \psi$ event. In our analysis, $\bar{n}_{A'}$ is computed as the ratio of the total number of dark photons to the total number of $\psi \bar \psi$ events in the simulation, namely $\bar{n}_{A'} = N_{A'}^{\rm HR } / N_{\psi \bar \psi}$. \begin{figure}[htbp] \includegraphics[width=0.4\textwidth]{figures/diff-production-full-fwd} \includegraphics[width=0.4\textwidth]{figures/diff-production-full-fwd-0p2} \caption{The LHC production cross section of dark photon $\sigma_{A'}$ from three contributions: HR (red lines), MD (blue lines) and PB (black lines). The solid lines correspond to the cross section in full solid angle, and the dashed lines represent the cross section in the very forward region with $\eta_{A'} > 6$. Here we use $\epsilon_1 = 10^{-6}$ and $\epsilon_2 = 0.005$ for both panels; we choose $m_{1} = 0.4$ GeV in the left panel and $m_{1} = 0.2 \,m_{\psi}$ in the right panel. The gray shaded region ($m_{\psi} \lesssim 0.45$ GeV for $\epsilon_2 = {0.005}$) indicates the parameter space excluded by the millicharge constraints \cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx}. } \label{fig:compare-distribution} \end{figure} The left panel figure of Fig.~\ref{fig:compare-distribution} shows the dark photon cross sections as a function of the hidden fermion mass $m_\psi$ for the case where the dark photon mass is fixed at $m_{A'} \simeq 0.4$ GeV. The dark photon cross section in the HR process decreases with the hidden fermion mass $m_\psi$; the cross sections in the MD and PB processes are independent of $m_\psi$, since these processes do not involve the hidden fermion $\psi$. For light $\psi$ the HR process dominates the MD and PB processes, whereas for heavy $\psi$ the MD and PB processes become more important. In particular, the HR process dominates the dark photon production if $m_\psi \lesssim$ 5 GeV (30 GeV) in the very forward ($4\pi$ solid angle) region. The right panel figure of Fig.~\ref{fig:compare-distribution} shows the dark photon cross sections as a function of the dark photon mass $m_{A'}$ for the case where $m_\psi = 5 \, m_{A'}$. The HR process dominates {the entire mass range except the small} resonance region near $m_{A'} \simeq 0.8$ GeV, where the PB process becomes larger. We note that, in the right panel of Fig.~\ref{fig:compare-distribution}, the resonance in the PB process is due to the pole structure (due to various vector mesons) in the form factor given in Eq.~\eqref{eq:PB_formfactor}, and the kink features in the MD cross section arise because of the mass threshold effects in meson decays. About $10\%$ of the dark photons in the MD and PB processes are produced in the very forward region as shown in Fig.~\ref{fig:compare-distribution}. For the HR process, the number of dark photons produced in the very forward region is sizable in the low $\psi$ mass region, with a fraction up to $\sim {15\%}$ for $m_\psi \simeq 0.5$ GeV, as shown in the left panel figure of Fig.~\ref{fig:compare-distribution}. For heavy $\psi$ mass the cross section in the very forward region is significantly reduced, for example, less than 1\% of the dark photon in the HR process produced in the forward region when $m_\psi \gtrsim 6$ GeV. This is because heavier $\psi$ particles tend to be produced more isotropically than lighter $\psi$ particles and thus lead to {fewer} events in the forward region. \subsection{PDF uncertainties} \begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{figures/pdf-uncertainty} \caption{Comparison of LHC cross sections using different PDFs. The LHC cross sections of $\sigma (pp\to \psi \bar\psi)$ (solid), $\sigma(A')$ of the HR process in the $4 \pi$ angular region (dotted), in the forward region $\eta_{A'} > 6$ (dashed), and in the FACET detector ({dashdotted}) are computed with NNPDF23 \cite{Ball:2012cx} (black), NNPDF40 \cite{Ball:2021leu} (red), and CT18 \cite{Hou:2019qau} (blue).} \label{fig:pdf-uncertainty} \end{figure} For light $\psi$ one has to integrate over the small $x$ region in PDFs where there are large uncertainties \cite{Feng:2017uoz}. In the process $pp \to \psi \bar \psi$, the minimum value of $x$ is $ x_{\rm min} = {4 m_\psi^2 / s}, $ if there is no cut on the $\psi$ momentum. Thus, for the $m_\psi = 15 \, (0.5)$ GeV case, one has to integrate over the $x$ range near $x_{\rm min} \simeq 5 \times 10^{-6}\, (6 \times 10^{-9})$. The minimum value of $x$ is $10^{-9}$ in the PDFs sets: NNPDF23LO \cite{Ball:2012cx}, NNPDF40 \cite{Ball:2021leu}, and CT18 \cite{Hou:2019qau}. Thus, for the $m_\psi = 0.5$ GeV case, the dark photon production cross section in the HR process (denoted as $\sigma_{A'}^{\rm HR}$) depends on the PDFs in the $x$ region where PDFs begin . To check the stability of the LHC cross sections (of small $m_\psi$) against different PDFs, we compute various LHC cross sections including $\sigma (pp\to \psi \bar\psi)$, $\sigma_{A'}^{\rm HR}$ in the $4 \pi$ angular region, $\sigma_{A'}^{\rm HR}$ in the forward region $\eta_{A'} > 6$, and $\sigma_{A'}^{\rm HR}$ in the FACET detector, by using three different PDFs: NNPDF23LO (the default PDFs in MADGRAPH 5), NNPDF40, and CT18, in Fig.\ \ref{fig:pdf-uncertainty}. For $\sigma(pp \to \psi \bar\psi)$ at $m_\psi \simeq 0.5$ GeV, the NNPDF40 (CT18) leads to a cross section that is about {$30\%$ ($45\%$)} of that from NNPDF23; for $\sigma_{A'}^{\rm HR}$ in the $4 \pi$ angular region, these two percentage numbers become {$55\%$ ($80\%$)}. This is because the $\psi$ particles have to be energetic enough to radiate dark photons, {which then corresponds} to larger $x_{\rm min}$ values in the PDF integration, leading to less PDF uncertainties. The PDF uncertainties in the $4\pi$ angular region are smaller than the forward region, which is due to the fact that the $4\pi$ region includes the region with significant transverse momentum. In the sensitivity contours of FACET as shown in Fig.\ \ref{fig:psi5DP}, the mass of $\psi$ has to satisfy $m_\psi \gtrsim 1.5$ GeV to be consistent with the millicharge constraints. We find that NNPDF40 (CT18) leads to a cross section of $\sim 33\%$ ($\sim 64\%$) of NNPDF23 at $m_\psi \simeq 1.5$ GeV, as shown in Fig.\ \ref{fig:pdf-uncertainty}. For the $m_\psi \simeq 15$ GeV case (the $\psi$ mass in Fig.\ \ref{fig:psi15GeV}), we find that the cross section computed with NNPDF40 (CT18) is $\sim 80\%$ ($\sim 97\%$) of that with NNPDF23, in Fig.\ \ref{fig:pdf-uncertainty}. Thus the PDF uncertainty on our sensitivity contours is less significant. Furthermore, the sensitivity contours analyzed with different PDFs, as shown in Fig.\ \ref{fig:psi5DP}, show that different PDFs only modify the limits for small $\epsilon_1$ values (the lower edge of the contours), but have unnoticeable effects on {large} $\epsilon_1$ values (the upper edge of the contours). This is due to the fact that the large $\epsilon_1$ values correspond to small decay lengths, and thus the dark photon should have a significant momentum to decay inside the far detectors. For that reason, the $x_{\rm min}$ in the PDF integration becomes larger for the model points with large $\epsilon_1$ values, resulting in insignificant PDF uncertainty. \section{Analysis} \label{sec:simu-and-considerations} In this analysis, we investigate the LLDP signals in the following four detectors: FACET, FASER, MATHUSLA, and CMS-MTD. We carry out analysis for the model points in the parameter space spanned by the DP mass $m_{A'}$ and the DP lifetime $\tau_{A'}$.\footnote{We select 60 grid points in the mass range $m_{A'} \in (0.1,\, 30)$ GeV and 600 grid points $\tau_{A'} \in (10^{-4},\, 10^{2})$ m for FACET and FASER detectors, and 800 grid points $\tau_{A'} \in (10^{-4},\, 10^{4})$ m for MATHUSLA detector. The points on both axes are chosen uniformly on log scale.} For each model point, we compute the DP signal events from the MD, PB and HR processes. For the MD and PB processes, we obtain the DP momentum and the position of its decay vertex, by using the simulations discussed in section \ref{subsec:MD} and section \ref{subsection:proton_bremss}, respectively. We then boost the daughter particles from dark photon decays to the lab frame, from the rest frame of the dark photon, where the daughter particles are isotropic. For the HR process, we use MADGRAPH 5 \cite{Alwall:2014hca} to generate $10^6$ events for the $p p \to \psi \bar{\psi}$ process, and use PYTHIA 8 \cite{Sjostrand:2014zea, Carloni:2010tw, Carloni:2011kk} to simulate the hidden radiation of the $\psi$ particle and the decay of the dark photon, which outputs the momentum information for the DP and its daughter particles, as well as the decay position of the DP. To expedite the analysis (only a small fraction of simulated events from PYTHIA 8 are actually inside the decay volume of the detectors), we disregard the decay position of the dark photon provided by PYTHIA 8 and use the dark photons that decay both inside and outside of the decay volume. Thus, for the three far detectors (FACET, FASER, MATHUSLA), we compute the probability of detecting a DP as follows \begin{equation} P_{A'} = f(\theta, \phi) \int_{L_{\rm min}}^{L_{\rm max}} d \ell \frac{e^{-\ell/\ell_{A'}}} {\ell_{A'}} \, \omega \, , \label{eq:prob-detection} \end{equation} where $L_{\rm min}$ ($L_{\rm max}$) is the minimal (maximum) distance between the decay volume and the IP along the $(\theta, \phi)$ direction {with $\theta$ and $\phi$ the polar and azimuthal angles of the dark photon respectively}, $\ell_{A'} = \tau_{A'} |\vec{p}_{A'}|/m_{A'}$ is the decay length of dark photon with $\tau_{A'}$ being the lifetime, $f(\theta, \phi)$ describes the angular acceptance of the decay volume, and {$\omega$ equals 1 if the decay final states of the DP satisfy additional detector cuts ($\omega$ equals 0 otherwise).} For a cylindrical detector (e.g.\ FASER and FACET) that is placed along the beam direction with a distance $d$ from the IP to the near side of the detector, the parameters in Eq.~\eqref{eq:prob-detection} are given by \begin{eqnarray} L_{\rm min} &=& d, \quad L_{\rm max} = d+L, \\ f(\theta, \phi) &=& \Theta(R/{L_{\rm min}} - \tan \theta) \,\Theta(\tan \theta - r/L_{\rm max}), \label{eq:fthetaphi} \end{eqnarray} where $L$ is the length of decay volume of the detector, $r$ ($R$) is the inner (outer) radius of the decay volume, and $\Theta$ is the Heaviside step function. For the FACET detector, one has $r={18}$ cm and $R=50$ cm; for the FASER (FASER 2) detector, one has $r=0$ and $R=10$ (100) cm. For the cylindrical forward detectors, the pseudorapidity range is often used to describe the acceptance of the detectors, $f(\theta, \phi) = \Theta(\eta_{\rm{max}} - \eta_{A'}) \Theta(\eta_{A'} - \eta_{\rm{min}})$. Thus for the FACET detector, one has $\eta_{\rm{min}} \simeq 6$ and $\eta_{\rm{max}}\simeq {7.2}$;\footnote{$\eta_{\rm{min}} \simeq 6$ corresponds to the left-upper corner of the decay volume, and $\eta_{\rm{max}}\simeq {7.2}$ corresponds to the right-bottom (inner radius) corner of the upper half of the decay volume as shown in the Fig.~\ref{fig:facet-layout1}. } for the FASER (FASER 2) detector, one has $\eta_{\rm{min}} \simeq 9$ (7) and $\eta_{\rm{max}} = +\infty$. For a box-shape detector with height $H$, width $W$, length $L$ and is located at a distance $d$ from the IP along the z-axis and a distance $h$ above the LHC beam (along the x-axis) (e.g.\ MATHUSLA), one has\footnote{Note the distance $d$ here is different from that in Tab.~\ref{tab:detectors}.} \begin{eqnarray} \label{eq:box1-new} L_{\rm max} &=& \left\{ \begin{aligned} &\frac{h+H}{\sin \theta \cos \phi}\quad& {\rm if} \;\; & \tan \theta > \frac{h + H } { (d+L) \cos \phi}\; \&\; |\tan\phi | < \frac{W}{2(h+H)},\\ &\frac{d+L}{\cos \theta }\quad& {\rm if} \;\; & \tan \theta < \frac{h + H } { (d+L) \cos \phi}\; \&\; |\sin\phi | < \frac{W}{2(d+L)\tan\theta}, \\ &\frac{W}{2\sin \theta |\sin\phi|}\quad& {\rm if} \;\; & |\sin\phi | > \frac{W}{2(d+L)\tan\theta}, \end{aligned} \right. \\ \label{eq:box2-new} L_{\rm min} &=& \left\{ \begin{aligned} &\frac{ h }{\sin \theta \cos \phi}\quad& {\rm if} \;\; & \tan \theta < \frac{h } { d \cos \phi}, \\ &\frac{d}{\cos \theta }\quad& {\rm if} \;\; & \tan \theta > \frac{h} {d\cos \phi}, \end{aligned} \right. \\ \label{eq:box3-new} f(\theta, \phi) &=& \Theta \left( \tan \theta - \frac{h}{(d+L)\cos \phi} \right) \, \Theta \left( \frac{h + H }{d\cos \phi} - \tan \theta\right) \, \Theta \left(\frac{W}{2 h} - |\tan \phi| \right) \Theta \left(\cos\phi \right). \end{eqnarray} For the MATHUSLA detector, we use $d=$ 68 m, $h=$ 60 m, $W=$ 100 m, $L=$ 100 m, and $H=$ 25 m \cite{Alpigiani:2020tva}. For FACET, we {further} require both daughter particles from the DP decay to traverse both the tracker and the calorimeter detectors. For {the} FASER detector, we further apply a detector cut on the energy of DP daughter particles $E_{\rm vis} > 100$ GeV \cite{Ariga:2018uku} to reduce the trigger rate and remove possible {background} (BG) at low energies. For {the} FACET detector, because the BG events are expected to be highly suppressed due to the front shielding and the high quality vacuum of the decay volume, no detector cut is required. For {the} MATHUSLA detector, we require both DP daughter particles to hit the ceiling detector and are well-separated with an opening angle $\Delta \theta > 0.01$ \cite{Curtin:2018mvb}; we note that $\omega=0$ for the second and third lines of Eq.~\eqref{eq:box1-new}, by requiring such a cut. Thus the number of events in the far detector can be obtained \begin{equation} N = {\cal L} \cdot \sigma_{A'} \cdot \langle P_{A'} \rangle \quad {\rm with } \quad \langle P_{A'} \rangle = \frac{1}{N_{\rm A'}} \sum_{i=1}^{N_{A'}} P_{A'_i}, \end{equation} where $\sigma_{A'}$ is the total DP production cross section, $\langle P_{A'} \rangle$ denotes the average detection probability of the DP event, $N_{A'}$ is the total number of the DP in the simulation and $P_{A'_i}$ is the individual detection probability of the $i$th dark photon event in the simulation which is given by Eq.~\eqref{eq:prob-detection}. For the CMS-MTD detector, we only consider the DPs produced from the HR process for the CMS-MTD analysis. This is because the CMS-MTD detector does not have sensitivity to the DP mass below $\sim$GeV \cite{Du:2019mlc}. Following {Ref.\ \cite{Du:2019mlc}}, we use MADGRAPH 5 to generate $\psi \bar \psi$ events with an ISR jet to time stamp the event, i.e., $pp \to \psi\bar{\psi}j$ where the ISR jet is required to have $p_T > 30$ GeV and $|\eta| < 2.5$. The DP is required to have a transverse decay length $0.2 {\rm m} < \ell_{A'}^T < 1.17 {\rm m}$ and a longitudinal decay length $|z_{A'}| <3.04$ m. The final state leptons from DP decays are detected by the precision timing detector; the leading lepton should have $p_T > 3$ GeV. The time delay variable \cite{Liu:2018wte} between the ISR jet and the leading lepton is required to $\Delta t > 1.2\ \rm{ns}$ \cite{Du:2019mlc}. \section{Result} \label{sec:result} In this section we discuss the projected sensitivities of the future LLP detectors including FACET, FASER, MATHUSLA and the precision timing detector {CMS-MTD}. Our main results are shown in Figs.\ (\ref{fig:MDplusPB}, \ref{fig:psi15GeV}, \ref{fig:psi5DP}, \ref{fig:FACET-Nevent}), {where sensitivity contours for far detectors are made by requiring the new physics events to be $N = 5$, under the assumption that the SM processes do not contribute any event in the decay volume after various shieldings and detector cuts.} We are only interested in the parameter space in which $m_{A'} < 2 m_\psi$ so that the dark photon {is} kinematically forbidden to decay into the hidden fermion pair, {leading to a long-lived dark photon.}\footnote{{If $m_{A'} > 2 m_\psi$, the dark photon can decay into a pair of hidden fermions, which then leads to a prompt decay dark photon, assuming an order-one gauge coupling in the hidden sector.}} \begin{figure}[htbp!] \begin{centering} \includegraphics[width=0.4\textwidth]{figures/mdplusbrem_300} \includegraphics[width=0.4\textwidth]{figures/mdplusbrem} \caption{Projected sensitivities from FACET (red), FASER (magenta), FASER2 (black), and MATHUSLA (green), at the HL-LHC with the integrated luminosities of ${\cal L} = 300$ fb$^{-1}$ (left panel) and ${\cal L} = 3$ ab$^{-1}$ (right panel) to the ``minimal'' dark photon models in which only the MD and PB processes contribute to the signals. Contours correspond to the expected signal events $N=5$. The dark gray shaded region indicates the parameter space that has been excluded by various experiments; the limits are obtained with the Darkcast package \cite{darkcast}. } \label{fig:MDplusPB} \end{centering} \end{figure} Fig.\ \ref{fig:MDplusPB} shows the projected sensitivities on the minimal dark photon models with $300$ fb$^{-1}$ and $3$ ab$^{-1}$ data, from FACET, FASER, FASER2, and MATHUSLA. We only include the MD and PB processes here; the HR process is absent. For that reason, the analysis in Fig.\ \ref{fig:MDplusPB} is also applicable to the minimal dark photon model. Among the new detectors, the parameter space probed by FACET is larger than the other experiments. In particular, with an integrated luminosity of $ 300\, {\rm fb}^{-1}$ (3 ${\rm ab}^{-1}$) at the HL-LHC, FACET can probe the DP mass up to $\sim 1.3$ GeV {($1.5$ GeV)}, whereas FASER can only probe the DP mass up to $\sim 0.12$ GeV {($0.25$ GeV plus the island near $0.79$ GeV)}, and FASER2 can only probe the DP mass up to $\sim 0.8$ GeV {($1.3$ GeV)}. Because DPs arising from the PB and MD processes are likely to be distributed in the forward region, MATHUSLA, a detector located in the central transverse region, has difficulties to probe the parameter space of the minimal dark photon model. For that reason, MATHUSLA only probe a small parameter region with 3 ${\rm ab}^{-1}$ data, which, however, has been excluded already by the current experimental constraints. We note that the dips at $m_{A'} \sim 0.8$ GeV in the contours are due to the resonance in the PB process, and the kink features at $m_{A'} \sim 0.2$ GeV are due to the mass threshold effects in the MD process. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.4\textwidth]{./figures/psi15contour1_new4} \includegraphics[width=0.4\textwidth]{./figures/psi15contour2_new4} \caption{Projected sensitivities from FACET (red), FASER (magenta), FASER2 (black), and MATHUSLA {(green)}, at the HL-LHC with the integrated luminosities of ${\cal L} = 300$ fb$^{-1}$ (left panel) and ${\cal L} = 3$ ab$^{-1}$ (right panel) to our dark photon model in which all the three dark photon production channels (MD, PB, and HR) contribute to the signals. Here we fix $m_\psi = 15$ GeV and $\epsilon_2 = 0.01$, and require $m_{A'} < 2 m_{\psi}$ so that the dark photon cannot decay into invisible final states. Contours correspond to the expected signal events $N=5$. The dark gray shaded region indicates the excluded dark photon parameter space by various experiments where the HR process is not considered; the limits are obtained with the Darkcast package \cite{darkcast}. } \label{fig:psi15GeV} \end{centering} \end{figure} Fig.~\ref{fig:psi15GeV} shows the projected sensitivities for our dark photon models {from FACET, FASER, FASER2, MATHUSLA, and CMS-MTD.} Here the dark photon production contributions from all channels including the MD, PB and HR processes are considered. With the inclusion of the HR process, the FACET and MATHUSLA sensitivity contours are significantly enlarged to heavier DP mass region, as compared to Fig.~\ref{fig:MDplusPB}; the FASER and FASER2 sensitivity contours, on the other hand, are similar to those in Fig.~\ref{fig:MDplusPB}. With $ 300\, {\rm fb}^{-1}$ ($3\,{\rm ab}^{-1}$) data at the HL-LHC, FACET can probe the parameter space of our model up to $ m_{A'} \simeq 1.9 \, (15)$ GeV. The {CMS-MTD} probes a relative large dark photon mass region: down to dark photon mass $\sim 3 \, (2)$ GeV for $ 300\, {\rm fb}^{-1}$ ($ 3\, {\rm ab}^{-1}$) data at HL-LHC. This is due to the fact that a light dark photon leads to not only a small time delay but also small transverse momenta of the final state leptons, which will suffer from a large SM background for the time delay searches \cite{Du:2019mlc}. Interestingly, this {CMS-MTD} sensitivity region partly overlaps with MATHUSLA sensitivity region for the luminosity of $ 300\, {\rm fb}^{-1}$, and with both FACET and MATHUSLA sensitivity regions for the luminosity of $ 3\, {\rm ab}^{-1}$. Thus, if a dark photon in this overlap region is discovered, one can {see} the FACET and MATHUSLA to verify the results from the CMS-MTD. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.4\textwidth]{./figures/psi5acontour1_new4} \includegraphics[width=0.4\textwidth]{./figures/psi5acontour2_new4} \caption{Same as Fig.\ \ref{fig:psi15GeV} except $m_\psi=5 \, m_{A'}$. The light gray region is excluded by the millicharge constraints \cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx}. For the FACET contours we use NNPDF23 \cite{Ball:2012cx} (red-solid), CT18 \cite{Hou:2019qau} (red-dashed), and NNPDF40 \cite{Ball:2021leu} (red-dotted).} \label{fig:psi5DP} \end{centering} \end{figure} {Fig.~\ref{fig:psi5DP} shows the expected limits from FACET, FASER, FASER2, MATHUSLA, and CMS-MTD to the parameter space of our dark photon model with the mass relation $m_\psi=5 \, m_{A'}$.} {The sensitivity contours are similar to Fig.~\ref{fig:psi15GeV}, but with some changes.} {For light $\psi$, the millicharge constraints are important, which excludes the parameter space $m_{A'} \lesssim 0.3$ GeV (corresponding to $m_\psi > 1.5$ GeV for $\epsilon_2 = 0.01$).} The parameter space probed by FASER with ${\cal L} = 300\, {\rm fb}^{-1}$ ($3\, {\rm ab}^{-1}$) at the HL-LHC is (nearly) excluded by the millicharge constraints. Further, the heavy dark photon mass region can no longer be probed by various detectors as in Fig.~\ref{fig:psi15GeV}. This is because the heavy dark photon mass corresponds to the heavy $\psi$ mass via the mass relation $m_\psi=5 \, m_{A'}$, which then leads to a suppressed $pp \to Z^* \to \psi \psi$ cross section. Similar to the result in Fig.~\ref{fig:psi15GeV}, the {CMS-MTD} sensitivity region is partly overlapped with FACET and MATHUSLA. To check the PDF uncertainties on the sensitivity contours, we further compute the FACET contours using three different sets of PDFs: NNPDF23 \cite{Ball:2012cx} (red-solid), CT18 \cite{Hou:2019qau} (red-dashed), and NNPDF40 \cite{Ball:2021leu} (red-dotted). As shown in Fig.~\ref{fig:psi5DP}, the upper edge of the FACET contours from the three PDFs are almost identical; the lower edge of the FACET contours from the three PDFs, however, can be seen with some visible differences from each other. For example, for $m_{A'} \sim 0.3$ GeV, the lower edge of the FACET contours with $ 300\, {\rm fb}^{-1}$ are located at $\epsilon_1 = 1.9 \times 10^{-8}$ with NNPDF23, $\epsilon_1 = 2.3 \times 10^{-8}$ with CT18, and $\epsilon_1 = 3.2 \times 10^{-8}$ with NNPDF40, as shown on the left panel figure of Fig.~\ref{fig:psi5DP}; for $3\, {\rm ab}^{-1}$ data, $\epsilon_1$ are $5.9 \times 10^{-9}$, $7.3 \times 10^{-9}$, and $1.0 \times 10^{-8}$ respectively, as shown on the right panel figure of Fig.~\ref{fig:psi5DP}. Thus different PDFs will result in changes to the FACET contours but the effects are not significant. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.41\textwidth]{./figures/one_D_slice_ctau} \includegraphics[width=0.4\textwidth]{./figures/one_D_slice_epsilon} \caption{The number of signal events in the FACET detector at the HL-LHC with ${\cal L} = 3 \,{\rm ab}^{-1}$, as a function of the DP lifetime {$c\tau_{A'}$} (left panel) and of the coupling $\epsilon_1$ (right panel). Here we fix $m_\psi = 15$ GeV and $\epsilon_2 = 0.01$, and vary the dark photon mass to be $m_{A'} = 2$ GeV (green), 4 GeV (blue), and 10 GeV (red). } \label{fig:FACET-Nevent} \end{centering} \end{figure} The left panel in Fig.~\ref{fig:FACET-Nevent} shows the number of signal events in the FACET detector as a function of the proper lifetime, for three different dark photon masses. The number of events {decreases} with the dark photon mass. The peak of the distribution of the events shifts {to} a larger $c\tau_{A'}$ value when the dark photon mass increases. The peak shift is due to the detector cut on the DP decay length: a larger $c \tau_{A'}$ is needed for a heavier DP mass so that the DP has the desired decay length to disintegrate in the FACET decay volume. With the criterion of $N > 5$ events, FACET can probe the $c\tau_{A'}$ range of $[0.04\, {\rm m} - 30 \, {\rm m}]$ for DP mass $m_{A'} = 2$ GeV, $c\tau_{A'} \in [0.09\, {\rm m} - 25 \, {\rm m}]$ for $m_{A'} = 4$ GeV, and $c\tau_{A'} \in [0.3\, {\rm m} - 10 \, {\rm m}]$ for $m_{A'} = 10$ GeV. The right panel in Fig.~\ref{fig:FACET-Nevent} shows the number of signal events in the FACET detector as a function of the parameter $\epsilon_1$. With the criterion of $N > 5$ events, FACET can probe the {$\epsilon_1 \in [2.1 \times 10^{-8} - 5.5 \times 10^{-7}]$} for DP mass $m_{A'} = 2$ GeV, {$\epsilon_1 \in [1.4 \times 10^{-8} - 2.3 \times 10^{-7}]$} for $m_{A'} = 4$ GeV, and {$\epsilon_1 \in [1.3 \times 10^{-8} - 7.5 \times 10^{-8}]$} for $m_{A'} = 10$ GeV. \section{Expected number of events in far detectors} \label{sec:facet-faser-comparison} Here we provide an approximated expression for number of dark photon events in the far detectors, and also compare the number of events for two far detectors that are of different sizes and placed with different distances from the IP. Denote the cross sectional area of the decay volume of a far detector as $A$ and the length as $L$; the volume of the decay volume is then $V=AL$. If the far detector is placed at a distance $d$ from the IP with $d\gg L$, the probability {of the DP} to decay within the interval $(d,d+L)$ can be approximated by \begin{equation} P \simeq \exp \left[- {d \over \ell_{A'}} \right] {L \over \ell_{A'} }, \end{equation} where $\ell_{A'}$ is the decay length of the DP. The number of DPs that disintegrate inside the decay volume is then given by \begin{equation} N \simeq N_{\rm IP} {A \over 4 \pi d^2} P = N_{\rm IP} {1 \over 4 \pi } {V \over d^3} \exp \left[- {d \over \ell_{A'}} \right] {d \over \ell_{A'} }, \label{approx_nsig} \end{equation} where $N_{\rm IP}$ is the total number of DPs produced at the IP, and we have assumed an isotropic distribution for DPs for simplicity. Thus for given $N_i$, $V$, and $d$, the optimal decay length to be probed is $\ell_{A'} = d$. Eq.\ \eqref{approx_nsig} also suggests that in order to obtain a large signal of LLPs, one should build a large decay volume and place it close to the IP if the SM backgrounds are under control; see also \cite{FACET:Green} for a similar discussion. Next we compare two detectors with different $V$ and $d$. The ratio of the number of events {is} given by \begin{equation} {N_1 \over N_2} = {V_1 \over V_2} \left[ {d_2 \over d_1} \right]^2 \exp \left[- {d_1-d_2 \over \ell_{A'}} \right] . \end{equation} Using the parameters given in Table \ref{tab:detectors}, we find that ${N_{\rm FACET}/N_{\rm FASER}} \simeq 7 \times 10^3 \exp(380\, {\rm m}/ \ell_{A'})$. Thus the number of events in FACET is at least $7 \times 10^3$ times larger than FASER, if one neglects the background considerations and other effects. This is the main reason that the contours of FACET sensitivity are much larger than FASER. Similarly, we find that ${N_{\rm FACET}/N_{\rm FASER2}} \simeq 18 \exp(380\, {\rm m}/ \ell_{A'})$. We find that these ratios between FACET and FASER(2) estimated here are consistent with the results from our simulations.\footnote{For example, for the model point $m_{A'} = 0.5$ GeV and $\epsilon_1 = 2.9 \times 10^{-7}$ in Fig.~\ref{fig:psi5DP}, we find that ${N_{\rm FACET}/N_{\rm FASER}} \simeq 8400$ and ${N_{\rm FACET}/N_{\rm FASER2}} \simeq 33$ in our simulations.} \section{summary} \label{sec:summary} We study the capability of the various new lifetime frontier experiments in probing long-lived dark photon models. We consider both the minimal dark photon model, and the dark photon model proposed by some of us recently that has an enhanced long-lived dark photon signal at the LHC. In the new dark photon model that has an {enhanced} long-lived dark photon signal at the LHC, the standard model is extended by the Stueckelberg mechanism to include a hidden sector, which consists of two gauge bosons and one Dirac fermion $\psi$. The Stueckelberg mass terms eventually lead to a GeV-scale dark photon $A'$ and a TeV-scale $Z'$ with couplings $\epsilon_1$ and $\epsilon_2$ to the SM sector respectively. The dark photon signal at the LHC in this new model is enhanced because it is proportional to $\epsilon_2$ which can be significantly larger than $\epsilon_1$, which is small so that the dark photon is long-lived. We compute various experimental constraints on the $\epsilon_2$ parameter including the most recent {constraints} on millicharge from the ArgoNeuT and milliQan demonstrator experiments. There are three major production channels for the long-lived dark photon in the parameter space of interest: the MD, PB, and HR processes. The MD and PB are present in both the minimal dark photon model and the new dark photon model, and are mostly distributed in the forward region. The HR process, however, is only present in the new dark photon model, and {has} significant contributions to both the forward region and the transverse region (but still with dominant contributions in the forward region). We find that the HR process provides the dominant contributions for large dark photon mass, which opens up new parameter space to be probed by various new lifetime-frontier detectors. We provide a mini-overview on the various lifetime-frontier detectors and select four detectors for further detailed analysis, which include the far detectors FACET, FASER (and its upgraded version, FASER2), and MATHUSLA, and the future precision timing detector CMS-MTD. We compute the sensitivity contours in the parameter space spanned by the dark photon mass and the parameter $\epsilon_1$. For example, with $ 300\, {\rm fb}^{-1}$ ($3\,{\rm ab}^{-1}$) data at the HL-LHC, FACET can probe the parameter space up to $ m_{A'} \simeq 1.9 \, (15)$ GeV, for the case where $m_\psi = 15$ GeV. We find that the sensitivity contours from FACET and MATHUSLA are significantly enlarged by the HR process, and the CMS-MTD is only sensitive to the HR process. The enhancement in the central transverse detector MATHUSLA is mainly due to the fact that the MD and PB events are highly concentrated in the forward direction, and the HR process has some significant contributions in the transverse direction. We further compare the signal events between the two far forward detectors: FACET and FASER. We find that FACET is likely to detect many more events than FASER, which is mainly due to the larger decay volume of the FACET detector and its smaller distance from the interaction point. The FASER2 detector, with a much larger decay volume than FASER, can somewhat offset the effects of the long distance from the interaction point. Thus we find that the FACET contours are larger than FASER and FASER2 in our analysis. We also find that there exists parameter space that can be probed by different kinds of lifetime-frontier experiments. Thus, for example, if a long-lived dark photon signal {were} found in one precision timing detector (e.g.\ CMS-MTD), it could then be verified by a far forward {detector} (e.g.\ FACET) and a far transverse {detector} (e.g.\ MATHUSLA). \section{Acknowledgement} We thank Michael Albrow for helpful discussions. The work is supported in part by the National Natural Science Foundation of China under Grant No.\ 11775109.
\section{The muon gyromagnetic factor and ``anomalous'' moment} As a result of more than three decades of intense efforts to validate every corner of the standard model (SM) of elementary particles and their interactions, and to submit it to a redundant metrology with an always increasing precision, the SM has only become more and more ``standard'', with some very few exceptions that include the ``tension'' between the theoretical prediction and the unique precise experimental measurement of the ``anomalous'' magnetic moment of the muon, $a_\mu$, which is the relative deviation of the gyromagnetic factor, $g_\mu$, from the value of $g=2$ for a pointlike Dirac particle, i.e. $a_\mu \equiv (g_\mu -2)/2$. \section{$a_\mu$: predictions and measurement} Since the first measurement (for the electron) \cite{Nafe} and its interpretation within the QED framework \cite{Schwinger}, both the prediction and the measurement of $a$ have undergone a tremendous improvement in precision, to the point that hadronic vacuum polarization (VP) i.e. modifications of the photon propagator, hadronic light-by-light scattering (LbL) and weak interactions must be taken into account (Fig. \ref{fig:a}). \begin{figure}[Htb] \begin{center} \small \begin{tabular}{ccccccc} $1^{rst}$ & $2^{nd}$ & $3^{rd}$ & $4^{th}$ & $5^{th}$ \\ \includegraphics[width=0.16\linewidth]{1rst-order.pdf} & \includegraphics[width=0.16\linewidth]{2nd-order.pdf} & \includegraphics[width=0.08\linewidth]{3rd-order-light-by-light.pdf} & \includegraphics[width=0.13\linewidth]{4th-order.pdf} & \includegraphics[width=0.08\linewidth]{5th-order.pdf} \end{tabular} ~ ~ \begin{tabular}{ccccccc} Hadronic Vacuum Polarisation & Hadronic light-by-light & Weak \\ (VP) & Scattering & Interactions \\ \includegraphics[width=0.2\linewidth]{had-VP.pdf} & \includegraphics[width=0.2\linewidth]{light-by-light.pdf} & \includegraphics[width=0.2\linewidth]{Weak.pdf} \end{tabular} \end{center} \caption{Examples of diagrams contributing to the calculation of $a_\mu$. Up: QED diagrams of various orders in $\alpha$. Bottom: VP, LbL and weak-interaction contributions \cite{Jegerlehner:2009ry}. \label{fig:a} } \end{figure} Understanding the value of $a_\mu$ necessitates a precise knowledge of the value of the fine structure constant $\alpha$. From the development \cite{Aoyama:2012wk} of $a_e$ and of $a_\mu$ \footnote{I have truncated the numerical factors.}, \begin{eqnarray*} a_e &=& \gfrac{\alpha}{2\pi} -0.3 \left(\gfrac{\alpha}{\pi}\right)^2 +1.2 \left(\gfrac{\alpha}{\pi}\right)^3 -1.9 \left(\gfrac{\alpha}{\pi}\right)^4 +9.2 \left(\gfrac{\alpha}{\pi}\right)^5 + 1.7 \times 10^{-12} (\ensuremath{\scriptsize \mathrm{QCD}}\xspace + \ensuremath{\scriptsize \mathrm{weak}}\xspace), \\ a_\mu &=& \gfrac{\alpha}{2\pi} +0.8 \left(\gfrac{\alpha}{\pi}\right)^2 +24. \left(\gfrac{\alpha}{\pi}\right)^3 +131. \left(\gfrac{\alpha}{\pi}\right)^4 +753. \left(\gfrac{\alpha}{\pi}\right)^5 +7.1 \times 10^{-8} (\ensuremath{\scriptsize \mathrm{QCD}}\xspace + \ensuremath{\scriptsize \mathrm{weak}}\xspace), \end{eqnarray*} we see that due to the $\mu$-to-$e$ mass difference, the development for $a_e$ converges extremely rapidly and that the non-QED contributions are very small: a precise value of $\alpha$ can be extracted from $a_e$ and then injected in the calculation of $a_\mu$. \begin{table}[Htb] \begin{center} \small \begin{tabular}{cccccccc} $\alpha$ from & $a_\mu^{\ensuremath{\scriptsize \mathrm{QED}}\xspace}$ ($10^{-10}$) \\ \noalign{\vskip5pt} \hline \noalign{\vskip5pt} $a_e$ & 11 658 471.885 \ensuremath{\pm}\xspace 0.004 \\ \noalign{\vskip5pt} Rubidium Rydberg constant & 11 658 471.895 \ensuremath{\pm}\xspace 0.008 \end{tabular} \end{center} \caption{Values of $a_\mu^{\ensuremath{\scriptsize \mathrm{QED}}\xspace}$ computed using values of $\alpha$ extracted from the measured value of $a_e$ and from atomic physics measurements \cite{Aoyama:2012wk}. \label{tab:valeurs:amu} } \end{table} The value of $a_\mu$ so obtained has a very small uncertainty and is compatible with that obtained using a value of $\alpha$ from atomic physics (Table \ref{tab:valeurs:amu}): the QED contribution, which has been computed up to the $5^{th}$ order in $\alpha$ \cite{Aoyama:2012wk}, is under excellent control. Table \ref{tab:PDG:2014} presents the sizable contributions to the prediction and the comparison with experiment as of 2014 \cite{PDG:2014}: \begin{table}[Htb] \begin{center} \small \begin{tabular}{lrl} \hline QED & 11 658 471.895 & \ensuremath{\pm}\xspace 0.008 \\ Leading hadronic vacuum polarization (VP) & 692.3~~~~ & \ensuremath{\pm}\xspace 4.2 \\ Sub-leading hadronic vacuum polarization & $-9.8$~~~~ & \ensuremath{\pm}\xspace 0.1 \\ Hadronic light-by-light (LbL) & 10.5~~~~ & \ensuremath{\pm}\xspace 2.6 \\ Weak (incl. 2-loops) & 15.4~~~~ & \ensuremath{\pm}\xspace 0.1 \\ \hline Theory & 11 659 180.3~~~~ & \ensuremath{\pm}\xspace 4.2 \ensuremath{\pm}\xspace 2.6 \\ Experiment (E821 @ BNL) \cite{Bennett:2006fi} & 11 659 209.1~~~~ & \ensuremath{\pm}\xspace 5.4 \ensuremath{\pm}\xspace 3.3 \\ \hline Exp. $-$ theory & +28.8~~~~ & \ensuremath{\pm}\xspace 8.0 \\ \end{tabular} \end{center} \caption{Contributions to the prediction for $a_\mu$ ($10^{-10}$) and comparison with experiment as of 2014 \cite{PDG:2014}. \label{tab:PDG:2014}} \end{table} \begin{itemize} \item The QED contribution is the main contributor to the value of $a_\mu$, while the uncertainty is dominated by the hadronic contributions (VP and LbL); \item The uncertainties of the prediction and of the measurement are of similar magnitude; \item The measured value exceeds the prediction with, assuming Gaussian statistics, a significance of $\approx 3.6$ standard deviations. \end{itemize} As QCD is not suited to precise low energy calculations, the VP contribution to $a_\mu$ is computed from the ``dispersion integral'' (\cite{Jegerlehner:2009ry} and references therein): \begin{eqnarray} a_\mu^{\ensuremath{\scriptsize \mathrm{VP}}\xspace} = \left(\gfrac{\alpha m_\mu} {3\pi} \right)^2 \int{\gfrac{R(s) \times \hat{K}(s)}{s^2} \dd s}, \end{eqnarray} where $ R(s) $ is the the cross section of $\ensuremath{e^+e^-}\xspace$ to hadrons at center-of-mass (CMS) energy squared $s$, normalized to the pointlike muon pair cross section $\sigma_0$: $ R(s) = \sigma_{\ensuremath{e^+e^-}\xspace\ensuremath{\rightarrow}\xspace \ensuremath{\scriptsize \mathrm{hadrons}}\xspace} / \sigma_0$, and $\hat{K}(s)$ is a known function that is of order unity on the $s$ range $[(2 m_\pi c^2)^2, \infty[$. Technically, the low energy part of the integral is obtained from experimental data (up to a value often chosen to be $E_{\ensuremath{\scriptsize \mathrm{cut}}\xspace} = 1.8 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$), while the high-energy part is computed from perturbative QCD (pQCD). Due to the presence of the $s^2$ factor at the denominator of the integrand, the precision of the prediction of $a_\mu$ relies on precise measurements at the lowest energies, and the channels with the lightest final state particle rest masses, $\ensuremath{\pi^+}\xspace \ensuremath{\pi^-}\xspace$, $\ensuremath{\pi^+}\xspace \ensuremath{\pi^-}\xspace \ensuremath{\pi^0}\xspace$, $\ensuremath{\pi^+}\xspace \ensuremath{\pi^-}\xspace 2\ensuremath{\pi^0}\xspace$, $\ensuremath{\pi^+}\xspace \ensuremath{\pi^-}\xspace\ensuremath{\pi^+}\xspace \ensuremath{\pi^-}\xspace$, $K K$ are of particular importance. \section{\babar\ measurements: the ISR method} The \babar\ experiment \cite{Aubert:2001tu,TheBABAR:2013jta} at the SLAC National Accelerator Laboratory has committed itself over the last decade to the systematic measurement of the production of all hadronic final states using the initial-state radiation (ISR) process. The cross section of the $\ensuremath{e^+e^-}\xspace$ production of a final state $f$ at a CMS energy squared $s'$ can be obtained from the differential cross section of the ISR production $\ensuremath{e^+e^-}\xspace \ensuremath{\rightarrow}\xspace f ~ \gamma$ through the expression: \begin{eqnarray} \gfrac{\dd \sigma_{[\ensuremath{e^+e^-}\xspace \ensuremath{\rightarrow}\xspace f ~ \ensuremath{\gamma}\xspace]}}{\dd s'} (s') = \gfrac{2m}{s} W(s, x) \sigma_{[\ensuremath{e^+e^-}\xspace \ensuremath{\rightarrow}\xspace f]} (s'), \end{eqnarray} where $W(s, x)$, the probability density to radiate a photon with energy $E_\ensuremath{\gamma}\xspace = x\sqrt{s}$, is a known ``radiator'' function \cite{Bonneau:1971mk}, and $\sqrt{s}$ is here the CMS energy of the initial \ensuremath{e^+e^-}\xspace pair, which is close to 10.6\,GeV for \babar. In contrast with the energy scans that provided the earlier experimental information on the variations of $R$ (see Figs. 50.5 and 50.6 in Ref. \cite{PDG:2014} and references in their captions), this ISR method makes an optimal use of the available luminosity and allows a consistent measurement over the full energy range with the same accelerator and detector conditions. In addition, in the case of \babar\, the \ensuremath{e^+e^-}\xspace initial state is strongly boosted longitudinally so the detector acceptance stays sizable down to threshold (Fig. \ref{fig:acceptanceKK} right). \begin{figure}[Htb] \includegraphics[width=0.7\linewidth, trim=0cm 9.7cm 0cm 0cm, clip]{Fig1-pipi.pdf} \hfill \includegraphics[width=0.29\linewidth]{globalAccC3_KKmc_sqrtSp_FinalBinning.pdf} \put(-40,0){\footnotesize $\sqrt{s'} (\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace)$} \caption{Left: $\mu^+\mu^-$ cross section as a function of the $\mu^+\mu^-$ invariant mass compared to the QED prediction, as a sanity check for the \babar\ NLO analyses \cite{Aubert:2009ad,Lees:2012cj,Lees:2013gzt}. Right: The \babar\ acceptance for the $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ analysis as a function of the $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ invariant mass \cite{Lees:2013gzt}. \label{fig:acceptanceKK} } \end{figure} The observation of the hadronic final state alone, if kinematically compatible with a system recoiling against a single massless particle, would allow the reconstruction of the event and the measurement of $s'$, but when in addition the ISR photon is observed ($\gamma$-tagging), a powerful background rejection and a good signal purity can be achieved. We have performed most of these measurements using a leading-order (LO) method, in which the final state $f$ and the ISR photon are reconstructed regardless of the eventual presence of additional photons. For these analyses the differential luminosity is obtained from the luminosity of the collider, known with a typical precision of $1 \%$, and involves a computation of the detection efficiency that relies on Monte Carlo (MC) simulations\footnote{ A review on the {\tt PHOKHARA} and {\tt AfkQed} event generators used in our {\tt GEANT4}-based simulations can be found in section 21 of Ref. \cite{Bevan:2014iga}.} \cite{Aubert:2004kj,Aubert:2006jq,Aubert:2007uf,Aubert:2007ef,Aubert:2007ym}, \cite{Lees:2012cr,Lees:2011zi,Lees:2013uta,Druzhinin:2007cs,Lees:2014xsh,Lees:2013ebn,Lees:2015iba}. This experimental campaign has lead \babar\ to improve the precision of the contribution to $a_\mu$ of most of the relevant channels by a large factor, typically close to a factor of three. A list of the contributions $a_\mu^f$ to $a_\mu^{\ensuremath{\scriptsize \mathrm{VP}}\xspace}$ for a number of individual hadronic final states $f$, available at the time, can be found in Table 2 of Ref. \cite{Davier:2010nc}. \section{BaBar NLO ($\ensuremath{e^+e^-}\xspace \ensuremath{\rightarrow}\xspace f ~ \gamma ~ (\gamma))$ results} \babar\ has also developed a new method that was applied to the dominant channel $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ \cite{Aubert:2009ad,Lees:2012cj} and more recently to the $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ channel \cite{Lees:2013gzt}. The control of the systematics below the \% level made it necessary to perform the analysis at the NLO level, that is, to take into account the possible radiation of an additional photon, be it from the initial (ISR) or from the final (FSR) state. The impossibility to control the global differential luminosity with the desired precision, in particular the MC-based efficiency, lead us to derive the value of $R$ from the ratio of the ISR production of the final state $f$ to the ISR production of a pair of muons, $\mu^+\mu^-$. Most of the systematics, including those related to the absolute luminosity, of the ISR photon reconstruction and of additional ISR radiation, cancel in the ratio. Figure \ref{fig:VDM} shows the obtained form-factor (here squared) distributions extracted from the cross-section distributions, together with fits using the GS parametrization of the VDM model. \begin{figure}[Htb] \hfill \includegraphics[width=0.45\linewidth]{FF_fitGS_log.pdf} \hfill \includegraphics[width=0.45\linewidth]{fitFF_0,98-2,4GeV.pdf} \hfill ~ \put(-300,90){$\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$} \put(-70,90){$\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$} \put(-60,-5){\footnotesize $\sqrt{s'} (\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace)$} \put(-270,-2){\footnotesize $\sqrt{s'} (\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace)$} \caption{\babar\ NLO measurements: Vector dominance model (VDM) fits of the squared form-factors using a Gounaris-Sakurai (GS) parametrization. Left:$\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ \cite{Aubert:2009ad,Lees:2012cj}. Right: $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ \cite{Lees:2013gzt}. \label{fig:VDM} } \end{figure} The values of $a_\mu^{\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace}$ and of $a_\mu^{\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace}$ integrated over the most critical range, that is, from threshold to 1.8\,GeV are more precise than the average of the previous measurements (Table \ref{tab:comparison}). \begin{table} [Htb] \footnotesize \begin{tabular}{l|lllllllll} \hline & $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ & $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ & $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ \\ \hline \babar\ & $514.1 \pm 2.2 \pm 3.1$ \cite{Aubert:2009ad,Lees:2012cj} & $22.93 \pm 0.18 \pm 0.22 \pm 0.03 $ \cite{Lees:2012cr} & $13.64 \pm 0.03 \pm 0.36$ \cite{Lees:2013gzt} \\ Previous average \cite{Davier:2010nc} & $503.5 \pm 4.5$ & $21.63 \pm 0.27 \pm 0.68$ & $13.35 \pm 0.10 \pm 0.43 \pm 0.29$ \\ Their difference $\Delta$ & $\! +10.6 \pm 5.9$ & $\!+ 1.30 \pm 0.79$ & $\! +0.29 \pm 0.63$ \end{tabular} \caption{Contributions to $a_\mu$ for recent \babar\ publications: comparison of the measured value to the previous world average on the energy range $\sqrt{s'}< 1.8\,\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$ (units $10^{-10}$). \label{tab:comparison} } \end{table} Even though neither the time-integrated luminosity nor the absolute acceptance/efficiency were used in these precise $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ and $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ cross-section measurements, we checked that we understand them by comparing the $\mu^+\mu^-$ cross section distribution we observe to the QED prediction: a good agreement is found (Fig. \ref{fig:acceptanceKK} left) within $0.4 \ensuremath{\pm}\xspace 1.1 \%$, which is dominated by the large uncertainty on the time-integrated luminosity ($\ensuremath{\pm}\xspace 0.9 \%$). These NLO analyses were performed assuming that the FSR corrections for the hadronic channel are negligible, as theoretical estimates are well below the systematic uncertainties in the cross section \cite{Aubert:2009ad,Lees:2012cj,Lees:2013gzt}. We have validated this assumption by an experimental study of the ISR-FSR interference in $\mu^+\mu^-$ and $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ ISR production. Because charge parities of the final state pair are opposite for ISR and FSR, the interference between ISR and FSR changes sign with the charge interchange of the two muons (pions). As a consequence, investigation of the charge asymmetry of the process gives access to the interference between ISR and FSR, which enables the separate measurement of the magnitudes of the ISR and of the FSR amplitudes \cite{Lees:2015qna}. For the pion channel, results match a model where final state radiation originates predominantly from the quarks that subsequently hadronize into a pion pair, while for the muon control channel, good consistency is found with QED. \section{Recent BaBar LO ($\ensuremath{e^+e^-}\xspace \ensuremath{\rightarrow}\xspace f ~ \gamma)$ results} \begin{figure}[Htb] \includegraphics[width=0.24\linewidth]{fig16.pdf} \includegraphics[width=0.24\linewidth]{fig23.pdf} \includegraphics[width=0.24\linewidth]{fig32.pdf} \includegraphics[width=0.24\linewidth]{fig42.pdf} \put(-440,90){$\KS\KL$} \put(-300,90){\Magenta{$\KS\KL \ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$}} \put(-185,90){\Magenta{$\KS\KS \ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$}} \put( -70,90){\Magenta{$\KS\KS \ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$}} \medskip \Black{~} \hfill \includegraphics[width=0.24\linewidth]{fig15.pdf} \hfill \includegraphics[width=0.24\linewidth]{fig10.pdf} \hfill \includegraphics[width=0.24\linewidth]{fig19.pdf} \hfill ~ \put(-410,40){$p \ensuremath{\overline p}\xspace$} \put(-210,65){$p \ensuremath{\overline p}\xspace$} \put( -75,65){$\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$} \hfill \includegraphics[width=0.24\linewidth]{fig9new.pdf} \hfill \includegraphics[width=0.24\linewidth]{fig33new.pdf} \hfill ~ \put(-350,80){\Magenta{$\KS\ensuremath{K^+}\xspace\ensuremath{\pi^-}\xspace \ensuremath{\pi^0}\xspace$}} \put(-167,70){\Magenta{$\KS\ensuremath{K^+}\xspace\ensuremath{\pi^-}\xspace \eta$}} \put(-340,65){\Red{preliminary}} \put(-155,55){\Red{preliminary}} \Black{~} \caption{Recent LO results. \Magenta{Magenta: First measurements}. \Black{~} Up: channels with two neutral kaons \cite{Lees:2014xsh}. Center: $p \ensuremath{\overline p}\xspace$ with \cite{Lees:2013ebn} and without \cite{Lees:2013uta} $\gamma$ tagging, and $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ without \cite{Lees:2015iba} $\gamma$ tagging. Bottom: $\KS\ensuremath{K^+}\xspace\ensuremath{\pi^-}\xspace h^0$, the neutral meson $h^0$ being either a $\ensuremath{\pi^0}\xspace$ or an $\eta$ (preliminary). \label{fig:recent:LO} } \end{figure} Recently \babar\ obtained results on channels with two neutral kaons $\KS\KL$, $\KS\KL \ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$, $\KS\KS \ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ and $\KS\KS \ensuremath{K^+}\xspace\ensuremath{K^-}\xspace $ \cite{Lees:2014xsh} (Fig. \ref{fig:recent:LO} up), on $\KS\ensuremath{K^+}\xspace\ensuremath{\pi^-}\xspace \ensuremath{\pi^0}\xspace$ and $\KS\ensuremath{K^+}\xspace\ensuremath{\pi^-}\xspace \eta$ (preliminary) (Fig. \ref{fig:recent:LO} bottom), and updated the $p \bar{p}$ analysis to the full statistics \cite{Lees:2013ebn} (Fig. \ref{fig:recent:LO} center left). The $p \bar{p}$ measurement has also been extended up to 6.5 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace \cite{Lees:2013uta} (Fig. \ref{fig:recent:LO} center center) and the $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ measurement to 8 \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace \cite{Lees:2015iba} (Fig. \ref{fig:recent:LO} center right) by untagged analyses. pQCD is found to fail to describe the $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ form factors extracted from our cross section measurements (Fig. \ref{fig:KK:pQCD}), but there is some hint that the discrepancy is getting better at higher mass, which kind-of supports the use of pQCD for the calculation of the dispersion integral above $E_{\ensuremath{\scriptsize \mathrm{cut}}\xspace}$. Note that given the improvement in precision of the hadronic cross sections, the most recent prediction \cite{Jegerlehner:2015stw} restricts the $s$ range over which pQCD is used to [4.5 -- 9.3]\,GeV and [13\,GeV -- $\infty$[. \begin{figure}[Htb] \hfill \includegraphics[width=0.42\textwidth]{fitFF_HM_2,5-5GeV.pdf} \hfill \includegraphics[width=0.34\textwidth]{fig20.pdf} \hfill ~ \put(-410,40){\Blue \scriptsize CZ (NLO) \Black} \caption{Comparison of the \babar\ $\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace$ results with Chernyak-Zhitnitsky \cite{Chernyak:1977fk} pQCD predictions. With (left, \cite{Lees:2013gzt}) and without (right, \cite{Lees:2015iba}) $\gamma$ tagging. \label{fig:KK:pQCD} } \end{figure} A summary of the \babar\ measurements is provided in Fig. \ref{fig:summary:april2016} and Table \ref{tab:compilation}. The analyses of the $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace\ensuremath{\pi^0}\xspace\piz$ \cite{Druzhinin:2007cs}, of the $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace\ensuremath{\pi^0}\xspace$ \cite{Aubert:2004kj} and of the $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace\eta$ \cite{Aubert:2007ef} channels are presently being updated with the full available statistics: stay tuned. \begin{figure}[Htb] \centerline{ \includegraphics[width=0.85\linewidth]{babar_crosssections_Apr2016_Fedor_Ignatov.pdf} } \caption{Summary of the \babar\ measurements (Courtesy of Fedor V. Ignatov, April 2016). Beware that some channels have the charmonia contribution removed while some others have not. The $\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace\ensuremath{\pi^0}\xspace\piz$ \cite{Druzhinin:2007cs} and $\KS\ensuremath{K^+}\xspace\ensuremath{\pi^-}\xspace \ensuremath{\pi^0}\xspace$ entries are preliminary. The NLO measurements are denoted by an additional ``\ensuremath{\gamma}\xspace''. \label{fig:summary:april2016} } \end{figure} \begin{table}[Htb] \small \caption{Summary of the \babar\ results on ISR production of exclusive hadronic final states (The superseded results have been removed). Channels above the horizonal line have been mentioned in this paper. \label{tab:compilation} } \input compile.tex \end{table} \begin{figure}[Htb] \centerline{ \includegraphics[width=0.8\linewidth]{amu2016bis.pdf} } \caption{ Recent predictions of the value of $a_\mu$ in chronological order \cite{Hagiwara:2006jt,Jegerlehner:2009ry,Davier:2009ag,Davier:2009zi,Davier:2010nc,Hagiwara:2011af,Jegerlehner:2011ti,Jegerlehner:2015stw}, after the experimental value \cite{Bennett:2006fi} is subtracted. \Blue Blue \Black: \ensuremath{e^+e^-}\xspace-based; \Green Green \Black: $\tau$ spectral function-based; Black: \ensuremath{e^+e^-}\xspace and $\tau$ combinations. \label{fig:combination} } \end{figure} \clearpage \section{What about $a_\mu$ then ?} The time evolution of the prediction of $a_\mu$ with the availability of experimental results of increasing precision and with the development of combination techniques is shown in Fig. \ref{fig:combination}. \begin{itemize} \item After $\rho-\gamma$ mixing is taken into account, the discrepancy between the combinations based on \ensuremath{e^+e^-}\xspace results and those based on the $\tau$ decay spectral functions \cite{Davier:2009ag} is resolved \cite{Jegerlehner:2011ti}. \item The discrepancy between the prediction and the measurement still sits close to 3 -- 4 standard deviations. \item Given that the precision of most of \babar\ measurements is now dominated by the contribution of the systematics, it will most likely be difficult to achieve major improvements at a future super-$B$ factory. \item Thanks to the high-precision results obtained up to the end of 2014, the uncertainty on $a_\mu^{\ensuremath{\scriptsize \mathrm{VP}}\xspace}$ is now smaller than $4\times 10^{-10}$ \cite{Jegerlehner:2015stw}. That work includes a NNLO correction for $a_\mu^{\ensuremath{\scriptsize \mathrm{VP}}\xspace}$ \cite{Kurz:2014wya} and a NLO contribution to $a_\mu^{\ensuremath{\scriptsize \mathrm{LbL}}\xspace}$ \cite{Colangelo:2014qya}. Given the spread of the values predicted by the available models of light-by-light scattering, the global uncertainty on $a_\mu^{\ensuremath{\scriptsize \mathrm{LbL}}\xspace}$ is of the same order of magnitude \cite{Jegerlehner:2009ry,Jegerlehner:2015stw}. \item Indeed, new measurements of $a_\mu$ at Fermilab \cite{Venanzoni:2012qa} and at J-PARC \cite{Mibe:2011zz} are eagerly awaited. \end{itemize} \section{Acknowledgements} Many thanks to the fellow BaBarians who helped me to prepare this talk and to Fedor Ignatov who provided me with the \babar\ summary plot (Fig. \ref{fig:summary:april2016}).
\section{Introduction} The abstract theory of Banach modules or Banach algebras plays an important role in various branches of Mathematics, for instance, abstract harmonic analysis, representation theory, operator theory; see \cite{Farashahi1,Fe,Fe1,Farabraz} and the references therein. In particular, convolution structure on the Orlicz spaces to be an Banach algebra or a Banach module over a locally compact group or hypergroup were studied by many researchers \cite{Aghababa,Fe3,Rao2,Kumar,KumarDeb,KumarHaus}. In \cite{Kumar}, the author defined and studied the notion of abstract Banach convolution algebra on Orlicz spaces over homogeneous spaces of compact groups. Recently, Ghaani Farashahi \cite{Farabraz} introduced the notion of abstract Banach convolution function module on the $L^p$-space on coset spaces of compact subgroups in locally compact groups. The purpose of this article is to define and study a new class of abstract Banach module on Orlicz spaces over coset spaces of compact subgroups in locally compact groups. Let us remark that Orlicz spaces are genuine generalization of Lebesgue spaces. It is worth mentioning that an appropriate use of Jensen's inequality \cite[pg. 62]{Rao} plays a key role in this article. In the next section, we present some basics of Orlicz spaces and some classical harmonic analysis on a homogeneous space (the space of left cosets) of a locally compact group. Section 3 is devoted to the study of abstract convolution module structure on the Orlicz space $L^\varphi(G/H, m),$ where $H$ is a compact subgroup of a locally compact group $G,$ $m$ is the normalized $G$-invariant measure on the homogeneous space $G/H$ which satisfies Weil's formula and $\varphi$ is a Young function satisfying $\Delta_2$-condition. In this section, we prove that $L^\varphi(G/H, m)$ is a Banach left $L^1(G/H, m)$-module with respect to a generalized convolution. We also introduce a Banach left $L^1(G/H, m)$-submodule of $L^\varphi(G/H).$ \section{Preliminaries} A non-zero convex function $\varphi : \mathbb{R} \rightarrow [0, \infty]$ is called a {\it Young function} if it is even, left continuous with $\varphi(0)=0$ and $\underset{x \rightarrow \infty}{\lim} \varphi(x)= \infty$. Here we note that every Young function is an integral of a non-decreasing left continuous function \cite[Theorem 1]{Rao}. Let $\Omega$ be a locally compact Hausdorff space and $m$ be a positive Radon measure on $\Omega.$ Denote the space of all equivalence classes of $m$-measurable functions on $\Omega$ by $L^0(\Omega).$ A Young function $\varphi$ satisfies {\it $\Delta_2$-condition} if there exist a constant $C>0$ and $x_0 >0$ such that $ \varphi(2x) \leq C\varphi(x)$ for all $x \geq x_0 $ if $m(\Omega)<\infty$ and $ \varphi(2x) \leq C\varphi(x)$ for all $x \geq 0 $ otherwise. Given a Young function $\varphi$, the {\it modular function} $\rho_\varphi : L^0(\Omega) \rightarrow \mathbb{R}$ is defined by $\rho_\varphi(f):=\int_\Omega \varphi(|f|)\, dm.$ We always assume that Young function $\varphi$ satisfies $\Delta_2$-condition. For a given Young function $\varphi,$ the {\it Orlicz space} $L^\varphi(\Omega, m),$ in short $L^\varphi(\Omega),$ is defined by $$L^{\varphi}(\Omega) := \left\lbrace f \in L^0(\Omega) : \rho_\varphi(af)< \infty~~ \mbox{for ~~some}\,\, a>0 \right\rbrace.$$ Then the Orlicz space is a Banach space with respect to the norm $\|\cdot \|_\varphi^0$ on $L^\varphi(\Omega)$ called {\it Luxemburg norm} or {\it gauge norm} which is defined by $$\|f \|_\varphi^0 := \mbox{inf} \left\lbrace k>0: \int_G \varphi \left(\frac{|f|}{k} \right) \,dm \leq 1 \right\rbrace.$$ If $\varphi(x)= |x|^p$ $1 \leq p <\infty$ then $L^\varphi(\Omega)$ is usual $L^p$-spaces, $1 \leq p <\infty.$ An example of Young function which satisfies $\Delta_2$-condition and gives an Orlicz space other than $L^p$-spaces is given by $\varphi(x)= (e+|x|)\, \text{log}(e+|x|)-e.$ We denote the space of all continuous functions on $\Omega$ with compact support by $\mathcal{C}_c(\Omega).$ It is well known that if $\varphi$ satisfies $\Delta_2$-condition then $\mathcal{C}_c(\Omega)$ is a dense subspace of $L^\varphi(\Omega).$ If $A$ is a measurable subset of $\Omega$ such that $0<m(A)< \infty,$ then we have the {\it Jensen's Inequality} \begin{eqnarray*} \varphi\left(\frac{\int_A f \,dm}{m(A)} \right) \leq \frac{\int_A \varphi(f)\, dm}{m(A)}. \end{eqnarray*} We make use of the above inequality several times in this article. In this paper, we also employ the notations of the author in \cite{Farashahi,Farabraz}. For a locally compact group $G$ with the Haar measure $dx$ and $f,g \in L^\varphi(G),$ define convolution $*_G$ on $L^\varphi(G,m)$ by $$f*_Gg(x)= \int_G f(y) \,g(y^{-1}x)\, dx \,\,\,\,\, (x \in G).$$ It is well-known that $(L^\varphi(G), \|\cdot\|_\varphi^0)$ is a Banach algebra with respect to the convolution product $*_{G}$ (see \cite{Hudzik}) that is, $$\|f *_G g\|_\varPhi^0 \leq \|f\|_\varphi^0\, \|g\|_\varphi^0$$ for all $f, g \in L^\varphi(G).$ Also, if $f \in L^1(G)$ and $f \in L^\varphi(G)$ then the above convolution define a module action of $L^1(G)$ on $L^\varphi(G)$ which makes $L^\varphi(K)$ a Banach left $L^1(K)$-module, that is, \begin{equation} \label{orliczmodule} \|f *_G g\|_\varphi^0 \leq \|f\|_1\, \|g\|_\varphi^0 \end{equation} for all $f \in L^1(G)$ and $g \in L^\varphi(G)$ (see \cite{KumarDeb,Raomodule}). Let $H$ be a compact subgroup of a locally compact group $G$ with the normalized Haar measure $dh.$ The left coset space $G/H$ can be seen as a homogeneous space with respect to the action of $G$ on $G/H$ given by left multiplication. The canonical surjection $q: G \rightarrow G/H$ is given by $q(x)= xH.$ Define $T_H(f)(xH)= \int_H f(xh)\,dh,$ then \begin{equation*} \mathcal{C}_c(G/H) = \left\lbrace T_H(f): f \in \mathcal{C}_c(G)\right\rbrace \end{equation*} The homogeneous space $G/H$ has a unique normalized $G$-invariant positive Radon measure $m$ that satisfies the Weil's formula \begin{equation} \label{weil} \int_{G/H} T_H(f)(xH) \, dm(xH) = \int_G f(x)\, dx, \end{equation} and hence $\|T_H(f)\|_{L^1(G/H, m)} \leq \|f\|_{L^1}(G)$ for all $f \in L^1(G).$ For more details on harmonic analysis on homogeneous spaces of locally compact groups see \cite{Farashahi,Farashahi1,Farashahi2,Farashahi3,Farashahi4,Reiter,Farabraz}. \section{Orlicz modules over Coset Spaces of compact subgroups of locally compact groups} Throughout this section, we assume that the Young function $\varphi$ satisfies $\Delta_2$- condition, $G$ is a locally compact group with a Haar measure $dx$ and $H$ is a compact subgroup of $G$ with the normalized Haar measure $dh.$ It is also assumed that the homogeneous space $G/H$ has the normalized $G$-invariant measure $m$ satisfying the Weil's formula. In this section, we show that the space $L^\varphi(G/H)$ becomes a Banach left $L^1(G/H, m)$-module with respect to the convolution on $\mathcal{C}_c(G/K)$ defined in \cite{Farabraz}. We also define a subspace of $L^\varphi(G/H)$ and show that this subspace is a Banach left $L^1(G/H, m)$-submodule of $L^\varphi(G/H, m).$ We begin this section with following result. \begin{thm} \label{bddT_H} Let $G$ be a locally compact group and let $H$ be a compact subgroup of $G.$ Let $m$ be the normalized $G$-invariant measure on the coset space $G/H.$ Then the linear map $T_H: \mathcal{C}_c(G) \rightarrow \mathcal{C}_c(G/H)$ satisfies \begin{equation} \label{bdd} \|T_H(f)\|_{L^\varphi(G/H, m)}^0 \leq \|f\|_{L^\varphi(G)}^0 \end{equation} for all $f \in \mathcal{C}_c(G).$ Further, if the Young function $\varphi$ satisfies $\Delta_2$-condition. $T_H$ can be uniquely extended to a linear map from $L^\varphi(G)$ onto $L^\varphi(G/H, m).$ \end{thm} \begin{proof} For $f \in \mathcal{C}_c(G)$ and $k >0,$ by using Weil's formula and Jensen's inequality, we have \begin{eqnarray*} \rho_\varphi \left( \frac{T_H(f)}{k}\right) &=& \int_{G/H} \varphi \left( \frac{|T_H(f)|(xH)}{k}\right) \, dm(xH) \\ &=& \int_{G/H} \varphi \left( \left| \int_H \frac{f(xh)}{k} \, dh \right| \right) dm(xH) \\& \leq & \int_{G/H} \varphi \left( \int_H \frac{|f(xh)|}{k} \, dh \right) dm(xH)\\ & \leq & \int_{G/H} \int_H \varphi \left( \frac{|f(xh)|}{k} \right) dh \, dm(xH)\\ &=& \int_{G/H} \left( \int_H \varphi \left( \frac{|f|}{k} \right)(xh)\, dh \right) \, dm(xH) \\ &=& \int_{G/H} T_H\left( \varphi \left( \frac{|f|}{k}\right)\right)(xH)\, dm(xH) \\&=& \int_G \varphi \left( \frac{|f|}{k} \right) dx = \rho_\varphi\left(\frac{f}{k}\right). \end{eqnarray*} Now, \begin{eqnarray*} \|f\|_{L^\varphi(G)}^0 &=& \inf \{k>0 : \rho_\varphi\left( \frac{f}{k}\right) \leq 1\}\\ &\geq & \inf\{k>0 : \rho_\varphi\left( \frac{T_H(f)}{k}\right) \leq 1\}=\|T_H(f)\|_{L^\varphi(G/H,m)}^0. \end{eqnarray*} Therefore, we get $\|T_H(f)\|_{L^\varphi(G/H,m)}^0 \leq \|f\|_{L^\varphi(G)}^0 $ for all $f \in \mathcal{C}_c(G).$ Since $\phi$ is $\Delta_2$-regular, $\mathcal{C}_c(G)$ and $\mathcal{C}_c(G/H)$ are dense in $L^\varphi(G)$ and $L^\varphi(G/H, m)$ respectively. Therefore, we can extend $T_H$ to a bounded linear map from $L^\varphi(G)$ onto $L^\varphi(G/H, m).$ We denote this extension of $T_H$ again by $T_H$ and it satisfies \eqref{bdd}. \end{proof} \begin{prop} \label{f_q} Let $G$ be a locally compact group and let $H$ be a compact subgroup of $G.$ Let $m$ be the normalized $G$-invariant measure on the coset space $G/H.$ Suppose that $\varphi$ satisfies $\Delta_2$-condition. If $f \in L^\varphi(G/H, m)$ and $f_q:= f \circ q$ then we have $f_q \in L^\varphi(G)$ with \begin{equation} \|f_q\|_{L^\varphi(G)}^0 = \|f\|_{L^\varphi(G/H, m)}^0. \end{equation} \end{prop} \begin{proof} For $f \in L^\varphi(G/H, m)$ and $k>0,$ by Weil's formula and the fact that $H$ is compact, we have \begin{eqnarray*} \rho_\varphi \left( \frac{f_q}{k} \right)&=& \int_G \varphi \left( \frac{|f_q|(x)}{k}\right) dx\\ &=& \int_{G/H} T_H \left( \varphi \left( \frac{|f_q|}{k}\right)\right)(xH)\, dm(xH)\\ &=& \int_{G/H} \left(\int_H \varphi \left(\frac{|f_q|(xh)}{k} \right) dh \right) dm(xH)\\ &=& \int_{G/H} \left(\int_H \varphi \left(\frac{|f(xhH)|}{k} \right) dh \right) dm(xH) \\ &=& \int_{G/H} \left(\int_H \varphi \left(\frac{|f(xH)|}{k} \right) dh \right) dm(xH) \\ &=& \left( \int_{G/H} \varphi \left( \frac{|f|(xH)}{k} \right) dm(xH) \right) \left( \int_H \, dh \right) = \rho_\varphi \left(\frac{f}{k} \right). \end{eqnarray*} Therefore $ \rho_\varphi \left( \frac{f_q}{k} \right)= \rho_\varphi \left(\frac{f}{k} \right)$ and consequently, we get $ \|f_q\|_{L^\varphi(G)}^0 = \|f\|_{L^\varphi(G/H, m)}^0.$ \end{proof} \begin{rem} Note that $T_H(f_q)=f$ and therefore, it is clear from above corollary that for $f_q \in L^\varphi(G),$ the equality in Theorem \ref{bddT_H} holds. \end{rem} Here we set certain terminologies for further use. For any continuous function $f$ define the left translation by $L_hf(\cdot)=f(h^{-1} (\cdot))$ and the right translation by $R_hf=f((\cdot) h).$ Let $G$ be a locally compact group and let $H$ be a compact subgroup of $G.$ We set \begin{eqnarray*} \mathcal{C}_c(G:H)&=& \{f \in \mathcal{C}_c(G): R_hf=f \,\forall\, h \in H\}, \\ A(G:H)&=& \{f \in \mathcal{C}_c(G) : L_hf=f \,\forall\, h \in H\} \\ \mbox{and }A(G/H) &=& \{g \in \mathcal{C}_c(G/H) : L_hg=g\, \forall\, h \in H \}.\end{eqnarray*} For a Young function $\varphi \in \Delta_2,$ define $$A^\varphi(G:H)= \{f \in L^\varphi(G) : L_hf =f \,\forall\, h \in H\},$$ and also $$A^\varphi(G/H,m)= \{g \in L^\varphi(G/H, m) : L_hg =g \,\forall\, h \in H\}.$$ Note that $A^\varphi(G/H, m)$ is the topological closure of $A(G/H)$ in $L^\varphi(G/H, m)$ and therefore it is a closed subspace of $L^\varphi(G/H, m).$ Similarly $A^\varphi(G:H)$ is closed subspace of $L^\varphi(G).$ It is known that $T_H$ maps $\mathcal{C}_c(G:H)$ and $A(G:H)$ onto $\mathcal{C}_c(G/H)$ and $A(G/H)$ respectively (see \cite[Proposition 4.2]{Farabraz}). Since $A(G:H)$ is dense in $L^\varphi(G:H)$ and $A(G/H)$ is dense in $A^\varphi(G/H),$ the next lemma follows from the continuity of $T_H.$ \begin{lem} Let $G$ be a locally compact group and let $H$ be a compact subgroup of $G.$ Let $m$ be the normalized $G$-invariant measure on the coset space $G/H.$ Suppose that $\varphi$ satisfies $\Delta_2$-condition. Then the mapping $T_H$ maps $L^\varphi(G:H)$ onto $A^\varphi(G/H, m).$ \end{lem} For $g \in C_c(G/H),$ note that the mapping $xH\mapsto \int_H g(hxH) \, dh$ is in $\mathcal{C}_c(G/H).$ Define $J:\mathcal{C}_c(G/H)\rightarrow\mathcal{C}_c(G/H)$ as $$Jg(xH)= \int_H g(hxH) \, dh.$$ It is clear that $J$ is a linear operator. In addition, the following theorem says that the norm of $J$ is bounded operator with the norm bounded by one. \begin{thm} \label{bbdJ} For $f \in \mathcal{C}_c(G/H),$ we have $$\|Jf\|_{L^\varphi(G/H, m)} \leq \|f\|_{L^\varphi(G/H, m)}.$$ \end{thm} \begin{proof} For $f \in \mathcal{C}_c(G/H)$ we have, \begin{eqnarray*} \rho_\varphi\left( \frac{Jf}{k}\right) &=& \int_{G/H} \varphi \left( \frac{|Jf|(xH)}{k} \right) dm(xH)\\ &=& \int_{G/H} \varphi \left( \left| \frac{1}{k} \int_H f(hxH)\, dh\right| \right) dm(xH)\\ &\leq & \int_{G/H} \int_H \varphi \left(\frac{|f|(hxH)}{k} \right) \, dh \, dm(xH) \\ &=& \int_H \left( \int_{G/H} \varphi \left( \frac{|f|(hxH)}{k}\right)\, dm(xH) \right) dh \\ &=& \int_H \left( \int_{G/H} \varphi \left( \frac{|f|(xH)}{k}\right)\, dm(xH) \right) dh \\ &=& \int_H \left( \int_{G/H} \varphi \left( \frac{|f|}{k}\right)(xH)\, dm(xH) \right) dh = \rho_\varphi \left(\frac{f}{k}\right). \end{eqnarray*} Consequently, we get $\|Jf\|_{L^\varphi(G/H, m)} \leq \|f\|_{L^\varphi(G/H, m)}$ for all $f \in \mathcal{C}_c(G/H).$ \end{proof} It is shown in \cite[Theorem 4.5 (2)]{Farabraz} that $J$ maps $\mathcal{C}_c(G/H)$ onto $A(G/H).$ Now, the following corollary is immediate. \begin{cor} \label{g} Let $G$ be a locally compact group and let $H$ be a compact subgroup of $G.$ Let $m$ be the normalized $G$-invariant measure on the coset space $G/H.$ Suppose that $\varphi$ satisfies $\Delta_2$-condition. The bounded linear map $J: \mathcal{C}_c(G/H) \rightarrow A(G/H)$ can be uniquely extended to a bounded linear map $J_\varphi: L^\varphi(G/H, m) \rightarrow A^\varphi(G/H, m) $ which satisfies $$\|J_\varphi f\|_{L^\varphi(G/H, m)} \leq \|f\|_{L^\varphi(G/H, m)}.$$ Further, the linear operator $J_\varphi: L^\varphi(G/H, m) \rightarrow A^\varphi(G/H, m) $ is an onto map. \end{cor} \begin{proof} The extension of the map $J$ from $\mathcal{C}_c(G/H)$ to $L^\varphi(G/H)$ follows from Theorem \ref{bbdJ} and the density of $\mathcal{C}_c(G/H,m)$ and $A(G/H)$ in $L^\varphi(G/H,m)$ and $A^\varphi(G/H, m),$ respectively. Further, for any $f \in L^\varphi(G/H)$ and $z \in H,$ we have \begin{eqnarray*} L_z(Jf)(xH)= Jf (z^{-1}xH) = \int_H f(hz^{-1}xH) \,dh = \int_H f(hxH) \,dh = Jf(xH). \end{eqnarray*} This shows that $Jf \in A^\varphi(G/H,m).$ Now we prove second part of the corollary. For any $f \in A^\varphi(G/H),$ we get, \begin{eqnarray*} Jf(xH)= \int_H f(hxH) \,dh = \int_H f(xH)\,dh = f(xH), \end{eqnarray*} for all $x \in G.$ Therefore, $Jf=f.$ Hence $J_\varphi: L^\varphi(G/H,m) \rightarrow A^\varphi(G/H,m)$ is a onto map. \end{proof} \begin{rem} \label{r2} It can seen in the proof of Corollary \ref{g} above that $J|_{A^\varphi(G/H,m)}= I_{A^\varphi(G/H,m)}.$ \end{rem} Now, we are ready to define the convolution product `$*_{G/H}$' on $\mathcal{C}_c(G/H)$ same as given in \cite{Farabraz} as follows: let $G$ be a compact group, $H$ a closed subgroup and let $m$ be the normalized $G$-invariant measure on $G/H.$ For $f,g \in \mathcal{C}_c(G/H),$ the convolution $f*_{G/H}g: G/H \rightarrow \mathbb{C}$ is given by \begin{equation} \label{3} f*_{G/H}g(xH)= \int_{G/H} f(yH)Jg(y^{-1}xH)\, dm(yH), \end{equation} for all $xH \in G/H.$ The convolution product `$*_{G/H}$' has the following properties similar to the usual convolution in $\mathcal{C}_c(G)$ (see \cite[Proposition 4.10]{Farabraz}). \begin{itemize} \item[(i)] For any $f,g \in \mathcal{C}_c(G/H),$ $(f,g) \mapsto f*_{G/H}g$ is a bilinear map from $\mathcal{C}_c(G/H) \times \mathcal{C}_c(G/H)$ to $\mathcal{C}_c(G/H)$ and $(\mathcal{C}_c(G/H), *_{G/H})$ is an algebra. \item[(ii)] $f*_{G/H}g = T_H(f_q*_Gg_q)$ and $(f *_{G/H}g)_q= f_q*_G g_q,$ where `$*_G$' is the usual convolution in $C(G).$ \item[(iii)] $L_x(f*_{G/H}g)= (L_x f)*_{G/H}g.$ \end{itemize} The following result says that $\mathcal{C}_c(G/H)$ is a normed algebra with respect to the norm $\|\cdot\|_{L^\varphi(G/H, m)}^0.$ \begin{lem} \label{C_c} If $f,g \in \mathcal{C}_c(G/H),$ then \begin{equation} \|f*_{G/H}g\|_{L^\varphi(G/H, m)}^0 \leq \|f\|_{L^1(G/H, m)} \|g\|_{L^\varphi(G/H, m)}^0. \end{equation} \end{lem} \begin{proof} Let $f,g \in \mathcal{C}_c(G/H).$ Using Proposition \ref{f_q} we get \begin{eqnarray*} \|f*_{G/H}g\|_{L^\varphi(G/H, m)}^0 = \|(f*_{G/H}g)_q\|_{L^\varphi(G)}^0 = \|f_q*_{G}g_q\|_{L^\varphi(G)}^0 \end{eqnarray*} Since $L^\varphi(G)$ is a $L^1(G)$-module then by Proposition \ref{f_q} we get \begin{align*} \|f*_{G/H}g\|_{L^\varphi(G/H, m)}^0 &\leq \|f_q\|_{L^1(G)} \|g_q\|_{L^\varphi(G)}^0 = \|f\|_{L^1(G/H, m)} \|g\|_{L^\varphi(G/H, m)}^0. \qedhere \end{align*} \end{proof} \begin{thm} \label{7} Let $G$ be a locally compact group and let $H$ be a compact subgroup of $G.$ Let $m$ be the normalized $G$-invariant measure on the coset space $G/H.$ Suppose that $\varphi$ satisfies $\Delta_2$-condition. Then the convolution $*_{G/H} : \mathcal{C}_c(G/H) \times \mathcal{C}_c(G/H) \rightarrow \mathcal{C}_c(G/H)$ given by \eqref{3} can be extended to a convolution $*_{G/H}^\varphi: L^1(G/H, m) \times L^\varphi(G/H, m) \rightarrow L^\varphi(G/H, m)$ such that $L^\varphi(G/H,m)$ is a Banach left $L^1(G/H, m)$-module with respect to this extended convolution. \end{thm} \begin{proof} Let $f \in L^1(G/H, m)$ and $ g \in L^\varphi(G/H,m).$ Since $C_c(G/H)$ is dense in $L^1(G/H, m)$ and in $L^\varphi(G/H, m)$ as $\varphi \in \Delta_2,$ there exist $\{f_n\}$ and $\{g_n\}$ in $\mathcal{C}_c(G/H)$ such that $f_n \rightarrow f$ and $g_n \rightarrow g$ as $n \rightarrow \infty.$ Now, define $$f*_{G/H}^\varphi g = \lim_{n \rightarrow \infty} f_n*_{G/H}g_n.$$ Note that $*_{G/H}^\varphi: L^1(G/H,m) \times L^\varphi(G/H,m) \rightarrow L^\varphi(G/H,m) $ is well-defined. By Lemma \ref{C_c} we have $$ \|f *_{G/H}^\varphi g \|_{L^\varphi(G/H,m)}^0 \leq \|f\|_{L^1(G/H,m)} \|g\|_{L^\varphi(G/H, m)}^0. $$ Thus $L^\varphi(G/H, m)$ is a Banach left $L^1(G/H, m)$-module with respect to the extended convolution. \end{proof} The above theorem claims the existence of convolution product $*_{G/H}^\varphi$ but it does not reveal any explicit formula for the convolution product. The following corollary fulfils this objective whose proof is a consequence of the fact that $\mathcal{C}_c(G/H)$ is dense $L^\varphi(G/H, m).$ \begin{cor} \label{8} Let $G$ be a locally compact group and let $H$ be a compact subgroup of $G.$ Let $m$ be the normalized $G$-invariant measure on the coset space $G/H.$ Suppose that $\varphi$ satisfies $\Delta_2$-condition. If $f \in L^1(G/H, m)$ and $ g \in L^\varphi(G/H, m),$ then the convolution $*_{G/H}^\varphi$ is given by \begin{equation} \label{5} (f*_{G/H}^\varphi g) (xH)= \int_{G/H} f(yH)J_\varphi g(y^{-1}xH)\, dm(yH) \end{equation} for all $xH \in G/H.$ \end{cor} In the next corollary we present a Banach left $L^1(G/H, m)$-submodule of $L^\varphi(G/H,m)$ whose proof is a routine check. \begin{cor} Let $G$ be a locally compact group and let $H$ be a compact subgroup of $G.$ Let $m$ be the normalized $G$-invariant measure on the coset space $G/H.$ Suppose that $\varphi$ satisfies $\Delta_2$-condition. Then the space $A^\varphi(G/H,m)$ is a Banach left $L^1(G/H, m)$-submodule of $L^\varphi(G/H,m).$ \end{cor} \subsection*{Acknowledgements} The author thanks Prof. V. Muruganandam for his support and encouragement.
\section{Introduction} \label{sec:Intro} \setcounter{equation}{0} Long-baseline neutrino oscillation experiments aim at studying the phenomenon of neutrino oscillations by taking advantage of the known neutrino oscillation lengths, proportional to (the inverse of) the mass-squared differences $\Delta m^2_{21}\equiv m_2^2-m_1^2$ or $\Delta m^2_{31}\equiv m^3_2-m_1^2$, where $m_{1,2,3}$ are the masses of the neutrino mass eigenstates $\nu_{1,2,3}$, respectively. The neutrino masses are labelled such that $m_2^2>m_1^2$ and $|\Delta m^2_{31}|>\Delta m^2_{21}$. With this definition, the sign of $\Delta m^2_{31}$ is an observable and captures the neutrino-mass ordering: normal ordering (NO) when $\Delta m^2_{31}$ is positive, inverted ordering (IO) when $\Delta m^2_{31}$ is negative. Among the objectives of long-baseline experiments is testing the standard-three-massive-neutrinos paradigm, which states that there are three neutrino mass eigenstates and that these interact via neutral-current and charged-current weak interactions. As far as the charged-current weak interactions are concerned, three orthogonal linear combinations of $\nu_{1,2,3}$ couple to the $W$-boson and the charged leptons $\ell_{\alpha}$ ($\alpha=e,\mu,\tau$). In more detail, $\nu_{\alpha}=U_{\alpha i}\nu_i$ ($i=1,2,3$) couples to $\ell_{\alpha}$ and the $W$-boson, and $U_{\alpha i}$ are the elements of the unitary leptonic mixing matrix. On the other hand, assuming the standard-three-massive-neutrinos paradigm is correct, long-baseline experiments are capable of measuring, sometimes with great precision, the neutrino oscillation parameters -- the parameters which define $U_{\alpha i}$ and the mass-squared differences. One way to test the standard-three-massive-neutrinos paradigm is to assume it is correct; measure the oscillation parameters using different oscillation processes or different experimental setups; and compare the results. If different measurements of the same quantity disagree at a high confidence level, we would claim the underlying formalism -- in this case the standard three-massive-neutrinos paradigm -- is deficient. Among the current generation of long-baseline experiments are the Tokai to Kamioka experiment (T2K)~\cite{T2K:2011ypd,T2K:2021xwb}, in Japan, and the NuMI Off-axis $\nu_e$ Appearance (NOvA) experiment~\cite{NOvA:2016kwd,NOvA:2021nfi}, in the United States. They are sensitive to several of the neutrino oscillation parameters, including some that are, at present, virtually unknown: the neutrino mass-ordering and the CP-odd parameter $\delta_{CP}$ that governs whether and how much CP-invariance is violated in the lepton sector. Data from T2K and NOvA have been analyzed assuming the standard-three-massive-neutrinos paradigm and have led to interesting measurements of the oscillation parameters. Just as interesting, perhaps, is the fact that there is some tension between T2K and NOvA data. The tension, which was first demonstrated by Refs.~\cite{T2KNu2020,NOvANu2020}, has been quantified and examined critically in the three-neutrino framework by various authors~\cite{Kelly:2020fkv,Esteban:2020cvm,deSalas:2020pgw,Capozzi:2021fjo}. In a little more detail, both T2K and NOvA measure electron-like and muon-like events associated to a pion decay-in-flight neutrino source ($\pi\to\mu\nu_{\mu}$). Measurements are performed at both near and far detectors and the detectors are exposed to both ``neutrino'' and ``antineutrino'' beams. With all this information, they can infer the $\nu_{\mu}$ and $\overline{\nu}_{\mu}$ survival probabilities $P(\nu_{\mu}\to\nu_{\mu})$ and $P(\overline{\nu}_{\mu}\to\overline{\nu}_{\mu})$, respectively, and the $\nu_e$ and $\overline{\nu}_e$ appearance probabilities $P(\nu_{\mu}\to\nu_{e})$ and $P(\overline{\nu}_{\mu}\to\overline{\nu}_{e})$, respectively. At T2K, typical neutrino energies are around 600~MeV and the baseline is 295~km. Typical NOvA energies are around 2~GeV and the baseline is 810~km. Assuming the standard-three-massive-neutrinos paradigm, the T2K and NOvA disappearance data are consistent but the appearance data, for both neutrinos and antineutrinos, are in disagreement. Within the NO, T2K prefers $\delta_{\rm CP}$ values close to $3\pi/2$.\footnote{We will use the convention that CP-violating phases are defined over $[0, 2\pi]$.} In contrast, when analyzed under the NO, NOvA data have no strong preference for any particular value of $\delta_{\rm CP}$, however, they disfavor the combination of $\delta_{\rm CP}$ and the mixing angle $\sin^2\theta_{23}$ preferred by T2K at roughly $2\sigma$ confidence. This tension may be addressed by instead considering the IO, where both experiments prefer $\delta_{\rm CP} \approx 3\pi/2$~\cite{Kelly:2020fkv,T2K:2021xwb,NOvA:2021nfi}. However, global fits to all neutrino oscillation data, including those from reactor antineutrino experiments~\cite{DayaBay:2018yms,RENO:2018dro,DoubleChooz:2019qbj}, prefer NO at ${\sim}2-3\sigma$~\cite{Esteban:2020cvm,deSalas:2020pgw,Capozzi:2021fjo,Jimenez:2022dkn}, leaving the T2K-NOvA tension unaddressed. Whether the tension can be alleviated by the presence of physics beyond the standard-three-massive-neutrinos paradigm has also been the subject of intense exploration (see, for example, Refs.~\cite{Denton:2020uda,Miranda:2019ynh,Chatterjee:2020yak,Chatterjee:2020kkm,Forero:2021azc,Rahaman:2021leu,Rahaman:2022rfp}). Here, we would like to explore, in some detail, whether the tension between T2K and NOvA can be interpreted as evidence for new light neutrino states. This issue has been discussed before~\cite{Chatterjee:2020yak}, assuming the new neutrino state $\nu_4$ with mass $m_4$ is relatively heavy: $|\Delta m^2_{41}|\gg|\Delta m^2_{31}|$. Instead, here we concentrate on $|\Delta m^2_{41}|$ values that are ${\cal O}(|\Delta m^2_{31}|)$ or smaller, down to ${\cal O}(\Delta m^2_{21})$, and explore the full parameter space associated with the fourth neutrino. In Sec.~\ref{sec:FourFlavor}, we describe the four-neutrino oscillation formalism of interest. We also discuss how the existence of a light fourth neutrino may help alleviate the T2K--NOvA tension. In Sec.~\ref{sec:Simulation} we present our simulations of NOvA and T2K data and discuss how these are used, in Sec.~\ref{sec:Results}, to compare the standard-three-massive-neutrinos paradigm and the fourth-neutrino hypothesis. We present some concluding remarks in Sec.~\ref{sec:conclusion}. Some results are included in appendices: Appendix~\ref{app:DetailFit} includes detailed numerical results from our analyses, Appendix~\ref{app:LowDm41} presents an alternate, extremely-light sterile neutrino analysis, and Appendix~\ref{app:Pseudoexperiments} discusses some Monte Carlo studies of T2K, NOvA, and their combination in light of the sterile neutrino analyses. \section{Four-Flavor Neutrino Oscillations} \label{sec:FourFlavor} \setcounter{equation}{0} We assume there are four neutrino mass eigenstates $\nu_{1,2,3,4}$, and that these are related to the four interaction eigenstates $\nu_{e,\mu,\tau}$ and $\nu_s$ (where we assume the $\nu_s$ state does not participate in the weak interactions) via a $4\times 4$ unitary mixing matrix: \begin{equation} U=R(\theta_{34})R(\theta_{24}, \delta_{24})R(\theta_{14},\delta_{14})R(\theta_{23})R(\theta_{13},\delta_{13})R(\theta_{12}), \label{eq:rotmatrices} \end{equation} where $R$ are $4\times 4$ rotation matrices in the $ij$-plane associated with a rotation angle $\theta_{ij}$. The nontrivial entries of the different $R$ in Eq.~\eqref{eq:rotmatrices} are given by \[ R(\theta_{ij})= \begin{pmatrix} c_{ij} & s_{ij}\\ -s_{ij} & c_{ij}\\ \end{pmatrix} \hspace{20 pt} R(\theta_{ij},\delta_{ij})= \begin{pmatrix} c_{ij} & s_{ij} e^{-\delta_{ij}}\\ -s_{ij} e^{\delta_{ij}} & c_{ij}\\ \end{pmatrix}, \] where $c_{ij}=\cos{\theta_{ij}}$ and $s_{ij}=\sin{\theta_{ij}}$. This extension to the standard-three-massive-neutrinos paradigm includes one more independent mass-squared difference and five new mixing parameters: three mixing angles $(\theta_{14}, \theta_{24}, \theta_{34})$ and two complex phases $(\delta_{14}, \delta_{24})$. The $4\times 4$ mixing matrix is defined in such a way that, in the limit $\theta_{14}, \theta_{24}, \theta_{34}\to 0$, $\nu_4=\nu_s$ and $\nu_{1,2,3}$ are linear superpositions of only the active states $\nu_{e,\mu,\tau}$. In this limit, we recover the standard-three-massive-neutrinos paradigm. We will be interested in the case where $\theta_{14}, \theta_{24}, \theta_{34}$ are relatively small and will refer to $\nu_{1,2,3}$ as the mostly active states. The mostly active states will be defined in the usual way, including the ordering of their masses, which is either ``normal'' (NO) or ``inverted'' (IO), as discussed in Sec.~\ref{sec:Intro}. With this in mind, we define \begin{equation} \Delta m^2_{4l}\equiv\begin{cases} m^2_4-m^2_1, & \text{if $m_1<m_3$ (NO)}\\ m^2_4-m^2_3, & \text{if $m_3<m_1$ (IO)} \end{cases}. \label{eq:order} \end{equation} In order to allow for all different relevant orderings of the four masses, we allow for both the NO and IO of the mostly active states and for both positive and negative values of $\Delta m^2_{4l}$. The four qualitatively different mass orderings are depicted in Fig.~\ref{fig:MO}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.6\linewidth]{MOFig.pdf} \caption{Definition, including the sign convention, of $\Delta m^2_{4l}$ given the NO or IO for the mostly active states. \label{fig:MO}} \end{center} \end{figure} As far as the magnitude of $\Delta m^2_{4l}$, we will restrict our analyses to $(10^{-5} < |\Delta m^2_{4l}| < 10^{-1}) \text{ eV}^2$. Inside this range, we expect nontrivial oscillation effects to manifest themselves in the far detectors of T2K and NOvA but not in the corresponding near detectors. When $|\Delta m^2_{4l}|$ is smaller than $10^{-5}$~eV$^2$, the new oscillation length associated to $\Delta m^2_{4l}$ is too long and outside the reach of T2K and NOvA. Instead, when $|\Delta m^2_{4l}|$ is larger than $10^{-1}$~eV$^2$, we expect very fast oscillations in the far detectors of T2K and NOvA and nontrivial effects in the corresponding near detectors. This region of parameter space was explored in Ref.~\cite{Chatterjee:2020yak}. The active neutrinos interact with the medium as they propagate from the source to the far detector. These interactions modify the equations that govern the flavor evolution of the neutrino states via effective potentials for forward charged-current (CC) and neutral-current (NC) scattering. The neutrino flavor evolution equation can be written as a Schr\"odinger-like equation with an effective Hamiltonian given by, in the flavor basis, $H_F=1/(2E_{\nu})(U\textbf{M}^2U^{\dagger}+\textbf{A})$, where \begin{equation} \mathbf{M}^2= \begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & \Delta m_{21}^2 & 0 & 0\\ 0 & 0 & \Delta m_{31}^2 & 0\\ 0 & 0 & 0 & \Delta m_{41}^2\\ \end{pmatrix}, \hspace{20 pt} \mathbf{A}= \begin{pmatrix} 2E_{\nu}V_{\rm CC} & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -2E_{\nu}V_{\rm NC}\\ \end{pmatrix}. \label{eq:mattermat} \end{equation} For neutrinos, $V_{\rm CC} = -2V_{\rm NC} = 3.8 \times10^{-5}\ (\text{eV}^2/\text{ GeV}) \rho [\frac{\text{g}}{\text{cm}^3}]$ are the CC and NC matter potentials, respectively. For antineutrinos, the matter potentials have the opposite sign. $\rho$ is the density -- assumed to be constant -- of the medium, assumed to be neutral. In this case, $V_{\rm NC}$ is half as large as $V_{\rm CC}$ and negative. For the NOvA and T2K experiments, we fix the baselines to be $L_{\text{NOvA}}=810\text{ km}$ and $L_{\text{T2K}}=295\text{ km}$, respectively, while the near-far detector average matter densities are taken to be, respectively, $\rho_{\text{NOvA}}=2.8\ \text{g}/\text{cm}^3$~\cite{NOvA:2021nfi} and $\rho_{\text{T2K}}=2.6\ \text{g}/\text{cm}^3$~\cite{T2K:2021xwb}. The sterile nature of the new neutrino interaction eigenstate translates into a nontrivial $\mathbf{A}_{ss}$, obtained after the subtraction of $2E_{\nu}V_{\rm NC}\mathbb{1}$ from the Hamiltonian. Since the tension between T2K and NOvA is mostly driven by the $\nu_e$ appearance channel, Fig.~\ref{fig:plainprobs} depicts the $\nu_e$ appearance probability for both experiments given the three-neutrino and four-neutrino hypotheses. The mixing parameters for the different hypotheses are listed in Table~\ref{tab:OscParams}, except for $\sin^2\theta_{34}$. We see that the new oscillation frequency $\left|\Delta m^2_{4l}\right| \approx 10^{-2}$ eV$^2$ can lead to pronounced oscillations at both NOvA and T2K. We also note that the new effects can be different at T2K relative to NOvA for, roughly, two different reasons. One is that the dominant values of $L/E$, keeping in mind that both beams have a narrow energy profile, are not identical for the two experiments. This means that for relatively ``fast'' $\Delta m^2_{4l}$ the value of the new oscillation phase will not be the same for the two experiments. The other is that the matter effects are more pronounced at NOvA relative to T2K. These allow the effective oscillation frequencies and mixing parameters to be distinct at the two experimental setups. \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{probplot.PDF} \caption{Appearance oscillation probabilities at T2K (top, blue) and NOvA (bottom, purple) comparing three-neutrino oscillation probabilities (solid lines, parameters from Table \ref{tab:OscParams}, column 2 ``$3\nu$ IO'') against four-neutrino ones (non-solid lines, parameters from Table \ref{tab:OscParams}, column 4 ``4$\nu$ IO''). Left panels show probabilities for neutrino oscillation, whereas right ones show antineutrino oscillation. For the four-neutrino probabilities, three choices of $\sin^2{\theta_{34}}$ are used for illustrative purposes: dashed/dot-dashed/dotted lines correspond to $\sin^2{\theta_{34}}=0/0.4/0.8$. \label{fig:plainprobs}} \end{center} \end{figure} In vacuum, $P(\nu_{\mu}\to\nu_e)$ does not depend on $\theta_{34}$; this is not the case in matter. An easy way to see this is to express the propagation Hamiltonian in the mass basis. In the absence of matter effects, the dependency on the mixing parameters is encoded in the initial and final interaction eigenstates and since neither $\nu_e$ nor $\nu_{\mu}$, when expressed as linear superpositions of the mass eigenstates, depend on $\theta_{34}$, then neither can $P(\nu_{\mu}\to\nu_e)$. Instead, when the matter effects are present, the matter potential in the mass basis depends on $\theta_{34}$. Hence we expect $P(\nu_{\mu}\to\nu_e)$ to also depend on $\theta_{34}$ as long as matter effects are relevant. The dependency on $\theta_{34}$ can be seen in Fig. \ref{fig:plainprobs}. As expected, it is rather small at T2K and larger at NOvA, where matter effects are relatively more pronounced. In order to further illustrate the impact of matter effects, Fig.~\ref{fig:ratioprobs} depicts the ratio of the appearance probabilities in matter relative to what those would be in vacuum. We also illustrate the difference between a new interaction state that is active and one that is sterile by depicting the same ratio assuming the neutral current matter potential is zero. The new oscillation frequency is apparent at both experiments and it is easy to see that matter effects are more pronounced at NOvA relative to T2K. The ``sterileness'' of the fourth neutrino is also more pronounced at NOvA, as expected. \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{ratioplot.PDF} \caption{Ratio of appearance oscillation probabilities in matter to those in vacuum at T2K (left) and NOvA (right). Solid lines correspond to the three-neutrino oscillation probabilities. Dashed and dot-dashed lines correspond to a fourth neutrino that is sterile or active, respectively. Parameters are taken from columns 2 and 4 from Table \ref{tab:OscParams} corresponding to the three-neutrino and four-neutrino cases, respectively. \label{fig:ratioprobs}} \end{center} \end{figure} \section{Simulating Data from NOvA and T2K} \label{sec:Simulation} \setcounter{equation}{0} As discussed earlier, both NOvA and T2K operate with beams with a flux of predominantly $\nu_\mu$ ($\overline\nu_\mu$) when operating in (anti)neutrino mode. Both experiments' far detectors are designed to study the disappearance of $\nu_\mu$ and $\overline\nu_\mu$, as well as the appearance of $\nu_e$ and $\overline\nu_e$. Using the most recent publications from NOvA~\cite{NOvA:2021nfi} and T2K~\cite{T2K:2021xwb}, and building off the simulations of Refs.~\cite{Ellis:2020ehi,Ellis:2020hus,Kelly:2020fkv}, we perform simulations to determine the expected event rates in the disappearance and appearance channels of both experiments given a set of three- or four-neutrino oscillation parameters. We then compare these expected event rates against the experiments' published event rates and construct a test statistic using Poissonian bin expectations. In the remainder of this section, we briefly explain the process by which we simulate the expected event rates, as well as the number of data points for each experiment that enter our test statistic. \begin{comment} \begin{table} \begin{center} \caption{Oscillation parameters assumed when depicting oscillation probabilities and expected event rates. The four columns correspond to the three-neutrino ($3\nu$) and four-neutrino ($4\nu$) hypotheses, as well as whether the three mostly-active neutrinos follow the normal (NO) or inverted (IO) mass ordering.\label{tab:OscParams}} \begin{tabular}{|c||c|c||c|c|}\hline Parameter & $3\nu$ NO & $3\nu$ IO & $4\nu$ NO & $4\nu$ IO \\ \hline\hline $\sin^2 \theta_{12}$ & 0.307 & 0.307 & 0.321 & 0.314 \\ \hline $\sin^2 \theta_{13}$ & 0.0217 & 0.0220 & $0.0229$ & $0.0229$ \\ \hline $\sin^2\theta_{23}$ & 0.569 & 0.570 & $0.4278$ & $0.4495$ \\ \hline $\Delta m_{21}^2/10^{-5}$ eV$^2$ & 7.53 & 7.53 & 7.53 & 7.53 \\ \hline $\Delta m_{31}^2/10^{-3}$ eV$^2$ & 2.51 & -2.41 & 2.49 & -2.39 \\ \hline $\delta_{\rm CP}$ & -2.62 & -1.57 & -2.19 & -1.82 \\ \hline\hline $\sin^2\theta_{14}$ & --- & --- & 0.043 & 0.021 \\ \hline $\sin^2\theta_{24}$ & --- & --- & 0.060 & 0.053 \\ \hline $\sin^2\theta_{34}$ & --- & --- & 0.37 & 0.56 \\ \hline $\Delta m_{41}^2$/eV$^2$ & --- & --- & $1.1 \times 10^{-2}$ & $-1.1\times 10^{-2}$ \\ \hline $\delta_{14}$ & --- & --- & 0.01 & -1.40 \\ \hline $\delta_{24}$ & --- & --- & 1.82 & -0.40 \\ \hline\hline \end{tabular} \end{center} \end{table} \end{comment} \begin{table} \begin{center} \caption{Oscillation parameters assumed when depicting oscillation probabilities and expected event rates. The four columns correspond to the three-neutrino ($3\nu$) and four-neutrino ($4\nu$) hypotheses, as well as whether the three mostly-active neutrinos follow the normal (NO) or inverted (IO) mass ordering.\label{tab:OscParams}} \begin{tabular}{|c||c|c||c|c|}\hline Parameter & $3\nu$ NO & $3\nu$ IO & $4\nu$ NO & $4\nu$ IO \\ \hline\hline $\sin^2 \theta_{12}$ & 0.307 & 0.307 & 0.321 & 0.314 \\ \hline $\sin^2 \theta_{13}$ & 0.022 & 0.022 & $0.023$ & $0.023$ \\ \hline $\sin^2\theta_{23}$ & 0.57 & 0.57 & $0.43$ & $0.45$ \\ \hline $\Delta m_{21}^2/10^{-5}$ eV$^2$ & 7.53 & 7.53 & 7.53 & 7.53 \\ \hline $\Delta m_{31}^2/10^{-3}$ eV$^2$ & 2.51 & -2.41 & 2.49 & -2.39 \\ \hline $\delta_{\rm CP}$ & 3.66 & 4.71 & 4.09 & 4.46 \\ \hline\hline $\sin^2\theta_{14}$ & --- & --- & 0.043 & 0.021 \\ \hline $\sin^2\theta_{24}$ & --- & --- & 0.060 & 0.053 \\ \hline $\sin^2\theta_{34}$ & --- & --- & 0.37 & 0.56 \\ \hline $\Delta m_{41}^2$/eV$^2$ & --- & --- & $1.1 \times 10^{-2}$ & $-1.1\times 10^{-2}$ \\ \hline $\delta_{14}$ & --- & --- & 0.01 & 4.88 \\ \hline $\delta_{24}$ & --- & --- & 1.82 & 5.89 \\ \hline\hline \end{tabular} \end{center} \end{table} To center our discussion, we will rely on several benchmark sets of oscillation parameters with which we calculate the expected observables at NOvA and T2K. We adopt two benchmark sets each for the $3\nu$ and $4\nu$ assumptions, listed in Table~\ref{tab:OscParams}, allowing for the mostly-active neutrinos to follow either the normal (NO) or inverted (IO) orderings. As we will discuss in Section~\ref{sec:Results}, these parameters are the best-fit points obtained by our fit to the \textit{combination} of T2K and NOvA under the different hypotheses. \textbf{NOvA ---} Our simulation of NOvA, designed to match the results of Ref.~\cite{NOvA:2021nfi}, includes the disappearance channels of neutrino and antineutrino mode (19 bins each, with neutrino energies ranging from $0$ to $5$ GeV) as well as event rate measurements of the appearance channels\footnote{For simplicity, we sum the expected event rate for the entire neutrino energy range and compare it against the observed 82 (33) appearance events of operation in (anti)neutrino mode.}, totaling 40 data points. This simulation corresponds to a total exposure of $13.6 \times 10^{20}$ ($12.5 \times 10^{20}$) protons on target (POT) in (anti)neutrino mode. \begin{figure}[htbp] \begin{center} \includegraphics[width=\linewidth]{NOVA_Validation.pdf} \caption{Expected and observed event rates in NOvA's $\nu_\mu$ disappearance (left), $\overline\nu_\mu$ disappearance (center), and $\nu_e$/$\overline\nu_e$ appearance (right) channels. We compare the prediction under the $3\nu$ (solid/dashed lines) and $4\nu$ (faint lines/regions) hypotheses, with parameters from Table~\ref{tab:OscParams}, with the observed data (black). Purple curves correspond to the mostly-active neutrinos following the normal mass ordering (NO), where green ones correspond to the inverted mass ordering (IO). In the right panel, the CP-violating phases are allowed to vary in the predicted rates. Data points from Ref.~\cite{NOvA:2021nfi}.\label{fig:NOVAValidation}} \end{center} \end{figure} Fig.~\ref{fig:NOVAValidation} shows the expected event rates in NOvA for neutrino mode $\nu_\mu$ disappearance (left), antineutrino mode $\overline\nu_\mu$ disappearance\footnote{In contrast to Ref.~\cite{NOvA:2021nfi}, our disappearance channel panels depict the event rate per bin as opposed to event rate per unit energy, causing our higher-energy bins (with larger bin width) to appear exaggerated.} (center), and a joint comparison of neutrino ($x$-axis) and antineutrino ($y$-axis) mode $\nu_\mu \to \nu_e$ (or $\overline\nu_\mu \to \overline\nu_e$) appearance (right panel). We compare the NOvA benchmark oscillation predictions, using the parameters in Table~\ref{tab:OscParams} (purple histograms/curves\footnote{Where the faint curves are not visible in the left/center panels, the four-neutrino hypothesis predicts the same rate as the three-neutrino one(s).} for NO, green for IO, and dark curves for $3\nu$, faint ones for $4\nu$), to the observed event rates from the experiment (black). Error bars here are only statistical. In the left and center panels, all oscillation parameters are fixed according to Table~\ref{tab:OscParams}. In contrast, the right panel allows $\delta_{\rm CP}$ to vary for the $3\nu$ curves, and all three CP-violating phases to vary in the $4\nu$ case. This allows for a set of ellipses in this bi-event parameter space instead of a single one. In the right panel, stars indicate the predicted event rates when the CP-violating phases are fixed to their values in Table~\ref{tab:OscParams}. \textbf{T2K ---} We simulate T2K in much the same spirit as NOvA, with the goal of matching the results presented in Ref.~\cite{T2K:2021xwb}. In the case of T2K, the disappearance channels each consist of 30 bins -- $100$ MeV in width from $0$ to $2.9$ GeV, and one bin corresponding to neutrino energies above $2.9$ GeV. For the appearance channel, we take advantage of the expected neutrino-energy spectrum with bins of $125$ MeV width from $0$ to $1.25$ GeV in each channel.\footnote{Refs.~\cite{Ellis:2020ehi,Ellis:2020hus}, however, have demonstrated that total-rate measurements of T2K's appearance channel result in similar parameter estimation to the collaboration's results.} This yields 80 data points in our T2K analysis. Our T2K simulation corresponds to an exposure of $14.94 \times 10^{20}$ ($16.35 \times 10^{20}$) POT in (anti)neutrino mode operation. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{T2K_Validation.pdf} \caption{Expected and observed event rates in T2K's $\nu_\mu$ disappearance (left), $\overline\nu_\mu$ disappearance (center), and $\nu_e$/$\overline\nu_e$ appearance (right) channels. We compare the prediction under the $3\nu$ (solid/dashed lines) and $4\nu$ (faint lines/regions) hypotheses, with parameters from Table~\ref{tab:OscParams}, with the observed data (black). Purple curves correspond to the mostly-active neutrinos following the normal mass ordering (NO), where green ones correspond to the inverted mass ordering (IO). In the right panel, the CP-violating phases are allowed to vary in the predicted rates. Data points from Ref.~\cite{T2K:2021xwb}.\label{fig:T2KValidation}} \end{center} \end{figure} Similar to Fig.~\ref{fig:NOVAValidation}, we show in Fig.~\ref{fig:T2KValidation} our expected event rates in the different T2K channels -- the left panel is for $\nu_\mu$ disappearance, center for $\overline\nu_\mu$ disappearance, and the right panel is the combined $\nu_e$ and $\overline\nu_e$ appearance. For clarity of display, we sum the total expected event rates in the $\nu_e$ and $\overline\nu_e$ channels in the right panel. Here, the oscillation parameters correspond to those given in Table~\ref{tab:OscParams} and, in the right panel, the CP-violating phases are allowed to vary. \textbf{Test Statistic ---} We take the expected and observed event rates in NOvA (40 data points), T2K (80), or a combination of them (120) and construct a test statistic using Poisson statistics for the log-likelihood (matching a $\chi^2$ function in the limit of large event rates): \begin{equation}\label{eq:chi2} \chi^2 = \sum_{i\ \in\ \mathrm{bins}} -2\left(-\lambda_i + x_i + x_i\log{\left(\frac{\lambda_i}{x_i}\right)}\right), \end{equation} where $\lambda_i$ ($x_i$) represents the expected (observed) event rate in bin $i$ for a given experiment/channel. We will be interested in several pieces of information from the test statistic in Eq.~\eqref{eq:chi2}. When performing parameter estimations, we will use contours of $\Delta\chi^2$ about its minimum to represent preferred regions/intervals of parameter space. When comparing best-fit points under different hypotheses, i.e., comparing preference for the $4\nu$ scenario over the $3\nu$ one, we will compare the minimum $\chi^2$ when varying over oscillation parameters, taking into account the number of degrees of freedom in such a fit. \textbf{Analysis \& Priors ---} The main focus of this work is on the long-baseline experiments NOvA and T2K, which are sensitive to oscillation effects associated with mass-squared differences of order of $10^{-3}$ eV$^2$. On the other hand, the solar mass-squared difference has been well-measured by solar neutrino~\cite{Super-Kamiokande:2016yck,SKNu2020} and reactor antineutrino~\cite{KamLAND:2013rgu} experiments to be $\Delta m_{21}^2 = 7.53 \times 10^{-5}$ eV$^2$ while the associated mixing angle is measured to be $\sin^2\theta_{12} = 0.307$, both at the few percent level. Due to the lack of sensitivity to these quantities at NOvA/T2K, we fix them\footnote{Specifically, we fix the matrix-element-squared $|U_{e2}|^2$, which is equal to $\sin^2\theta_{12} \cos^2\theta_{13} \cos^2\theta_{14}$ in the four-neutrino framework, to its best-fit value of $0.300$. This causes $\sin^2\theta_{12}$ to vary for large $\theta_{14}$.} in our analyses. While NOvA and T2K are sensitive to $\sin^2\theta_{13}$ through their appearance channels, their measurement capability is significantly weaker than that of Daya Bay~\cite{DayaBay:2018yms}, RENO~\cite{RENO:2018dro}, and Double Chooz~\cite{DoubleChooz:2019qbj} reactor antineutrino experiments. In our fits, we include Daya Bay's measurement as a Gaussian prior on the quantity $4|U_{e3}|^2 ( 1 - |U_{e3}|^2) = 0.0856 \pm 0.0029$, which is $\sin^2(2\theta_{13})$ when considering the three-neutrino hypothesis~\cite{DayaBay:2018yms}. \section{Results} \label{sec:Results} \setcounter{equation}{0} \setcounter{footnote}{0} This section details the results of our analyses. First, in Section~\ref{subsec:3nuResults}, we summarize the results of fits of our NOvA and T2K simulations and their combination under the three-neutrino hypothesis. Then, Section~\ref{subsec:4nuResults} discusses the results of these fits under the four-neutrino hypothesis, including a comparison of the three-neutrino and four-neutrino hypotheses. \subsection{Three-Neutrino Results}\label{subsec:3nuResults} Our first three-neutrino analysis is focused on finding the best-fit points of each experimental analysis (T2K, NOvA, and a combined fit). For this, we perform two fits for each experiment/combination, one assuming that neutrinos follow the normal mass ordering (NO, $\Delta m_{31}^2 > 0$) and one assuming that they follow the inverted one (IO, $\Delta m_{31}^2 < 0$). Recent results have demonstrated that, under the three-neutrino hypothesis, T2K and NOvA each exhibit mild preference for the NO over the IO, but their combination has a mild preference for the IO~\cite{Kelly:2020fkv,Esteban:2020cvm,deSalas:2020pgw,Capozzi:2021fjo}. When combined with all reactor antineutrino data and other experimental results, the global preference is for the NO at relatively low significance. We find a result consistent with these previous results, summarized in Table~\ref{tab:BFPs3nu}. As in all of our analyses, $\Delta m_{21}^2$ and $\sin^2\theta_{12}$ are fixed, and a prior is included from the results of Daya Bay on $\sin^2(2\theta_{13})$. We present both the overall test statistic at this best-fit point for each analysis as well as the preference for the NO over the IO in the right-most column (positive values indicate preference for NO, negative for IO). We note here that all of the best-fit $\chi^2$ obtained are comparable to (and in the case of T2K and the joint fit, less than) the number of degrees of freedom, implying that these are all good fits to their respective data sets. Finally, we see that the joint-fit $\chi^2$ under the NO hypothesis is around five units of $\chi^2$ larger than the sum of the two individual fits whereas, under the IO hypothesis, it is roughly the same -- this highlights the so-called NOvA/T2K tension, where the results disagree under the NO hypothesis but not under the IO one. The values from the ``Joint'' fit in Table~\ref{tab:BFPs3nu} correspond to the benchmark values we adopted in the three-neutrino case in Table~\ref{tab:OscParams}. \begin{comment} \begin{table}[!htbp] \begin{center} \caption{Best-fit parameters from our analyses of T2K, NOvA, and a combined analysis of the two under the three-neutrino hypothesis. We determine the best-fit point under the normal (NO) and inverted (IO) mass-ordering hypotheses, as well as the overall preference for the NO over IO, $\Delta \chi^2_{\rm NO,IO}$, for each analysis. In each, a prior on $\sin^2(2\theta_{13})$ from Daya Bay is included, and $\sin^2\theta_{12} = 0.307$ and $\Delta m_{21}^2 = 7.53 \times 10^{-5}$ eV$^2$ are fixed to their best-fit points from other experimental results.\label{tab:BFPs3nu}} \begin{tabular}{c| c||c|c|c|c||c||c} \multicolumn{2}{c||}{3$\nu$} & $\sin^2\theta_{13}$ & $\sin^2\theta_{23}$ & $\Delta m_{31}^2/10^{-3}$ eV$^2$ & $\delta_{\rm CP}$ & $\chi^2$ & $\Delta \chi^2_{\rm NO,IO}$ \\ \hline \hline \multirow{2}{*}{T2K} & NO & $0.0219$ & $0.5586$ & $2.5197$ & $-1.7004$ & $66.815$ & \multirow{2}{*}{$1.478$} \\ \cline{2-7} & IO & $0.0220$ & $0.5640$ & $-2.4132$ & $-1.5726$ & $68.193$ & \\ \hline \multirow{2}{*}{NOvA} & NO & $0.0218$ & $0.5781$ & $2.5225$ & $2.3386$ & $43.402$ & \multirow{2}{*}{$0.144$} \\ \cline{2-7} & IO & $0.0218$ & $0.5727$ & $-2.4114$ & $-1.5059$ & $43.546$ & \\ \hline \multirow{2}{*}{Joint} & NO & $0.0217$ & $0.5687$ & $2.5129$ & $-2.6156$ & $115.575$ & \multirow{2}{*}{$-3.759$} \\ \cline{2-7} & IO & $0.0220$ & $0.5700$ & $-2.4084$ & $-1.5655$ & $111.816$ & \\ \hline \end{tabular} \end{center} \end{table} \end{comment} \begin{table}[!htbp] \begin{center} \caption{Best-fit parameters of our analyses of T2K, NOvA, and a combined analysis of the two under the three-neutrino hypothesis. We determine the best-fit point under the normal (NO) and inverted (IO) mass-ordering hypotheses, as well as the overall preference for the NO over IO, $\Delta \chi^2_{\rm NO,IO}$, for each analysis. In each, a prior on $\sin^2(2\theta_{13})$ from Daya Bay is included, and $\sin^2\theta_{12} = 0.307$ and $\Delta m_{21}^2 = 7.53 \times 10^{-5}$ eV$^2$ are fixed to their best-fit points from other experimental results.\label{tab:BFPs3nu}} \begin{tabular}{c| c||c|c|c|c||c||c} \multicolumn{2}{c||}{3$\nu$} & $\sin^2\theta_{13}$ & $\sin^2\theta_{23}$ & $\Delta m_{31}^2/10^{-3}$ eV$^2$ & $\delta_{\rm CP}$ & $\chi^2$ & $\Delta \chi^2_{\rm NO,IO}$ \\ \hline \hline \multirow{2}{*}{T2K} & NO & $0.022$ & $0.56$ & $2.52$ & $4.58$ & $66.82$ & \multirow{2}{*}{$1.48$} \\ \cline{2-7} & IO & $0.022$ & $0.56$ & $-2.41$ & $4.71$ & $68.19$ & \\ \hline \multirow{2}{*}{NOvA} & NO & $0.022$ & $0.58$ & $2.52$ & $2.34$ & $43.40$ & \multirow{2}{*}{$0.14$} \\ \cline{2-7} & IO & $0.022$ & $0.57$ & $-2.41$ & $4.78$ & $43.55$ & \\ \hline \multirow{2}{*}{Joint} & NO & $0.022$ & $0.57$ & $2.51$ & $3.67$ & $115.58$ & \multirow{2}{*}{$-3.76$} \\ \cline{2-7} & IO & $0.022$ & $0.57$ & $-2.41$ & $4.72$ & $111.82$ & \\ \hline \end{tabular} \end{center} \end{table} We also perform a parameter estimation under the three-neutrino hypothesis, both to prepare our expectations for the four-neutrino analyses and to validate our results compared against the official results of the experimental collaborations. The free/fixed parameters and test statistic are identical to those when determining the best-fit points. For simplicity, we perform an analysis of the parameters $\sin^2\theta_{13}$, $\sin^2\theta_{23}$, $\Delta m_{31}^2$, and $\delta_{\rm CP}$ and marginalize over $\sin^2\theta_{13}$ and $\Delta m_{31}^2$ (including both the NO and IO hypotheses), and present the joint measurement of $\sin^2\theta_{23}$ and $\delta_{\rm CP}$. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.6\linewidth]{ThreeNeutrino_DCP_S23.pdf} \caption{Parameter estimation of $\delta_{\rm CP}$ and $\sin^2\theta_{23}$ from T2K (blue), NOvA (purple), and their combination (green) at $2\sigma$ (dashed lines) and $3\sigma$ (solid lines) CL. \label{fig:ThreeNeutrinoScan}} \end{center} \end{figure} Fig.~\ref{fig:ThreeNeutrinoScan} presents the results of this analysis at $2\sigma$ (dashed, filled contours) and $3\sigma$ (solid lines) CL for T2K (blue), NOvA (purple), and the joint fit (green). Stars of each color represent the best-fit points obtained in Table~\ref{tab:BFPs3nu}. Once the mass ordering is marginalized, NOvA has no sensitivity to $\delta_{\rm CP}$, and constrains $\sin^2\theta_{23}$ to be between roughly $0.37$ and $0.65$ at $3\sigma$ CL. In the NO, NOvA can take on nearly any value of $\delta_{CP}$, however it disfavors the combination $\delta_{CP} = 3\pi/2$, $\sin^2\theta_{23} > 1/2$ at relatively high significance. Under the IO, NOvA prefers this combination. Regardless of the mass ordering, T2K prefers $\delta_{\rm CP} = 3\pi/2$ and constrains $\sin^2\theta_{23}$ to be in a similar range as NOvA. When the two are combined, the preferred regions are very similar to those obtained in the fit to T2K data alone. \subsection{Four-Neutrino Results}\label{subsec:4nuResults} We begin our four-neutrino analyses by repeating the process that led to Table~\ref{tab:BFPs3nu} -- we determine the best-fit points under the four-neutrino hypothesis for T2K, NOvA, and their combination. Now that we are considering four-neutrino oscillations, we allow for all four mass orderings discussed in Sec.~\ref{sec:FourFlavor} (see Fig.~\ref{fig:MO}). This amounts to dividing the analysis based on the signs of $\Delta m_{31}^2$ and $\Delta m_{4l}^2$, where $l$ represents $m_1$ in the NO and $m_3$ in the IO, the lightest of the mostly-active neutrinos. Table~\ref{tab:BFPs4nuSmall} summarizes these twelve analyses (four each for NOvA, T2K, and their Joint fit), giving the best-fit parameters as well as the overall $\chi^2$ of each fit in the four-neutrino hypothesis. Near the bottom we give the preferred ordering of masses from each experiment/combination -- T2K and the Joint fit both prefer $m_4 < m_3 < m_1 < m_2$, where NOvA prefers $m_1 < m_2 < m _3 < m_4$. The preference for the sign of $\Delta m_{4l}^2$ is small in all cases -- individual fit results for all four mass orderings and all three experimental combinations are provided for completeness in Appendix~\ref{app:DetailFit}. When allowing for a fourth neutrino, neither T2K nor NOvA have a strong preference for the sign of $\Delta m_{31}^2$. T2K prefers $\Delta m_{31}^2 < 0$ at $\Delta \chi^2 = 0.1$, where NOvA prefers $\Delta m_{31}^2 > 0$ at $\Delta \chi^2 = 0.02$. However, the combined fit prefers $\Delta m_{31}^2 < 0$ at $\Delta \chi^2 = 4.6$ an even stronger preference for negative $\Delta m^2_{31}$ than when data are analyzed under the three-neutrino hypothesis. \begin{comment} \begin{table} \begin{center} \caption{Best-fit parameters of the four-neutrino analyses of T2K, NOvA, and their combination. We allow for all possible orderings of the neutrino mass eigenstates, hence $\Delta m_{31}^2$ and $\Delta m_{4l}^2$ can each be negative. In each analysis, a prior on $|U_{e3}|^2(1-|U_{e3}|^2)$ from Daya Bay is included, and $|U_{e2}|^2 = 0.300$ and $\Delta m_{21}^2 = 7.53 \times 10^{-5}$ eV$^2$ are fixed to their best-fit points from other experimental results.\label{tab:BFPs4nuSmall}} \begin{tabular}{c||c|c|c|} 4$\nu$ & T2K & NOvA & Joint \\ \hline \hline $\sin^2\theta_{13}$ & $0.0237$ & $0.0220$ & $0.0229$ \\ \hline $\sin^2\theta_{23}$ & $0.4335$ & $0.4393$ & $0.4278$ \\ \hline $\Delta m_{31}^2/10^{-3}$ eV$^2$ & $-2.3942$ & $2.4270$ & $-2.3922$ \\ \hline $\delta_{\rm CP}$ & $-1.8667$ & $-0.0048$ & $-1.8158$ \\ \hline\hline $\sin^2\theta_{14}$ & $10^{-1.1086}$ & $10^{-2.1617}$ & $10^{-1.3630}$ \\ \hline $\sin^2\theta_{24}$ & $10^{-1.3849}$ & $10^{-0.9069}$ & $10^{-1.2216}$ \\ \hline $\sin^2\theta_{34}$ & $10^{-0.1081}$ & $10^{-0.5372}$ & $10^{-0.4331}$ \\ \hline $\Delta m_{4l}^2/$eV$^2$ & $-10^{-2.0726}$ & $10^{-1.9841}$ & $-10^{-2.0705}$ \\ \hline $\delta_{14}$ & $1.8241$ & $-2.7734$ & $-1.4022$ \\ \hline $\delta_{24}$ & $2.6439 $& $-3.1339$ & $-0.3975$ \\ \hline \hline $\chi^2_{\rm 4\nu}$ & $61.95$ & $38.10$ & $102.83$ \\ \hline Preference & $m_4 < m_3 < m_1 < m_2$ & $m_1 < m_2 < m_3 < m_4$ & $m_4 < m_3 < m_1 < m_2$ \\ \hline $\chi^2_{\rm 3\nu} - \chi^2_{\rm 4\nu}$ & $4.87$ & $5.30$ & $8.99$ \end{tabular} \end{center} \end{table} \end{comment} \begin{table} \begin{center} \caption{Best-fit parameters of the four-neutrino analyses of T2K, NOvA, and their combination. We allow for all possible orderings of the neutrino mass eigenstates, hence $\Delta m_{31}^2$ and $\Delta m_{4l}^2$ can each be negative. In each analysis, a prior on $|U_{e3}|^2(1-|U_{e3}|^2)$ from Daya Bay is included, and $|U_{e2}|^2 = 0.300$ and $\Delta m_{21}^2 = 7.53 \times 10^{-5}$ eV$^2$ are fixed to their best-fit points from other experimental results.\label{tab:BFPs4nuSmall}} \begin{tabular}{c||c|c|c|} 4$\nu$ & T2K & NOvA & Joint \\ \hline \hline $\sin^2\theta_{13}$ & $0.024$ & $0.022$ & $0.023$ \\ \hline $\sin^2\theta_{23}$ & $0.43$ & $0.44$ & $0.43$ \\ \hline $\Delta m_{31}^2/10^{-3}$ eV$^2$ & $-2.39$ & $2.43$ & $-2.39$ \\ \hline $\delta_{\rm CP}$ & $4.41$ & $0.00$ & $4.46$ \\ \hline\hline $\sin^2\theta_{14}$ & $7.8\times 10^{-2}$ & $6.9\times 10^{-3}$ & $4.3\times 10^{-2}$ \\ \hline $\sin^2\theta_{24}$ & $4.1\times 10^{-2}$ & $1.2\times 10^{-1}$ & $6.0\times 10^{-2}$ \\ \hline $\sin^2\theta_{34}$ & $0.78$ & $0.29$ & $0.37$ \\ \hline $\Delta m_{4l}^2/$eV$^2$ & $-8.5\times10^{-3}$ & $1.0\times10^{-2}$ & $-8.5\times10^{-3}$ \\ \hline $\delta_{14}$ & $1.82$ & $3.51$ & $4.88$ \\ \hline $\delta_{24}$ & $2.64$ & $3.15$ & $5.89$ \\ \hline \hline $\chi^2_{\rm 4\nu}$ & $61.95$ & $38.10$ & $102.83$ \\ \hline Ordering & $m_4 < m_3 < m_1 < m_2$ & $m_1 < m_2 < m_3 < m_4$ & $m_4 < m_3 < m_1 < m_2$ \\ \hline $\chi^2_{\rm 3\nu} - \chi^2_{\rm 4\nu}$ & $4.87$ & $5.30$ & $8.99$ \end{tabular} \end{center} \end{table} The bottom row of Table~\ref{tab:BFPs4nuSmall} presents the improvement in each experimental analysis (as well as the combined one) compared to the results of the three-neutrino analysis. We find that the fits to both the T2K\footnote{This result is consistent with what the T2K collaboration reported in Ref.~\cite{T2K:2019efw}, which found an improvement of $\Delta \chi^2 = 4.7$.} and NOvA data improve by roughly five units in $\chi^2$, and the combined fit improves by nearly nine units. However, we note two very important caveats here: \begin{enumerate} \item The results of the three-neutrino fit in Table~\ref{tab:BFPs3nu} demonstrate that, relative to the number of degrees of freedom, good fits have been achieved. So, when comparing the three-neutrino fit -- four parameters -- to the four-neutrino one -- ten parameters -- one must take into account the fact that this minimization is being performed over an additional six parameters. \item When determining the statistical significance, the comparison of $\chi^2_{3\nu} - \chi^2_{4\nu}$ must be scrutinized to see whether these test statistics follow a $\chi^2$ distribution. We have performed some basic Monte Carlo studies of our T2K and NOvA simulations (see Appendix~\ref{app:Pseudoexperiments}) and found that, when statistical fluctuations are considered, one will often find best-fit points with $\Delta m_{4l}^2 \approx 10^{-2}$ eV$^2$ that improve each experiment's fit by a couple of units of $\chi^2$. This is likely driven by the sizes of the energy bins (around 100~MeV) used in the T2K and NOvA analyses -- at T2K/NOvA baselines/energies, a new oscillation driven by a mass-squared splitting of $10^{-2}$ eV$^2$ will evolve significantly\footnote{For this $\Delta m^2$, the argument of the term $\sin^2(\Delta m^2 L/4E_\nu)$ that enters the oscillation probabilities changes by an appreciable fraction of $\pi$.} over the span of a single bin. This new fast oscillation can ``absorb'' individual bins' statistical fluctuations and lead to an artificial improvement in the test statistic. This is validated by the results of Ref.~\cite{T2K:2019efw}, which found that an improvement of $\Delta \chi^2 = 4.7$ at T2K (between the three-neutrino and four-neutrino hypotheses) corresponds to only ${\sim}1.0\sigma$ preference for a fourth neutrino, in contrast with the preference derived assuming Wilks' theorem~\cite{Wilks:1938dza} holds, ${\sim}1.7\sigma$. \end{enumerate} When considering the results of Table~\ref{tab:BFPs4nuSmall} (and that the best-fit points are close to $|\Delta m_{4l}^2| \approx 10^{-2}$ eV$^2$) in light of these two caveats, we find that, while a very light sterile neutrino improves the ``tension'' between T2K and NOvA, there is not strong evidence in favor of a four-neutrino hypothesis over the three-neutrino one. In order to determine whether the sterile neutrino solution to the NOvA/T2K tension persists in light of caveat 2 above, we also perform an alternate analysis in Appendix~\ref{app:LowDm41} where we restrict $\Delta m_{21}^2 \lesssim \left|\Delta m_{4l}^2\right| < 10^{-3}$ eV$^2$. This allows us to avoid fast oscillations in the T2K/NOvA far detectors and any statistical pathologies that may arise. We find that there remains a preference for four neutrinos over three neutrinos at a level of $\Delta \chi^2 = 4.1$. While this is smaller than what we observed for $\left|\Delta m_{4l}^2\right| \approx 10^{-2}$ eV$^2$, it is nevertheless comparable to the preference for non-standard interactions as a solution to this tension found in Refs.~\cite{Denton:2020uda,Chatterjee:2020kkm} at the level of $\Delta \chi^2 \approx 4.4-4.5$. \begin{figure} \begin{center} \includegraphics[width=0.86\linewidth]{Chi2_BFP_Dm41_4MOs.pdf} \caption{Best-fit $\chi^2$ obtained using our analysis of T2K (top, blue), NOvA (middle, purple), and a joint fit of the two (bottom, green) as a function of different values of $\Delta m_{4l}^2$. Different tones within each panel indicate different mass orderings (the signs of $\Delta m_{31}^2$ and $\Delta m_{4l}^2$). The minimization has been performed across all other oscillation parameters except for $\theta_{12}$ and $\Delta m_{21}^2$, which are fixed. \label{fig:Dm41BFP}} \end{center} \end{figure} We generalize this best-fit procedure by, instead of minimizing over all parameters (including $\Delta m_{4l}^2$), scanning over $\Delta m_{4l}^2$ values. We again allow for both positive and negative values of this new mass-squared difference and for both the normal and inverted mass orderings for the three mostly active states. Fig.~\ref{fig:Dm41BFP} presents the results of this approach. The top panels (blue lines) show the results for T2K, middle panels (purple) for NOvA, and bottom panels (green) for the combined analysis. In each row, the left (right) panel corresponds to negative (positive) values of $\Delta m_{4l}^2$. Dark (light) lines in each case correspond to the NO (IO) among the mostly-active neutrinos. Dashed lines in each panel indicate the best-fit $\chi^2$ under the three-neutrino hypothesis presented in Table~\ref{tab:BFPs3nu}. Stars indicate the overall best-fit point of each analysis (when considering all different mass orderings), and lines are made bold if they constitute the minimum $\chi^2$ for a given experimental analysis for all of these choices of mass orderings. The findings of Table~\ref{tab:BFPs4nuSmall} (and the corresponding tables in Appendix~\ref{app:DetailFit}) are borne out in Fig.~\ref{fig:Dm41BFP}, showing that the fits prefer $\left|\Delta m_{4l}^2\right| \sim 10^{-2}$ eV$^2$ in all cases, with moderate improvements relative to the three-neutrino fits. Above, we discussed the possibility that this preference has to do with the energy resolution and binning of the experiments and the statistical significance when interpreting confidence levels from $\Delta \chi^2$ may be overstated. If we restrict ourselves to $\left|\Delta m_{4l}^2\right| \lesssim 10^{-3}$ eV$^2$ to avoid this concern, we still find moderate preference for a fourth neutrino -- see Appendix~\ref{app:LowDm41} for further discussion. Moving on from best-fit determinations, we now construct constraints on the new parameters, specifically $\sin^2\theta_{24}$ and $\Delta m_{4l}^2$ (the ones to which these experiments have the greatest sensitivity). In order to present constraints at a particular confidence level and compare against other literature results, we assume for this exercise that Wilks' theorem holds~\cite{Wilks:1938dza}. After marginalizing over the remaining oscillation parameters (still fixing $|U_{e2}|^2$ and $\Delta m_{21}^2$), we present $2\sigma$ CL constraints from T2K (blue) and NOvA (purple) in Fig.~\ref{fig:T24Dm41}. In generating these constraints, we have marginalized over the signs of both $\Delta m_{31}^2$ and $\Delta m_{4l}^2$. Colored stars indicate the best-fit point in $(\sin^2\theta_{24}, \left|\Delta m_{4l}^2\right|)$ of the given fits. \begin{figure} \begin{center} \includegraphics[width=0.55\linewidth]{NOVA_T2K_T24_Dm41.pdf} \caption{Constraints on $\sin^2\theta_{24}$ vs. $\Delta m_{4l}^2$ at $2\sigma$ CL from T2K (blue) and NOvA (purple) after marginalizing over all other parameters (except for $|U_{e2}|^2$ and $\Delta m_{21}^2$, which are fixed and a prior from Daya Bay on $|U_{e3}|^2$ -- see text), including the signs of $\Delta m_{31}^2 $ and $\Delta m_{4l}^2$. The green region indicates the preferred region from a combined analysis at $1\sigma$ (dashed) and 90\% (solid) CL, and the grey, dashed line shows the 90\% CL constraint from MINOS/MINOS+~\cite{MINOS:2017cae}. All confidence levels presented here are derived assuming Wilks' theorem holds.\label{fig:T24Dm41}} \end{center} \end{figure} In Fig.~\ref{fig:T24Dm41} we also compare against the 90\% CL constraint from the MINOS/MINOS+ experiment~\cite{MINOS:2017cae} as a faint grey line.\footnote{This result assumed $\Delta m_{31}^2$ and $\Delta m_{41}^2$ to both be positive, however, due to the lack of mass-ordering sensitivity at MINOS, the result likely does not depend strongly on this choice.} Finally, we also present in green the preferred region at $1\sigma$/90\% CL\footnote{We choose 90\% CL for clarity (the $2\sigma$ CL region spans the entire range of $\left|\Delta m_{4l}^2\right|$ of the figure and a comparable region of $\sin^2\theta_{24}$) and for a direct comparison against the MINOS/MINOS+ result.} ($\Delta \chi^2 = 2.3,\ 4.61$ assuming Wilks' theorem for two parameters) by our combined T2K and NOvA analysis. This result is in tension with that of the MINOS/MINOS+ result, however, our preferred region has not been Feldman-Cousins corrected, and the results would likely agree if a higher confidence level were assumed. T2K has reported constraints in the $\sin^2\theta_{24}$ vs. $\Delta m_{41}^2$ parameter space in Ref.~\cite{T2K:2019efw} -- we find comparable results here despite the simplified assumptions we have made in our analysis and the slightly larger data set considered in this work. While Fig.~\ref{fig:T24Dm41} compares constraints and preferred regions in the parameter space $\sin^2\theta_{24}$ vs. $\left|\Delta m_{4l}^2\right|$, it is also important to consider the parameters that have been marginalized in this construction. For concreteness, we focus on the preferred region (green) from the combined T2K/NOvA analysis that we have performed. The best-fit point, at $\left|\Delta m_{4l}^2\right| = 8.5 \times 10^{-3}$ eV$^2$, corresponds to mixing angles \begin{equation}\label{eq:BFAngles} \left\lbrace \sin^2\theta_{14},\ \sin^2\theta_{24},\ \sin^2\theta_{34} \right\rbrace = \left\lbrace 4.3 \times 10^{-2},\ 6.0\times 10^{-2},\ 0.37\right\rbrace, \end{equation} or mixing-matrix elements \begin{equation}\label{eq:BFUSqs} \left\lbrace \left|U_{e4}\right|^2,\ \left|U_{\mu4}\right|^2,\ \left|U_{\tau4}\right|^2\right\rbrace = \left\lbrace 4.3 \times 10^{-2},\ 5.7\times 10^{-2},\ 0.33\right\rbrace. \end{equation} For these low values of $\left|\Delta m_{4l}^2\right|$, the strongest constraints on $\left|U_{e4}\right|^2$ come from reactor antineutrino oscillation experiments such as Daya Bay~\cite{DayaBay:2016qvc} and Bugey-3~\cite{Declais:1994su}. A combined analysis~\cite{MINOS:2020iqj} constrains $\sin^2\theta_{14} \lesssim 4 \times 10^{-3}$ at 90\% CL, in significant tension with the value found in Eq.~\eqref{eq:BFAngles}. Constraints on $\left|U_{\tau4}\right|^2$ are more difficult to extract, as they often arise in tandem with $\left|U_{\mu4}\right|^2$ and depend strongly on $\Delta m_{41}^2$~\cite{Dentler:2018sju}. While specific constraints in this region of $\left|\Delta m_{4l}^2\right|$ have not been explicitly derived, $\left|U_{\tau4}\right|^2 = 0.33$ is possibly in tension with existing results from neutrino experiments. T2K, which analyzed its neutral-current data in addition to the data sets considered here, has constrained $\left|U_{\tau4}\right|^2 \lesssim 0.5$ for both $\Delta m_{41}^2 = 3 \times 10^{-3}$ eV$^2$ and $0.1$ eV$^2$ at 90\% CL \cite{T2K:2019efw}. Atmospheric neutrino experiments, including Super-Kamiokande~\cite{Super-Kamiokande:2014ndf} and IceCube~\cite{IceCube:2017ivd} have constrained $\left|U_{\tau4}\right|^2 \lesssim 0.2$ at high confidence, however, these analyses are restricted to $\Delta m_{41}^2 \gtrsim 0.1$ eV$^2$ where the fourth-neutrino-driven oscillations are averaged out. A more thorough investigation of this $10^{-2}$ eV$^2$ regime would prove useful if this hint persists in future NOvA/T2K data. When discussing Fig.~\ref{fig:Dm41BFP}, we considered the possibility of analyzing only the region $\left|\Delta m_{4l}^2\right| \lesssim 10^{-3}$ eV$^2$, in part to avoid concerns regarding energy resolution and bin widths. We noted that in that region, a solution to the NOvA/T2K tension persists with a preference of $\Delta \chi^2 \approx 4.1$. This regime has the added benefit that constraints from MINOS/MINOS+ (as seen in Fig.~\ref{fig:T24Dm41}), Daya Bay/Bugey-3/others, and Super-Kamiokande/IceCube are considerably weaker. Such an \textit{extremely-light} sterile neutrino, as we discuss in Appendix~\ref{app:LowDm41}, with $\left|\Delta m_{4l}^2\right| \approx 7 \times 10^{-4}$ eV$^2$ should be paid particular attention as more data from T2K and NOvA are unveiled, especially if any tension between the two persists. T2K and NOvA will continue collecting data -- if a very light sterile neutrino does in fact exist with $\left|\Delta m_{4l}^2\right| \approx 10^{-2}$ eV$^2$, more data will continue to shed light and potentially lead to a discovery. In the next generation, the Deep Underground Neutrino Experiment (DUNE)~\cite{DUNE:2020ypp} and Hyper-Kamiokande (HK)~\cite{Hyper-Kamiokande:2018ofw} experiments will have sensitivity to light sterile neutrinos in the same region of $\left|\Delta m_{4l}\right|^2$ given that they operate in a similar $L/E_\nu$ as NOvA and T2K. The two experiments, and any combined analysis, will have excellent sensitivity to test this solution to the T2K/NOvA tension~\cite{Berryman:2015nua,Kelly:2017kch}. \section{Concluding Remarks} \label{sec:conclusion} \setcounter{equation}{0} As more data from neutrino oscillation experiments are collected, we are able to test the standard-three-massive-neutrinos paradigm with better precision. Concurrently, there is always the possibility that disagreements arise, especially when data from multiple experiments are analyzed. In these instances, exploring different explanations of such tensions is invaluable, whether they are related to statistical fluctuations, deeper systematic issues, or new physics beyond the standard-three-massive-neutrinos paradigm. Such a tension has been noted when comparing the latest data from the Tokai to Kamioka (T2K) and NuMI Off-axis $\nu_e$ Appearance (NOvA) experiments. These measure the (dis)appearance of $\nu_e$ ($\nu_\mu$) in a $\nu_\mu$ beam at relatively long baselines. When analyzed under the three-neutrino hypothesis, their results disagree at around the 90\% confidence level. Previous studies of combination T2K and NOvA data have highlighted that this tension is reduced when, for instance, the inverted neutrino mass ordering is considered instead of the normal ordering~\cite{Kelly:2020fkv,Esteban:2020cvm,deSalas:2020pgw,Capozzi:2021fjo}, or when additional, beyond-the-Standard-Model neutrino/matter interactions are included in the analyses~\cite{Denton:2020uda,Chatterjee:2020kkm}. We have demonstrated here that an alternative approach can remedy this tension -- the addition of a fourth, very light, sterile neutrino. This very light new neutrino would be associated to a mass-squared difference, relative to the lightest mostly-active neutrino, of order $10^{-2}$ eV$^2$. We have studied the four-neutrino hypothesis when applied to the T2K and NOvA data independently, as well as their combination. For the combined data, we find that the four-neutrino hypothesis is preferred over the three-neutrino one at the level of $\Delta \chi^2 \approx 9$. When interpreting this in terms of statistical significance, two difficulties arise: first, the number of additional parameters in the four-neutrino hypothesis relative to the three-neutrino one (six additional parameters). Second, the oscillations associated with a new mass-squared difference on the order of $10^{-2}$ eV$^2$ are significant within individual bins in these long-baseline experiments, which leads to an artificial preference for sterile neutrinos due to statistical fluctuations. Due to the second challenge, in order to avoid relatively fast oscillations, we also explored an alternative extremely-light sterile neutrino analysis where the fourth neutrino is fixed to be associated to a mass-squared difference smaller (in magnitude) than $10^{-3}$ eV$^2$. In this context, we find moderate improvement relative to the three-neutrino hypothesis, at the level of $\Delta \chi^2 \approx 4$. While this is less significant, it is comparable to the improvement offered by non-standard neutrino interactions and merits further investigation. NOvA and T2K are still collecting and analyzing data. As they progress, the experiments and combined analyses thereof will allow for deeper testing of these different, interesting regimes of four-neutrino oscillations with a very light or extremely light fourth neutrino. If they confirm the existence of such a new, light fermion state, then future experiments (including the spiritual successors DUNE and Hyper-Kamiokande) will be able to probe the new particle's properties with even greater precision. \section*{Acknowledgements} This work was supported in part by the US Department of Energy (DOE) grant \#de-sc0010143 and in part by the National Science Foundation under Grant Nos.~PHY-1630782 and PHY-1748958. The document was prepared using the resources of the Fermi National Accelerator Laboratory (Fermilab), a DOE, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359.
\section{Details on the Models} \label{app:models} \subsection{The CKM Model and Parameters} \label{app:models:CKM} When evaluating low-energy observables within the \ac{SM}\xspace, the \ac{CKM}\xspace matrix elements are evaluated using the Wolfenstein parametrization~\cite{Wolfenstein:1983yz} expanded to order $\lambda^8$~\cite{Charles:2004jd}. The Wolfenstein parameters $\lambda$ and $A$ are used without modifications. The $\rho$ and $\eta$ parameters are traded for $\bar{\rho}$ and $\bar{\eta}$~\cite{Charles:2004jd}, the coordinates of the apex of the standard unitarity triangle. The two parameters are defined to all orders in $\lambda$ as~\cite{Charles:2004jd} \begin{equation} \begin{aligned} \bar\rho & = -\Re \frac{V_{ud}^{\phantom{*}}\,V_{ub}^*}{V_{cd}^{\phantom{*}}\,V_{cb}^*}\,, & \bar\eta & = -\Im \frac{V_{ud}^{\phantom{*}}\,V_{ub}^*}{V_{cd}^{\phantom{*}}\,V_{cb}^*}\,. \end{aligned} \end{equation} These four parameter can be accessed \textit{via} \code{CKM::lambda}, \code{CKM::A}, \code{CKM::rhobar}, and \code{CKM::etabar}, respectively.\\ A frequent physics use case involves inferring the absolute value or complex argument of a \ac{CKM}\xspace matrix element from data. Choosing the \ac{CKM}\xspace model using \code{'model': 'CKM'} ensures that each complex-valued \ac{CKM}\xspace matrix element is parametrized in terms of its absolute value and complex argument. For example, the parametrization of the \ac{CKM}\xspace matrix element $V_{ub}$ involves the parameters \code{CKM::abs(V_ub)} and \code{CKM::arg(V_ub)}. The names for the remaining \ac{CKM}\xspace parameter follow the same naming scheme. \subsection{The WET Model, Operator Bases, and Parameters} \label{app:models:WET} The observables for low-energy processes below the electroweak scale rely on a description in the \ac{WET}\xspace, both within the \ac{SM}\xspace~\cite{Buchalla:1995vs,Buras:1998raa,Buras:2011we} and in \ac{BSM}\xspace scenarios \cite{Aebischer:2017gaw,Jenkins:2017jig}. Observables can be evaluated within the \ac{WET}\xspace by setting the \code{model} option to \code{WET}. Within this model, \ac{WET}\xspace Wilson coefficients are parametrized by individual \texttt{EOS}\xspace parameters, and \ac{CKM}\xspace matrix elements are treated as in the \code{CKM} model; see \refapp{models:CKM}. Within \texttt{EOS}\xspace, the \ac{WET}\xspace is parametrized as \begin{equation} \mathcal{L}^{\textrm{WET}} = \sum_{\mathcal{S}} \mathcal{L}^{\mathcal{S}}\,, \end{equation} where $\mathcal{S}$ denotes a \emph{sector} of the \ac{WET}\xspace, \emph{i.e.}, a set of operators with definite quantum numbers under global symmetries preserved by the renormalization group evolution~\cite{Aebischer:2017ugx}. For each sector, \texttt{EOS}\xspace follows the \texttt{WCxf}\xspace convention~\cite{Aebischer:2017ugx}: \begin{equation} \mathcal{L}^{\mathcal{S}} \equiv \sum_{\mathcal{O}_i^{\mathcal{S}} \neq \mathcal{O}_i^{\mathcal{S},\dagger}} \left[\mathcal{C}^\mathcal{S}_i \, \mathcal{O}^{\mathcal{S}}_i + \text{h.c.}\right] + \sum_{\mathcal{O}_i^{\mathcal{S}} = \mathcal{O}_i^{\mathcal{S},\dagger}} \mathcal{C}^\mathcal{S}_i \, \mathcal{O}^{\mathcal{S}}_i\,. \end{equation} Here $\mathcal{O}$ denotes the dimension-six operators, and $\mathcal{C}$ is a dimensionless Wilson coefficient renormalized at an appropriate low-energy scale $\mu$. This scale is accessible as an \class{eos.Parameter}. The prefix part of its qualified name corresponds to the sector and the name part of its qualified name is \texttt{mu}, i.e., for the sector \texttt{sbsb} this parameter is named \texttt{sbsb::mu}.\\ As of version 1.0, \texttt{EOS}\xspace supports the following sectors: \begin{itemize} \item \texttt{sb} \item \texttt{sbee}, \texttt{sbmumu}, \texttt{sbtautau} \item \texttt{sbnunu} \item \texttt{sbsb} \item \texttt{cbenue}, \texttt{cbmunumu}, \texttt{cbtaunutau} \item \texttt{ubenue}, \texttt{ubmunumu}, \texttt{ubtaunutau} \end{itemize} Changes to the parameters representing the Wilson coefficients only affect observables that are constructed with the \code{model} option set to \code{'WET'}. By convention the Wilson coefficients comprise both their \ac{SM}\xspace value and potential \ac{BSM}\xspace shifts, i.e.: \begin{align} \mathcal{C}_i(\mu) & = \mathcal{C}_i^\text{SM}(\mu) + \mathcal{C}_i^\text{BSM}(\mu)\,. \end{align} A complete list of the sectors and their operators supported by \texttt{EOS}\xspace is part of the \texttt{WCxf}\xspace basis file \cite{wcxf:EOS-basis}. The parameters describing the Wilson coefficients are listed as part of the \texttt{EOS}\xspace documentation~\cite[List of Parameters]{EOS:doc}. Using \texttt{WCxf}\xspace and the \texttt{wilson}\xspace package~\cite{Aebischer:2018bkb}, constraints on the \ac{WET}\xspace can be readily interpreted as constraints at a different scale in the \ac{WET}\xspace or as constraints on Wilson coefficients in the Standard Model Effective Field Theory~\cite{Buchmuller:1985jz,Grzadkowski:2010es}. The \ac{SM}\xspace values of the \ac{WET}\xspace Wilson Coefficients up to mass dimension six are known to high precision in the \ac{SM}\xspace. By default, \texttt{EOS}\xspace evaluates observables with the \code{model} option set to \code{'SM'}. This choice of option leads \texttt{EOS}\xspace to compute the WET Wilson Coefficients at the electroweak scale $\mu_0$ and evolve them to the appropriate low-energy scale $\mu$~\footnote{% The choice of $\mu_0$ should, in general, not be changed, and for some sectors there is more than one high scale involved. }. \begin{itemize} \item For the sectors \code{sb}, \code{sbee}, \code{sbmumu}, and \code{sbtautau}, the SM values are computed to NNLO in QCD~\cite{Adel:1993ah,Greub:1997hf,Bobeth:1999mk}. The RG evolution to the low-energy scale $\sim m_b$ crucially requires the resummation of radiative QCD and partially also QED corrections~\cite{Chetyrkin:1996vx,Bobeth:2003at,Gorbahn:2004my,Gorbahn:2005sa,Huber:2005ig}. \item For the sector \code{sbnunu}, the SM values are computed to NLO in QCD~\cite{Misiak:1999yg,Buchalla:1998ba}. \item For the sectors \code{cbenue} through \code{ubtaunutau}, the SM values are computed to next-to-leading order in QED~\cite{Sirlin:1981ie}. \item For the sector \code{sbsb}, the SM values are computed to NLO in QCD~\cite{Buras:1990fn} and NLO in EW~\cite{Gambino:1998rt}. The RG evolution to the low scale $\sim m_b$ crucially requires the resummation of radiative QCD corrections~\cite{Buras:2000if}. \end{itemize} \section{Collection of Examples} \label{app:PlotExamples} Here we collect a number of code examples that are used in the main text to produce a variety of plots. They have been moved to this appendix to ease legibility of the main text. \begin{lstlisting}[% language=iPython,% caption={% Histogram samples of the 1D-marginal posterior for $|V_{cb}|$ and plot their kernel density estimate. This code is used to produce \refout{inference:posterior-sample-hist+kde} (left). \label{lst:plot-ex:inference:posterior-sample-hist} } ] plot_args = { 'plot': { 'x': { 'label': r'$|V_{cb}|$', 'range': [38e-3, 47e-3] }, 'legend': { 'location': 'upper left' } }, 'contents': [ { 'type': 'histogram', 'data': { 'samples': parameter_samples[:, 0] } }, { 'type': 'kde', 'color': 'C0', 'label': 'posterior', 'bandwidth': 2, 'range': [38e-3, 47e-3], 'data': { 'samples': parameter_samples[:, 0] } } ] } eos.plot.Plotter(plot_args).plot() \end{lstlisting} \begin{lstlisting}[% language=iPython,% caption={% Plot contours of the joint 2D-marginal posterior of the parameters $|V_{cb}|$ and $f_+(0)$ at $68\%$ and $95\%$ probability using a kernel density estimate. This code is used to produce \refout{inference:posterior-sample-hist+kde} (right). \label{lst:plot-ex:inference:posterior-sample-kde} } ] plot_args = { 'plot': { 'x': { 'label': r'$|V_{cb}|$', 'range': [38e-3, 47e-3] }, 'y': { 'label': r'$f_+(0)$', 'range': [0.6, 0.75] }, }, 'contents': [ { 'type': 'kde2D', 'color': 'C1', 'label': 'posterior', 'levels': [68, 95], 'contours': ['lines', 'areas'], 'bandwidth': 3, 'data': { 'samples': parameter_samples[:, (0,1)] } } ] } eos.plot.Plotter(plot_args).plot() \end{lstlisting} \begin{lstlisting}[% language=iPython,% caption={% Plot the analytical 1D PDFs and 1D-marginal histograms of pseudo events for the decays $\bar{B}\to D\lbrace \mu^-,\tau^-\rbrace \bar{\nu}$. The pseudo events for the semimuonic decay are obtained from \reflst{simulation:sample-1D}. The result is shown in the left plot of \refout{simulation:plot+histogram}. \label{lst:plot-ex:simulation:plot+histogram-1D} } ] plot_args = { 'plot': { 'x': {'label': r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 11.60]}, 'y': {'label': r'$P(q^2)$', 'range': [0.0, 0.30]}, 'legend': {'location': 'upper left'} }, 'contents': [ { 'label': r'samples ($\ell=\mu$)', 'type': 'histogram', 'data': {'samples': mu_samples}, 'color': 'C0' }, { 'label': r'samples ($\ell=\tau$)', 'type': 'histogram', 'data': {'samples': tau_samples}, 'color': 'C1' }, { 'label': r'PDF ($\ell=\mu$)', 'type': 'signal-pdf', 'pdf': 'B->Dlnu::dGamma/dq2;l=mu', 'kinematic': 'q2', 'range': [0.02, 11.60], 'kinematics': {'q2_min': 0.02, 'q2_max': 11.60}, 'color': 'C0' }, { 'label': r'PDF ($\ell=\tau$)', 'type': 'signal-pdf', 'pdf': 'B->Dlnu::dGamma/dq2;l=tau', 'kinematic': 'q2', 'range': [3.17, 11.60], 'kinematics': {'q2_min': 3.17, 'q2_max': 11.60}, 'color': 'C1' }, ] } eos.plot.Plotter(plot_args).plot() \end{lstlisting} \section{Constraints Data Format} \label{app:constraints-format} Constraints are stored as \texttt{YAML}\xspace~\cite{YAML} files within the \texttt{EOS}\xspace source repository in the directory \texttt{eos/constraints/}. Each constraint file is an associative array, with the top-level keys corresponding to the constraint's qualified name, and the value describing the constraint data. The constraint data itself is also an associative array. The \texttt{type} key determines the type of the likelihood, and therefore which other keys must be present. \texttt{EOS}\xspace supports the following types of likelihood: \begin{description} \item[\hlred{\texttt{Gaussian}}] The likelihood is a univariate Gaussian density. It requires the following keys: \smallskip \begin{description} \item[\hlgreen{\texttt{observable}}] The name of the observable that appears in this likelihood, as an \class{eos.QualifiedName}. \smallskip \item[\hlgreen{\texttt{kinematics}}] The kinematic variables and their values that underlay the likelihood's observable, as an associative array. \smallskip \item[\hlgreen{\texttt{options}}] The option keys and values that underlay the likelihood's observable, as an associative array. \smallskip \item[\hlgreen{\texttt{mean}}] The mean of the likelihood, as a floating point value. \smallskip \item[\hlgreen{\texttt{sigma-stat}}] The statistical uncertainty of the likelihood, as an associative array with keys \texttt{hi} and \texttt{lo}. For a completely symmetric uncertainty, set both keys to the same value. \smallskip \item[\hlgreen{\texttt{sigma-sys}}] The systematic uncertainty of the likelihood, as an associative array with keys \texttt{hi} and \texttt{lo}. For a completely symmetric uncertainty, set both keys to the same value. \smallskip \item[\hlgreen{\texttt{dof}}] The degrees of freedom, as a floating point value. Must be set to \texttt{1} to be backward compatible. \end{description} % \medskip % \item[\hlred{\texttt{MultivariateGaussian(Covariance)}}] The likelihood is a multivariate Gaussian density, and correlations and total uncertainties are specified through the covariance matrix. It requires the following keys: \smallskip \begin{description} \item[\hlgreen{\texttt{dim}}] The dimension of the covariance matrix, as an integer. Denoted below as $D$. \smallskip \item[\hlgreen{\texttt{dof}}] The degrees of freedom, as an integer. \smallskip \item[\hlgreen{\texttt{observables}}] The names of the observables that appear in this likelihood, as an ordered list of \class{eos.QualifiedName} of length $P$. Denoted below as $\vec{o}$. \smallskip \item[\hlgreen{\texttt{kinematics}}] The kinematic configuration for each of the observables, as an ordered list of length $P$ of associative arrays. \smallskip \item[\hlgreen{\texttt{options}}] The options for each of the observables, as an ordered list of length $P$ of associative arrays. \smallskip \item[\hlgreen{\texttt{means}}] The mean values of the likelihood, as an ordered list of floating point values. Denoted below as $\mu$. \smallskip \item[\hlgreen{\texttt{covariance}}] The $D\times D$-dimensional covariance matrix of the likelihood, as an ordered list of ordered lists of floating point values (row-first ordering). Denoted below as $\Sigma$. \smallskip \item[\hlgreen{\texttt{response}}] The optional $D\times P$-dimensional response matrix that converts a $P$ dimensional theory prediction into a $D$ dimension measurement. If not specified, \texttt{EOS}\xspace assumes that $P=D$ and that the response matrix is the identity matrix. Specified as an ordered list of ordered lists of floating point values (row-first ordering). The response matrix is used to fold the theory predictions. This enables fits involving experimental results that have not or cannot be unfolded. Denoted below as $R$. \smallskip \end{description} The logarithm of the likelihood $L$ reads \begin{equation} -2 \ln L = -2 \ln \mathcal{N}_D(R \vec{o}\,|\,\vec{\mu}, \Sigma) \sim \left(\vec\mu - R \vec{o}\right)^T \Sigma^{-1} \left(\vec\mu - R \vec{o}\right)\,. \end{equation} In the above, $\mathcal{N}_D(\cdot\,|\,\vec{\mu},\Sigma)$ denotes a $D$-variate Gaussian \ac{PDF}\xspace centered at $\mu$ with covariance $\Sigma$. % \medskip % \item[\hlred{\texttt{Mixture}}] The likelihood is a mixture density, with all mixture components being multivariate Gaussian densities. Their correlations and total uncertainties are specified through their respective covariance matrices. It requires the following keys: \smallskip \begin{description} \item[\hlgreen{\texttt{dim}}] The dimension of each covariance matrix, as an integer. Denoted below as $D$. \smallskip \item[\hlgreen{\texttt{observables}}] The names of the observables that appear in this likelihood, as an ordered list of \class{eos.QualifiedName} of length $D$. Denoted below as $\vec{o}$. \smallskip \item[\hlgreen{\texttt{kinematics}}] The kinematic configuration for each of the observables, as an ordered list of length $D$ of associative arrays. \smallskip \item[\hlgreen{\texttt{options}}] The options for each of the observables, as an ordered list of length $D$ of associative arrays. \smallskip \item[\hlgreen{\texttt{components}}] The description of the mixture components as a list of length $N$. Each list element is an associative array that requires the following keys: \begin{description} \item[\hlorange{means}] The mean values of this component, as a list of floats of length $D$. Denoted below as $\vec{\mu}_n$. \smallskip \item[\hlorange{covariance}] The covariance of this component, as a list of lists of floats. Denoted below as $\Sigma_n$. \end{description} \smallskip \item[\hlgreen{\texttt{weights}}] The weights of the mixture components as a list of length $N$. Denoted below as $\alpha_n$. \end{description} The likelihood $L$ reads \begin{equation} L = \sum_{n=1}^N \alpha_n\, \mathcal{N}_D(\vec{o}\,|\,\vec{\mu}_n, \Sigma_n) \qquad \text{with} \qquad \sum_{n=1}^N \alpha_n = 1 \,. \end{equation} \end{description} \begin{lstlisting}[% language=yaml,% caption={ Example of a multivariate Gaussian constraint as recorded in the \texttt{EOS}\xspace source code repository, representing binned measurements of the $\bar{B}^0\to D^+e^-\bar\nu$ branching ratio by the Belle experiment~\cite{Belle:2015pkj}. \label{lst:constraint:Belle2015A} } ] B^0->D^+e^-nu::BRs@Belle:2015A: type: MultivariateGaussian(Covariance) dim: 10 observables: [ B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR ] options: [ { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d } ] kinematics: - { q2_min: 10.44, q2_max: 11.63 } - { q2_min: 9.26, q2_max: 10.44 } - { q2_min: 8.07, q2_max: 9.26 } - { q2_min: 6.89, q2_max: 8.07 } - { q2_min: 5.71, q2_max: 6.89 } - { q2_min: 4.52, q2_max: 5.71 } - { q2_min: 3.34, q2_max: 4.52 } - { q2_min: 2.15, q2_max: 3.34 } - { q2_min: 0.97, q2_max: 2.15 } - { q2_min: 0.01, q2_max: 0.97 } means: [ 4.154e-05, 6.106e-04, 1.255e-03, 1.635e-03, 1.901e-03, 2.758e-03, 3.524e-03, 4.216e-03, 4.371e-03, 4.050e-03 ] covariance: - [1.912e-09, 1.726e-10, 3.427e-10, 4.424e-10, 5.041e-10, 7.320e-10, 9.624e-10, 1.096e-09, 1.049e-09, 8.840e-10] - [1.726e-10, 1.478e-08, 1.811e-09, 2.383e-09, 2.738e-09, 3.977e-09, 5.166e-09, 5.959e-09, 5.903e-09, 5.209e-09] - [3.427e-10, 1.811e-09, 2.863e-08, 4.849e-09, 5.579e-09, 8.101e-09, 1.051e-08, 1.215e-08, 1.214e-08, 1.075e-08] - [4.424e-10, 2.383e-09, 4.849e-09, 3.786e-08, 7.398e-09, 1.071e-08, 1.387e-08, 1.602e-08, 1.603e-08, 1.425e-08] - [5.041e-10, 2.738e-09, 5.579e-09, 7.398e-09, 4.355e-08, 1.241e-08, 1.606e-08, 1.860e-08, 1.873e-08, 1.678e-08] - [7.320e-10, 3.977e-09, 8.101e-09, 1.071e-08, 1.241e-08, 6.176e-08, 2.334e-08, 2.709e-08, 2.731e-08, 2.440e-08] - [9.624e-10, 5.166e-09, 1.051e-08, 1.387e-08, 1.606e-08, 2.334e-08, 8.585e-08, 3.533e-08, 3.555e-08, 3.156e-08] - [1.096e-09, 5.959e-09, 1.215e-08, 1.602e-08, 1.860e-08, 2.709e-08, 3.533e-08, 1.022e-07, 4.194e-08, 3.744e-08] - [1.049e-09, 5.903e-09, 1.214e-08, 1.603e-08, 1.873e-08, 2.731e-08, 3.555e-08, 4.194e-08, 1.005e-07, 3.887e-08] - [8.840e-10, 5.209e-09, 1.075e-08, 1.425e-08, 1.678e-08, 2.440e-08, 3.156e-08, 3.744e-08, 3.887e-08, 8.132e-08] dof: 10 \end{lstlisting} \section{Basic Classes and Concepts} \label{sec:basics} \texttt{EOS}\xspace provides a number of \texttt{Python}\xspace classes that make it possible to fulfill the physics use cases discussed in \refsec{usage}. Three of the most relevant classes are used as follows: \begin{itemize} \item hadronic and \ac{BSM}\xspace parameters are represented by objects of the \class{eos.Parameter} class; \smallskip \item physical observables and pseudo-observables (such as hadronic form factors) are represented by objects of the \class{eos.Observable} class; \smallskip \item likelihood functions, stemming from either experimental measurements or theoretical calculations, are represented by objects of the \class{eos.Constraint} class. \end{itemize} To facilitate their handling, \texttt{EOS}\xspace has databases for all known objects of these classes. The user can interactively inspect these databases within a \texttt{Jupyter}\xspace notebook in the following way: \begin{lstlisting}[language=iPython] display(eos.Parameters()) # only run one line at a time, since the output is lengthy display(eos.Observables()) display(eos.Constraints()) \end{lstlisting} \texttt{EOS}\xspace provides a rich display for most classes, including the above, which is not shown here for brevity.\\ All three databases can be searched by name of the target object. \texttt{EOS}\xspace uses the same naming scheme for all three databases, which is enforced through use of the \class{eos.QualifiedName} class. The naming scheme is \begin{center} \texttt{\hlred{PREFIX}::\hlred{NAME}[@\hlred{SUFFIX}][;\hlred{OPTIONLIST}]} \end{center} where parts shown in square brackets are optional. The individual parts have the following meaning: \begin{description} \item[\texttt{\hlred{PREFIX}}] The prefix part is used to separate objects with (otherwise) identical names into different namespaces, to avoid conflicts. Examples of prefixes include parameter categories (e.g., \code{mass} or \code{decay-constant}), physical processes (e.g., \code{B->Kll}), or sectors of the \ac{WET}\xspace (e.g., \code{sbsb}). \smallskip \item[\texttt{\hlred{NAME}}] The name part is used to identify objects within its \code{PREFIX} namespace. Examples include observable names (e.g., \code{BR} for a branching ratio) or names of \ac{WET}\xspace Wilson coefficients (e.g., \code{cVL} for a coefficient of a left-handed vector operator). \smallskip \item[\texttt{\hlred{SUFFIX}}] The (optional) suffix part is used to distinguish between objects of otherwise identical names based on context. One example is the parameter describing $\Lambda_b$ baryon polarization, which takes different values based on the experimental environment. Generally, $\Lambda_b$ polarization would be represented by \code{Lambda_b::polarization}. The use of \code{@LHCb} and \code{@unpolarized} as a suffix distinguishes between the average polarization encountered within the LHCb experiment and an unpolarized setting (e.g. when using the whole phase space of the ATLAS and CMS experiments). \smallskip \item[\texttt{\hlred{OPTIONLIST}}] The option list is an optional comma-separated list of key/value pairs, which allows to modify the named object in an unambigious way. One example is \code{model=SM,l=mu,q=s}, which instructs an observable to use the Standard Model, $\mu$ lepton flavor, and strange-flavored spectator quarks. Details on possible options are discussed in \refsec{basics:observables}. \end{description} In the remainder of this section we discuss how to use the six representation classes and their corresponding database classes \begin{itemize} \item \class{eos.Parameter} within \class{eos.Parameters}, \item \class{eos.KinematicVariable} within \class{eos.Kinematics}, \item \class{eos.Option} within \class{eos.Options}, \item \class{eos.Observable} within \class{eos.Observables}, \item \class{eos.Constraint} within \class{eos.Constraints}, and \item \class{eos.SignalPDF} within \class{eos.SignalPDFs}, \end{itemize} and the utility classes \class{eos.Analysis} and \class{eos.Plotter}. The relationship between the first four sets of classes are illustrated in \reffig{basics:class-diagram}. We provide a few examples here. However, for more exhaustive and interactive examples we refer to the notebook named \href{https://github.com/eos/eos/tree/v1.0/examples/basics.ipynb}{basics.ipynb}, which is part of the collection of \texttt{EOS}\xspace example notebooks~\cite{EOS:examples}. \begin{figure}[t] \centering \includegraphics[width=.6\textwidth,trim=0 100 0 50,clip]{figures/ClassDiagram.pdf} \caption{% Visual representation of the basic \texttt{EOS}\xspace classes and their relationships. \label{fig:basics:class-diagram} } \end{figure} \subsection[Classes eos.Parameters and eos.Parameter] {Classes \class{eos.Parameters} and \class{eos.Parameter}} \label{sec:basics:parameters} \texttt{EOS}\xspace makes extensive use of the \class{eos.Parameter} class, which provides access to a single real-valued scalar parameter. Any such \class{eos.Parameter} object is part of a large set of built-in parameters. Users cannot directly create new objects of the \class{eos.Parameter} class. However, new named sets of parameters can be created from which the parameter of interest can be extracted, inspected, and altered.\\ We begin by creating and displaying a new set of parameters: \begin{lstlisting}[language=iPython] parameters = eos.Parameters() display(parameters) \end{lstlisting} The new variable \object{parameters} now contains a representation of all parameters known to \texttt{EOS}\xspace. The \texttt{Jupyter}\xspace \command{display} command has been augmented to provide a sectioned list of the known parameters, which is rather lengthy and not shown here. It is equivalent to the section ``List of Parameters`` in the \texttt{EOS}\xspace documentation~\cite{EOS:doc}. The display provides the user with an overview of all parameter names, their canonical physical notation, and their value and unit. A single parameter, here the muon mass as an example, can be isolated: \begin{lstlisting}[language=iPython] parameters['mass::mu'] \end{lstlisting} Again, the user is provided with an overview of the parameter, including its qualified name, unit, default value, and current value. The value of an \class{eos.Parameter} object can be altered with the \method{eos.Parameter}{set} method: \begin{lstlisting}[language=iPython] m_mu = parameters['mass::mu'] display(m_mu) # shows a value of 0.10566 m_mu.set(1.779) # we just made the muon as heavy as the tauon! display(m_mu) # shows a value of 1.779 \end{lstlisting} In this example, the muon mass parameter within \object{parameters} has been set to the measurd value for the tauon mass, and the \object{m\_mu} object, which represents this parameter, has transparently changed its value. Put differently: any \class{eos.Parameter} object ``remembers`` the set of parameters (i.e., the \class{eos.Parameters} object) that it belongs to and forwards all changes to that set. To obtain an independent set of parameters, the user can use \begin{lstlisting}[language=iPython] parameters2 = eos:Parameters() parameters2['mass::mu'] display(parameters2 == parameters) # prints 'False', since the two sets are not identical! \end{lstlisting} A parameter's properties can be readily accessed through the methods \method{eos.Parameter}{name}, \method{eos.Parameter}{latex}, and \method{eos.Parameter}{evaluate} \begin{lstlisting}[language=iPython] display(m_mu.name()) # shows 'mass::mu' display(m_mu.latex()) # shows 'm_\mu' display(m_mu.evaluate()) # shows 1.779, since we changed it above. \end{lstlisting} A parameter object can be used like any other \texttt{Python}\xspace object, e.g., as an element of a \pyclass{list}, a \pyclass{dict}, or a \pyclass{tuple}: \begin{lstlisting}[language=iPython] lepton_masses = [parameters['mass::' + l] for l in ['e', 'mu', 'tau']] [display(p) for p in lepton_masses] translation = {p.name(): p.latex() for p in lepton_masses} display(translation) \end{lstlisting} These properties allow to bind a function (e.g., the functional expression of an observable or a likelihood function) to an arbitrary number of parameters, let the function evaluate these parameters in a computationally efficient way, and let the user change these parameters at a whim. Parameter sets are meant to be \emph{shared}, i.e., a single set of parameters is meant to be used by any number of functions. The sharing of parameters across observables makes it possible for \texttt{EOS}\xspace to consistently and efficiently evaluate a large number of functions. The default set of parameters is stored in \texttt{YAML}\xspace files that are installed together with the binary \texttt{EOS}\xspace library and the \texttt{Python}\xspace modules and scripts. The default parameter set can be replaced. To do this, the user must set the environment variable \code{EOS_HOME} to point to an accessible directory. The \texttt{YAML}\xspace files found within \code{EOS_HOME/parameters} will be used \emph{instead} of the default set of parameter contained in the \texttt{EOS}\xspace package. The class \class{eos.Parameters} facilitate creating such files, through the \method{eos.Parameters}{dump} method, which writes the current set of parameters to a \texttt{YAML}\xspace file. Alternatively, to use mostly the default parameter set, but override a subset of parameters in persistent way, the user can use the \method{eos.Parameters}{override\_from\_file} method to load only a subset of parameters from a given file. \subsection[Classes eos.Kinematics and eos.KinematicVariable] {Classes \class{eos.Kinematics} and \class{eos.KinematicVariable}} \label{sec:basics:kinematics} \texttt{EOS}\xspace uses the \class{eos.Kinematics} class to store a set of real-valued scalar kinematic variables by name. Contrary to the class \class{eos.Parameters}, there are neither default variables nor default values. Instead, \class{eos.Kinematics} objects are empty by default. Moreover, their variables are only defined within the scope of a single \class{eos.Observable} object: two observables that do not share an \class{eos.Kinematics} object can use identically-named, independent kinematic variables. Therefore, the names of kinematic variables do not require any prefix, and are simply (short) strings.\\ An empty set of kinematic variables can be created by \begin{lstlisting}[language=iPython] kinematics = eos.Kinematics() \end{lstlisting} A new kinematic variable can be declared with the existing (empty) set by providing a key/value pair to the \object{kinematics} object, e.g.\index{eos.Kinematics!declare} \begin{lstlisting}[language=iPython] k1 = kinematics.declare('q2', 1.0) # 1 GeV^2 k2 = kinematics.declare('E_pi', 0.139) # 139 MeV, a pion at rest! k3 = kinematics.declare('cos(theta_pi)', -1.0) # negative values are OK! \end{lstlisting} In this example, we have also captured the newly created kinematic variables as objects \object{k1}, \object{k2}, and \object{k3} of class \class{eos.KinematicVariable} for latter use. \texttt{EOS}\xspace uses the following guidelines for names and units of kinematic variables: \begin{itemize} \item using \code{'q2'}, \code{'p2'}, and so on for the squares of four momenta $q^\mu$, $p^\mu$; \smallskip \item using \code{'E\_pi'}, \code{'E\_gamma'}, and so on for the energies of states $\pi$, $\gamma$ in the rest frame of a decaying particle; \smallskip \item using \code{'cos(theta\_pi)'} and similar for the cosine of a helicity angle $\theta_\pi$; \smallskip \item using natural units, i.e., expressing all momenta and energies as powers of $\ensuremath{\mathrm{GeV}}$. \end{itemize} The new \class{eos.KinematicVariable} objects are now collected within the \object{kinematics} object. They can be collectively inspected using \begin{lstlisting}[language=iPython] display(kinematics) \end{lstlisting} In addition, the individual objects \object{k1}, \object{k2}, etc.~can also be inspected \begin{lstlisting}[language=iPython] display(k1) display(k2) \end{lstlisting} To directly obtain an \code{eos.Kinematics} object pre-populated with the variables one needs, a \texttt{Python}\xspace \pyclass{dict} can be provided to the constructor:\index{eos.Kinematics} \begin{lstlisting}[language=iPython] kinematics = eos.Kinematics({ 'q2': 1.0, 'E_pi': 0.139, 'cos(theta_pi)': -1.0 }) \end{lstlisting} To extract a previously declared kinematic variable from the \object{kinematics} object, the \class{eos.Kinematics} provides access via the subscript operator \code{[...]} \begin{lstlisting}[language=iPython] k1 = kinematics['q2'] k1.set(16.0) \end{lstlisting} In the above, the \method{eos.KinematicVariable}{set} method is used to change the value of \object{k1}.\\ Kinematic variables and their naming usually pertain to only one observable, which will be discussed below in \refsec{basics:observables}. Therefore, when creating observables, the user should create only a single independent set of kinematic variables per observable. Nevertheless, it is possible to create observables that have a common set of kinematic variables. This makes it possible to investigate correlations among observables that share a kinematic variable (e.g., LFU ratios such as $R_K$ as a function of the lower dilepton momentum cut-off). \subsection[Class eos.Options] {Class \class{eos.Options}} \label{sec:basics:options} \texttt{EOS}\xspace uses objects of the \code{eos.Options} class to modify the behavior of observables at runtime. A new and empty set of options is created as follows\index{eos.Options} \begin{lstlisting}[language=iPython] options = eos.Options() \end{lstlisting} This object is usually populated with individual options, which are key/value pairs of \pyclass{str} objects. Typical keys and their respective values include: \begin{description} \item[\hlred{\texttt{model}}] is used to change the behavior of the low-energy observables. As of \texttt{EOS}\xspace version 1.0\xspace, it can take the values \code{SM}, \code{CKM}, \code{WET}. \smallskip When choosing \code{SM}, the observables are computed within the \ac{SM}\xspace, and the values of the \ac{WET}\xspace Wilson coefficients are computed from \ac{SM}\xspace parameters. \ac{CKM}\xspace matrix elements are computed within the Wolfenstein parametrization. Details, such as the relevant parameter names, are discussed in \refapp{models:CKM}. \smallskip When choosing \code{CKM}, the observables are computed with \ac{SM}\xspace values for the \ac{WET}\xspace Wilson coefficients. However, the \ac{CKM}\xspace matrix elements are not computed from the Wolfenstein parameters. Instead, each \ac{CKM}\xspace matrix elements is parametrized in terms of two parameters for its absolute value and complex argument. This choice makes fitting \ac{CKM}\xspace matrix elements possible. Details, such as the relevant parameter names, are discussed in \refapp{models:CKM}. \smallskip When choosing \code{WET}, the observables are computed with generic values for the \ac{WET}\xspace Wilson coefficients. The \ac{CKM}\xspace matrix elements are treated as in the \code{CKM} case. This choice makes fitting \ac{WET}\xspace Wilson coefficients possible. Details, such as the \texttt{EOS}\xspace convention for the basis of \ac{WET}\xspace operators and the relevant parameter names, are discussed in \refapp{models:WET}. \smallskip \item[\hlred{\texttt{form-factors}}] is used to select from one of the available parametrizations of hadronic form factors that are pertinent to the process. Its values are process-specific. For true observables (e.g., a semileptonic branching ratio) a sensible default choice is always provided. For pseudo-observables (e.g., the hadronic form factors $f_+(q^2)$ in $B\to \pi$ transitions) the choice must be made by the user. \smallskip \item[\hlred{\texttt{l}}] is used to select the charged lepton flavor in processes with at least one charged lepton. Allowed values are generally \code{e}, \code{mu} and \code{tau}. Individual processes might restrict the set of allowed values further, e.g., when hadronic matrix elements relevant to semitauonic decays are either unknown or unimplemented. \smallskip \item[\hlred{\texttt{q}}] is used to select the spectator quark flavor. Allowed values are typically \code{u}, \code{d}, \code{s}, and \code{c}. Individual processes might restrict the set of allowed values further. Processes with \code{s} and \code{c} spectator quarks are typically accessible through explicit specification of the spectator quark flavor in the process name, e.g., \code{B_s->K^*lnu}. \end{description} Obtaining the full list of option keys pertaining to a specific observable and their allowed keys is discussed in \refsec{basics:observables}.\\ Adding new options to an existing \object{options} object is achieved as follows\index{eos.Options!declare} \begin{lstlisting}[language=iPython] options.declare('model', 'CKM') options.declare('form-factors', 'BSZ2015') options.declare('l', 'mu') # Since we are all so "cautiously excited"! options.declare('q', 's') display(options) \end{lstlisting} Analogously to the kinematic variables, an \class{eos.Options} object can be created pre-populated with the values one needs using a \texttt{Python}\xspace \pyclass{dict} \begin{lstlisting}[language=iPython] options = eos.Options({ 'form-factors': 'BSZ2015', # Bharucha, Straub, Zwicky 2015 'model': 'WET', 'l': 'tau', 'q': 's' }) \end{lstlisting} \subsection[Classes eos.Observables and eos.Observable] {Classes \class{eos.Observables} and \class{eos.Observable}} \label{sec:basics:observables} \texttt{EOS}\xspace uses the \class{eos.Observable} class to provide theory predictions for a variety of flavor physics processes and their associated (pseudo-)observables. The complete list of observables known to \texttt{EOS}\xspace is available as part of the online documentation~\cite[List of Observables]{EOS:doc} and interactively in a \texttt{Jupyter}\xspace notebook via\index{eos.Observables} \begin{lstlisting}[language=iPython] eos.Observables() \end{lstlisting} Within this list, all observables are uniquely identified by an \class{eos.QualifiedName} object; see the beginning of \refsec{basics} for information on how such a name is structured. To ease recognition, the typically used mathematical symbol for each observable is shown next to its name. To search within this list, keyword arguments for the prefix part, name part, or suffix part of a qualified name will filter the output. For example, the following code displays only branching ratios (\code{BR}) in processes involving a $B^\mp$ meson (\code{B_u})\index{eos.Observables} \begin{lstlisting}[language=iPython] eos.Observables(prefix='B_u', name='BR') \end{lstlisting} Amongst others, this command lists the observable \code{B_u->lnu::BR}, representing the branching ratio of $B^\mp\to \ell^\mp\bar\nu$ decays. As part of the output the user is notified that this particular observable requires no kinematic variables. The user is also notified about the \class{eos.Options} keys recognized by this observable, which include \code{model} and \code{l}.\\ To create a new \class{eos.Observable} object the user needs to \begin{itemize} \item identify it by name; \item provide a set of parameters that can optionally be shared with other observables; \item provide a set of kinematic variables that can optionally be shared with other observables; and \item specify the relevant options. \end{itemize} Again, the branching ratio of $B^\mp \to \ell^\mp\bar\nu$ is used as an example, specifically for a $\tau$ in the final state. The observable is created as an \object{eos.Observable} object as follows\index{eos.Observable!make} \begin{lstlisting}[language=iPython] observable1 = eos.Observable.make('B_u->lnu::BR', eos.Parameters(), eos.Kinematics(), eos.Options({'l': 'tau', 'model': 'WET'})) \end{lstlisting} Here \code{B\_u->lnu::BR} is the \class{eos.QualifiedName} for this particular observable, and default parameters are provided when using \code{eos.Parameters()}. This observable does not require any kinematic variables, and therefore an empty \class{eos.Kinematics} object is provided. Setting the \code{l} option to \code{tau} selects a $\tau$ final state. Setting the \code{model} option to \code{WET} enables the user to evaluate the observable in the \ac{WET}\xspace for arbitrary values of the Wilson coefficients; see \refapp{models:WET} for details.\\ The \class{eos.Observable} class provides access to the name, parameters, kinematics, options, and current value of an observable by means of the following methods \index{eos.Observable!name}\index{eos.Observable!parameters}\index{eos.Observable!kinematics}\index{eos.Observable!options}\index{eos.Observable!evaluate} \begin{lstlisting}[language=iPython] display(observable1.name()) # shows 'B_q->lnu::BR' observable1.parameters() # accesses the parameters observable1.kinematics() # accesses the (emtpy) set of kinematic variables display(observable1.options()) # shows the options used to create the observable display(observable1.evaluate()) # shows the current value \end{lstlisting} Note that each observable is associated with one object of the class \class{eos.Parameters}. To illustrate this feature, the above code is repeated to create a second observable \object{observable2}\index{eos.Observable!make} \begin{lstlisting}[language=iPython] observable2 = eos.Observable.make('B_u->lnu::BR', eos.Parameters(), eos.Kinematics(), eos.Options({'l': 'tau', 'model': 'WET'})) \end{lstlisting} Even though the two objects \object{observable1} and \object{observable2} share the same name and options, their respective parameter sets are independent, as can be checked as follows:\index{eos.Observable!parameters} \begin{lstlisting}[language=iPython] observable1.parameters() == observable2.parameters() # yields False \end{lstlisting} To correlate any number of observables, it is necessary to create \emph{all of them} using the same \object{eos.Parameters} object; this will be further discussed in \refsec{usage:inference}. In the above, this is \emph{not the case}, since for the creation of each observable the call to \code{eos.Parameters()} created a new, independent set of parameters as explained in \refsec{basics:parameters}.\\ In many cases, observables have a default set of options, e.g., the default choice of hadronic form factors or the default choice of a \ac{BSM}\xspace model. In some cases, it does not make sense to have a default choice. In such cases, an error will be shown through a \texttt{Python}\xspace exception if the user does not provide a valid option value. An example for this behavior are the form factor pseudo-observables, e.g., \code{B->K^*::V(q2)}, which always require providing a valid value for the option \code{form-factors}. This is achieved by including the option as part of the \class{eos.QualifiedName}. In this case, \code{B->K^*::V(q2);form-factors=BSZ2015} selects the form factor parametrization as used by Bharucha, Straub, and Zwicky~\cite{Straub:2015ica} in 2015. A full list of all option keys and their respective valid values is available as part of the online documentation~\cite{EOS:doc} and by displaying \texttt{eos.Observables()} in an interactive \texttt{Jupyter}\xspace notebook.\\ Contrary to parameters and kinematic variables, modifying the \class{eos.Options} object of any observable after its creation has no effect\index{eos.Observable!options} \begin{lstlisting}[language=iPython] observable1.options().set('l', 'mu') # does not affect observable1 \end{lstlisting} This design decision ensures high-performance evaluations of all observables.\\ Objects of type \class{eos.Observable} are regular \texttt{Python}\xspace objects. For example, they can be collected in a \pyclass{list}, which is useful for evaluating a number of identical observables at different points in their phase space. This can be achieved as follows\index{eos.Observable!make}\index{eos.Observable!evaluate} \begin{lstlisting}[language=iPython] import numpy parameters = eos.Parameters() observables = [eos.Observable.make('B->D^*lnu::A_FB(q2)', parameters, eos.Kinematics(q2=q2_value), eos.Options()) for q2_value in numpy.linspace(1.00, 10.67, 10)] values = [o.evaluate() for o in observables] display(values) \end{lstlisting} Here the instantiation of all observables with the same \class{eos.Parameters} object \object{parameters} ensures that they share the same numerical values for all parameters. As a consequence, changes of numerical values within \object{parameters} are broadcasted to all these instances and are taken into account in their subsequent evaluations. \subsection[Classes eos.Constraints and eos.Constraint] {Classes \class{eos.Constraints} and \class{eos.Constraint}} \label{sec:basics:constraints} \texttt{EOS}\xspace uses the class \class{eos.Constraint} to manage and create individual likelihood functions at run time. To this end, objects of type \class{eos.Constraint} contain both information on the concrete likelihood (e.g., mean values and standard deviation of a Gaussian measurement) and meta-information about the constrained observables (e.g., the \texttt{EOS}\xspace internal names for an observable, relevant kinematic variables, and required options). Besides (multivariate) Gaussian likelihood functions, \texttt{EOS}\xspace also supports LogGamma and Amoroso functions~\cite{crooks2015amoroso}, and Gaussian mixture densities. The database of constraints makes it possible to construct a likelihood function for any experimental measurement and/or theory input in terms of \class{eos.Observable} objects. Hence, \class{eos.Constraint} objects are the building blocks for parameter inference studies that use the \texttt{EOS}\xspace software.\\ \texttt{EOS}\xspace provides a database of constraints, which is available as part of the online documentation~\cite[List of Constraints]{EOS:doc} as well as interactively accessible in a \texttt{Jupyter}\xspace notebook via\index{eos.Constraints} \begin{lstlisting}[language=iPython] display(eos.Constraints()) \end{lstlisting} This database is stored within \texttt{EOS}\xspace in a series of \texttt{YAML}\xspace files. Most \texttt{EOS}\xspace users will not require knowledge about the file format. However, advanced users may need to provide constraints that are not part of the built-in database. In such a case, the user can specify a \emph{manual} constraint; see \refsec{basics:analysis} and ref.~\cite{EOS:API} for details. Alternatively, similar to the \class{eos.Parameters} database, the user can set the \code{EOS_HOME} environment variable to point to an accessible directory. All \texttt{YAML}\xspace files within \code{EOS_HOME/constraints} will be loaded and used \emph{instead} of the default \class{eos.Constraints} database. We document the format in \refapp{constraints-format} and an example entry is shown in \reflst{constraint:Belle2015A}.\\ Examples of built-in constraints that are used later on in this document include: \begin{itemize} \item The constraint \code{B->D::f_++f_0@FNAL+MILC:2015B} describes a lattice QCD result for the $\bar{B}\to D$ form factors $f_+$ and $f_0$. Here the suffix indicates that this constraint has been extracted from ref.~\cite{Lattice:2015rga}, which is included in the \texttt{EOS}\xspace list of references as \code{FNAL+MILC:2015B}. The constraint can be used to create a likelihood function for the model parameters for the form factors $f_+$ and $f_0$, e.g., when using the BSZ2015 parametrization as the form factor model. Using \code{B->D::f_++f_0@FNAL/MILC:2015B;form-factors=BSZ2015} (i.e., the constraint name including an option list that specificies the form factor model) ensures that the correct form factor model (here: BSZ2015) is used when creating a likelihood from this constraint. \vspace*{\smallskipamount} \item The constraint \code{B^0->D^+e^-nu::BRs@Belle:2015A} describes the correlated measurement of the $\bar{B}^0\to D^+e^-\bar\nu$ branching ratio in $10$ bins of the kinematic variable $q^2$. Here the suffix indicates that the results have been extracted from ref.~\cite{Belle:2015pkj}, which is included in the \texttt{EOS}\xspace list of references as \code{Belle:2015A}. \end{itemize} \subsection[Classes eos.SignalPDFs and eos.SignalPDF] {Classes \class{eos.SignalPDFs} and \class{eos.SignalPDF}} \label{sec:basics:signalpdf} \texttt{EOS}\xspace uses the \class{eos.SignalPDF} class to provide a theoretical prediction for the \ac{PDF}\xspace that describes a physical process, be it a decay or a scattering process. The dependence on an arbitrary number of kinematic variables are modeled through a shared object of class \class{eos.Kinematics}, and its \class{eos.KinematicVariable} objects. Parameters can be modified or inferred through a shared \class{eos.Parameters} object. Hence, each \class{eos.SignalPDF} object works very similar to an \class{eos.Observable} object. The list of \acp{PDF} can be accessed using the \class{eos.SignalPDFs} class. Searching for a specific \ac{PDF}\xspace in the \texttt{EOS}\xspace database of signal PDFs is possible by filtering by the prefix part, name part, or suffix part of the signal \ac{PDF}\xspace qualified name, very similar to how the database of observables is searchable \begin{lstlisting}[language=iPython] eos.SignalPDFs(prefix='B->Dlnu') # display a list of all known SignalPDF objects # this include 'B->Dlnu::dGamma/dq2', which requires # 'q2_min', 'q2_max', 'q2' as kinematic variables. \end{lstlisting} The signal \ac{PDF}\xspace \code{B->Dlnu::dGamma/dq2} features one kinematic variable, \code{q2}. Its boundaries are also passed by means of \class{eos.KinematicVariable} objects, which are conventionally named \code{q2_min} and \code{q2_max}. \begin{lstlisting}[language=iPython] pdf = eos.SignalPDF.make('B->Dlnu::dGamma/dq2', eos.Parameters(), eos.Kinematics({'q2_min': 0.0, 'q2_max': 10.0, 'q2': 5.0), eos.Options({'l': 'mu', 'model': 'WET'})) \end{lstlisting} The \ac{PDF}\xspace's parameters, kinematics, and options can be accessed with eponymous methods. This design permits the user some flexibility. It makes it possible to produce pseudo-events within the \ac{SM}\xspace and in the generic \ac{WET}\xspace; see \refsec{usage:simulation} for this use case. In addition, it enables unbinned likelihood fits; their description goes beyond the scope of this document. \subsection[Class eos.Analysis] {Class \class{eos.Analysis}} \label{sec:basics:analysis} \texttt{EOS}\xspace uses the \class{eos.Analysis} as an interface for the user to describe a Bayesian analysis to infer one or more parameters. When creating an \class{eos.Analysis} object, the following arguments are used: \begin{description} \item[\hlred{\texttt{priors}}] is a mandatory \pyclass{list} describing the univariate priors. This argument must describe at least one prior. Each prior is described through a \pyclass{dict} object, the structure of which is documented as part of the \texttt{Python}\xspace API documentation~\cite[\texttt{eos.Analysis}]{EOS:API}. \smallskip \item[\hlred{\texttt{likelihood}}] is a mandatory \pyclass{list} describing all the constraints that enter the likelihood. Each element is a \pyclass{str} or \class{eos.QualifiedName}, specifying a single constraint. Although it is a mandatory parameter, this list can be left empty. \smallskip \item[\hlred{\texttt{global\_options}}] is an optional \pyclass{dict} describing the options that will be applied to all the observables that enter the likelihood. Note that these global options \textit{override} those specified via the qualified name scheme. For example, in a \ac{BSM}\xspace analysis, it is useful to include \code{'model': 'WET'} as a global option, to ensure that all observables will be evaluated using a selectable point in the \ac{WET}\xspace parameter space. \smallskip \item[\hlred{\texttt{fixed\_parameters}}] is an optional \pyclass{dict} describing parameters that shall be fixed to non-default values as part of the analysis. For example, to carry out a \ac{BSM}\xspace analysis of $b\to c\tau\nu$ processes for a non-default renormalization scale, the user can set the scale parameter to a fixed value of $3\,\ensuremath{\mathrm{GeV}}$ using \code{'cbtaunutau::mu': '3.0'}. \smallskip \item[\hlred{\texttt{manual\_constraints}}] is an optional \pyclass{dict} describing constraints that are not yet included in the \texttt{EOS}\xspace database of constraints. The constraint format is described in \refapp{constraints-format}. Note that to use any of the manual constraints as part of the likelihood, their qualified names must still be added to the \code{likelihood} argument. \end{description} \enlargethispage{2em} \begin{lstlisting}[ language=iPython,% caption={% Example for a Bayesian analysis to extract the \ac{CKM}\xspace parameter $|V_{cb}|$ from $\bar{B}\to D\lbrace e^-,\mu^-\rbrace \bar\nu$ data by the Belle experiment and lattice QCD input by the HPQCD and Fermilab/MILC collaborations. \label{lst:basics:analysis:definition}\index{eos.Analysis}\index{eos.Parameter!set} } ] analysis_args = { 'global_options': { 'form-factors': 'BSZ2015', 'model': 'CKM' }, 'priors': [ {'parameter': 'CKM::abs(V_cb)', 'min': 38e-3, 'max': 45e-3, 'type': 'uniform'}, {'parameter': 'B->D::alpha^f+_0@BSZ2015', 'min': 0.0, 'max': 1.0, 'type': 'uniform'}, {'parameter': 'B->D::alpha^f+_1@BSZ2015', 'min': -4.0, 'max': -1.0, 'type': 'uniform'}, {'parameter': 'B->D::alpha^f+_2@BSZ2015', 'min': 4.0, 'max': 6.0, 'type': 'uniform'}, {'parameter': 'B->D::alpha^f0_1@BSZ2015', 'min': -1.0, 'max': 2.0, 'type': 'uniform'}, {'parameter': 'B->D::alpha^f0_2@BSZ2015', 'min': -2.0, 'max': 0.0, 'type': 'uniform'} ], 'likelihood': [ 'B->D::f_++f_0@HPQCD:2015A', 'B->D::f_++f_0@FNAL+MILC:2015B', 'B^0->D^+e^-nu::BRs@Belle:2015A', 'B^0->D^+mu^-nu::BRs@Belle:2015A' ] } analysis = eos.Analysis(**analysis_args) \end{lstlisting} In \reflst{basics:analysis:definition} we define a statistical analysis for the inference of $|V_{cb}|$ from measurements of the $\bar{B}\to D\ell^-\bar\nu$ branching ratios by the Belle experiment. This example will be further discussed in \refsec{usage:analysis}. First, we define all the arguments used in our analysis. \begin{itemize} \item Using the \code{global_options}, we choose the \code{BSZ2015} parametrization~\cite{Straub:2015ica} to model the hadronic form factors that enter semileptonic $\bar{B}\to D$ transitions. We also choose the \code{CKM} model to ensure that $|V_{cb}|$ is represented by a single parameter. \smallskip \item Priors for both the $|V_{cb}|$ parameter and the \code{BSZ2015} parameters are described in \code{priors}. Here, each parameter is assigned a uniform prior, which is chosen to contain at least $98\%$ ($\sim 3\,\sigma$) of the \emph{ideal} posterior probability, i.e., the priors have been chosen to be wide enough to ``contain`` the posterior defined by this analysis. \smallskip \item The likelihood is defined through a list of constraints, which in the above includes both theoretical lattice QCD results as well as experimental measurements by the Belle collaboration. For the first part we combine the correlated lattice QCD results published by the Fermilab/MILC and HPQCD collaborations in 2015 \cite{Na:2015kha,MILC:2015uhg}. For the second part, we combine binned measurements of the branching ratio for $\bar{B}^0\to D^+e^-\bar\nu$ and $\bar{B}^0\to D^+\mu^-\bar\nu$ decay. We reiterate that \texttt{EOS}\xspace treats genuine physical observables and pseudo-observables identically. \end{itemize} \noindent The class \class{eos.Analysis} further provides convenience methods to carry out the statistical analysis: \begin{description} \item[\hlred{\texttt{optimize}}] \index{eos.Analysis!optimize} uses the \code{scipy.optimize} module to find the best fit point of the posterior. Optional parameters determine the abort condition for the optimization and the starting point. \smallskip \item[\hlred{\texttt{sample}}] \index{eos.Analysis!sample} uses the \code{pypmc} module to produce random variates of the posterior using an adaptive version of the Metropolis-Hastings algorithm~\cite{doi:10.1063/1.1699114,10.1093/biomet/57.1.97,10.2307/3318737} with a single Markov chain. This method can be run several times to repeatedly explore the posterior density and accurately sample from it. \smallskip \item[\hlred{\texttt{sample\_pmc}}] \index{eos.Analysis!sample\_pmc} uses the \code{pypmc} module to produce random variates of the posterior using the Population Monte Carlo algorithm~\cite{2010MNRAS.405.2381K}. To this end, an initial guess of the posterior in form of a Gaussian mixture density is created~\cite{2013arXiv1304.7808B} from Markov chain Monte Carlo samples obtained using \method{eos.Analysis}{sample}. \end{description} At any point, the attribute \method{eos.Analysis}{parameters} can be used to access the analysis' parameter set, e.g., to save the set to file via the \method{eos.Parameters}{dump} method. We refer to the documentation of the \texttt{EOS}\xspace \texttt{Python}\xspace API~\cite{EOS:API} for further information.\\ Note that the \texttt{C++}\xspace backend used by \class{eos.Analysis} parallelizes the evaluation of the likelihood function. By default, the number of concurrent threads will match the number of available processors. Users who need to limit this number (e.g., due to using \texttt{EOS}\xspace on a multi-user system in parallel to other users' jobs) can do so by setting the \texttt{EOS\_MAX\_THREADS} environment variable to the limit. \subsection[Class eos.Plotter] {Class \class{eos.Plotter}} \label{sec:basics:plotter} \texttt{EOS}\xspace implements a versatile plotting framework based on the class \class{eos.Plotter}, which relies on \texttt{matplotlib}\xspace~\cite{matplotlib} for the actual plotting. Its input must be formatted as a dictionary containing two keys: \code{plot} contains metadata and \code{contents} describes the plot items. The value associated to the \code{plot} key is a dictionary; it describes the layout of the plot, including axis labels, positioning of the legend, and similar settings that affect the entire plot. The value associated to the \code{contents} key is a list; it describes the contents of the plot, expressed in terms of independent plot items. Possible types of plot items include points, bands, contours, histograms. \begin{lstlisting}[ language=iPython,% caption={% High-level description of the arguments for the \class{eos.Plotter} class. The plot will appear inline in a \texttt{Jupyter}\xspace notebook (if \code{FILENAME} is not specified) or be written to \code{FILENAME} (if specified). In the latter case, the output format will be determined based on the file extension. \label{lst:basics:plotter:description}\index{eos.Plotter} }% ] plot_desc = { 'plot': { 'x': { ... }, # description of the x axis 'y': { ... }, # description of the y axis 'legend': { ... }, # description of the legend ... # further layouting options }, 'contents': [ { ... }, # first plot item { ... }, # second plot item ] } eos.plot.Plotter(plot_desc, FILENAME).plot() \end{lstlisting} Each of the items is represented by a dictionary that contains a \code{type} key and an optional \code{name} key. A full description of all item types and their parameters is available as part of the \texttt{EOS}\xspace \texttt{Python}\xspace API documentation~\cite{EOS:API}. Here, we provide a brief summary for the most common types, which are used within examples in the course of this document: \begin{description} \item[\hlred{\texttt{observable}}] \index{eos.Plotter!observable} plots a single \texttt{EOS}\xspace observable without uncertainties as a function of one kinematic variable or one parameter. See \reflst{usage:BtoDlnu:BR} for an example. \smallskip \item[\hlred{\texttt{histogram}}] \index{eos.Plotter!histogram} \item[\hlred{\texttt{histogram2D}}] \index{eos.Plotter!histogram2D} plots either a 1D or a 2D histogram of pre-existing random samples. These samples can be contained in \texttt{Python}\xspace objects within the notebook's memory or contained in a datafile on disk. See \reflst{usage:plot-prior-prediction-int} and \reflst{simulation:histogram-2D} for examples. \smallskip \item[\hlred{\texttt{uncertainty}}] \index{eos.Plotter!uncertainty} plots the uncertainty band of an observables as a function of one kinematic variable or one parameter. The random samples for the observables can be contained in \texttt{Python}\xspace objects within the notebook's memory or contained in a datafile on disk. See \reflst{usage:plot-prior-prediction-diff} for an example. \smallskip \item[\hlred{\texttt{constraint}}] \index{eos.Plotter!constraint} displays a constraint either from the \texttt{EOS}\xspace library or a manually added constraint. See \reflst{inference:posterior_samples_uncertainties} for an example. \smallskip \end{description} Beyond \code{type} and \code{name} keys, all item types also recognise the following optional keys: \begin{description} \item[\hlred{\texttt{alpha}}] \index{eos.Plotter!alpha} A \pyclass{float}, between 0.0 and 1.0, which describes the opacity of the plot item expressed as an alpha value. A value of 0.0 means completely transparent, 1.0 means completely opaque. \smallskip \item[\hlred{\texttt{color}}] \index{eos.Plotter!color} A \pyclass{str}, containing any valid \texttt{matplotlib}\xspace color specification, which describes the color of the plot item. Defaults to one of the colors in the \texttt{matplotlib}\xspace default color cycler. \smallskip \item[\hlred{\texttt{label}}] \index{eos.Plotter!label} A \pyclass{str}, containing LaTeX commands, which describes the label that appears in the plot’s legend for this plot item. \end{description} In \reflst{basics:plotter:description}, \texttt{FILENAME} is an optional argument naming the file into which the plot shall be placed. The file format is automatically determined based on the file name extension. \subsection[Classes eos.References, eos.Reference, and eos.ReferenceName] {Classes \class{eos.References}, \class{eos.Reference}, and \class{eos.ReferenceName}} \label{sec:basics:references} \texttt{EOS}\xspace strives to give complete credit to the various works that underpin the theory predictions and the experimental and phenomenological analyses that provide likelihoods. To this end, \texttt{EOS}\xspace keeps a database of bibliographical metadata, which is accessible via the \class{eos.References} class. Each entry is a tuple of an \class{eos.ReferenceName} object that uniquely identifies the reference and the metdata data of the reference as an \class{eos.Reference} object. For a complete list of works used within \texttt{EOS}\xspace, we refer to the documentation~\cite[List of References]{EOS:doc}. Each observable provides a list of reference names, corresponding to the pertinent pieces of literature that were used in their implementations. This list is obtained via the \method{eos.Observable}{references} method, which returns a generator of \class{eos.ReferenceName} objects: \begin{lstlisting}[language=iPython] obs = eos.Observable.make('B_u->lnu::BR', eos.Parameters(), eos.Kinematics(), eos.Options({'l': 'tau', 'model': 'WET'})) display([rn for rn in obs.references()]) # shows 'DBG:2013A', amongst others \end{lstlisting} Further information on this reference can be obtained from its \class{eos.Reference} object: \begin{lstlisting}[language=iPython] ref = eos.References()['DBG:2013A'] display(ref) # displays the reference's title, authors, and eprint hyperlink (if available) \end{lstlisting} In a similar way, by convention the suffix part of each \class{eos.Constraint} is a valid reference name. Therefore, to look up the reference that provides a constraint (e.g., \code{B->D::f_++f_0@FNAL+MILC:2015B}) the user can look up the associated bibliographical metadata based on the name's suffix part: \begin{lstlisting}[language=iPython] display(eos.References()['FNAL+MILC:2015B']) \end{lstlisting} If you feel that your work should be listed as part of a reference for any of the \texttt{EOS}\xspace observables, please contact the authors to include it. \section{Conclusion \& Outlook}\label{sec:summary} We have presented the \texttt{EOS}\xspace software in version 1.0\xspace and explained its three main use cases at the hand of concrete examples in the field of flavor physics phenomenology. Beyond these examples, \texttt{EOS}\xspace has been used extensively for numerical evaluations, statistical analyses and plots in a number of peer-reviewed publications. We plan to extend \texttt{EOS}\xspace with further processes and observables, while keeping the \texttt{Python}\xspace interface unchanged.\\ To keep this document concise, some advanced aspects of \texttt{EOS}\xspace have not been discussed. These aspects, documented in the online documentation~\cite{EOS:doc}, include \begin{itemize} \item the possibility to combine existing observables in arithmetic expressions at run time; \item the command line interface intended to use as part of massively parallelized batch jobs in grid or cluster environments; and \item the addition of \texttt{C++}\xspace code for new observables and processes. \end{itemize} Despite ongoing unit testing and development of the software, we are conscious that \texttt{EOS}\xspace is neither free of bugs nor providing all the features the user could possibly need. We therefore encourage the users to report any and all bugs found and to request additional features. We ask that any such reports or requests are communicated as issues within the \texttt{EOS}\xspace Github repository~\cite{EOS:repo}. We are very happy to discuss the addition of further observables and processes with interested parties from the phenomenological and experimental communities. \section{Introduction and Physics Case} \label{sec:intro} Flavor physics phenomenology has a long history of substantial impact on the development of the \ac{SM}\xspace of particle physics. Over the last decades, two developments are particularly noteworthy:\\ First, the determination of the \ac{CKM}\xspace matrix elements has developed into a precision enterprise, thanks in large parts to the efforts at the B-factory experiments BaBar and Belle~\cite{Bevan:2014iga} and more recently the LHCb experiment~\cite{LHCb:2012myk}, the technological progress in lattice gauge theory predictions~\cite{FlavourLatticeAveragingGroup:2019iem}, and the development of precision phenomenology with continuum methods.\\ Second, the emergence of the so-called ``$b$ anomalies'' has led to cautious excitement in the community. These anomalies are substantial tensions between theory predictions of $b$-quark decay observables and their measurements by the ATLAS, BaBar, Belle, CMS, and LHCb experiments, which present a coherent pattern that might be due to \ac{BSM}\xspace effects, but do not yet reach individually the required significance of $5\,\sigma$; see e.g.\ refs.~\cite{Albrecht:2021tul,Bernlochner:2021vlv} for recent reviews.\\ Both developments have led to increasingly sophisticated phenomenological analyses.\\ Such analyses involve researchers regularly carrying out structurally similar and recurring tasks. These typical \emph{use cases} include \begin{enumerate} \item predicting flavor observables and assessing their theory uncertainties both within the \ac{SM}\xspace and for general \ac{BSM}\xspace scenarios in the \ac{WET}\xspace; \smallskip \item inferring hadronic, \ac{SM}\xspace, and/or \ac{WET}\xspace parameters from an extendable database of experimental and theoretical likelihoods; \smallskip \item simulating flavor processes and producing high-quality pseudo events for use in sensitivity studies and for the preparation of experimental analyses. \end{enumerate} The \texttt{EOS}\xspace software~\cite{EOS} has been continuously developed since 2011~\cite{vanDyk:2012zla,EOS:repo} to achieve these tasks. \texttt{EOS}\xspace is free software published under the GNU General Public License 2~\cite{GPLv2}. It has produced publication-quality results for approximately 30 peer-reviewed and published phenomenological studies~\cite{% Bobeth:2010wg,% Bobeth:2011gi,Bobeth:2011nj,% Beaujean:2012uj,Bobeth:2012vn,% Beaujean:2013soa,% Faller:2013dwa,SentitemsuImsong:2014plu,Boer:2014kda,% Beaujean:2015gba,Feldmann:2015xsa,Mannel:2015osa,% Bordone:2016tex,Meinel:2016grj,Boer:2016iez,Serra:2016ivr,% Bobeth:2017vxj,Blake:2017une,% Boer:2018vpx,Feldmann:2018kqr,Gubernari:2018wyi,% Boer:2019zmp,Bordone:2019vic,Blake:2019guk,Bordone:2019guc,% Gubernari:2020eft,% Bruggisser:2021duo,Leljak:2021vte,Bobeth:2021lya% }. Besides applications in phenomenology, \texttt{EOS}\xspace also has been used in a number of published experimental studies by the CDF~\cite{% CDF:2011tds% }, the CMS~\cite{% CMS:2013mkz,% CMS:2015bcy% } and the LHCb~\cite{% LHCb:2012bin,% LHCb:2013zuf,% LHCb:2014auh,% LHCb:2015svh,% LHCb:2018jna,% LHCb:2020lmf% } experiments. The Belle II experiment has included \texttt{EOS}\xspace as part of the external software~\cite{basf2ext} within the Belle II software analysis framework~\cite{Kuhr:2018lps}.\\ In this article, we describe the \texttt{EOS}\xspace user interface. Although the software is developed mainly in \texttt{C++}\xspace, it is designed to be used in \texttt{Python}\xspace \cite{python}. As such, \texttt{EOS}\xspace relies heavily on the \code{numpy}~\cite{Harris:2020xlr} and \texttt{pypmc}\xspace~\cite{pypmc} packages. We \emph{highly} recommend new users to use \texttt{EOS}\xspace within a \texttt{Jupyter}\xspace notebook environment \cite{jupyter}.\\ \texttt{EOS}\xspace can be installed in binary form on Linux-based systems\footnote{% This is limited to systems with \texttt{Python}\xspace version 3.6 or later that also fulfill the ``manylinux\_2\_17\_x86\_64'' platform requirement as defined in PEP~600~\cite{PEP-600}. } with a single command: \begin{lstlisting}[language=bash] pip3 install eoshep \end{lstlisting} Afterwards, the \texttt{EOS}\xspace \texttt{Python}\xspace module can be accessed, e.g. within a \texttt{Jupyter}\xspace notebook, using \begin{lstlisting}[language=ipython] import eos \end{lstlisting} We note that this means of installation also works for the ``Windows Subsystem for Linux v2 (WSL2)''. For the purpose of installing \texttt{EOS}\xspace, WSL2 can be treated like any Linux system.\\ Although \texttt{EOS}\xspace can also be built and installed from source on macOS systems, we do not currently support these. For macOS users we recommend to install on a remote-accessible Linux system and access a \texttt{Jupyter}\xspace notebook via \code{SSH}; our recommendation is described in detail as part of the frequently-asked questions~\cite{EOS:doc}. Prospective \texttt{EOS}\xspace developers find detailed instructions on how to build \texttt{EOS}\xspace from source in the installation section of the documentation~\cite{EOS:doc}.\\ Presently, \texttt{EOS}\xspace provides a total of 844 (pseudo-)observables\footnote{% \texttt{EOS}\xspace does not distinguish between true observables, which can be unambiguously measured in an experimental setting, and pseudo-observables, which can not be unambiguously inferred from experimental or theoretical data. Pseudo-observables in \texttt{EOS}\xspace include hadronic form factors and other auxiliary hadronic quantities. } % pertaining to a large variety of flavor processes. Obtaining and browsing the full list of observables is discussed in \refsec{basics:observables}. The processes implemented include \begin{itemize} \item (semi)leptonic charged-current $\bar{B}$ meson decays (e.g., $\bar{B}\to D^*\tau\bar\nu$); \item semileptonic charged-current $\Lambda_b$ baryon decays (e.g., $\Lambda_b\to \Lambda_c(\to \Lambda \pi)\mu\bar\nu$); \item rare (semi)leptonic and radiative neutral-current $\bar{B}$ meson decays (e.g., $\bar{B}\to \bar{K}^*\mu^+\mu^-$); \item rare semileptonic and radiative neutral-current $\Lambda_b$ baryon decays (e.g., $\Lambda_b\to \Lambda(\to p \pi) \mu^+\mu^-$); and \item $B$-meson mixing observables (e.g., $\Delta m_s$). \end{itemize} \texttt{EOS}\xspace is designed to be self documenting: a complete list of processes and their respective observables is automatically generated as part of documentation, which is accessible both through the software itself and online~\cite{EOS:doc}. The theoretical descriptions of most observables use the \ac{WET}\xspace to account for both \ac{SM}\xspace and \ac{BSM}\xspace predictions. Details of the \texttt{EOS}\xspace bases of \ac{WET}\xspace operators are described in \refapp{models:WET}.\\ Although \texttt{EOS}\xspace is --- to our knowledge --- the first publicly available open-source flavor physics software~\cite{vanDyk:2012zla,EOS:repo}, it is by far not the only one. \texttt{EOS}\xspace competes with the \texttt{flavio}\xspace~\cite{Straub:2018kue}, \texttt{SuperISO}\xspace~\cite{Neshatpour:2021nbn}, HEPfit~\cite{DeBlas:2019ehy} and \texttt{FlavBit}\xspace~\cite{Workgroup:2017myk} software. Major distinctions between \texttt{EOS}\xspace and these competitors are: \begin{itemize} \item \texttt{EOS}\xspace focuses on the simultaneous inference of hadronic and \ac{BSM}\xspace parameters; \item \texttt{EOS}\xspace ensures modularity of hadronic matrix elements, i.e., the possibility to select from various hadronic models and parametrizations at run time; \item \texttt{EOS}\xspace provides means to produce pseudo events for use in sensitivity studies and in preparation for experimental measurements; and \item \texttt{EOS}\xspace provides means to predict hadronic matrix elements from QCD sum rules. \end{itemize} These distinctions make analyses possible that cannot currently be carried out with the competing software~\cite{% Beaujean:2012uj,Beaujean:2013soa,Bobeth:2017vxj,Feldmann:2018kqr,bsll2021% }, e.g., due to multi-modal or otherwise complicated posteriors that cannot be be captured by Markov chain Monte Carlo methods alone. However, this benefit comes with an increased level of complexity, which we address in the \texttt{EOS}\xspace documentation~\cite{EOS:doc} and --- to some extent --- in this article.\\ \subsection{How to Read This Document} Although this paper will give you a first impression of \texttt{EOS}\xspace and basic examples to try in a \texttt{Jupyter}\xspace notebook, it is not meant to be a stand-alone document. To obtain a deeper understanding, additional documentation, and further examples, the user is referred to refs.~\cite{EOS:doc,EOS:API,EOS:examples}. Wherever we list \texttt{Python}\xspace code, we assume that the reader evaluates it within a \texttt{Jupyter}\xspace notebook environment, to make full use its rich display capabilities.\\ In \refsec{basics}, we illustrate the basic usage of \texttt{EOS}\xspace, beginning with an overview of the various classes and concepts available through the \texttt{Python}\xspace interface. In \refsec{usage} we continue with a discussion of and examples for the main use cases. In a series of appendices we provide further details. \begin{itemize} \item We describe the three physics models available in \texttt{EOS}\xspace in \refapp{models}. \item We relegate lengthy \texttt{Python}\xspace code examples that would otherwise interrupt reading \refsec{usage} to \refapp{PlotExamples}. \item We document the \texttt{EOS}\xspace internal data format for storing experimental and theoretical likelihoods in \refapp{constraints-format}. \item We include a glossary of the main \texttt{EOS}\xspace objects and associated methods in \refapp{glossary}. \end{itemize} This article is accompanied by a number of auxiliary files, containing example \texttt{Jupyter}\xspace notebooks for the basic usage and each of the use cases. These notebooks correspond to the examples contained in the public source code repository~\cite{EOS:examples} as of \texttt{EOS}\xspace version 1.0\xspace. \section*{Acknowledgments} DvD is grateful to Gudrun Hiller, Thomas Mannel, Gino Isidori, and Nico Serra, whose support made the development of \texttt{EOS}\xspace possible in the first place. We thank all \texttt{EOS}\xspace contributors who are not authors of this paper, including Bastian M\"uller, Romy O'Connor, Stefanie Reichert, Martin Ritter, Eduardo Romero, Ismo Toijala, and Christian Wacker.\\ The work of DvD, EE, NG, SK, and MR and the development of EOS is supported by the German Research Foundation (DFG) within the Emmy Noether Programme under grant DY 130/1-1 and by the National Natural Science Foundation of China (NSFC) and the DFG through the funds provided to the Sino-German Collaborative Research Center TRR110 ``Symmetries and the Emergence of Structure in QCD'' (NSFC Grant No. 12070131001, DFG Project-ID 196253076 -- TRR 110). The work of TB is supported by the Royal Society (United Kingdom). The work of CB was supported by the DFG under grant BO-4535/1-1. The work of MB and MJ is supported by the Italian Ministry of Research (MIUR) under grant PRIN 20172LNEEZ. The work of NG is also supported by the DFG under grant 396021762 -- TRR 257 ``Particle Physics Phenomenology after the Higgs Discovery''. The work of RSC and EG was supported by the Swiss National Science Foundation (SNSF) under contracts 159948, 172637 and 174182. The work of PL is supported by the Cluster of Excellence ``ORIGINS'' funded by the DFG under Germany's Excellence Strategy -- EXC-2094 -- 390783311. The work of JV is supported by funding from the Spanish MICINN through the ``Ram\'on y Cajal'' program RYC-2017-21870, the ``Unit of Excellence Mar\'ia de Maeztu 2020-2023” award to the Institute of Cosmos Sciences (CEX2019-000918-M) and from PID2019-105614GB-C21 and 2017-SGR-929 grants. \newpage \input{appendix.tex} \printindex \section{Use Cases} \label{sec:usage} Each of the three major use cases introduced in \refsec{intro} is discussed in details in sections \ref{sec:usage:predictions} to \ref{sec:usage:simulation}. \subsection{Theory Predictions} \label{sec:usage:predictions} [\textit{The example developed in this section can be run interactively from the example notebook for theory predictions available from ref.~\cite{EOS:repo}, file \href{https://github.com/eos/eos/tree/v1.0/examples/predictions.ipynb}{examples/predictions.ipynb}}]\\ \texttt{EOS}\xspace is equipped to produce theory predictions including their parametric uncertainties for any of its built-in observables using Bayesian statistics. This requires knowledge of the probability density function (PDF) of the pertinent parameters. Here and throughout we will denote the set of parameters as $\ensuremath{\vec\vartheta}$, with \begin{equation*} \ensuremath{\vec\vartheta} \equiv (\ensuremath{\vec{x}}, \ensuremath{\vec\nu})\, \end{equation*} where $\ensuremath{\vec{x}}$ represents the parameters of interest, and $\ensuremath{\vec\nu}$ represents the nuisance parameters. This distinction is entirely a semantic one, and no technical differences arise from treating a parameter either way. Production of theory predictions then falls into one of the following cases: \begin{enumerate} \item theory predictions for fixed values of all parameters $\ensuremath{\vec\vartheta} = \ensuremath{\vec\vartheta}^*$; \item \textit{a-priori} predictions with propagation of uncertainties due to the \emph{prior} PDF $P_0(\ensuremath{\vec\vartheta})$; \item \textit{a-posteriori} predictions with propagation of uncertainties due to the \emph{posterior} PDF $P(\ensuremath{\vec\vartheta}|D)$, where $D$ represents some data $D$. \end{enumerate} Case 1 has been already mentioned with the concluding example of \refsec{basics:observables}. In \refsec{usage:predictions:fixed} we provide an example showcasing how to efficiently obtain these predictions. Cases 2 and 3 can be handled identically in a Monte-Carlo framework and are discussed collectively in \refsec{usage:predictions:sampling}. \subsubsection{Direct Evaluation for Fixed Parameters} \label{sec:usage:predictions:fixed} In \refsec{basics} we have explained how to evaluate an observable for a single configuration of the kinematic variables, e.g., an integrated branching ratio with fixed integration boundaries, or a differential branching ratio at one point in the kinematic phase space. Commonly, users need to plot such differential observables as a function of the kinematic variable but for fixed values of its parameters. To illustrate how this can be achieved with \texttt{EOS}\xspace, we use the differential branching ratios for $\bar{B} \to D\lbrace \mu^-,\tau^-\rbrace \bar\nu$ as an example. The \class{eos.Plotter} class (see \refsec{basics:plotter}), provides means to plot any \texttt{EOS}\xspace observable as a function of a single kinematic variable (here: $q^2$). \begin{lstlisting}[% language=iPython,% caption={% Plot the $q^2$-differential branching ratios for $\bar{B} \to D\lbrace \mu^-,\tau^-\rbrace \bar\nu$. The results are shown as the two central curves in the right plot \refout{usage:prior-prediction}. \label{lst:usage:BtoDlnu:BR}\index{eos.Plotter!plot} }% ] plot_args = { 'plot': { 'x': {'label': r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 11.60] }, 'y': {'label': r'$d\mathcal{B}/dq^2$', 'range': [0.0, 5e-3] }, 'legend': { 'location': 'upper center' } }, 'contents': [ { 'type': 'observable', 'observable': 'B->Dlnu::dBR/dq2;l=mu', 'variable': 'q2', 'range': [0.02, 11.60], 'label': r'$\ell=\mu$', }, { 'type': 'observable', 'observable': 'B->Dlnu::dBR/dq2;l=tau', 'variable': 'q2', 'range': [3.17, 11.60], 'label': r'$\ell=\tau$', } ] } eos.plot.Plotter(plot_args).plot() \end{lstlisting} The output is a plot containing the branching ratios for $\ell=\mu, \tau$, where $x$ axis shows the kinematic variable $q^2$, and the $y$ axis shows the value of the differential branching ratio. The output corresponds to the central curves shown in the right plot of \refout{usage:prior-prediction}. In the listing above, the statement \code{'variable': 'q2'} specifies that the kinematic variable \code{q2} is varied in the available \code{range}.\\ Similarly, we can plot an observable as a function of a single parameter, with all other parameters kept fixed and for a given kinematic configuration. To this end, the \code{'xrange'} requires adjustment compared to the previous example, and the contents should be replaced by \begin{lstlisting}[language=iPython] ... 'contents': [ { 'type': 'observable', 'observable': 'B->Dlnu::dBR/dq2;l=mu,model=WET', 'kinematics': {'q2': 2.0}, 'parameters': {'CKM::abs(V_cb)' : 0.042} 'variable': 'cbmunumu::Re{cSL}', 'range': [-1.0, 1.0], 'label': r'$\ell=\mu$', } ] ... \end{lstlisting} Here the dependence of the differential branching fraction at $q^2 = 2\,\ensuremath{\mathrm{GeV}}^2$ on the real part of the \ac{WET}\xspace Wilson coefficient $C_{S_L}$ in the $\bar{c}b\mu\nu_\mu$ sector of the \ac{WET}\xspace is plotted. Note that \code{kinematics} key is used to provide the fixed set of kinematic variables and the \code{parameters} key is used to modify parameter values. As before, \code{variable} selects the entity that is plotted on the $x$ axis, which is now recognized to be an \class{eos.Parameter} object rather than an \class{eos.KinematicVariable} object. \subsubsection{Predictions from Monte Carlo Sampling} \label{sec:usage:predictions:sampling} \texttt{EOS}\xspace provides the means for a more sophisticated estimation of theory uncertainties using Monte Carlo techniques, including importance sampling techniques. For the sampling of a probability density function, \texttt{EOS}\xspace relies on the \texttt{pypmc}\xspace package that provides methods for adaptive Metropolis-Hastings~\cite{doi:10.1063/1.1699114,10.1093/biomet/57.1.97,10.2307/3318737} and Population Monte Carlo~\cite{2010MNRAS.405.2381K,2013arXiv1304.7808B} sampling. The uncertainty of an observable $O$ is estimated from its random variates. We recall that $O \sim P(O)$ with~\cite{gelmanbda04} \begin{align} P(O) & = \int\mathrm{d}\ensuremath{\vec\vartheta}\, P(O, \ensuremath{\vec\vartheta}) = \int\mathrm{d}\ensuremath{\vec\vartheta}\, P(O|\ensuremath{\vec\vartheta}) P(\ensuremath{\vec\vartheta}) = \int\mathrm{d}\ensuremath{\vec\vartheta}\, \delta\left[O - f_O(\ensuremath{\vec\vartheta})\right] P(\ensuremath{\vec\vartheta})\,. \end{align} Here the Dirac $\delta$-function was used and $f_O(\ensuremath{\vec\vartheta})$ is the theoretical expression that predicts $O$ for a given set of parameters \ensuremath{\vec\vartheta}. With this knowledge at hand, we approach the two cases 2 and 3 as discussed in \ref{sec:usage:predictions} in a basically identical way:\\ For \emph{case 2}, we use $P(\ensuremath{\vec\vartheta}) = P_0(\ensuremath{\vec\vartheta})$, i.e., the prior PDF. We note that \texttt{EOS}\xspace treats all priors $P_0$ as \emph{univariate PDFs} and therefore as uncorrelated. Mathematically, a multivariate prior is equivalent to a multivariate likelihood with flat, univariate priors. By design, \texttt{EOS}\xspace implements multivariate correlated priors in terms of a multivariate correlated likelihood. For example, the parameters in the parameterizations of hadronic form factors are constrained by various theoretical methods like lattice QCD calculations, light-cone sum rule calculations, unitarity bounds and constraints that arise in the limit of a heavy-quark mass. Under these circumstances one might still use the terminology \textit{prior prediction} whenever the included constraints are only of theoretical nature, i.e. no experimental information was used.\\ For \emph{case 3}, we use $P(\ensuremath{\vec\vartheta}) = P(\ensuremath{\vec\vartheta} | D)$, i.e., the posterior PDF as obtained from a previous fit given some data $D$. Although based on case 3, the examples below also illustrate case 2, since this distinction is entirely a semantic one.\\ We continue using the integrated branching ratios of $B^-\to D^0 \lbrace\mu^-, \tau^-\rbrace\bar\nu$ decays as examples. The largest source of theoretical uncertainty in these decays arises from the hadronic matrix elements, i.e., from the form factors $f^{\bar{B}\to D}_+(q^2)$ and $f^{\bar{B}\to D}_0(q^2)$. Both form factors have been obtained independently using lattice QCD simulations by the HPQCD \cite{Na:2015kha} and FNAL/MILC \cite{Lattice:2015rga} collaborations. In the following this information is used as part of the data $D$ in the form of a joint likelihood. The form factors at different $q^2$ values of each calculation are available in \texttt{EOS}\xspace as \class{eos.Constraint} objects under the names \code{B->D::f_++f_0@HPQCD:2015A} and \code{B->D::f_++f_0@FNAL+MILC:2015B}. Here, we use these two constraints to construct a multivariate Gaussian prior as follows: \begin{lstlisting}[language=iPython] analysis_args = { 'priors': [ {'parameter': 'B->D::alpha^f+_0@BSZ2015', 'min': 0.0, 'max': 1.0, 'type': 'uniform'}, {'parameter': 'B->D::alpha^f+_1@BSZ2015', 'min':-5.0, 'max':+5.0, 'type': 'uniform'}, {'parameter': 'B->D::alpha^f+_2@BSZ2015', 'min':-5.0, 'max':+5.0, 'type': 'uniform'}, {'parameter': 'B->D::alpha^f0_1@BSZ2015', 'min':-5.0, 'max':+5.0, 'type': 'uniform'}, {'parameter': 'B->D::alpha^f0_2@BSZ2015', 'min':-5.0, 'max':+5.0, 'type': 'uniform'} ], 'likelihood': [ 'B->D::f_++f_0@HPQCD:2015A', 'B->D::f_++f_0@FNAL+MILC:2015B' ] } prior = eos.Analysis(**analysis_args) \end{lstlisting} Next we create two observables: the semimuonic branching ratio and the semitauonic branching ratio. By using \object{prior.parameters} in the construction of these observables, we ensure that our observables and the \object{prior} share the same parameter set. This means that changes to \object{prior.parameters} will affect the evaluation of both observables. \begin{lstlisting}[language=iPython, caption={% Produce samples of the prior and prior-predictive samples for two observables. \label{lst:usage:prior-samples-int} }% ] obs_mu = eos.Observable.make('B->Dlnu::BR', prior.parameters, eos.Kinematics({'q2_min': 0.02, 'q2_max': 11.60}), eos.Options({'l':'mu', 'form-factors':'BSZ2015'})) obs_tau = eos.Observable.make('B->Dlnu::BR', prior.parameters, eos.Kinematics({'q2_min': 3.17, 'q2_max': 11.60}), eos.Options({'l':'tau','form-factors':'BSZ2015'})) observables = (obs_mu, obs_tau) parameter_samples, _, observable_samples = prior.sample(N=5000, pre_N=1000, observables=observables) \end{lstlisting} In the above, we provide the option \code{'form-factors': 'BSZ2015'} to ensure that the form factor plugin corresponds to the set of parameters that are described by \object{prior}. Sampling from the natural logarithm of the prior PDF and -- at the same time -- producing prior-predictive samples of both observables is achieved using the \method{eos.Analysis}{sample} method. This method runs one Markov chain using the \texttt{pypmc}\xspace package, and it is discussed in more detail in \refsec{usage:inference}. Here \code{N=5000} samples of both the parameter set and the observable set are produced, and we discard the values of the log prior for each parameter sample by assigning the return value to \code{_}. Note that the production of posterior-predictive samples is achieved in the same way. The distinction between a prior PDF and a posterior PDF is entirely a semantic one.\\ To illustrate the prior-predictive samples we use \texttt{EOS}\xspace' plotting framework: \begin{lstlisting}[% language=iPython,% caption={% Histogram prior-predictive samples of two observables. The output is shown in the left plot \refout{usage:prior-prediction}. \label{lst:usage:plot-prior-prediction-int}\index{eos.Plotter!plot} }% ] plot_args = { 'plot': { 'x': { 'label': r'$d\mathcal{B}/dq^2$', 'range': [0.0, 3e-2] }, 'legend': { 'location': 'upper center' } }, 'contents': [ { 'label': r'$\ell=\mu$', 'type': 'histogram', 'bins': 30, 'data': { 'samples': observable_samples[:, 0] } }, { 'label': r'$\ell=\tau$','type': 'histogram', 'bins': 30, 'data': { 'samples': observable_samples[:, 1] } }, ] } eos.plot.Plotter(plot_args).plot() \end{lstlisting} The arithmetic mean and the variance of the samples can be determined with standard techniques, e.g., using the \texttt{NumPy}\xspace routines \code{numpy.average} and \code{numpy.var}.\\ A further recurring task is to produce and plot uncertainty bands for differential observables. Here, we use the differential branching ratios for the previously discussed semimuonic and semitauonic decays. Using \texttt{EOS}\xspace we approach this task by creating two lists of observables. The first list includes only the $\bar{B}\to D\mu^-\bar\nu$ at various points in its phase space. Due to the strong dependence of the branching ratio on $q^2$, we do not distribute the points equally across the full phase space. Instead, we equally distribute half of the points in the interval $[0.02\,\ensuremath{\mathrm{GeV}}^2, 1.00\,\ensuremath{\mathrm{GeV}}^2]$ and the other half in the remainder of the phase space. The second list is constructed similarly for $\bar{B}\to D\tau^-\bar\nu$. We then pass these lists to \method{eos.Analysis}{sample}, to obtain prior-predictive samples of the observables: \begin{lstlisting}[% language=iPython,% caption={% Produce prior-predictive samples for the differential $\bar{B}\to D\lbrace \mu^-,\tau^-\rbrace \bar\nu$ branching ratios at various points in their respective phase spaces. The results are used in \reflst{usage:plot-prior-prediction-diff} to produce the output shown in the right plot of \refout{usage:prior-prediction}. \label{lst:usage:prior-samples-diff} }% ] mu_q2values = numpy.unique(numpy.concatenate((numpy.linspace(0.02, 1.00, 20), numpy.linspace(1.00, 11.60, 20)))) mu_obs = [eos.Observable.make( 'B->Dlnu::dBR/dq2', prior.parameters, eos.Kinematics(q2=q2), eos.Options({'form-factors': 'BSZ2015', 'l': 'mu'})) for q2 in mu_q2values] tau_q2values = numpy.linspace(3.17, 11.60, 40) tau_obs = [eos.Observable.make( 'B->Dlnu::dBR/dq2', prior.parameters, eos.Kinematics(q2=q2), eos.Options({'form-factors': 'BSZ2015', 'l': 'tau'})) for q2 in tau_q2values] _, _, mu_samples = prior.sample(N=5000, pre_N=1000, observables=mu_obs) _, _, tau_samples = prior.sample(N=5000, pre_N=1000, observables=tau_obs) \end{lstlisting} We plot the so-obtained prior-predictive samples with \texttt{EOS}\xspace's plotting framework: \begin{lstlisting}[% language=iPython,% caption={% Plot the previously obtained prior-predictive samples. The production of the samples is achieved in \reflst{usage:prior-samples-diff}, and the output is shown in the right plot of \refout{usage:prior-prediction}. \label{lst:usage:plot-prior-prediction-diff}\index{eos.Plotter!plot} }% ] plot_args = { 'plot': { 'x': {'label': r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 11.60] }, 'y': {'label': r'$d\mathcal{B}/dq^2$', 'range': [0.0, 5e-3] }, 'legend': { 'location': 'upper center' } }, 'contents': [ { 'label': r'$\ell=\mu$', 'type': 'uncertainty', 'range': [0.02, 11.60], 'data': { 'samples': mu_samples, 'xvalues': mu_q2values } }, { 'label': r'$\ell=\tau$','type': 'uncertainty', 'range': [3.17, 11.60], 'data': { 'samples': tau_samples, 'xvalues': tau_q2values } }, ] } eos.plot.Plotter(plot_args).plot() \end{lstlisting} \begin{joutput}[t] \centering \includegraphics[width=0.48\linewidth]{figures/predictions/prior-prediction-int.pdf} \includegraphics[width=0.48\linewidth]{figures/predictions/prior-prediction-diff.pdf} \caption{% Plot of the branching ratios of $\bar{B}\to D\lbrace \mu^-,\tau^-\rbrace\bar\nu$. Left: prior-predictive samples for the integrated branching ratios obtained from the code in \reflst{usage:prior-samples-int}. Right: differential branching ratios as functions of $q^2$. The central curves are obtained from \reflst{usage:BtoDlnu:BR}. The uncertainty bands are obtained from the samples obtained in \reflst{usage:prior-samples-diff} using the plotting code in \reflst{usage:plot-prior-prediction-diff}. } \label{out:usage:prior-prediction} \end{joutput} \subsection{Parameter Inference} \label{sec:usage:inference} [\textit{The example developed in this section can be ran interactively from the example notebook for parameter inference available from ref.~\cite{EOS:repo}, file \href{https://github.com/eos/eos/tree/v1.0/examples/inference.ipynb}{examples/inference.ipynb}}]\\ \texttt{EOS}\xspace infers parameters from a database of experimental or theoretical constraints in combination with its built-in observables. This section illustrates how to construct an \class{eos.Analysis} object that represents the statistical analysis and to infer the best-fit point and uncertainties of a list of parameters through optimization and Monte Carlo methods. We pick up the example introduced in \refsec{basics:analysis} to illustrate the above-mentioned features of \texttt{EOS}\xspace. In particular, we use the two experimental constraints \code{B^0->D^+e^-nu::BRs@Belle:2015A} and \code{B^0->D^+mu^-nu::BRs@Belle:2015A}, to infer the value of the \ac{CKM}\xspace matrix element $|V_{cb}|$. \subsubsection{Defining the Statistical Analysis} \label{sec:usage:analysis} To define our statistical analysis for the inference of $|V_{cb}|$ from measurements of the $\bar{B}\to D\ell^-\bar\nu$ branching ratios, some decisions are needed. First, we must decide how to parametrize the hadronic form factors that describe semileptonic $\bar{B}\to D$ transitions. For what follows we will use the parametrization of Ref. \cite{Straub:2015ica}, referred to as \code{[BSZ:2015A]}. Next, we must decide the theory input for the form factors. For this, we will combine the correlated lattice QCD results published by the Fermilab/MILC and HPQCD collaborations in 2015 \cite{Na:2015kha,MILC:2015uhg}. The corresponding \object{eos.Analysis} object is shown in \reflst{basics:analysis:definition}; it has been used previously as an example in \refsec{basics:analysis}. The global options ensure that our choice of form factor parametrization is used throughout, and that for \ac{CKM}\xspace matrix elements the \code{CKM} model is used. The latter provides parametric access to the $V_{cb}$ matrix element through two objects of type \object{eos.Parameter}: the absolute value \code{CKM::abs(V_cb)} and the complex phase \code{CKM::arg(V_cb)}. The latter is not accessible from $b\to c\ell\bar\nu$. We also set the starting value of \code{CKM::abs(V_cb)} to a sensible value of $42 \@ifstar\@@ng\@ng*{\cdot} 10^{-3}$ \textit{via} \begin{lstlisting}[language=iPython] analysis.parameters['CKM::abs(V_cb)'].set(42.0e-3) \end{lstlisting} \begin{joutput}[t] \resizebox{\textwidth}{!}{% \begin{tabular}{ll} \toprule parameter & value \\ \midrule $|V_{cb}|$ & 0.0422 \\ \texttt{B->D::alpha\^{}f+\_0@BSZ2015} & 0.6671 \\ \texttt{B->D::alpha\^{}f+\_1@BSZ2015} & -2.5314 \\ \texttt{B->D::alpha\^{}f+\_2@BSZ2015} & 4.8813 \\ \texttt{B->D::alpha\^{}f0\_1@BSZ2015} & 0.2660 \\ \texttt{B->D::alpha\^{}f0\_2@BSZ2015} & -0.8410 \\ \bottomrule \end{tabular} % \hspace*{1em} % \begin{tabular}{lll} \toprule constraint & $\chi^2$ & d.o.f. \\ \midrule \texttt{B->D::f\_++f\_0@HPQCD:2015A} & 3.4847 & 7 \\ \texttt{B->D::f\_++f\_0@FNAL+MILC:2015B} & 3.1016 & 5 \\ \texttt{B\^{}0->D\^{}+e\^{}-nu::BRs@Belle:2015A} & 11.8206 & 10 \\ \texttt{B\^{}0->D\^{}+mu\^{}-nu::BRs@Belle:2015A} & 5.2242 & 10 \\ \bottomrule \end{tabular} % \hspace*{1em} % \begin{tabular}{ll} \toprule total $\chi^2$ & 23.6310 \\ total degrees of freedom & 26 \\ p-value & 59.7053\% \\ \bottomrule \end{tabular} } \caption{% Display of the best-fit point and goodness-of-fit summary obtained from optimizing the the $\bar{B}\to D\ell^-\bar\nu$ analysis shown in \reflst{basics:analysis:definition}. } \label{out:inference:bfpgof} \end{joutput} To maximize the (logarithm of the) posterior density we can call the \method{eos.Analysis}{optimize} method, as shown in \reflst{inference:bfpgof}. In a \texttt{Jupyter}\xspace notebook, it is useful to display the return value of this method, which illustrates the best-fit point. Further useful information is contained in the goodness-of-fit summary. The latter lists each constraint, its degrees of freedom, and its $\chi^2$ value (if applicable\footnote{% Note that \texttt{EOS}\xspace supports likelihood functions that do not have a $\chi^2$ test statistic or any test statistic at all. }), alongside the $p$-value for the entire likelihood. \begin{lstlisting}[% language=iPython,% caption={% Optimize the posterior density and provide the best-fit point and goodness-of-fit summary. The output is shown in \refout{inference:bfpgof}. \label{lst:inference:bfpgof} }% ] bfp = analysis.optimize() display(bfp) display(analysis.goodness_of_fit()) \end{lstlisting} Instead of setting individual parameters to sensible values as we did for \code{CKM::abs(V_cb)} earlier, a starting point can alternatively be provided to \method{eos.Analysis}{optimize} using the \code{start_point} keyword argument. The maximization of the posterior by means of \method{eos.Analysis}{optimize} uses \texttt{SciPy}\xspace's \pyclass{optimize} module~\cite{scipy}. The default optimization algorithm is the Sequential Least SQuares Programming (SLSQP). Other algorithms can be selected and configured through keyword arguments that \method{eos.Analysis}{optimize} forwards to \pyclass{scipy.optimize}.\\ To interface with optimizers other than available within \texttt{SciPy}\xspace, \texttt{EOS}\xspace provides the \method{eos.Analysis}{log\_pdf} method. As its first argument, it expects the list of the parameter values. The parameters' ordering must correspond to the ordering of \object{analysis.varied_parameters}, and each parameter's values must be rescaled to the interval $[-1, +1]$, where the boundaries correspond to the minimal/maximal value in the prior specification. \subsubsection{Importance Sampling of the Posterior} To sample from the posterior, \texttt{EOS}\xspace provides the \method{eos.Analysis}{sample} method. Optionally, this can also produce posterior-predictive samples for a list of observables. We can use these samples to illustrate the results of our fit in relation to the experimental constraints.\\ For this example, we produce such posterior-predictive samples for the differential $\bar{B}\to D^+\mu^-\bar\nu$ branching ratio in the 40 points of the kinematic variable $q^2$ used in the previous examples (redifined in the following listing for completeness). \begin{lstlisting}[% language=iPython,% caption={% Produce posterior-predictive samples for the differential $\bar{B}\to D^+\mu^-\bar\nu$ branching ratio. \label{lst:inference:posterior_predictive_sample_distribution} }% ] mu_q2values = numpy.unique(numpy.concatenate((numpy.linspace(0.02, 1.00, 20), numpy.linspace(1.00, 11.60, 20)))) mu_obs = [eos.Observable.make('B->Dlnu::dBR/dq2', analysis.parameters, eos.Kinematics(q2=q2), eos.Options({'form-factors': 'BSZ2015', 'l': 'mu', 'q': 'd'})) for q2 in mu_q2values] parameter_samples, log_weights, mu_samples = analysis.sample(N=20000, stride=5, pre_N=1000, preruns=5, start_point=bfp.point, observables=mu_obs) \end{lstlisting} In the above we start sampling at the best-fit point as obtained earlier through optimization, which is optional. We carry out 5 burn-in runs/preruns of 1000 samples each. The samples obtained in each of these preruns are used to adapt the Markov chain but are then discarded. The main run produces a total of \code{N * stride = 100000} random Markov Chain samples. The latter are thinned down by a factor of \code{stride = 5} to obtain \code{N = 20000} samples, which are stored in \code{parameter_samples}. The thinning reduces the autocorrelation of the samples. The values of the log(posterior) are stored in \code{log_posterior}. The posterior-predictive samples for the observables are stored in \code{e_samples}, and are only returned if the observables keyword argument is provided.\\ We can now illustrate the posterior samples either as a histogram or as a \ac{KDE}\xspace using the built-in plotting functions, see \refout{inference:posterior-sample-hist+kde} and \reflst{plot-ex:inference:posterior-sample-hist}. Contours at given levels of posterior probability, as shown in \refout{inference:posterior-sample-hist+kde}, can be obtained for any pair of parameters using \reflst{plot-ex:inference:posterior-sample-kde}.\\ \begin{joutput}[t] \centering \includegraphics[width=0.48\linewidth]{figures/inference/posterior-sample-hist.pdf} \includegraphics[width=0.48\linewidth]{figures/inference/posterior-sample-kde.pdf} \caption{% Distribution of samples (left) of the 1D-marginal posterior of $|V_{cb}|$ as a regular histogram and as a kernel density estimate (blue line); and (right) of the 2D-marginal joint posterior of $|V_{cb}|$ and $f^{\bar{B}\to D}_+(0)$ as contours at $68\%$ and $95\%$ probability (orange lines and filled areas). The plots are produced by \reflst{plot-ex:inference:posterior-sample-hist} and \reflst{plot-ex:inference:posterior-sample-kde}, respectively. } \label{out:inference:posterior-sample-hist+kde} \end{joutput} Sampling with the Metropolis-Hastings algorithm is known to work well for unimodal densities. However, in cases of multimodal densities or blind directions, problems regularly arise. \texttt{EOS}\xspace provides the means to follow the approach of ref.~\cite{2013arXiv1304.7808B}, which proposes to use (potentially unadapted) Markov chains to explore the parameter space to initialize a Gaussian mixture density. The latter is then adapted using the Population Monte Carlo algorithm~\cite{2010MNRAS.405.2381K}, for which \texttt{EOS}\xspace uses the \texttt{pypmc}\xspace package~\cite{pypmc}. Within \texttt{EOS}\xspace, we use schematically the following approach: \begin{lstlisting}[ language=iPython,% caption={% Create a mixture density from a number of Markov chains, and adapt it to the posterior through a call to \code{eos.Analysis.sample_pmc}\index{eos.Analysis!sample\_pmc}. \label{lst:inference:sample_pmc} }% ] from pypmc.mix_adapt.r_value import make_r_gaussmix chains = [] for i in range(10): # run Markov Chains for your problem chain, _ = analysis.sample(...) # use relevant settings for your analysis in the '...' chains.append(chain) # please consult the pypmc documentation for details on the call below proposal_density = make_r_gaussmix(chains, K_g=3, critical_r=1.1) # adapt the proposal to the posterior and obtain high-quality samples analysis.sample_pmc(proposal_density, ...) # use relevant settings for your analysis in the '...' \end{lstlisting} We can visualize the posterior-predictive samples using: \begin{lstlisting}[% language=iPython,% caption={% Plot posterior-predictive importance samples for the differential $\bar{B}\to D^+\mu^-\bar\nu$ branching ratio vs. $q^2$. The result is shown in \refout{inference:posterior-prediction-diff}. \label{lst:inference:posterior_samples_uncertainties}\index{eos.Plotter!plot} }% ] plot_args = { 'plot': { 'x': { 'label': r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 11.63] }, 'y': { 'label': r'$d\mathcal{B}/dq^2$', 'range': [0.0, 5e-3] }, 'legend': { 'location': 'lower left' } }, 'contents': [ { 'label': r'$\ell=\mu$', 'type': 'uncertainty', 'range': [0.02, 11.60], 'data': { 'samples': mu_samples, 'xvalues': mu_q2values } }, { 'label': r'Belle 2015 $\ell=e,\, q=d$', 'type': 'constraint', 'color': 'C0', 'constraints': 'B^0->D^+e^-nu::BRs@Belle:2015A', 'observable': 'B->Dlnu::BR', 'variable': 'q2', 'rescale-by-width': True }, { 'label': r'Belle 2015 $\ell=\mu,\,q=d$', 'type': 'constraint', 'color': 'C1', 'constraints': 'B^0->D^+mu^-nu::BRs@Belle:2015A', 'observable': 'B->Dlnu::BR', 'variable': 'q2', 'rescale-by-width': True }, ] } eos.plot.Plotter(plot_args).plot() \end{lstlisting} Note that the use of \code{'rescale-by-width': True} converts the database's existing entry for the \emph{bin-integrated} branching ratio into the \emph{bin-averaged} branching ratio. Only that latter can be meaningfully compared with the differential branching ratio's curve. \FloatBarrier \begin{joutput}[t] \centering \includegraphics[width=0.48\linewidth]{figures/inference/posterior-prediction-diff.pdf} \caption{ Plot of the posterior-predictive importance samples for the differential $\bar{B}\to D^+\mu^-\bar\nu$ branching ratio vs. $q^2$, juxtaposed with bin-averaged measurements of the $\bar{B}\to D^+\lbrace e^-,\mu^-\rbrace\bar\nu$ branching ratio by the Belle experiment. } \label{out:inference:posterior-prediction-diff} \end{joutput} \subsection{Event Simulation} \label{sec:usage:simulation} [\textit{The example developed in this section can be run interactively from the example notebook for event simulation available from ref.~\cite{EOS:repo}, file \href{https://github.com/eos/eos/tree/v1.0/examples/simulation.ipynb}{examples/simulation.ipynb}}]\\ \texttt{EOS}\xspace contains built-in probability density functions (PDFs) from which pseudo events can be simulated using Markov chain Monte Carlo techniques. \subsubsection{Constructing a 1D PDF and Simulating Pseudo Events} The simulation of events is performed using the \method{eos.SignalPDF}{sample\_mcmc} method. For example, the construction of the one-dimensional PDF describing the $B\to D\ell\nu_\ell$ decay distribution in the variable $q^2$ and for $\ell=\mu$ leptons requires: \begin{itemize} \item the \code{q2} kinematic variable that can be set to an arbitrary starting value. \item the boundaries, \code{q2_min} and \code{q2_max}, for the phase space from which we want to sample. If needed, the phase space can be shrunk to a volume smaller than physically allowed; the normalization of the PDF will automatically adapt. \end{itemize} For $B\to D\ell\nu_\ell$, the Markov chains can self adapt to the PDF in 3 preruns with 1000 pseudo events/samples each. The simulation of \code{stride*N=250000} pseudo events/samples from the PDF, which are thinned down to \code{N=50000}, is performed with the following code: \begin{lstlisting}[% language=iPython,% caption={% Produce importance samples of the one-dimensional \code{SignalPDF} for the $\bar{B}\to D\ell^-\bar\nu$ differential branching ratio. The samples are compared to the analytic expression in the left figure of \refout{simulation:plot+histogram}, which is produced from \reflst{plot-ex:simulation:plot+histogram-1D}. \label{lst:simulation:sample-1D} }% ] rng = numpy.random.mtrand.RandomState(123456) # Defines a seeded random number generator mu_kinematics = eos.Kinematics({'q2': 2.0, 'q2_min': 0.02, 'q2_max': 11.6}) mu_options = eos.Options({'l': 'mu'}) mu_pdf = eos.SignalPDF.make('B->Dlnu::dGamma/dq2', eos.Parameters(), mu_kinematics, mu_options) mu_samples, mu_weights = mu_pdf.sample_mcmc(N=50000, stride=5, pre_N=1000, preruns=3, rng=rng) \end{lstlisting} Samples for other lepton flavors, e.g., $\ell=\tau$, require only a change of the \class{eos.Options} object to use \code{'l': 'tau'} instead and adjustment of the phase space. Similar to observables, \class{eos.SignalPDF} objects can be plotted as a function of a single kinematic variable, while keeping all other kinematic variables fixed. The fixed kinematic variables are provided as a \pyclass{dict} via the \code{kinematics} key. We show two such plots in combination with histograms of the \ac{PDF}\xspace samples in \refout{simulation:plot+histogram} (left). The output shows excellent agreement between the simulations and the respective analytic expressions for the 1D \acp{PDF}. \subsubsection{Constructing a 4D PDF and Simulating Pseudo Events} Samples can also be drawn for \acp{PDF} with more than one kinematic variable. As an example, we use the full four-dimensional \ac{PDF}\xspace for $\bar{B}\to D^*\ell\bar{\nu}$ decays. Declaration and initialization of all four kinematic variables (\code{q2}, \code{cos(theta_l)}, \code{cos(theta_d)}, and \code{phi}) is similar to the 1D case. \begin{lstlisting}[language=iPython, caption={% Produce importance samples of the four-dimensional \code{SignalPDF} for the $\bar{B}\to D^*(\to D\pi)\ell^-\bar\nu$ differential branching ratio. \label{lst:simulation:sample-4D} }% ] dstarlnu_kinematics = eos.Kinematics({ 'q2': 2.0, 'q2_min': 0.02, 'q2_max': 10.5, 'cos(theta_l)': 0.0, 'cos(theta_l)_min': -1.0, 'cos(theta_l)_max': +1.0, 'cos(theta_d)': 0.0, 'cos(theta_d)_min': -1.0, 'cos(theta_d)_max': +1.0, 'phi': 0.3, 'phi_min': 0.0, 'phi_max': 2.0 * numpy.pi }) \end{lstlisting} We then produce the samples in a similar way as for the 1D \ac{PDF}\xspace: \begin{lstlisting}[language=iPython] rng = numpy.random.mtrand.RandomState(74205) # Defines a seeded random number generator dstarlnu_pdf = eos.SignalPDF.make('B->D^*lnu::d^4Gamma', eos.Parameters(), dstarlnu_kinematics, eos.Options()) dstarlnu_samples, _ = dstarlnu_pdf.sample_mcmc(N=1e6, stride=5, pre_N=1000, preruns=3, rng=rng) \end{lstlisting} The samples of the individual kinematic variables can be accessed as the columns of the \code{dstarlnu_samples} object. We can now show correlations of the kinematic variables by plotting 2D histograms, e.g. $q^2$ vs $\cos\theta_\ell$: \begin{lstlisting}[% language=iPython,% caption={% Plot a 2D histogram for samples of the $\bar{B}\to D^*(\to D\pi)\mu^-\bar\nu$ PDF in the variables $q^2$ and $\cos(\theta_\ell)$. The samples are obtained from \reflst{simulation:sample-4D}, and the output is shown in the right plot of \refout{simulation:plot+histogram}. \label{lst:simulation:histogram-2D}\index{eos.Plotter!plot} }% ] plot_args = { 'plot': { 'x': { 'label':r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 10.50]}, 'y': { 'label':r'$cos(\theta_\ell)$', 'range': [-1.0, +1.0]} }, 'contents': [ { 'label': r'samples ($\ell=\mu$)', 'type': 'histogram2D', 'data':{ 'samples': dstarlnu_samples[:, (0,1)] }, 'bins': 40 }, ] } eos.plot.Plotter(plot_args).plot() \end{lstlisting} \begin{joutput}[t] \centering \includegraphics[width=0.48\linewidth]{figures/simulation/plot+histogram-1D.pdf} \includegraphics[width=0.48\linewidth]{figures/simulation/histogram-2D.pdf} \caption{% Left: Distribution of $B\to D\ell\nu_\ell$ events for $\ell=\mu, \tau$, as implemented in \texttt{EOS}\xspace (solid lines) and as obtained from Markov Chain Monte Carlo importance sampling (histograms). The samples are produced from \reflst{simulation:sample-1D}, and the plot is produced by \reflst{plot-ex:simulation:plot+histogram-1D}. Right: 2D histogram of the $\bar{B}\to D^*(\to D\pi)\mu^-\bar\nu$ PDF in the variables $q^2$ and $\cos(\theta_\ell)$. This output is produced by the code shown in \reflst{simulation:histogram-2D}. } \label{out:simulation:plot+histogram} \end{joutput}
\section{Introduction} The universality of critical exponents is an important and remarkably elegant property of standard second order transitions, which has been explored in great detail through the Renormalization Group Theory (RGT). The universality hypothesis states that for all systems within a universality class the critical exponents are rigorously identical and do not depend on the microscopic parameters of the model. However, universality is not strictly universal; there are known \lq\lq eccentric\rq\rq models which are exceptions and violate the universality rule in the sense that their critical exponents vary continuously as functions of a control variable. The most famous example is the eight vertex model solved exactly by Baxter \cite{baxter:71}; there are other scattered cases, all in dimension two as far as we are aware. For Ising Spin Glasses (ISGs), the form of the interaction distribution is a microscopic control parameter. It has been assumed tacitly or explicitly that the members of the ISG family of transitions obey standard universality rules, following the generally accepted statement that \lq\lq Empirically, one finds that all systems in nature belong to one of a comparatively small number of universality classes\rq\rq \cite{stanley:99}. However, we know of no formal proof that universality must hold in ISGs; it was found thirty years ago that the $\epsilon$-expansion for the critical exponents \cite{gardner:84} in ISGs is not predictive since the first few orders have a non-convergent behavior and higher orders are not known. This can be taken as an indication that a fundamentally different theoretical approach is required for spin glass transitions. Indeed "Classical tools of RG analysis are not suitable for spin glasses" \cite{parisi:01,castellana:11,angelini:13}. ISG transition simulations are much more demanding numerically than are those on, say, pure ferromagnet transitions with no interaction disorder. The traditional approach in ISGs has been to study the temperature and size dependence of observables in the near-transition region and to estimate the critical temperature and exponents through finite size scaling relations after taking means over large numbers of samples. Finite size corrections to scaling should be allowed for explicitly which can be delicate. From numerical data, claims of universality have been made repeatedly for ISGs \cite{bhatt:88,katzgraber:06,hasenbusch:08,jorg:08} even though the estimates of the critical exponents are very sensitive to the precise value of the critical temperature and have varied over the years (see Ref.~\cite{katzgraber:06} for a tabulation of historic estimates). We have estimated the critical exponents of the bimodal ISG in dimension $4$ using complementary strategies. First we use the standard finite size crossing points of the Binder cumulant and other phenomenological couplings to obtain estimates for the critical temperature $\beta_c$ through finite size scaling \cite{notebeta}. We also register the size dependence of the peaks of thermodynamic derivatives which give independent estimates for $\beta_{c}$ and for $\nu$ \cite{ferrenberg:91,weigel:09}. We finally measure the temperature dependence of the thermodynamic limit (ThL) ISG susceptibility $\chi(\beta,\infty)$ and second moment correlation length $\xi(\beta,\infty)$ over a wide temperature range. Using the scaling variable and scaling expressions appropriate for ISGs as cited below \cite{daboul:04,campbell:06} together with the optimal $\beta_{c}$ from the above measurements, we estimate the critical exponents and the confluent corrections to scaling \cite{wegner:72} from data taken over almost the entire paramagnetic temperature range. The numerical data on different ISGs in dimension $4$ show conclusively that the critical exponents depend on the form of the interaction distribution. It is relevant that it has been shown experimentally that in Heisenberg spin glasses the critical exponents depend on the strength of the Dzyaloshinski-Moriya interaction \cite{campbell:10}. \section{Ising Spin Glass simulations} The Hamiltonian is as usual \begin{equation} \mathcal{H}= - \sum_{ij}J_{ij}S_{i}S_{j} \label{ham} \end{equation} with the near neighbor symmetric distributions normalized to $\langle J_{ij}^2\rangle=1$. The Ising spins live on simple hyper-cubic lattices with periodic boundary conditions. We have studied bimodal ($\pm J$), Gaussian, and Laplacian distributions in $4$d. Here we will discuss the bimodal ISG and will compare with published measurements on two other $4$d ISGs \cite{jorg:08}. The simulations were carried out using the exchange Monte-Carlo method for equilibration, on $512$ individual samples at each size. Data were registered after equilibration for the energy $E(\beta,L)$, correlation length $\xi(\beta,L)$, for the spin overlap moments $\langle |q|\rangle$, $\langle q^2\rangle$, $\langle |q^3|\rangle$, $\langle q^4\rangle$, and for the link overlap $q_{\ell}$ moments. In addition the correlations between the energy and certain observables $\langle E\,U\rangle$ were also registered so that thermodynamic derivatives could be evaluated using the relation $\partial U/\partial \beta = \langle U\,E\rangle-\langle U \rangle\langle E\rangle$ where $E$ is the energy \cite{ferrenberg:91}. Bootstrap analyses of the errors in the derivatives as well as in the observables themselves were carried out. For the present analysis we have observed the behavior of various "phenomenological couplings", not only the familiar Binder cumulant and correlation length ratio $\xi(\beta,L)/L$ but also other observables showing critical behavior such as the kurtosis of the spin overlap distribution, the kurtosis of the absolute spin overlap distribution, and the variance and kurtosis of the link overlap distribution. Only part of these data are reported here. Near criticality in a ferromagnet the heights of the peaks of the thermodynamic derivative of many observables $\partial U(\beta,L)/\partial \beta$ scale for large $L$ as \cite{ferrenberg:91,weigel:09} \[ \lbrack\partial U(\beta,L)/\partial \beta\rbrack_{\max} \propto L^{1/\nu} \left(1+ b\, L^{-\omega/\nu}\right) \] and the temperature location of the derivative peak $\beta_{\max}(L)$ scales as $\beta_{c}-\beta_{\max}(L) \propto L^{-1/\nu} \left(1+b'\,L^{-\omega/\nu}\right)$ The observables used for $U(\beta,L)$ \cite{ferrenberg:91} can be for instance the Binder cumulant $g(\beta,L) =(3-\langle q^4\rangle/\langle q^2\rangle^2)/2$, the logarithm of the finite size susceptibility $\ln(\chi(\beta,L))$, or the logarithm of the absolute value of the spin overlap $\ln(|q|(\beta,L))$. Each of these data sets can give independent estimates of $\nu$ and $\beta_c$ without any initial knowledge of either parameter. For the present analysis we note that both the minimum of the inverse derivative $\lbrack\partial\beta/\partial U(\beta,L)\rbrack_{\min}$ and the temperature location difference $\beta_{c}-\beta_{\min}(L)$ are proportional to $L^{-1/\nu}$ to leading order. Hence $\lbrack\partial\beta/\partial U(\beta,L)\rbrack_{\min}$ plotted against $\beta_{\min}(L)$ with $L$ as an implicit variable must tend linearly to an intercept $\lbrack\partial\beta/\partial U(\beta,L)\rbrack_{\min}=0$ at $\beta_{\min} \equiv \beta_c$ for large $L$. All $\lbrack\partial\beta/\partial U(\beta,L)\rbrack_{\min}$ against $\beta_{\min}(L)$ plots should extrapolate consistently to the true $\beta_c$. Turning to spin glasses, for ISGs with symmetric interaction distributions and a non-zero $\beta_c$ a general natural scaling variable is $\tau = 1-(\beta/\beta_{c})^2$ ($w = 1-(\tanh(\beta)/\tanh(\beta_{c}))^2$ is also suitable for the bimodal case) \cite{singh:86,daboul:04,campbell:06}. In the ISG context $\beta^2$ replaces $\beta$ in the thermodynamic derivative scaling rules but otherwise the same methodology can be used as in the ferromagnet. The thermodynamic derivative analysis, which as far as we are aware has not been used previously in spin glasses, provides reliable and precise estimates for $\beta_{c}^2$. These $\beta_{c}^2$ estimates are consistent with those from the traditional crossing point approach; which method has the least sensitivity to finite size corrections depends on the individual system. The ThL SG susceptibility $\chi(\tau)$ including the leading nonanalytic confluent correction term \cite{wegner:72} can be written \begin{equation} \chi(\beta)= C_{\chi}\tau^{-\gamma}\left(1+a_{\chi}\tau^{\theta}+\cdots\right) \label{chiweg} \end{equation} where $\gamma$ is the critical exponent and $\theta$ the Wegner non-analytic correction exponent, both of which are characteristic of a universality class. Following a protocol well-established in ferromagnets \cite{kouvel:64,butera:02} one can define a temperature dependent effective ThL exponent $\gamma(\beta)= -\partial\ln\chi(\beta)/\partial\ln\tau$. $\gamma(\beta)$ tends to the critical $\gamma$ as $\beta^{2} \to \beta_{c}^2$ and to $2d\beta_{c}^2$ as $\beta^{2} \to 0$ in simple [hyper]-cubic lattices. As long as samples of finite size $L$ are in the ThL regime, $\chi(\beta,L)$, $\xi(\beta.L)$ and other observables are independent of $L$. Working in the ThL has a number of advantages: the temperatures studied are higher than the critical temperature so equilibration is facilitated, the sample to sample variations are automatically much weaker than at criticality, and there are no finite size scaling corrections to take into account although the confluent correction terms must be allowed for. In ferromagnets the ThL susceptibility and correlation length data can be fitted accurately over the entire paramagnetic temperature range \cite{campbell:08,campbell:11,lundow:11} by including just one further effective correction term $k\tau^{\lambda}$ beyond the leading non-analytic term, which bundles together all the higher order correction terms. We use this approximation also in the ISGs. Hence \begin{equation} \gamma{\beta}= \gamma - \left(a_{\chi}\theta\tau^{\theta} +k\lambda\tau^{\lambda}\right)/ \left(1+a_{\chi}\tau^{\theta} +k\tau^{\lambda}\right) \label{gamweg} \end{equation} A very effective method for analysing the ThL susceptibility data is to plot $y = \partial\beta^2/\partial\ln\chi(\beta)$ against $x = \beta^2$. With the two correction terms the expression used to fit the ThL regime data is: \begin{equation} \frac{\partial\beta^2}{\ln\chi(\beta)} = \frac{(\beta^2-\beta_c^2)(1+a_{\chi}\tau^{\theta}+k_{\chi}\tau^{\lambda_{\chi}})} {\gamma + (\gamma-\theta) a_{\chi} \tau^{\theta}+(\gamma-\lambda_{\chi})k_{\chi}\tau^{\lambda_{\chi}}} \label{dbsqdlns} \end{equation} The critical intercept $y=0$ occurs when $x=\beta_{c}^2$, and the initial slope starting at the intercept is $\partial y/\partial x =-1/\gamma$. The analogous natural scaling expression for the ISG second moment correlation length $\xi(\beta)$ is \cite{campbell:06} \begin{equation} \xi(\beta)/\beta = C_{\xi}\tau^{-\nu}\left(1+a_{\xi}\tau^{\theta}+k_{\xi}\tau^{\lambda}\right) \label{xiweg} \end{equation} with a temperature dependent effective exponent defined as $\nu(\beta) = -\partial\ln(\xi(\beta)/\beta)/\partial\ln\tau$. The reason for the factor $1/\beta$ arises from the generic form of the ISG $\xi(\beta)$ high temperature series \cite{campbell:06}. The $\beta=0$ limit in ISGs in simple hyper-cubic lattices of dimension $d$ is $\nu(\beta=0)= (d-K/3)\beta_c^2$ where $K$ is the kurtosis of the interaction distribution. The derivative corresponding to Eq.~\eqref{dbsqdlns} takes the form \begin{equation} \frac{\partial\beta^2}{\ln(\xi(\beta)/\beta)} = \frac{\beta_c^2\tau(1+a_{\xi}\tau^{\theta})}{\nu + (\nu-\theta) a_{\xi} \tau^{\theta}} \label{dbsqdlnTxi} \end{equation} with the same $\beta_c^2$ and $\theta$ as for $\chi(\beta)$. The $y=0$ intercept is again $x=\beta_c^2$, with an initial slope at the intercept equal to $\partial y/\partial x=-1/\nu$. \section{ISG transitions in dimension $4$} \begin{figure} \includegraphics[width=3.5in]{J4d_fig1.eps} \caption{(Color online) The Binder cumulant $g(\beta,L)$ for even $L$ $4$d bimodal interaction samples. Symbol coding: blue squares $L=4$, red circles $L=6$, black triangles $L=8$, down triangles $L=10$, olive diamonds $L=12$, purple left triangles $L=14$. The vertical red line corresponds to $\beta_{c}=0.505$.}\protect\label{fig:1} \end{figure} \begin{figure} \includegraphics[width=3.5in]{J4d_fig2.eps} \caption{(Color online) The $4$d bimodal ISG thermodynamic derivative peak height $[\partial \ln(\chi(L)) \partial \beta^2]_{\max}$ as a function of size $L$. Black squares : measured, red circles : fit.}\protect\label{fig:2} \end{figure} \begin{figure} \includegraphics[width=3.5in]{J4d_fig3.eps} \caption{(Color online) $\partial\beta^2/\partial\ln\chi(\beta)$ for $4$d bimodal interaction samples. Even $L$ data only are shown to avoid clutter. Symbol coding as in Fig.~\ref{fig:1}. Large navy squares are the minima locations, odd and even $L$. The full red curve is the the ThL calculated directly from HTSE \cite{daboul:04}. Blue curve: fit Eq.~\eqref{dbsqdlns}.}\protect\label{fig:3} \end{figure} \begin{figure} \includegraphics[width=3.5in]{J4d_fig4.eps} \caption{(Color online) $\partial\beta^2/\partial\ln(\xi(\beta)/\beta)$ for $4$d bimodal interaction samples. Even $L$ data only are shown to avoid clutter. Symbol coding as in Fig.~\ref{fig:1}. Large navy squares are the minima locations, odd and even $L$. Blue curve: fit Eq.~\eqref{dbsqdlnTxi}.}\protect\label{fig:4} \end{figure} \begin{figure} \includegraphics[width=3.5in]{J4d_fig5.eps} \caption{(Color online) The effective exponent $2-\eta(\beta,L)$ Eq.~\eqref{meta} for $4$d bimodal interaction samples. Even $L$ data only are shown to avoid clutter. Symbol coding as in Fig.~\ref{fig:1}. Red curve: fit through the ThL regime data. Red arrow : exact high temperature limit. The estimate of Ref.~\cite{banos:12} for the critical value $2-\eta(\beta_{c})$ is $2.320(13)$.}\protect\label{fig:5} \end{figure} High precision simulation measurements have been published on the $4$d Gaussian ISG, and on a $4$d bimodal ISG with diluted interactions ($65\%$ of the interactions having $J=0$) \cite{jorg:08}. The critical temperature for the 4d Gaussian ISG was estimated from Binder parameter and correlation length ratio measurements to be $\beta_{c}^2 = 0.307(3)$ in full agreement with earlier simulation estimates $0.308(3)$ \cite{parisi:96,ney:98} and with the HTSE estimate $\beta_{c}^2 = 0.314(4)$. The simulations gave essentially identical exponents for the two systems $\nu =1.02(2)$ $\eta = -0.275(25)$ so indirectly $\gamma = 2.32(8)$. Present data on the $4$d Gaussian (not shown) using the thermal derivative analysis as above lead to a $\beta_c^2$ in full agreement with that of Ref~\cite{jorg:08}, and $\gamma= 2.36(3)$, with very weak corrections. It seems very reasonable to assume that if the present procedure were applied to the diluted bimodal system, it would confirm the conclusions of Ref.~\cite{jorg:08} for that system also. It can be noted that for this system the finite size correction to scaling in the Binder cumulant is so small as to be unobservable. For the $4$d bimodal ISG the HTSE critical temperature and exponent estimates are \cite{daboul:04} $\beta_{c}^2 = 0.26(2)$, $\gamma = 2.5(3)$, and $\theta \sim 1.5$. From extensive domain wall free energy measurements to $L = 10$ Hukushima gave an estimate $\beta_{c}^2 = 0.25(1)$ \cite{hukushima:99}. Inspection of the raw data show strong finite size corrections; extrapolation to larger $L$ leads to an infinite size limit definitely greater than $0.25$. From early simulation measurements up to $L = 10$ a critical temperature $\beta_{c}^2 = 0.243(7)$ was estimated \cite{marinari:99} using the Binder parameter crossing point criterion. However, finite size corrections to scaling were not allowed for. Recent simulations up to $L=16$ \cite{banos:12} show large $L$ Binder cumulant crossings up to $\beta^2 =0.252$. Our present Binder cumulant data up to $L=14$ show very similar results, Figure 1. In this figure crossing points for the largest $L$ can be seen to cluster around $\beta^2 =0.255$, which provides a lower limit on $\beta_{c}^2$. In addition, it can be seen that the value of the cumulant at the present high $L$ crossing points is $g_{cross}(\beta^2) = 0.520(3)$, which is a lower limit on the infinite size critical $g_{c}(\beta_{c}^2)$. In comparison $g_{c}(\beta_{c}^2)$ has been estimated at $0.470(5)$ and $0.472(2)$ for the $4$d Gaussian and diluted bimodal ISG respectively \cite{jorg:08}. The bimodal critical crossing point is thus 25 standard deviations above the diluted bimodal critical value. As the critical $g_c(\beta_{c}^2)$ is, for a given geometry, a parameter characteristic of a universality class, the bimodal and diluted bimodal systems are not in the same universality class; neither are the bimodal and the Gaussian ISGs. We will now turn to the inverse susceptibility derivative data, Figures 2, 3 and 4. In Figure 2 the thermodynamic derivative peak height $y(L) = [\partial \ln(\chi(\beta^2,L))/ \partial \beta^2]_{\max}$ is fitted by $y(L) = 11.0L^{1/\nu}(1-0.95L^{\theta/\nu})$ with $\nu = 1.20(2)$ and $\theta$ fixed at $1.75$, from ThL data discussed below. The value of the estimate for $\nu$ requires no information on the critical temperature $\beta_{c}$. It is significantly higher than the estimate $\nu = 1.068(7)$ given by Ref.~\cite{banos:12}. In Figure 3 the $[\partial \beta^2/\partial\ln\chi(\beta^2,L)]_{\min}$ points for different $L$ have a straight line limit at large $L$ which tends to an intercept at $x = 0.258(1)$. Derivative minima plots of the same type for other observables (not shown) confirm this value for $\beta_{c}^2$. These estimates are very reliable as they come from parameter free straight line limit fits. The estimate of Ref.~\cite{banos:12} is $\beta_{c}^2 = 0.2523(6)$; this value is sensitive to the estimate for the correction to scaling exponent. The curves for individual $L$ and the HTSE curve all lie on a size independent ThL envelope curve to the left of the figure. A satisfactory fit passing through all the ThL data and the critical point can be made with a single correction term only. The optimal fit parameters are $\gamma = 3.05(5)$, $\theta = 1.75(5)$, $a_{\chi} = 1.40(3)$. A similar analysis made on the correlation length data, Figure 4, provides a completely consistent estimate for $\beta_{c}^2$ from the linear variation of the minima points. The ThL data are fitted with the parameters $\nu =1.20(3)$ and $a_{\xi}=0.17(2)$ together with the same $\theta = 1.75$ as for the susceptibility fit. An estimate of the exponent $\eta$ can be made from a plot of \begin{equation} 2 -\eta(\beta,L) = \partial \ln(\chi(\beta,L)) / \partial \ln(\xi(\beta,L)/\beta) \label{meta} \end{equation} against $\beta/\xi(\beta,L)$, Figure 5. Extrapolating the ThL regime data to criticality at $\beta/\xi(\beta,L)=0$ leads to a direct estimate $\eta = -0.48(3)$ without needing any assumption concerning $\beta_{c}^2$ or finite size corrections. The data clearly show that the estimate $\eta = -0.320(13)$ of Ref.~\cite{banos:12} is low. The present exponents are reliable and much more accurate than the HTSE estimates principally because the uncertainty in $\beta_{c}^2$ is reduced by a factor of more than $10$ thanks principally to the thermal derivative peak simulation data. The exponents can be compared to the values found \cite{jorg:08} for the $4$d Gaussian and diluted bimodal systems which were almost identical to each other : $\gamma=2.32(8)$ and $\nu= 1.02(2)$ for the Gaussian and $\gamma =2.33(6)$, $\nu = 1.025(15)$ for the diluted bimodal. The critical exponents of the $4$d bimodal ISG are quite different from those of the $4$d Gaussian and diluted bimodal ISGs. \section{Conclusions} Simulations on the $4$d bimodal ISG up to size $L=14$ provide numerical data on finite size scaling observables, on the ISG susceptibility and on the correlation length. The critical temperature $\beta_{c}$ derived from the simulation data using a thermodynamic derivative technique \cite{ferrenberg:91} is in full agreement with, but is considerably more precise than, the estimate from HTSE alone \cite{daboul:04}. Because of the analysis techniques used it is also more reliable than previous numerical estimates. Data in the thermodynamic limit regime were analysed to obtain considerably improved critical exponent $\gamma$, $\nu$ and $\theta$ estimates together with the strengths of leading confluent correction terms. The accurate estimates of $\gamma$ and $\nu$ and the critical value of the Binder cumulant show that the $4$d bimodal ISG is in a different universality class from the $4$d Gaussian or diluted bimodal ISGs \cite{jorg:08}. Other results on ISGs in dimension $4$ \cite{lundow} and in dimension $5$ \cite{lundow:13a} confirm that spin glasses with different interaction distributions have different critical exponents. These results clearly demonstrate that the standard RGT universality rules do not apply in ISGs. \section{Acknowledgements} We are very grateful to Koji Hukushima for comments and communication of unpublished data. We thank Amnon Aharony for constructive criticism. The computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the High Performance Computing Center North (HPC2N).
\section{Introduction} This article will describe a set of physical assumptions which are sufficient for a semiclassical gravitational theory to obey the generalized second law (GSL) of thermodynamics \cite{hawking75}. From these physical assumptions, a proof of the GSL will be given for rapidly evolving matter fields and arbitrary horizon slices. This shows that the GSL holds in differential form, i.e. the entropy is increasing at each spacetime point on the horizon. As far as I am aware, this is the first time such a general proof of the GSL has been given. The GSL appears to hold on any causal horizon, i.e. the boundary of the past of any future infinite worldline \cite{JP03}. Causal horizons include black hole event horizons, as well as Rindler and de Sitter horizons. The GSL states that on any horizon, the total entropy of fields outside the horizon, plus the total entropy of the horizon itself, must increase as time passes. This total increasing quantity is known as the generalized entropy. More precisely, for any complete spatial slice $\Sigma$ intersecting the horizon $H$, the generalized entropy of $\Sigma$ is given by \begin{equation} S_\mathrm{H} + S_\mathrm{out}. \end{equation} In general relativity, the horizon entropy is proportional to the area\footnote{Because $S_\mathrm{out}$ is a c-number, for consistency it is necessary to interpret $S_\mathrm{H}$ as a c-number as well. In this article, this will be done by taking the semclassical approximation, in which the area $A$ is a classical quantity, sourced by the expectation value of a quantum operator. However, this semiclassical approximation can only be an approximation to the true quantum gravity theory, in which the area $A$ becomes an operator. In Ref. \cite{10proofs} I argued that one should then interpret $A$ as being the expectation value of the quantum area operator.}): \begin{equation} S_\mathrm{H} = \frac{A}{4\hbar G}|_{\Sigma\,\cap\,H}. \end{equation} The second term is the von Neumann entropy of the matter fields restricted to the region outside of the horizon: \begin{equation} S_\mathrm{out} = -\mathrm{tr}(\rho\,\ln\,\rho)|_{\Sigma\,\cap\,I^{-}(H)}. \end{equation} However, this outside entropy term has an ultraviolet divergence at the horizon due to the entanglement entropy of fields at very short distances. So to define the generalized entropy, some kind of renormalization scheme must be employed to subtract off these divergences (cf. section \ref{ren}). Historically, the laws of thermodynamics for matter have provided substantial clues about the microscopic statistical mechanics of atomic systems. It seems probable that the GSL will provide similar insight into the statistical mechanics of spacetime itself \cite{sorkin83}. Because quantum gravity is currently outside of our experimental range of detection, any help which can be obtained from the GSL would be very useful. The GSL is especially evocative because of how surprising it is: it essentially says that an apparently open system (the exterior of the horizon) behaves in roughly the way that we would expect a closed thermodynamic system to behave. There are several different claims that in order for the GSL to be true, certain restrictions must hold even semiclassically on e.g. bounds on the entropy and/or number of particle species proposed by Bekenstein \cite{bek81b}, Bousso \cite{bousso02b}, or Dvali \cite{DS08}, bounds on the fine structure constant \cite{davies08}, the unbrokenness of the Lorentz group \cite{EFJW07}, and/or energy conditions \cite{anec}. If true, these claims hint at important restrictions on any good theory of quantum gravity. (However, in my opinion, only the last two of these claims have been clearly established.) One way to test these proposed requirements is by proving the GSL, and thus seeing explicitly what assumptions are necessary. Once we know what key assumptions are necessary for the GSL to hold semiclassically, we will be in a better position to guess background-free constructions of quantum gravity based on thermodynamic principles. Until recently, there were satisfactory proofs of the semiclassical GSL only in the `quasi-steady' case in which the fields falling into the black hole are slowly changing with time \cite{10proofs}. One such quasi-steady argument was the illuminating but incomplete proof by Sorkin \cite{sorkin98} (reviewed in Ref. \cite{10proofs}). Sorkin considered the case of a physical process $\mathcal{P}$ (which may involve information loss), with the property that a thermal state \begin{equation} \sigma = \frac{e^{-\beta H}}{Z} \end{equation} evolves to itself under the process: \begin{equation} \mathcal{P}(\sigma) = \sigma. \end{equation} He then invoked a theorem saying that whenever this happens, the free energy of any other state $\rho$ cannot increase under the same time evolution: \begin{equation} (\langle H \rangle - TS)_\rho \ge (\langle H \rangle - TS)_{\mathcal{P}(\rho)} \end{equation} The free energy can then be related to the generalized entropy using the ``first law'' of horizon thermodynamics \begin{equation} dE = T\,dS_{\mathrm{H}} \end{equation} (which applies only to slowly changing horizons). Unfortunately, the proof founders when applied to black holes \cite{10proofs}, since the state outside the black hole could only be shown to be thermal outside of the bifurcation surface, while a nontrivial application of the GSL requires time evolution from one slice of the horizon to another slice. Furthermore the Hartle-Hawking thermal state exists only for nonrotating black holes, so there are even worse problems in applying the proof to Kerr black holes. My previous proof in Ref. \cite{rindler} side-stepped these problems for the special case of (perturbed) Rindler wedges evolving to other Rindler wedges. In this case it was possible to show that the GSL holds semiclassically even for rapid changes to the horizon, at every instant of time, using a reasonable assumption about the renormalization properties of $S_\mathrm{out}$. However, this proof was limited to Rindler horizons sliced by flat planes; it was unable to reach de Sitter space, black holes, or even arbitrary slices of Rindler horizons. The basic problem is that the proof requires not only a boost symmetry of each wedge (in order to show that the state restricted to the wedge is thermal), it also needs a null translation symmetry (so that there will be multiple thermal wedges). But this is more symmetry than is possessed by most spacetimes with stationary horizons. In this article I will generalize the proof to (semiclassical perturbations of) arbitrary slices $\Sigma$ of the future horizon $H$. The new ingredient is the technique of restricting the quantum fields to a null hypersurface. In particular (at least for free fields) there is an infinite dimensional symmetry group due to the freedom to reparameterize each horizon generator separately \cite{schroer09}.\footnote{This group is isomorphic to the subgroup of the Bondi-Metzner-Sachs group which preserves horizon generators.} This symmetry will play an important role in the proof of the GSL in section \ref{proofon}. Restriction to a null surface is helpful for solving a variety of quantum field theory problems, e.g. deep inelastic scattering in QCD, because of the insight it gives into the quantum vacuum \cite{burkardt96}. The technique was used by Sewell to derive the Hawking effect in a very illuminating way \cite{sewell82}. More recently, it has also been used as a simple way to characterize quantum fields on Schwarzschild past horizons \cite{DMP09} and future horizons \cite{MP03}, certain past cosmological horizons \cite{DMP08}, 1+1 Rindler horizons \cite{MP04}, de Sitter horizons \cite{pinamonti05} and the conformal boundary of asymptotically flat spacetimes \cite{moretti05}.\footnote{Some of this work refers to this principle of restricting to a null surface by the name of ``holography'', because the null surface has one less dimension than the rest of the spacetime. But this use of the term is somewhat misleading when compared with the normal usage in quantum gravity, in which it refers to the ability to determine spacetime data from a codimension 2 surface. Holography in this latter sense should normally only arise when gravitational effects are taken into account.} The algebra of observables $\mathcal{A}(H)$ on the horizon plays an important role in the proof: it is required to exist and satisfy four axioms described in section \ref{sym}. In the case of free fields and 1+1 conformal field theories, it will be shown that there exists a horizon algebra satisfying these axioms. In the case of general interacting quantum field theories, the restriction of the fields to a null hypersurface is a more delicate matter. Nevertheless, there are reasons to believe that interacting field theories also satisfy the axioms. At least at the level of formal perturbation theory, the horizon algebra is completely unaffected by the addition of certain kinds of interactions, including both nonderivative couplings, and nonabelian Yang-Mills interactions. However, renormalization effects can lead to the introduction of additional higher derivative couplings, as well as infinite field strength renormalization. Because of these issues, it is not completely clear whether general interacting field theories have a null hypersurface formulation. Some arguments for and against will be given in section \ref{nonpert}. The plan of this article is as follows: Section \ref{arg} will outline the physical assumptions used to prove the GSL, and show why the GSL follows from them. Section \ref{scalar} will describe in detail the null hypersurface formulation for a free scalar field. Section \ref{spin} will generalize these results to free spinors, photons, and gravitons. Section \ref{int} will discuss what happens when interactions are included. Conventions: The metric signature will be plus for space and minus for time. On the horizon, $y$ is a system of $D-2$ transverse coordinates which is constant on each horizon generator, $\lambda$ is an affine parameter on each horizon generator, and $k^a$ points along each horizon generator and satisfies $k^a \nabla_a \lambda = 1$. When moving off the horizon, $u$ will be a null coordinate such that the horizon is located at $u = 0$, and $v$ will be a null coordinate which satisfies $v = \lambda$ on the horizon, such that the metric on the horizon is \begin{equation} ds^2 = -du\,dv + h_{ij} dy^i dy^j. \end{equation} To reduce clutter, I will use the notation $v^a X_a \equiv X_v$. \section{Argument for the GSL}\label{arg} \subsection{Outline of Assumptions}\label{outline} In order to prove the GSL, I need to make three basic physical assumptions: \begin{enumerate} \item \textbf{Semiclassical Einstein Gravity.} The proof will apply to the semiclassical regime (section \ref{sreg}), in which all physical effects can be controlled by an expansion in $\hbar G / \lambda^2$, where $\lambda$ is the characteristic de Broglie wavelength of the matter fields. This expansion is valid when $\lambda \gg L_\mathrm{planck}$. By holding $\lambda$ and $G$ fixed, one can regard this as an expansion in $\hbar$. The leading order physics is given by quantum field theory on a fixed classical spacetime. However, at higher orders in $\hbar$ there are perturbations to the spacetime metric due to gravitational back-reaction. These perturbations affect the horizon area $A$ at $\mathcal{O}(\hbar^1)$, and therefore affect $S_\mathrm{H}$ at $\mathcal{O}(\hbar^0)$. At this order, the gravitational backreaction will be treated as a c-number, and will be calculated using the semiclassical Einstein equation $G_{ab} = 8\pi G \langle T_{ab} \rangle$. It will also be assumed that the matter is minimally coupled to the metric. \item \textbf{The Existence of a Null Hyperspace Formalism.} Ignoring the backreaction, matter is described by a quantum field theory on the background spacetime. The interesting case is when the horizon is stationary (for example, a Killing horizon plus nonstationary matter far from the horizon); otherwise the GSL reduces to the classical Area Law (cf. section \ref{sreg}). In this case, the QFT which describes matter must have a null hypersurface formulation, i.e. there must be a nontrivial algebra of operators $\mathcal{A}(H)$ corresponding to fields restricted to the horizon itself. This algebra must satisfy four axioms (section \ref{sym}): \textit{Determinism} means that all information outside of the horizon can be predicted from the horizon algebra $\mathcal{A}(H)$ together with the algebra $\mathcal{A}(\mathcal{I}^+)$ at future null infinity. \textit{Ultralocality} means that the fields on different horizon generators are independent, so that the algebra $\mathcal{A}(H)$ tensor-factorizes for spatially-disjoint open subsets in the transverse $y$-directions.\footnote{This is a stronger statement than Microcausality, the assertion that all commutators vanish at spacelike separation. For example, Ultralocality implies that in the vacuum state, all $n$-point functions of the fields vanish at spacelike separations. This property may be surprising at first to those familiar with canonical quantization of fields on spacelike surfaces. However, for free fields on a null surface it obtains because there are no derivatives in the formulae for the null stress-energy $T_{kk}$ or the commutators of fields.} (Because the fields are distributions it is still necessary to smear them in the transverse directions to obtain well-defined operators.) \textit{Local Lorentz Symmetry} means that the degrees of freedom on each horizon generator are symmetric under translations and boosts. And \textit{Stability} is the requirement that the fields on each horizon generator have positive energy with respect to the null translation symmetry. (These four axioms will be explicitly shown for free QFT's in section \ref{scalar}-\ref{spin}.) In the case of a free field $\phi$, this algebra can contain operators that depend on the pullback of $\phi$ to the horizon $\phi(u = 0)$, but not on e.g. the derivative moving away from the horizon $\nabla_u \phi(u = 0)$. For this definition, all four axioms will be shown to hold for fields with various spins (sections \ref{scalar}-\ref{spin}). But in the case of interacting fields, it is not clear which operator(s) should be regarded as the fundamental field. In this case it will simply be taken as an assumption that there exists some algebra $\mathcal{A}(H)$ satisfying these properties. Some tentative arguments for and against this assumption will be discussed in section \ref{int}. \item \textbf{A Renormalization Scheme for the Generalized Entropy.} Because the entanglement entropy outside of the horizon diverges, any proof that generalized entropy increases must be formal unless this divergence is regulated and renormalized. Rather than specify a particular renormalization scheme, I will simply describe what properties the scheme must have (section \ref{ren}). The proof of the GSL depends on proving that the free boost energy $K - TS$ cannot increase as time passes. Formally, this quantity can be divided into two parts: the boost energy $K$ and the entropy $S$. Although $K - TS$ can be rigorously defined and is finite, both $K$ and $S$ suffer divergences which must be renormalized. It is necessary to assume that, when $K$ is written in terms of the renormalized stress-energy tensor, and $S$ is written in terms of the renormalized entropy, the expected relationship between these three quantities continues to hold. Since this property can be rigorously shown for infinite lattice spin systems \cite{AS77}, it is reasonable to believe that it also holds for quantum field theories. \end{enumerate} \noindent In the remainder of this section, the consequences of these three assumptions will be described in more detail. One of these consequences is that the horizon is thermal with respect to dilations about any slice \ref{thermal}. This---together with an information theory result known as the ``monotonicity of the relative entropy'' (section \ref{relative})---implies the GSL (sections \ref{proofon}-\ref{outside}). \subsection{The Semiclassical Regime}\label{sreg} In the semiclassical approximation, we add certain quantum fields $\phi$ to the classical spacetime, and use their expected stress-energy $\langle T_{ab} \rangle$ as a source for an order $\hbar$ perturbation to the metric. In the semiclassical limit one takes $\hbar$ to be small, so that the perturbation to the metric is small compared to the classical metric.\footnote{The semiclassical $\hbar$ regime invoked here should be distinguished from the large $N$ semiclassical regime in which one has a large number of particle species and takes $\hbar \to 0$ while holding $\hbar N$ fixed. In that kind of semiclassical regime the quantum corrections to the metric can be of the same order as the classical metric, so that it is not possible to regard it as a small perturbation. Proving the GSL in the large $N$ regime will be left for another day.} The perturbed metric can be expanded in $\hbar$ as: \begin{equation} g_{ab} = g_{ab}^0 + g_{ab}^{1/2} + g_{ab}^1 + \mathcal{O}(\hbar^{3/2}). \end{equation} The zeroth order term is the classical background metric, the half order term is due to quantized graviton fluctuations, and the first order term is due to the gravitational field of matter or gravitons. Since the GSL is an inequality, in the limit of $\hbar \to 0$, the truth or falsity of the GSL is determined solely based on the highest order in $\hbar$ contribution to the time derivative of the generalized entropy. The back-reaction of the quantized fields is the $\mathcal{O}(\hbar^1)$ part of the metric, and will be calculated using the semiclassical Einstein equation: \begin{equation}\label{semi} G_{ab} = 8\pi G \langle T_{ab} \rangle, \end{equation} in which the Einstein tensor $G_{ab}$ is regarded as a c-number, while the stress-energy tensor $T_{ab}$ is a quantum operator. A few words are in order about the justification of Eq. (\ref{semi}). In reality, the metric tensor ought to be quantized just as the matter fields are. When this is done, one should use not the semiclassical Einstein equation, but the full Einstein equation, interpreted as an operator equation. However, in the linearized weak-field approximation limit, the semiclassical Einstein equation should be recoverable from the operator Einstein equation by taking expectation values of the $\mathcal{O}(\hbar^1)$ part of the metric \cite{10proofs}. In addition, there should be higher order in $\hbar$ corrections to the Einstein equation, coming from renormalization theory. However, because this article only treats back-reaction at leading order in $\hbar$, effects which are higher order in $\hbar$ may be neglected. Hence, because this article uses the semiclassical expansion only when controlled by an $\hbar$ expansion, the results are presumably in correspondence with the full quantum theory. This regime is much more circumscribed than the ``self-consistent'' semiclassical solutions of e.g. Flanagan and Wald \cite{FW}. In particular, pathological features such as run-away solutions are outside of the scope of this regime, since they show up only when all orders in $\hbar$ become important. \paragraph{Semiclassical Expansion of the Raychaudhuri Equation.} In the strictly classical $\hbar \to 0$ limit, the horizon entropy $S_\mathrm{H} = 1/{4G \hbar}$ of the GSL dominates over the $S_\mathrm{out}$ term. For any classical manifold with classical fields obeying the null energy condition $T_{kk} = 0$, the area of any future horizon is required to be nondecreasing by Hawking's area increase theorem \cite{hawking71}. Let $\theta$ be the expansion of the horizon, and $\sigma_{ab}$ the shear. Then it follows from the convergence property of the Raychaudhuri equation: \begin{equation} \nabla_k \theta = -\frac{\theta^2}{D-2} - \sigma_{ab}\sigma^{ab} - R_{kk}, \end{equation} together with the null-null component of the Einstein equation \begin{equation} R_{kk} = 8\pi G\,T_{kk}, \end{equation} and the absence of any singularities on the horizon itself, that \begin{equation} \theta \ge 0. \end{equation} Furthermore, if any generator of the horizon has nonvanishing null energy or shear anywhere, the entropy is strictly increasing along that horizon generator prior to that time. This is the classical area increase theorem. This classical result can be used to divide the semiclassical GSL into three cases based on the classical $\mathcal{O}(\hbar^0)$ part of the metric. Either: 1) the horizon is classically growing, 2) it is classically stationary, or 3) it is classically growing up to a certain time $t$, after which it becomes stationary. In case (1), the zeroth order area increase corresponds to an $\mathcal{O}(\hbar^{-1})$ increase in the generalized entropy, which dominates over all other effects. Therefore the GSL holds. In case (2) quantum effects can cause the area to decrease, and therefore it is an interesting question whether the GSL holds or not. In case (3), the GSL must be true before time $t$, so the only question is whether it holds after $t$. But the GSL after $t$ makes no reference to anything that occurred before $t$. Consequently without loss of generality we need consider only case (2), in which the horizon is always classically stationary. Any violation of the GSL must come from quantum effects, corresponding to order $\hbar^{0}$ contributions to the generalized entropy.\footnote{This article will not consider contributions to the generalized entropy which are higher order in $\hbar$. In the semiclassical limit, the only way these higher order corrections could violate the GSL is if the GSL is saturated at order $\hbar^0$. This would require the fields on the horizon to be in a special state for which the time derivative of the generalized entropy is exactly \emph{zero} at order $\hbar^0$. Probably the only such equilibrium state is the Hartle-Hawking state. But in this state, the GSL holds to all orders in $\hbar$, by virtue of time translation symmetry. Thus, the GSL can be expected to hold to all orders in $\hbar$, in the semiclassical regime. A more interesting question is what happens outside the semiclassical regime, when all orders in $\hbar$ can become equally important.} Since there is no half-order contribution to $T_{ab}$ or $\sigma_{ab}\sigma^{ab}$, the half order Raychaudhuri equation says \begin{equation}\label{Rayhalf} \nabla_k \theta^{1/2} = 0. \end{equation} We can now write the first order part of the Raychaudhuri equation as \begin{equation}\label{Ray1} \nabla_k \theta^1 = - \langle \sigma_{ab}^{1/2}\sigma^{ab\phantom{i}{1/2}} \rangle - 8\pi \langle T_{kk}^{1} \rangle. \end{equation} The $\theta^2$ term is of order $\mathcal{O}(\hbar^{2})$ and is therefore negligible. If one ignores gravitons, then the shear term $\sigma_{ab}^{1/2}\sigma^{ab\phantom{i}{1/2}}$ can be neglected. On the other hand, in processes involving gravitons, the shear term must be included (cf. section \ref{grav}). The easiest way to handle gravitons is to lump the shear squared term in with $T_{kk}$ as a gravitational analogue of the null energy flux. Below, the stress-energy tensor should be read as including the shear-squared term, thus: \begin{equation}\label{linRay} \nabla_k \theta = -8\pi G\,\langle T_{kk} \rangle. \end{equation} So when energy falls across the classically stationary horizon, it makes it no longer stationary at order $\hbar^1$. Let us now calculate the area $A$ of a slice $\Sigma$ cutting the horizon. A specific slice $\Sigma$ may be defined by specifying the affine parameter $\lambda = \Lambda(y)$ as a function of the horizon generator. In order to calculate the effects of $T_{kk}$ on the area $A(\Lambda)$ of the slice, we use the relation between the expansion and the area: \begin{equation} \theta = \frac{1}{A}\frac{dA}{d\lambda} = A^{-1} \nabla_k A, \end{equation} where $A$ is the area of an infinitesimal cross section of the horizon. This allows the left-hand side of Eq. (\ref{linRay}) to be rewritten as: \begin{equation} \nabla_k A^{-1} \nabla_k A = A^{-1} \nabla_k^2 A - A^{-2} (\nabla_k A)^2, \end{equation} where the second term can be dropped in the semiclassical approximation because it is nonlinear in $\nabla_k A$. Thus the linearized Raychaudhuri Eq. (\ref{linRay}) can be rewritten as \begin{equation}\label{lin2} \nabla_k^2 A = -8\pi G\,\langle T_{kk} \rangle A. \end{equation} After integrating this twice in the $\lambda$ direction, one obtains for the left-hand side of Eq. (\ref{lin2}) \begin{equation} \int_\Lambda^\infty d\lambda^\prime \int_{\lambda^\prime}^\infty d\lambda\,\nabla_k^2 A(\lambda) = -\int_\Lambda^\infty d\lambda^\prime\,\nabla_k A(\lambda^\prime) = A(\Lambda) - A(\infty), \end{equation} by using the fundamental theorem of calculus twice, as well as applying the ``teleological'' boundary condition suitable for a future event horizon: \begin{equation} \theta(+\infty) = 0. \end{equation} Meanwhile, the identical transformation of Eq. (\ref{lin2})'s right-hand side is \begin{equation} -8\pi G \int_\Lambda^\infty d\lambda^\prime \int_{\lambda^\prime}^\infty d\lambda\, \langle T_{kk} \rangle A = -8\pi G \int_\Lambda^\infty \langle T_{kk} \rangle A (\lambda - \Lambda)\,d\lambda. \end{equation} The final step is to integrate the infinitesimal areas A in the $D - 2$ transverse $y$-directions. One obtains the key relationship \begin{equation}\label{Tint} A(\Lambda) = A(+\infty) - 8\pi G \int_\Lambda^\infty \langle T_{kk} \rangle \,(\lambda - \Lambda) \,d\lambda\,d^{D-2}y \equiv 8\pi G\, \langle K(\Lambda) \rangle, \end{equation} where the area element has been absorbed into the definition of the transverse integration measure $d^{D-2}y$. In the next section it will be seen that $K(\Lambda)$ is the generator of a ``boost'' transformation on the horizon about the slice $\Lambda$. Thus the physical interpretation of Eq. (\ref{Tint}) is that, up to an additive constant, the boost energy $K$ is proportional to the area: \begin{equation}\label{AB} A(\Lambda) = C - 8\pi G\, \langle K(\Lambda) \rangle. \end{equation} The constant $C$ can be dropped for purposes of the GSL, which is only concerned with area differences. In the special case where $\Sigma$ is the bifurcation surface of the unperturbed horizon, Eq. (\ref{Tint}) is the `physical processes' version of the first law of black hole thermodynamics \cite{GW01}, while Eq. (\ref{AB}) indicates that the horizon area is canonically conjugate to the Killing time \cite{CT93}. But to show the GSL, it is important that these formulae hold even when $\Sigma$ is not the bifurcation surface. \subsection{Properties of the Horizon Algebra}\label{sym} As stated above, we are assuming that our matter quantum field theory has a valid null-hypersurface initial- value formalism. That means that there must be a field algebra $\mathcal{A}(H)$ which can be defined on any stationary horizon $H$ without making reference to anything outside of $H$. More precisely, all properties of the algebra must be defined using no more than: \begin{enumerate} \item some set of quantum operators defined as a local net of algebras over $H$ (e.g. based on local quantum fields which make sense as operator-valued distributions $\phi(\lambda, y)$ on $H$; this will done for free fields in sections \ref{scalar}-\ref{spin}), \item the transverse components of the metric $g_{ij}$ on $H$, and \item an affine parameter $\lambda$ on each horizon generator (which actually depends on a Christoffel symbol $\Gamma^v_{vv} = g_{uv,v}$ in null coordinates). \end{enumerate} Assuming that an algebra can be so defined, one expects it to obey the four axioms: Determinism, Ultralocality, Local Lorentz Symmetry, and Stability. These axioms will be shown in sections \ref{scalar}-\ref{spin} for free fields, but plausibly hold even for interacting fields, assuming that a null hypersurface restriction makes sense at all for such fields (cf. section \ref{int}). The axiom of Determinism says that $\mathcal{A}(H)$ gives a complete specification of all information falling across the horizon, so that together with the information in $\mathcal{A}(\mathcal{I}^+)$ at null infinity, one can determine all the information outside the event horizon (Fig. 1a). Consequently, any symmetries of the horizon $H$ will correspond to hidden symmetries of the theory on the bulk. Thus by working out the symmetry group of $\mathcal{A}(H)$, hidden properties of the bulk dynamics will become manifest. \begin{figure}[ht] \centering \includegraphics[width=.85\textwidth]{nullslice.eps} \caption{\footnotesize a) An eternal black hole spacetime is shown. The GSL says that the generalized entropy must increase from time slice $1$ to time slice $2$. However, all information outside of the horizon must either fall across the horizon $H$ or else reach future null infinity $\mathcal{I}^+$ (Determinism). Hence one can ``push forward'' each of the two time slices to part of $H$ and all of $\mathcal{I}^+$ without losing any information. In addition to the Killing symmetry which acts on the horizon as a dilation, there is a translation symmetry of $H$ (shown as an arrow) which is not a symmetry of the whole spacetime. b) A transverse view of $H$ in the same spacetime. Vertical lines represent horizon generators. Each horizon generator can be independently translated and dilated (Local Lorentz Symmetry); this permits any two slices on $H$ to be translated into each other, and ensures that region above each slice on $H$ is thermal with respect to dilations about that slice. In order to prove the GSL this thermal property is needed for both slice $1$ and slice $2$. } \label{nullslice} \end{figure} The axiom of Ultralocality says that the degrees of freedom on different horizon generators are independent systems. So if the set of horizon generators is written as a disjoint union $H = \sum_n H_n$ of open regions in the transverse $y$-space, then the algebra can be written as a tensor product $\mathcal{A}(H) = \prod_n \mathcal{A}(H_n)$, where $\mathcal{A}(H_n)$ is the algebra of fields restricted to $H_n$. Ultralocality is stronger than Microcausality, which merely asserts that the commutators of fields vanish at spacelike separation. In particular, in the vacuum state (whose existence is guaranteed by the other axioms of Local Lorentz Invariance and Stability), Ultralocality implies that all $n$-point functions of field operators in $\mathcal{A}(H)$ vanish when evaluated on $n$ distinct horizon generators. This property may be shocking to those who are used to canonical quantization on a spacelike initial data surface, because on a spacelike surface it is impossible for any Hadamard state to have vanishing entanglement across short spatial distances. By contrast, on a stationary null surface the vacuum entanglement is arranged solely along each horizon generator and not between different horizon generators \cite{schroer09}. In the case of a free bosonic field $\phi$, this vanishing of $n$-point functions is possible because 1) the null stress-energy $T_{kk}$ does not depend on transverse $y$-derivatives of the field, but only the null derivative $\nabla_k \phi$, and 2) the horizon algebra $\mathcal{A}(H)$ does not include the field $\phi$ itself (which has nonvanishing $n$-point functions at spacelike separation on the null surface), but only $\nabla_k \phi$ (which does not).\footnote{A nice exercise is to demostrate explicitly, for a free massive scalar $\Phi$ in Minkowski space, in a null coordinate system $(u, v, y_i)$ that the $n$-point functions of $\nabla_v \Phi$ vanish when evaluated on the null surface $u = 0$ for distinct values of $y$.} Because Ultralocality requires that the different horizon generators can be treated as independent systems (although the field operators still need to be integrated in the transverse $y$-directions to give well-defined operators), the remaining two axioms, Lorentz Symmetry and Stability, can without loss of generality be applied to each horizon generator separately. Local Lorentz Symmetry means that the algebra $\mathcal{A}(H)$ has an infinite dimensional group $G$ of symmetries (i.e. automorphisms) corresponding to affine transformations of each horizon generator: \begin{equation} \delta \lambda = a(y) + b(y) \lambda, \end{equation} $a$ and $b$ being functions of $y$. Each horizon field $\phi(\lambda, y)$ must transform in some representation of this group (just as fields in flat spacetime transform in some representation of the Poincar\'{e} group). Thus, each element $g \in G$ has both a geometrical interpretation (acting on horizon points) and an operator interpetation (acting on fields). Compatibility of these two interpretations requires that if an operator $\mathcal{O}$ is localized in a region $R$, then $g(\mathcal{O})$ is localized in $g(R)$. This is quite a bit more symmetry than can be possessed by the spacetime in which $H$ is embedded (Fig 1b). These secret symmetries of $H$, together with the other assumptions, will turn out to imply the GSL. (In the case of free fields, it will be shown in section \ref{conf} that the horizon algebra is also invariant under special conformal transformations $\delta \lambda = c(y) \lambda^2$, but this additional symmetry is not required to prove the GSL.) In order to implement these symmetries, we need not only the field $\phi$, but also certain integrals of the $T_{kk}$ component of the stress-energy tensor. This component of the stress-energy tensor represents the flux of null energy across the horizon. Since the null energy is the generator of null diffeomorphisms, $T_{kk}$ can be integrated to obtain the generator of affine reparameterizations. The generator of a null translation $\delta \lambda = a(y)$ is given by \begin{equation}\label{pka} p_k(a) \equiv \int T_{kk}\,d\lambda\,a(y)\,\,d^{D-2}y. \end{equation} (Here and below, the area element of the horizon will be considered to be implicit in the integration measure $d^{D-2}y$.) Stability says that so long as $a(y) > 0$, $p_k \ge 0$. In other words, the generator of null translations must be nonnegative. By taking the limit in which the amount of translation is a delta function ($a(y) \to \delta^{D-2}(y)$), one finds that Stability is equivalent to the average null energy condition (ANEC) \cite{borde87}, as evaluated on horizon generators; \begin{equation} p_k(y) \equiv \int_{-\infty}^{+\infty} T_{kk}\,d\lambda. \end{equation} The ANEC is a manifestation of the positivity of energies in a quantum field theory.\footnote{The ANEC can be derived from the stability of the quantum field theory by the following argument: any stationary horizon $H$ can be embedded in a spacetime $\mathcal{M}_{1,1} \otimes (\Sigma \cap H)$, where the first factor is 1+1 dimensional Minkowski space, and the second is some $D-2$ dimensional Riemannian manifold. Now suppose that the quantum fields have their energy bounded below, relative to time translation on $\mathcal{M}_{1,1}$. By Lorentz symmetry and continuity, the null energy on $\mathcal{M}_{1,1}$ must also be bounded below. All null energy must eventually cross the horizon $H$, hence the null energy on $H$ is bounded below. But by Ultralocality this is only possible if each horizon generator is separately stable.} It is possible to show that the ANEC holds on the null generators of a stationary horizon by invoking the GSL \cite{anec}. Here we go in the converse direction, using the ANEC to help prove the GSL. Given any $a(y) > 0$, it is possible to define the vacuum state $|0\rangle$ on the horizon as being the ground state with respect to the null energy $p_k(a)$ \cite{sewell82}. However, in an ultralocal theory, there can be no interaction between the different horizon generators. Therefore the state factorizes: it is a ground state with respect to each $p_k(y)$ separately. This means that each possible choice of $a(y) > 0$ defines the \emph{same} vacuum state. We can also perform a dilation $\delta \lambda = b(y)\lambda$. This symmetry is generated by \begin{equation} K(y) \equiv \int T_{kk}\,\lambda\,d\lambda\,b(y)\,d^{D-2}y. \end{equation} For any particular spatial slice of the horizon located at $\lambda = \Lambda(y)$, one can define a canonical `boost energy' $K$ of the horizon in the region $\lambda > \Lambda(y)$: \begin{equation} K(\Lambda) \equiv \int_\Lambda^\infty T_{kk}\,(\lambda - \Lambda) \,d\lambda\,d^{D-2}y. \end{equation} The definition of $K$ depends on the slice $\Lambda(y)$ in two different ways: not only does the lower limit of integration change, but the horizon Killing vector $\lambda - \Lambda$ which preserves the slice $\Lambda$ also changes. The next section will show that the vacuum state $|0\rangle$ is thermal in the region $\lambda > \Lambda(y)$ with respect to $K(\lambda)$, no matter what slice $\Lambda$ is chosen (Fig. 1b). \subsection{Thermality of the Horizon}\label{thermal} The purpose of this section is to show that $|0\rangle$ is thermal with respect to boosts when evaluated above any arbitrary slice $\Lambda$ on the horizon. The boost acts geometrically on each horizon generator $y$: \begin{equation} (\lambda - \Lambda(y)) \to e^{t} (\lambda - \Lambda(y)). \end{equation} The axiom of Local Lorentz Invariance requires that this geometrical action of the boost correspond to an automorphism of the algebra of observables $\mathcal{A}(\lambda > \Lambda)$ localized above the slice $\Lambda$. This induces a 1-parameter group of automorphisms $\alpha_t$ acting on operatos in $\mathcal{A}(\lambda > \Lambda)$. \paragraph{KMS States.} The thermality of the vacuum state $|0\rangle$ means that it obeys the Kubo-Martin-Schwinger (KMS) condition: For any two observables $A$ and $B$, $\langle B \alpha_{t}(A) \rangle_0$ must be an analytic function of $z$ when $0 < \textrm{Im}(t) < i \hbar \beta$, and also \begin{equation}\label{KMS} \langle AB \rangle_0 = \langle B\,\alpha_{i \hbar \beta}(A) \rangle_0, \end{equation} where $\beta = 2\pi / \hbar$ is the inverse Unruh temperature. In order to establish this, we appeal to an analogue of the Bisognano-Wichmann theorem. The Bisongano-Wichmann theorem \cite{BW76} implies that for any set of quantum fields in Minkowski space (interacting or not) satisfying the Wightman axioms, in the vacuum state $|0\rangle$, the fields restricted to a Rindler wedge $W$ are thermal with respect to the boost energy. This is the Unruh effect. The basic inputs of the theorem are 1) the Lorentz symmetry of the wedge, and 2) the spectral condition (i.e. positivity of energies) with respect to time translation. The basic idea of their (highly technical) theorem is to analytically continue the boost symmetry of the group to complex values. One can then boost a Rindler wedge by an amount $i\pi$ in order to ``rotate'' it into the complementary Rindler wedge region $W^\prime$ on the other side of the bifurcation surface. This rotation corresponds to acting with the operator $e^{\pi K /\hbar}$, where $K$ is the generator of the boost symmetry in $W$. Using the spectral condition to ensure convergence, Bisognano and Wichmann showed that the wedge algebra $\mathcal{A}(W)$ satisfies \begin{equation}\label{BW} J e^{-\pi K / \hbar} \mathcal{A}(W) |0\rangle = \mathcal{A}^*(W) |0\rangle, \end{equation} where $J$ is the (antiunitary) CPT symmetry transformation corresponding to reflecting one time and one space dimension through the bifurcation surface of $W$, and $*$ is hermitian conjugation. Sewell \cite{sewell80} observed that Eq. (\ref{BW}) implies that $|0\rangle$ is a KMS state with temperature $\hbar / 2\pi$, with respect to boosts, when restricted to $W$. This is because \begin{eqnarray} \langle 0| AB |0\rangle = \langle 0| A^* e^{-\pi K/\hbar} J \cdot J e^{-\pi K/\hbar} B^* |0\rangle = \\ \langle 0| B e^{-2\pi K/\hbar} A |0\rangle = \langle 0| B\,\alpha_{2i\pi}(A) |0\rangle, \end{eqnarray} where in going from the first line to the second we have used the fact that $J$, being antiunitary, converts bras to kets and vice versa. Sewell also observed that, given certain axioms, the Bisognano-Wichmann theorem could be applied to black hole spacetimes to derive thermality outside of the bifurcation surface of a black hole. In a later article \cite{sewell82}, Sewell generalized the Bisongano-Wichmann theorem further to the case of a quantum field algebra restricted to a stationary horizon (under the assumption that this algebra exists). In this generalization, 1) the dilation symmetry $b$ is analogous to the boost symmetry, and 2) Stability with respect to translation symmetry $a$ is analogous to the spectral condition. This generalization can be used here to show that when the vacuum state $|0\rangle$ is restricted to the region $\lambda > \Lambda$, it is a KMS state with respect to the dilation generated by $K(\Lambda)$, with a temperature $T = \hbar / 2\pi$. This is just the Unruh/Hawking effect as viewed on the horizon itself. In Sewell's construction, $|0\rangle$ is simply the Hartle-Hawking state associated with the fields on the horizon $H$ itself. This means that if the bulk spacetime possesses a Hartle-Hawking state, it will restrict to $|0\rangle$ on $H$. However, even in spacetimes which do not possess a Hartle-Hawking state (such as the Kerr black hole), the state $|0\rangle$ is still well-defined. This fills a lacuna in certain previous proofs of the GSL, which did not apply to such horizons \cite{10proofs}. There is also an applicable proof that the vacuum is KMS relative to boosts using in the algebraic approach to QFT \cite{CLTW11}, at least in situations where the horizon generators also a possess special conformal symmetry $\delta \lambda = c(y) \lambda^2$. \paragraph{Gibbs States.} Another definition of thermal states which is sometimes used is the Gibbs definition, in which a thermal state with respect to some Hamiltonian (in this case the boost energy $K$) is defined as the exponentially decaying density matrix \begin{equation}\label{Gibbs} \frac{e^{-\beta K}}{\mathrm{tr}(e^{-\beta K})}, \end{equation} where the denominator is the partition function. The relationship between the KMS and Gibbs definitions is as follows: in situations with a finite number of degrees of freedom, in which the algebra of observables $\mathcal{A}$ is just type I (i.e. the complete collection of operators on a Hilbert Space), the Gibbs and the KMS definitions are equivalent.\footnote{The standard way to show this is to plug Eq. (\ref{Gibbs}) into Eq. (\ref{KMS}), and use the cyclic property of the trace.} However, in QFT there are an infinite number of degrees of freedom, and typically the resulting algberas are type III (meaning that there is no trace operation). In this case, the KMS definition still works, while the Gibbs definition becomes ill-defined. Nevertheless, it is a common practice in QFT to formally manipulate expressions like Eq. (\ref{Gibbs}) in order to extract finite answers. Such procedures can in principle be justified by a renormalization procedure in which one regulates the divergences in Eq. (\ref{Gibbs}) and then renormalizes. Using this less rigorous Gibbs definition, the thermality of the Rindler wedge can also be proven by a simple path integral argument developed by Unruh and Weiss \cite{UW84}. Assuming that the vacuum state $|0\rangle$ is the lowest energy state, it can be generated by the boundary of a Euclidean path integral extending from time $t = -\infty$ to $t = 0$. Expressed in terms of the Hamiltonian $H$ and the partition function $Z$, \begin{equation} |0\rangle = \lim_{t \to \infty} \frac{e^{- tH / \hbar}}{\mathrm{tr}(e^{- tH / \hbar})}. \end{equation} However, this same Euclidean path integral can be viewed in radial coordinates as a path integral extending from an angle $\theta = 0$ to an angle $\theta = \pi$. This indicates that when one traces out over the degrees of freedom in the complementary wedge $W^\prime$, the state of $W$ is \begin{equation} \sigma = \frac{e^{-2 \pi K/\hbar}}{\mathrm{tr}(e^{- 2\pi H / \hbar})}, \end{equation} which is thermal. In order to show an analogous Unruh-Wiess thermality for the horizon algbera, one would have to find a way to write the vacuum state $|0\rangle$ in terms of a path integral over a complexified $\lambda$ coordinate. The periodicity of the path integral in radial coordinates would then imply the thermality of the restricted vacuum with respect to the boosts. However, it is not entirely clear what the conditions are for such a path integral to exist. In sections \ref{scalar} and \ref{spin}, it will be shown how to reduce free fields restricted to the horizon to free left-moving conformal fields in 1+1 dimension, which would allow the vacuum state to be written in terms of free two-dimensional path integrals. In conclusion, there exist proofs of the thermality of the vacuum in the Wightman, algebraic, and path integral approaches to QFT. The first two approaches prove that the vacuum is thermal in the KMS sense, while the third is a formal demonstration using the less rigorous Gibbs definition. All three approaches are potentially capable of being adapted to the observables living on the horizon itself. However, the algberaic approach currently assumes special conformal symmetry, and the path integral approach must of course assume the existence of a path integral. \subsection{The Relative Entropy}\label{relative} In order to prove that the generalized entropy increases, I need to use a monotonicity property of an information-theoretical quantity known as the ``relative entropy''. The relationship between the relative and generalized entropies was made explicit in Casini \cite{casini08}, and was used in my earlier proof of the GSL for Rindler wedges \cite{rindler}. For a finite dimensional system, the relative entropy of two states $\rho$ and $\sigma$ is defined as \begin{equation}\label{relIII} S(\rho\,|\,\sigma) = \mathrm{tr}(\rho\,\ln\,\rho) - \mathrm{tr}(\rho\,\ln\,\sigma). \end{equation} For a QFT system with infinitely many degrees of freedom, it may be defined as the limit of this expression as the number of degrees of freedom go to infinity \cite{araki75}.\footnote{The von Neumann algebra of a bounded region in a QFT is a hyperfinite type III algebra \cite{BAF87}. Hyperfinite means that one can approximate it by a series of finite dimensional algebras; hence the limit. Because of the monotonicity property, it does not matter how the limit is taken.} The relative entropy lies in the range $[0,\,+\infty]$. In some sense it measures how far apart the two states $\rho$ and $\sigma$ are, but it is asymmetric: $S(\rho\,|\,\sigma)$ is not in general the same as $S(\sigma\,|\,\rho)$. \paragraph{Examples} When the two states are the same the relative entropy vanishes: \begin{equation} S(\rho\,|\,\rho) = 0. \end{equation} When $\sigma = \Psi$ is a pure state and $\rho \ne \Psi$, the relative entropy is infinite: \begin{equation} S(\rho\,|\,\Psi) = +\infty. \end{equation} Normally, one wants to use a faithful state for $\sigma$ (i.e. one without probability zeros) so that $S(\rho\,|\,\sigma)$ is finite on a dense subspace of the possible choices for $\rho$. When $\sigma$ is the maximally mixed state in an $N$ state system, the relative entropy is just the entropy difference: \begin{equation} S(\rho\,|\,1/N) = \ln N - S_\rho. \end{equation} When $\sigma$ is a Gibbs state with respect to a some Hamiltonian `energy' $H$, the relative entropy $S (\rho\,|\,\sigma)$ is the difference of free energy, divided by the temperature: \begin{equation}\label{FE} S(\rho\,|\,\sigma) = [(\langle H \rangle_\rho - T_\sigma S_\rho) - (\langle H \rangle_\sigma - T_\sigma S_\sigma)]/T_\sigma, \end{equation} where $T_\sigma$ is the temperature of $\sigma$. This can be verified by inserting Eq. (\ref{Gibbs}) into Eq. (\ref{relIII}). One would also like to be able to apply Eq. (\ref{FE}) to KMS states of systems with infinitely many degrees of freedom, even when the Gibbs definition of thermality is ill-defined.\footnote{In fact, all faithful states can be regarded as KMS states with respect to some notion of `time' defined relative to that state \cite{summers05}. This notion of time evolution is known as the ``modular flow''.} Although the relative entropy itself is typically finite for sufficiently reasonable states, the individual components $H$ and $S$ can diverge. The GSL proof presented in the next section assumes that Eq. (\ref {FE}) can be applied even in this context so long as one uses the \emph{renormalized} entropy and energy values. Some evidence for this unproven assumption will be discussed in section \ref{ren}. \paragraph{Monotonicity} However, the most important property of the relative entropy is that it monotonically decreases under restriction. Given any two mixed states $\rho$ and $\sigma$ defined for a system with algebra $M$, if we restrict to a smaller system described by a subalgebra of observables $M^\prime$, the relative entropy cannot increase \cite{lindblad75}: \begin{equation} S(\rho\,|\,\sigma)_M \ge S(\rho\,|\,\sigma)_{M^\prime}. \end{equation} Intuitively, since the relative entropy measures how different $\rho$ is from $\sigma$, if there are less observables which can be used to distinguish the two states, the relative entropy should be smaller. \subsection{Proving the GSL on the Horizon}\label{proofon} The monotonicity property looks very similar to the GSL. And in fact, with the right choice of $\rho$ and $\sigma$ it is the GSL. It was observed in section \ref{thermal} that the vacuum state $|0\rangle$ defined on $H$ is a KMS state with respect to $K(\Lambda)$, no matter what $\Lambda$ slice is chosen. Therefore, under horizon evolution a thermal state restricts to another thermal state. Of course, the GSL holds trivially for this vacuum state $|0\rangle$ because of null translation symmetry---the goal is to prove it for some other arbitrary mixed state of the horizon. Let $\rho(H)$ be the state of the horizon algebra $\mathcal{A}(H)$ which we wish to prove the GSL for, and let $\sigma = | 0 \rangle \langle 0 |$ be the vacuum state with respect to null translations. Since $\sigma$ is a KMS state when restricted to the region above any slice, the relative entropy $S(\rho\,|\,\sigma)$ is a free energy difference of the form Eq. (\ref{FE}), where $E$ is the boost energy $K(\Lambda)$ of the region $\lambda > \Lambda$, $S$ is the entropy of $\lambda > \Lambda$, and $T = \hbar/{2\pi}$ is the Unruh temperature. Furthermore by virtue of null translation symmetry, $(\langle K \rangle - TS)_\sigma$ is just a constant. So the monotonicity of relative entropy therefore tells us that as we evolve from a slice $\Lambda$ to a later slice $\Lambda^\prime$, \begin{equation} \frac{2\pi}{\hbar} \langle K(\Lambda) \rangle - S(\Lambda) \ge \frac{2\pi}{\hbar} \langle K(\Lambda^\prime) \rangle - S(\Lambda^\prime), \end{equation} Using Eq. (\ref{AB}), this implies that the GSL holds on the horizon for the state $\rho(H)$: \begin{equation} \frac{A}{4\hbar G}(\Lambda^\prime) + S(\Lambda^\prime) \ge \frac{A}{4\hbar G}(\Lambda) + S(\Lambda). \end{equation} \subsection{The Region Outside the Horizon}\label{outside} This does not yet amount to a complete proof of the GSL, because the GSL refers to the entropy $S_\mathrm{out}$ on a spacelike surface $\Sigma$ \emph{outside} of $H$, not just to the entropy which falls across $H$. Depending on how $H$ is embedded in the spacetime, it cannot necessarily be assumed that all of the information on $\Sigma$ will fall across the horizon. Some of it may escape.\footnote{One might wonder how it is possible for the GSL to hold both on the horizon and outside the horizon, considering that for an evaportating black hole, the generalized entropy only increases due to counting the entropy of Hawking radiation that escapes from the black hole. The resolution involves the role of the UV cutoff on entanglement entropy, discussed in section \ref{ren}. The proof of the GSL on the horizon involves choosing the UV cutoff to be at a fixed affine parameter distance $\Delta \lambda$ from each slice. On the other hand, the GSL outside the horizon requires the UV cutoff to be at a fixed proper distance $\Delta x = \Delta \lambda \Delta u$, where $u$ is the other null coordinate. Both cutoffs can be simultaneously implemented by choosing $\Delta u$ to be covariantly constant with respect to $\lambda$, but in that case $\Delta u$ is exponentially growing with respect to the radial coordinate $r$, and as a result no Hawking radiation escapes past $\Delta u$. Alternatively, one could boost the cutoff $\Delta x$ so that it is invariant under the Killing symmetry, but then $\Delta \lambda$ is no longer constant, so the horizon GSL no longer applies. However, $S_\mathrm{out}$ is unaffected, so the outside GSL remains valid.} Suppose we have an arbitrary quantum state $\rho$ defined on the region of spacetime $R$ exterior to some stationary horizon $H$. All of the information in $R$ should either fall across the horizon $H$ or else escape to future infinity $\mathcal{I}^+$. (This assumes that any singularities are hidden behind $H$---otherwise the information falling into these will need to be included as well.) $H$ and $\mathcal{I}^+$ should factorize into independent Hilbert spaces, but $\rho$ may be some entangled state on $H\,\cup\,\mathcal{I}^+$. We can now generalize the proof above by choosing a reference state $\sigma$ that factors into the vacuum state on $H$ times some other state: \begin{equation} \sigma(H\,\cup\,\mathcal{I}^+) = |0\rangle \langle 0|(H) \otimes \sigma(\mathcal{I}^+). \end{equation} The second factor $\sigma(\mathcal{I}^+)$ can be chosen to be any faithful state (so long as the relative entropy $S(\rho\,|\,\sigma)$ is finite). After slicing the horizon at $\Lambda(y)$, the relative entropy is then once again a free energy with respect to some modular energy $E$: \begin{equation} S(\rho\,|\,\sigma) = ( \langle E \rangle - S)_\rho - (\langle E \rangle - S)_\sigma, \end{equation} where because $\sigma$ is a product state, the modular energy $E$ is a sum of terms for the horizon system $H_{\lambda > \Lambda}$ and $\mathcal{I}^+$: \begin{equation} E(H_{\lambda > \Lambda}\,\cup\,\mathcal{I}^+) = \frac{2\pi}{\hbar} K(\Lambda) + E(\mathcal{I}^+), \end{equation} with $E(\mathcal{I}^+)$ being the modular energy conjugate to the modular flow of $\sigma(\mathcal{I}^+)$. The addition of the new modular energy term $E(\mathcal{I}^+$ makes no difference to $\Delta E$, the change in the relative entropy with time, because $E(\mathcal{I}^+)_\rho$ is not a function of the horizon slice $\Lambda$. Consequently one can still use Eq. (\ref{AB}) to show that \begin{equation} \langle \Delta E \rangle = \frac{2\pi}{\hbar} \langle \Delta K \rangle = -\frac{\Delta A}{4\hbar G}. \end{equation} On the other hand, $S$ is now interpreted as the total entropy of $\rho$ on on the combined system $H_{\lambda > \Lambda}\,\cup\,\mathcal{I}^+$. Because of unitarity, the entropy $S(\Sigma)$ of any slice $\Sigma$ that intersects the horizon at $\Lambda$ must be the same as the entropy $S(H_{\lambda > \Lambda}\,\cup\,\mathcal{I}^+)$. In other words, $S = S_\mathrm{out}$, for any state $\rho$. (Note that $\rho$, unlike $\sigma$, may have entanglement between $H$ and $\mathcal{I}^+$.) Thus, the monotonicity property of $S(\rho\,|\,\sigma)$ is equivalent to the GSL. \subsection{Renormalization}\label{ren} It should be noted that in every QFT, $K$ and $S$ are both subject to divergences. The relative entropy packages all of these divergent quantities together in a way that can be rigorously defined for arbitrary algebras of observables \cite{araki75}. However, in order to apply the Raychaudhuri equation (as needed to obtain Eq. (\ref{AB})) it is necessary to unpackage the relative entropy into separate $K$ and $S$ terms, each of which needs to be renormalized separately. Because of the connection between the relative entropy and the free energy for finite dimensional subsytems, one expects that after defining $K$ in terms of the renormalized stress-energy tensor $\tilde{T}_{kk}$, and the entropy in terms of some renormalized entropy $\tilde{S},$\footnote{The proper way to renormalize the entropy is not completely clear, but one promising regulator scheme uses the ``mutual information'' between two regions at finite spatial separation \cite{casini06}.} that Eq. (\ref{FE}) still holds: \begin{equation} S(\rho\,|\,\sigma) = [(\langle \tilde{K} \rangle - T\tilde{S})_\rho - (\langle \tilde{K} \rangle - T \tilde{S})_\sigma]/T. \end{equation} This is especially plausible given that the only quantities that enter into Eq. (\ref{FE}) are energy and entropy \emph{differences}. As in my previous proof for Rindler horizons \cite{rindler}, I will assume that this equation is in fact true in an appropriate renormalization scheme. There is a theorem to this effect for quantum spin systems on an infinite lattice \cite{AS77}, and it seems likely that any QFT can be approximated arbitrarily well by such a lattice. If one wishes to interpret the GSL as a statement about a regulated entanglement entropy on a spacelike surface, then it is also necessary for the regulator scheme defining $\tilde{S}$ on the null surface $H\,\cup\,\mathcal{I}^+$ to give the same answer as the regulator scheme defining $\tilde{S}_\mathrm{out}$ on a spacelike surface $\Sigma$. This is a plausible assumption since there exist choices of $\Sigma$ which are arbitrarily close to $H$. But it is not entirely trivial, because the way that the entropy divergence is localized on a null surface is different from the way it is localized on a spacelike surface. In the case of a spacelike surface the entropy can be regulated by cutting off all entropy closer than a certain distance $x_0$ to the boundary. As $x_0 \to 0$, the divergence with respect to that cutoff then scales like $x_0^{2-D}$ on dimensional grounds. This method cannot work on $H$ because there is no invariant notion of distance along the horizon generators. By dimensional analysis, this means that the entropy must be logarithmically divergent along the null direction. Therefore, there is an infrared divergence as well as an ultraviolet divergence. Even if one cuts off the entropy at an affine distance $\lambda_U$ in the ultraviolet and $\lambda_I$ in the infrared, the entanglement entropy is still infinite due to the infinite number of horizon generators. One must in addition regulate by e.g. discretizing the space of horizon generators to a finite number $N$. One then finds that the entropy divergence of the vacuum state scales like \begin{equation}\label{nulldiv} S_\mathrm{div} \propto N (\ln \lambda_I - \ln \lambda_U). \end{equation} (Cf. section \ref{conf} for a justification of this statement.) The renormalized entropy $\tilde{S}$ can then found by subtracting the entropy of the vacuum state: \begin{equation} \tilde{S}(\rho) = S(\rho) - S(\sigma). \end{equation} It is reasonable to hope that this renormalized entropy is the same as the renormalized entropy defined on a spatial slice. Formally, one can simply take the limit of the entropy difference as a spatial slice $\Sigma$ slants closer and closer to $H$. However, the renormalization of the generalized entropy is itself a limiting process, so there are issues involving orders of limits. The analysis of section \ref{outside} implicitly assumes that these limits commute. Another consequence of renormalization is to add higher curvature contributions to the Lagrangian (cf. section \ref{nonmin}) \cite{jacobson94}. For example, for free fields in 4 dimensional spacetime, the coefficients of the curvature squared terms in the Lagrangian are logarithmically divergent. This would invalidate the assumption that the matter is minimally coupled to general relativity. Fortunately, this effect can be neglected here, because the effects of these higher order terms on the generalized entropy are of higher order in $\hbar$. \section{Quantizing a Free Scalar on the Horizon}\label{scalar} The proof of the GSL in section \ref{arg} was incomplete: it depended on four axioms describing the properties of quantum fields on the null surface. The purpose of this section is to explicitly show how these axioms are satisfied in the simplest case: a free scalar field. This completes the proof in section \ref{arg} of the semiclassical GSL. Since the reader may not be familiar with the technical issues regarding null quantization, this section will demonstrate null surface quantization for a free, minimally coupled scalar field $\Phi$ with mass $m^2 \ge 0$ in $D > 2$ dimensions. This is a quick way to construct the algebra of observables $\mathcal{A}(H)$. It will be shown that this algebra is nontrivial, and obeys the four axioms required to prove the GSL: Determinism, Ultralocality, Local Lorentz Symmetry, and Stability. It will also be shown that the horizon algebra can be approximated by the left-moving modes in a large number of 1+1 dimensional conformal field theories. This allows one to understand, using the conformal anomaly, why the horizon algebra is not symmetric under arbitrary reparameterizations of $\lambda$, but only special conformal transformations. The discussion of null quantization will be confined mostly to those issues which are of interest in determining the symmetry properties of the horizon. For a more detailed review of null quantization, including a fuller treatment of the technically difficult ``zero modes'', consult Burkardt \cite{burkardt96}. \subsection{Stress-Energy Tensor} The Lagrangian of the Klein-Gordon field is \begin{equation} \mathcal{L} = \Phi(\nabla^2 - m^2)\Phi /2. \end{equation} The classical stress-energy tensor on the horizon $H$ can be derived by varying with respect to the null-null component of the inverse metric\footnote{A previous version of this article incorrectly included a factor of (1/2) in the formula for $T_{kk}$. There was a compensating factor of 2 error in the null commutator (\ref{nullcom}).}: \begin{equation}\label{PhiTkk} T_{kk} = -2\frac{\mathcal{\delta L}}{\delta g^{ab}} k^a k^b = (\nabla_k \Phi)^2. \end{equation} This is positive except when $\Phi$ is constant, and depends only on the pullback of $\Phi$ to $H$. The total null energy on the horizon can be found by inserting Eq. (\ref{PhiTkk}) into Eq. (\ref{pka}):\footnote{This formula would have to be modified if the scalar field had a nonminimal coupling term $\Phi^2 R$. However, the horizon entropy $S_\mathrm{H}$ is also modified (cf. section \ref{nonmin}). The nonminimally coupled theory must also obey the GSL, since it is equivalent to a minimally coupled scalar by a field redefinition \cite{2ndlaw}.} \begin{equation}\label{nullE} p_k = \int (\nabla_k \Phi)^2\,d\lambda\,d^{D-2}y. \end{equation} The positivity of this quantity indicates that $\mathcal{A}(H)$ satisfies Stability. Classically this positivity is obvious. Quantum mechanically, this expression is divergent. After subtracting off this divergence, one finds that $T_{kk}$ is actually unbounded below. Nevertheless, the integral of $T_{kk}$ is bounded below by a vacuum state. This will become obvious after a Fock space quantization is performed in section \ref{fock}. \subsection{Equation of Motion and Zero Modes} For the purposes of specifying initial data, $\lambda$ acts more like a space dimension than a time dimension, in the sense that the value of $\Phi$ at one value of $\lambda$ is (almost) independent of the value of $\Phi$ at other values of $\lambda$. However, there are some zero mode constraints on the field which must be treated carefully. There are also some convergence properties required if the total flux of momentum across the null surface is to be finite. The Klein-Gordon equation of motion is \begin{equation} (\nabla^2 - m^2)\Phi = 0. \end{equation} This equation can be written in terms of horizon coordinates as \begin{equation}\label{invert} \nabla_u \Phi = \nabla_v^{-1} (\nabla_y^2 - m^2)\Phi. \end{equation} This equation \emph{almost} permits us to arbitrarily specify $\Phi(y,\,\lambda)$ as `initial data' on $H$. The only constraint is that $\nabla_u \Phi$ must be finite. This requires that the operator $\nabla_v$ be invertible, which places constraints on the zero modes of $\Phi(\lambda)$. If one decomposes $\Phi$ into its Fourier modes: \begin{equation}\label{omega0} \tilde{\Phi}(y,\,\omega) = \int \frac{e^{-i\omega \lambda}}{\sqrt{2\pi}} \Phi(y,\,\lambda)\,d\lambda, \end{equation} then $\nabla_v^{-1} = 1/\omega$, which is singular at $\omega = 0$. Thus for Eq. (\ref{invert}) to be well-defined, it is necessary to require that \begin{equation}\label{zeromode} \int_{-\infty}^{+\infty} \Phi\,d\lambda = \mathrm{finite}. \end{equation} An exception for this arises when $m = 0$, for solutions which are also zero modes in the $y$ direction (i.e. they lie in the kernel of $\nabla_y^2)$. In this case, Eq. (\ref{invert}) becomes undefined rather than infinite. Thus one can add a mode defined by \begin{equation}\label{zero} \int^{+\infty}_{-\infty} \Phi\,d\lambda = C, \end{equation} for some $C$ which is constant over the whole (connected component of) $H$. In addition to the zero mode constraints, it is natural to require that the flux of stress-energy across the horizon be finite. In order for the null momentum to be finite, one needs the integral of $T_{kk}$ to converge: \begin{equation} \int_{-\infty}^{+\infty} (\nabla_k \Phi)^2 \,d\lambda = \mathrm{finite}. \end{equation} One can also demand that the other components of momentum have finite flux over the horizon. This leads to an additional constraint: \begin{equation}\label{intphi} \int_{-\infty}^{+\infty} m^2 \Phi^2 \,d\lambda = \mathrm{finite}, \end{equation} which is a nontrivial constraint only for a massive field. This permits massless fields to have soliton-like solutions in which the asymptotic behavior of $\Phi$ at $\lambda = +\infty$ may differ from the behavior at $\lambda = -\infty$. In the Fourier transformed description, the field should look like this near $\omega = 0$: \begin{equation} \tilde{\Phi}(y,\,\omega) = c_1 \delta(0) + \frac{c_2}{\omega} + c_3(y) + \mathcal{O}(\omega), \end{equation} where $c_1$ corresponds to constant $\Phi$, $c_2$ corresponds to a soliton with $\Phi(+\infty) = -\Phi(-\infty)$, and $c_3$ corresponds to the value of the integral (\ref{intphi}). For a massive field, $c_1 = c_2$ = 0.\footnote{Because of the noninvertibility of $\omega = 0$, one might be tempted to require that $c_3 = 0$ as well, but this would be a mistake. First of all, $\tilde{\Phi}(0)$ can be defined as $\lim_{\omega \to 0} \tilde{\Phi}(\omega)$ using continuity. Secondly, the requirement $c_3 = 0$ is not invariant under special conformal transformations such as the inversion $\lambda \to 1/\lambda$.} None of the zero mode constraints are physically important when proving the GSL. That is because they relate to infrared issues on the horizon---to modes which are very long wavelength with respect to $\lambda$. In other words, they relate to the behavior of the fields at $\lambda \to \pm \infty$. But the GSL has to do with the relationship between two horizon slices at finite values of $\lambda$. Any information which can only be measured at $\lambda = -\infty$ is totally irrelevant because it does not appear above either horizon slice. On the other hand, information stored at $\lambda = +\infty$ can without loss of generality be equally well regarded as present in the asymptotic region $\mathcal{I}^+$ which `meets' the horizon at $\lambda = +\infty$. Consequently the zero modes can simply be ignored. This is a relief because zero mode issues tend to be one of the trickier aspects of quantum field theory on a null surface \cite{burkardt96}. Since the mass $m$ only matters for calculating the zero mode and finite energy constraints, it will not be of significance for anything that follows. \subsection{Smearing the Field}\label{smear} Now $\Phi(x)$ is not a \textit{bona fide} operator, because the value of a field at a single point undergoes infinite fluctuations and therefore does not have well-defined eigenvalues (even though its expectation value $\langle \Phi(x) \rangle$ is well-defined for a dense set of states). In order to get an operator, we need to smear the field in some $n$ of the $D$ dimensions with a smooth quasi-localized test function $f$: \begin{equation} \Phi(f) = \int f \Phi\,d^n x \end{equation} Because free fields are Gaussian, a finite width probability spectrum is sufficient to show that the operator is well-behaved. So to check that $\Phi(f)$ has finite fluctuations, one can look to see whether its mean square $\langle \Phi(f)^2 \rangle$ is well-defined in the vacuum state. Since spacetime is locally Minkowskian everywhere, the leading-order divergence can be calculated in momentum space using the Fourier transform of the smearing function $\tilde{f}$. Because $f(x)$ is smooth, $\tilde{f}$ falls off faster than any polynomial at large $p$ values in all dimensions in which it is smeared, while it is constant in all the other dimensions. Up to error terms associated with $m^2$ and the curvature (whose degree of divergence must be less by 2 powers of the momentum), the fluctuations in $\Phi$ are thus given by: \begin{equation}\label{pE} \langle \Phi(f)^2 \rangle \propto \int d^D p\,\delta(p^2) H(p_0) \tilde{f}^2(p) = \int_{E = |p|} \frac{d^{D-1}p}{2E}\,\tilde{f}^2(E, p), \end{equation} where $H$ is the Heaviside step function. This means that in order to damp out the divergences coming from large $p$ values, it is sufficient to smear either in all the space directions or in the time dimension. But neither of these is convenient for a null quantization procedure. Instead one wants to be able to smear the integral in a null plane. To do this we rewrite Eq. (\ref{pE}) in a null coordinate system $(p_u, p_v, p_y)$ where $y$ represents all transverse directions. The mass shell condition is \begin{equation}\label{nullshell} p_v = \frac{p_y^2 + m^2}{p_u}, \end{equation} and the integral over the lightcone (again neglecting mass and curvature) is \begin{equation}\label{spectral} \langle \Phi(f)^2 \rangle \propto \int_{p_u p_v = p_y^2} d^{D-2}p_y\,H(p_u) \frac{dp_u}{p_u} \tilde{f}^2(p_v, p_y), \end{equation} where $f$ is smeared in the $v$ and $y$ dimensions but not in the $u$ dimension. The integral is dominated by momenta that point nearly in the $p_u$ direction. It falls off like $1/p_u$ for large $p_u$, so it is logarithmically divergent. Therefore $\Phi$ does \emph{not} make sense as an operator when restricted to a horizon. However, $\nabla_k \Phi$ does make sense as an operator, since its mean square has two extra powers of the null energy $p_v$ (one for each derivative): \begin{equation} \langle [\nabla_k \Phi(f)]^2 \rangle \propto \int_{p_u p_v = p_y^2} d^{D-2}p_y\,H(p_u)\frac{dp_u}{p_u} p_v^2 \tilde{f}^2(p_v, p_y). \end{equation} By substituting in Eq. (\ref{nullshell}), this integral becomes \begin{equation} \int_{p_u p_v = p_y^2} d^{D-2}p_y\,H(p_u)\frac{dp_u\,p_y^4}{p_u^3} \tilde{f}^2(p_u, p_y) \end{equation} which is convergent. (This may seem surprising, because taking derivatives normally makes fields more divergent, not less. The extra factors of $p_v$ do make the integral more divergent in the $v$ direction, but that direction is already very convergent because of the rapid falloff of $\tilde{f}$.) Since $\nabla_k \Phi(f)$ is a genuine operator, it generates an algebra $\mathcal{A}(H)$ on the horizon. \subsection{Determinism} Specifying $\Phi$ on $H$ is \emph{almost} enough to determine the value of $\Phi$ outside the horizon as well, by using Eq. (\ref{invert}) as a time evolution equation in the $u$ direction. Since Eq. (\ref{invert}) is first-order in $\nabla_u$ it is not necessary to specify the velocities of the field, only their positions. The reason it does not quite work is that $\nabla_v^{-1}$ is a nonlocal operator, making other boundary conditions potentially relevant. Whether or not $\Phi$ can actually be determined is therefore a global issue depending on the causal structure of the whole spacetime. In the case of a de Sitter horizon, $\Phi$ is determined by the value on $H$ since it is almost a complete Cauchy surface once one adds a single point a conformal timelike infinity (the value of a free field must exponentially die away when approaching this conformal timelike point, so the addition of this point doesn't change anything). In the case of a Rindler horizon in Minkowski space the field is generically determined, since the only modes which are not determined are massless modes propagating in the exact same direction as the horizon. But for a black hole horizon, the field $\Phi$ is not determined, since fields can also leave to future timelike or null infinity ($\mathcal{I}^+$). Let $\Sigma$ be a complete Cauchy surface of the exterior of $H$, which includes both $H$ itself, and the asymptotic future $\mathcal{I}^+$ outside of $H$. $H$ and $\mathcal{I}^+$ can be connected only at $\lambda = +\infty$. However, any zero mode information measurable at $\lambda = +\infty$ can be assigned to the system $\mathcal{I}^+$. In order to remove this redundant information from $H$, one can write the field at one time as the boundary term in an integral: \begin{equation}\label{break} \Phi(\lambda) = \Phi(+\infty) - \int^{+\infty}_{\lambda} \nabla_k \Phi\,d\lambda^\prime, \end{equation} showing that classically, all the information in $\Phi(\lambda)$ not measurable at $\lambda = +\infty$ is stored in the derivative $\nabla_k \Phi$. And this derivative, as shown in section \ref{smear}, is a well defined operator after smearing with a test function. Thus the algebra of the whole spacetime can therefore be factorized into $\mathcal{A}(H) \otimes \mathcal{A}{(\mathcal{I}^+)}$, ignoring any degrees of freedom in the zero modes. This means that there also exist states that factorize: \begin{equation}\label{factor} \Psi(\Sigma) = \Psi[\Phi(H)] \otimes \Psi[\Phi(\mathcal{I}^+)] \end{equation} The existence of these factor states is needed for the validity of the proof of the GSL in section \ref{outside}. If there are any operators in the algebra which depend on the zero modes of $\Phi$, these may be considered part of the algebra of $\mathcal{I}^+$. \subsection{Commutation Relations} Ordinarily we are used to quantizing a scalar field with equal-time canonical commutation relation: \begin{equation} [\Phi(x_1),\,\dot{\Phi}(x_2)] = i\hbar \delta^{D-1}(x_1 - x_2). \end{equation} On a curved spacetime this relation can be covariantly adapted to any spacelike slice $\Sigma$ by using the determinant of the spatial metric $q$ and $\Sigma$'s future orthonormal vector $n^a$: \begin{equation}\label{spacecom} [\Phi(x_1),\,\nabla_n \Phi(x_2)] = i\hbar \delta^{D-1}(x_1 - x_2)/ \sqrt{q}. \end{equation} In order to obtain the commutation relations on a null surface, one can take the limit of an infinitely boosted spacelike surface. Measured in any fixed coordinate system, each side of Eq. (\ref{spacecom}) diverges like $1/\sqrt{1 - v^2}$ due to the Lorentz transformation of $n^a$ or $1/\sqrt{q}$. By dividing out the common divergent factor as one takes the limit, one ends up with\footnote{A previous version failed to include the factor of (1/2) in the following commutator, but a more careful derivation of the infinite boost limit shows that it is present. This is easiest to see for massless fields in 1+1 dimensions, where the (1/2) appears because only left-moving modes contribute.} \begin{equation}\label{nullcom} [\Phi(y_1,\,\lambda_1),\,\nabla_k \Phi(y_2,\,\lambda_2)] = \frac{i\hbar}{2} \delta^{D-2}(y_1 - y_2) \delta(\lambda_1 - \lambda_2)/ \sqrt{h} \end{equation} where $h$ is the determinant of the $D - 2$ spatial components of the horizon metric. From now on, the factors of $1/\sqrt{g}$ or $1/\sqrt{h}$ will be automatically be absorbed into the definition of the delta functions $\delta^{D-1}(x)$ or $\delta^{D-2}(y)$ respectively. By integrating Eq. (\ref{nullcom}) in the $\lambda_1$ direction, one can find the commutator of $\Phi$ with itself in terms of the Heaviside step function $H$: \begin{equation}\label{phicom} [\Phi(y_1,\,\lambda_1),\,\Phi(y_2,\,\lambda_2)] = \frac{i\hbar}{2} \delta^{D-2}(y_1 - y_2) [H(\lambda_2 - \lambda_1) - H(\lambda_1 - \lambda_2)]/2, \end{equation} where because the constant of integration only affects the zero modes, I have chosen it so that the commutator is antisymmetric.\footnote{One should not attempt to use Eq. (\ref{phicom}) in situations where zero modes are important, because then the constant of integration is undefined. This happens because the commutator of the full spacetime theory is ill-defined for null separations. The reason Eq. (\ref{phicom}) can be used for the horizon theory is because all horizon observables will ultimately be expressed in terms of $\nabla_k \Phi$.} Notice how even though the null surface acts like an initial data slice, there are nontrivial commutation relations of $\Phi$ on the horizon. Since neither the commutation relations nor the generator of local null translations $T_{kk}$ carry any derivatives in the space directions, the horizon theory satisfies Ultralocality---i.e. the horizon theory is just the integral over a bunch of independent degrees of freedom for each horizon generator. \subsection{Fock Space Quantization}\label{fock} In order to perform Fock quantization, the fields will be analyzed in terms of the modes $\tilde{\Phi}$ with definite null-frequency $\omega$: \begin{equation} \tilde{\Phi}(y,\,\omega) = \int \frac{e^{-i\omega \lambda}}{\sqrt{2\pi}} \Phi(y,\,\lambda)\,d\lambda, \end{equation} taking $\omega \ne 0$ in order to ignore the zero modes. Because of Ultralocality, it is possible to define a Fock representation even when $y$ is kept in the position basis. The commutation relations of the field in this basis can be calculated by taking the Fourier transform of Eq. (\ref{phicom}): \begin{equation} [\tilde{\Phi}(y_1,\,\omega_1),\,\tilde{\Phi}(y_2,\,\omega_2)] = \hbar \frac{\delta(\omega_1 + \omega_2)}{\omega_2 - \omega_1} \delta^{D-2}y \end{equation} One can use this to define creation and annihilation operator densities \begin{equation} a^{\dagger}(y,\,\omega) = \tilde{\Phi}(y,\,\omega) \sqrt{\frac{2\omega}{\hbar}}, \quad a(y,\,\omega) = \tilde{\Phi}(y,\,-\omega) \sqrt{\frac{2\omega}{\hbar}}, \end{equation} which create and destroy particles of any frequency $\omega > 0$, and satisfy the commutation relations \begin{equation} [a(y_1,\,\omega_1),\,a^\dagger(y_2,\,\omega_2)] = \delta(\omega_1 - \omega_2) \delta^{D-2}(y_1 - y_2). \end{equation} The single particle Hilbert Space corresponds to normalizable wavefunctions in the space $\Psi(y,\,\omega)$ ($\omega > 0$) of creation operators. By taking the Fock space, one constructs the full Hilbert space of the scalar field on the horizon. Because $T_{kk}$ is quadratic in the free field $\Phi$, the divergent part of the null energy $p_k$ is a state-independent constant. In order to be Lorentz invariant the Hartle-Hawking vacuum $|0\rangle$ must have $p_k = 0$, so any physically reasonable renormalization of $p_k$ (e.g. point-splitting) is equivalent to simply subtracting off the zero-point energy of the vacuum state. Hence the renormalized null energy of the state can be calculated by rewriting Eq. (\ref{nullE}) in terms of the normal-ordered creation and annihilation operators: \begin{equation} p_k = \int^{\infty}_{\omega=-\infty} \!\!\!\!\!\! \omega^2 \!:\!\tilde{\Phi}^*\tilde{\Phi}\!: d\omega\,d^{D-2}y = \int^{\infty}_{\omega=0} \hbar\omega\,a^{\dagger}a\,d\omega\,d^{D-2}y = \sum_n \hbar \omega_n, \end{equation} where the last equality is evaluated in the Fock basis of states which have a definite number of quanta of frequency $\omega_1 \ldots \omega_n$. Thus the particles satisfy the Planck quantization formula. The resulting picture of the scalar field theory on the horizon is surprisingly simple: each state is simply a superposition of a finite number of particles localized at distinct positions on the horizon, each with some positive amount of null energy $\hbar \omega$. In contrast to the usual quantization on a spacelike surface, each particle can be arbitrarily well-localized near any horizon generator. The particles cannot however be localized with respect to the $\lambda$ coordinate on the horizon generator. No two particles can reside on exactly the same horizon generator, because that would not be a normalizable vector in the Fock space. There is an enormous amount of symmetry of the scalar field theory on the horizon. The only geometrical structures used in the quantization are the affine parameters of each horizon generator (up to rescaling), and the area-element (coming in via the $d^{D-2}y$ integration), which comes in through the commutation relation (\ref{nullcom}). Therefore the Fock space is invariant under 1) arbitrary translations and dilations of the affine parameter of each horizon generator independently, 2) area-preserving diffeomorphisms acting on the space of horizon generators, and even 3) any non-area-preserving diffeomorphism that sends $d^{D-2}y \to \Omega(y)^2 d^{D-2}y$ so long as one also sends $\Phi \to \Omega(y)^{-1} \Phi$. This is so much symmetry that the only invariant quantity is the total number $n$ of particles; every n-particle subspace of the Hilbert space is a single irreducible representation of the group of symmetries.\footnote{To see that this is the case, note that every $n$-particle state can be written as a superposition of states in which each of the $n$ identical particles is localized in a delta function on $n$ different horizon generators. All such states are equivalent to one another by the symmetry transformations, so pick one of them, $\Psi$. If the $n$-particle representation were reducible, there would have to exist a projection operator which is invariant under all the symmetry and acts nontrivially on this state by turning it into a linearly independent state $\Psi^\prime$. But by virtue of the symmetry, $\Psi^\prime$ must be zero except on the $n$ horizon generators initially chosen, and therefore linearly dependent on $\Psi$. Consequently the projection operator does not exist and the representation is irreducible.} \subsection{Conformal Symmetry}\label{conf} Even this does not exhaust the symmetries of the scalar field on the horizon (minus zero modes); one is actually free to perform any special conformal transformation of each $\lambda(y)$, i.e. any combination of a translation, dilation, and inversion $\lambda \to 1/\lambda$. It is easiest to see this if the quantization is done in a slightly different way: by discretizing the horizon into a finite number of horizon generators. Let there be $N$ discrete horizon generators spread evenly throughout the horizon area $A$, and let the field $\Phi(n,\,\lambda)$ be defined only on this discretized space. The commutator is \begin{equation} [\Phi(m,\,\lambda_1),\,\nabla_k \Phi(n,\,\lambda_2)] = \frac{i\hbar}{2}\frac{A}{N}\,\delta_{mn} \delta(\lambda_1 - \lambda_2), \end{equation} and the null energy is \begin{equation} p_k = \sum_{n = 1}^N \frac{A}{N} \int (\nabla_k \Phi_n)^2 \, d\lambda. \end{equation} These expressions converge to Eq. (\ref{nullcom}) and (\ref{nullE}) respectively as $N \to \infty$. Since the theory is ultralocal there are no divergences associated with the transverse directions, so the limit should exist. Every continuum horizon state can be described as the $N \to \infty$ limit of a sequence of states in the discretized model. However, not every smooth seeming limit of states in the discretized model corresponds to a state in the continuum model; for example, there is no continuum limit of states in which one horizon generator has two particles on it and the rest are empty. The discretized model is nothing other than a collection of $N$ different conformal field theories each of which is the left-moving sector of one massless scalar field in $1+1$ dimensions. The entanglement entropy divergence is therefore just the same as in a conformal field theory (CFT) with $N$ scalar fields, which has central charge $c = N$ \cite{ginsparg89}: \begin{equation} S_\mathrm{div} = \frac{c}{12} \ln \left( \frac{\lambda_I}{\lambda_U} \right) \end{equation} where $\lambda_I$ is the affine distance of the infrared cutoff from the boundary, and $\lambda_U$ is the affine distance of the ultraviolet cutoff. This justifies Eq. (\ref{nulldiv}) mentioned in section \ref{ren} on renormalization. In any CFT, the vacuum state $| 0 \rangle$ is invariant under all special conformal transformations. Since the $N \to \infty$ limit of $| 0 \rangle$ is just the vacuum of the continuum theory, the continuum vacuum is also invariant under the group of special conformal transformations $SO(2,\,1)$. A $1+1$ dimensional CFT is also invariant under general conformal transformations, i.e. arbitrary reparameterizations of a null coordinate $v \to f(v)$. However, the vacuum state is not invariant under general conformal transformations. This is a consequence of the anomalous transformation law of the stress energy tensor $T_{vv}$ \cite{ginsparg89}: \begin{equation} T_{vv} \to f^\prime(v)^{-2} T_{vv} + \frac{c}{12} S(f), \end{equation} where $c = 1$ is the central charge of one scalar field, and $S(f)$ is the Schwarzian derivative: \begin{equation} S(f) = \frac{f^{\prime\prime\prime}}{f^\prime} - \frac{3}{2}\frac{(f^{\prime\prime})^2}{(f^\prime)^2}, \end{equation} which vanishes only when $f(v)$ is special. Since the vacuum must have $T_{vv} = 0$, any nonspecial conformal transformation of the vacuum must produce a nonvacuum state with positive expectation value of the null energy $p_k$. What if one tries to perform a general conformal transformation $\lambda \to f(\lambda,\,y)$ of the horizon generator parameters $\lambda$ for $D > 2$ dimensions? In the discretized model, the null energy of the transformed vacuum is \begin{equation} p_k = \sum_{n = 1}^N \frac{1}{12} \int S(f,\,n) d\lambda \end{equation} and the integrand is positive. But now disaster strikes---as $N \to \infty$, $p_k \to \infty$ too! The general conformal transformation takes the vacuum out of the Hilbert space altogether, by creating infinitely many quanta. So the conformal anomaly prevents $\lambda$ from being reparameterized, except by a special conformal transformation. Since the stress-energy $T_{kk}$ is the generator of reparameterizations, this means that most integrals of $T_{kk}$ on the horizon do not give rise to operators in the Hilbert space. Since $T_{kk} = (\nabla_k \Phi)^2/2$ is a product of two fields, there is a danger of divergence. The fact that only special conformal transformations of the vacuum are allowed implies that the only integrals of $T_{kk}$ which are horizon observables are those of this form: \begin{equation} \int^{+\infty}_{-\infty} T_{kk}\,[a(y) + b(y)\lambda + c(y)\lambda^2]\,d\lambda\,d^{D-2}y. \end{equation} For example, the restricted boost energy \begin{equation} K(\Lambda) = \int_\Lambda^\infty T_{kk}\,(\lambda - \Lambda) \,d\lambda\,d^{D-2}y \end{equation} is not an operator because of the limitation of the integral to $\lambda > \Lambda$. However, the proof is only concerned with the expectation value $\langle K(\Lambda) \rangle$. This is a function of $\langle T_{kk}(x) \rangle$, which does not need to be smeared to be finite. \section{Other Spins}\label{spin} In this section some basic details of null quantization for alternative spins will be briefly provided, omitting detailed derivations and neglecting zero modes. \subsection{Spinors} The Lagrangian of any free spinor field can be written as \begin{equation}\label{spinorL} \mathcal{L} = \gamma^{ABi} \Psi_A \nabla_i \Psi_B + m \epsilon^{AB} \Psi_A \Psi_B, \end{equation} where $A$ or $B$ belong to spinor representations written in a real (Majorana) basis, $\gamma^{ABi}$ is the gamma matrix, and $\epsilon^{AB}$ is the invariant symplectic structure on the spinor space.\footnote{In dimensions $D\,\mathrm{mod}\,8 = 0,\,1,\,2,\,6$, the irreducible spinor representations do not possess an invariant symplectic structure $\epsilon^{AB}$. Consequently, for $m > 0$ it is necessary to use reducible spinor representations. The Majorana spinor basis has been chosen in order to keep the spinor expressions homogeneous across different spacetime dimensions. Dirac and/or Weyl spinors may be obtained from representations which admit a complex structure.} As long as $D > 2$, the qualitative features of null surface quantization are the same for every kind of spinor.\footnote{In $D = 2$, the chirality of the field determines whether it propagates to the left or to the right. Only fields which propagate across a null surface can be quantized on that surface.} The equation of motion is \begin{equation}\label{maj} \nabla_i \Psi_B \gamma^{ABi} = m \Psi^A, \end{equation} using $\epsilon^{AB}$ to raise the spinor index. At any point on a spacelike slice of the horizon, the $D$ dimensional spinor decomposes into the tensor product of a Majorana spinor in $D - 2$ dimensional space, and a Dirac spinor on a $1 + 1$ dimensional spacetime. The Dirac spinor in $1 + 1$ dimensions decomposes into the direct sum of a left-handed spinor $\Psi_L$ and a right-handed spinor $\Psi_R$, where we take $\gamma^{RRa}$ to point in the $k^a$ direction and $\gamma^{LLa}$ to point along the other lightray $l^a$. The Majorana equation (\ref{maj}) takes the schematic form: \begin{eqnarray} \nabla_{LL} \Psi_R + \nabla_{LR} \Psi_L + m \Psi_L = \nabla_k \Psi_R + \nabla_y \Psi_L + m \Psi_L \label{L}; \\ \nabla_{RR} \Psi_L + \nabla_{RL} \Psi_R + m \Psi_R = \nabla_l \Psi_L + \nabla_y \Psi_R + m \Psi_R.\label{R} \end{eqnarray} The first equation (\ref{R}) only involves derivatives that lie on the horizon itself, and can be used to define $\Psi_R$ as a function of $\Psi_L$ (up to zero modes): \begin{equation} \Psi_R(\lambda) = \Psi_R(+\infty) - \int_\lambda^{+\infty} (\nabla_y \Psi_L + m \Psi_L)\,d\lambda^\prime. \end{equation} On the other hand, Eq. (\ref{L}) determines the derivative of $\Psi_L$ off the horizon, and so it does not act as a constraint. Therefore, the spinor degrees of freedom are determined by the arbitrary specification of $\Psi_L(y,\,\lambda)$ on the horizon. From now on we will focus on just the $\Psi_L(y,\,\lambda)$ degrees of freedom. $\Psi_L(y,\,\lambda)$ yields a (fermionic) operator when smeared over the horizon directions by a test function $f$. The mean-square of a massless spinor in momentum space is \begin{equation} \langle \Psi_L(f)^2 \rangle \propto \int_{p_u p_v = p_y^2} d^{D-2}p_y\,H(p_u)\frac{dp_u}{p_u} p_v \tilde{f}^2(p_u, p_y). \end{equation} The extra power of $p_{LL} = p_v = (p_y^2 + m^2)/ p_u$ comes from the contraction of the momentum with the spin in the propagator, and serves to render the integral convergent. Thus for spinors there is no need to take a $\nabla_k$ derivative in order to restrict the field to the horizon. The anticommutator of the field on a spatial slice $\Sigma$ with normal vector $n^a$ is: \begin{equation} \{ \Psi_A(x_1) ,\, \Psi_B(x_2) \} = -i\hbar\,\gamma_{ABn} \delta^{D-1}(x_1 - x_2). \end{equation} By making an infinite boost, one can obtain the anticommutator for the field $\Psi_L$ on the horizon: \begin{equation} \{ \Psi_{IL}(y_1,\,\lambda_1) ,\, \Psi_{JL}(y_2,\,\lambda_2) \} = -\frac{i\hbar}{2} g_{IJ} \delta(\lambda_1 - \lambda_2) \delta^{D-2}(y_1 - y_2), \end{equation} where $I$ and $J$ are (real) spinor representations of $SO(D - 2)$ (the group of rotations of the $D - 2$ dimensional transverse space). Since these representations are unitary, there is a natural metric $g^{IJ} = \gamma^{ILJL}_k$ on the transverse spinor space. The null-null component of the stress-energy is\footnote{The stress-tensor is easiest to calculate canonically by contracting the null momentum (\ref{conpi}) by the velocity $\nabla_k \Psi_L$. The gravitational $T_{kk}$ is the same, but calculating it requires intorducing an $n$-bein, and varying the Lagrangian (\ref{spinorL}) with respect to it.} \begin{equation} T_{kk} = -g^{IJ} \Psi_{IL} \nabla_k \Psi_{JL}. \end{equation} $T_{kk}$ and the anticommutation relations look just like the integral of the corresponding quantities for left-moving spinor fields in $1+1$ dimensions. Therefore, if the horizon generators are discretized, the corresponding CFT is that of $N/2$ massless left-moving chiral fermions, where $N$ is the number of components of the spinor field. \subsection{Photons}\label{phot} The Maxwell Lagrangian is $\mathcal{L} = -\frac{1}{4}F_{ab}F^{ab}$. It is convenient to impose Lorenz gauge $\nabla_a A^a = 0$ and null gauge $A_k = 0$. Let $i$ be the transverse directions restricted to the horizon, and let $l$ be a null direction pointing away from the horizon, such that $g_{kl} = -1$ and $g_{il} = 0$. By integrating the Lorenz gauge, one can solve for $A_l$ (up to zero modes) in terms of the transverse components $A_i$: \begin{equation} A_l = A_l(+\infty) -\int^{+\infty}_\lambda \! \nabla_i A^i \,d\lambda, \end{equation} where we have used null gauge and the fact that $R_{klki}$ vanishes on a stationary horizon. Hence the only independent (nonzero-mode) degrees of freedom are the transverse directions $A_i$ on the horizon. The commutator is \begin{equation} [A_i(y_1,\,\lambda_1), \nabla_k A_j(y_2,\,\lambda_2)] = \frac{i\hbar}{2} g_{ij} \delta^{D-2}(y_1 - y_2) \delta(\lambda_1 - \lambda_2), \end{equation} and the stress-energy tensor is \begin{equation} T_{kk} = g^{ij} (\nabla_k A_i) \nabla_k A_j. \end{equation} $A_i$ cannot be smeared to make a valid operator on the horizon, but $\nabla_k A_i$ can. After discretization of horizon generators, the CFT of each horizon generator consists of $D - 2$ left-moving massless scalars. \subsection{Gravitons}\label{grav} In the semiclassical limit the metric can be described as a background metric $g_{ab} \equiv g_{ab}^0$ plus an order $\hbar^{1/2}$ metric perturbation $h_{ab} = g_{ab}^{1/2}$. Impose Lorenz gauge $\nabla_a h^a_b - \frac{1}{2} \nabla_a h^b_b = 0$ and null gauge $h_{ka} = 0$. The Lagrangian and equations of motion are simply that of perturbative general relativity (GR). The only constraint on $h_{ab}$ on the horizon at half order is the null-null component of the Einstein equation: \begin{equation}\label{Gkk} G_{kk} = 0. \end{equation} By integrating $\nabla_k \theta^{1/2} = 0$ (the half order Raychaudhuri equation (\ref{Rayhalf}), one finds that there is no half order contribution to the area: \begin{equation}\label{freeze} h_{ij} g^{ij} = 0. \end{equation} In order to keep things simple, the trace degree of freedom of $h_{ij}$ will therefore be set to zero before quantization. Only the traceless part of $h_{ij}$ represents physical graviton degrees of freedom.\footnote{Rotational symmetry assures that the commutator of the trace degrees of freedom cannot mix with the commutator of the traceless degrees of freedom. The constraint (\ref{Gkk}) generates diffeomorphisms in the $k$ direction. Consequently if one wished to impose this constraint after quantization, for consistency it would also be necessary to include as a physical degree of freedom the parameter $\lambda$ which breaks this symmetry.} $h_{ij}$ cannot be smeared to make an operator on the horizon, but $\nabla_k h_{ij}$ can. Thus, the only physical components of the field are the transverse shear components $\sigma_{ij} \propto \nabla_k h_{ij}$. In GR, gravitons do not contribute to the gravitational stress-energy tensor $T_{ab}$ found by varying the matter Lagrangian with respect to the metric, since gravitons do not contribute to the matter Lagrangian. And if one varies with respect to the full gravitational Lagrangian, the resulting tensor vanishes when the equations of motion are satisfied. However, in perturbative GR, one can still define a stress-energy tensor perturbatively by varying the Lagrangian with respect to the \emph{background} metric, rather than the perturbed metric. The resulting stress-energy tensor is proportional to the contribution of $h_{ab}$ to the Einstein tensor: \begin{equation} T_{ab}^1 = G_{ab}^{1} / 8\pi G, \end{equation} to first order in $\hbar$. On the horizon this is just \begin{equation}\label{gravT} T_{kk} = (\nabla_k h_{ij}) \nabla_k h^{ij} / 32\pi G. \end{equation} The canonically conjugate quantities for canonical general relativity on a spacelike slice $\Sigma$ are the spatial metric $q_{ab}$ and the extrinsic curvature $K_{ab} = \nabla_{n} q_{ab}/2$ \cite{ADM}: \begin{equation} [q_{ab}(x_1),\,(K^{cd} - q^{cd} K)(x_2)] = \frac{i\hbar}{2} (8\pi G) \frac{\delta_a^c\delta_b^d + \delta_b^c\delta_a^d}{2} \delta^{D-1}(x_1 - x_2) \end{equation} If one takes the infinite boost limit, the spatial extrinsic curvature $K_{ij}$ with $i,\,j$ lying in the transverse plane becomes the null extrinsic curvature: \begin{equation} K_{ij} \to B_{ij} = \nabla_k h_{ij}/2 = \sigma_{ij} + \frac{1}{D-2} g_{ij} \theta. \end{equation} Because the trace part has been made to vanish by Eq. (\ref{freeze}), only the traceless shear part remains. Therefore the commutator is \begin{equation} [h_{ij}(y_1,\,\lambda_1),\, \sigma^{lm}(y_2,\,\lambda_2)] = \frac{i\hbar}{2} (8 \pi G) \delta_{ij}^{lm} \delta^{D-2}(y_1 - y_2) \delta(\lambda_1 - \lambda_2), \end{equation} where $\delta^{lm}_{ij} = \frac{1}{2}(\delta^l_i \delta^m_j + \delta^l_j \delta^m_i) - \frac{1}{D-2} g_{ij}g^{lm}$ is the Kroneker delta for the traceless symmetric representation. As for the other bosonic fields, $\sigma_{ij}$ is an observable when smeared on the horizon, but $h_{ij}$ is not. When the horizon generators are discretized, the graviton CFT is that of $\frac{1}{2}(D^2 - 3D)$ left-moving scalar fields. \section{Interactions}\label{int} Does the argument given in section \ref{arg} for the GSL continue to work when the quantum fields have nontrivial interactions besides the minimal coupling to gravity? The question is whether one can continue to define a horizon algebra $\mathcal{A}(H)$ satisfying the four axioms required for the proof described in sections \ref{outline} and \ref{sym}: Determinism, Ultralocality, Local Lorentz Invariance, and Stability. Except for free fields and 1+1 CFT's (see below), it is not obvious that this is the case. Some evidence for and against the existence of such an algebra will be presented below. Hopefully future work will clarify these issues. \subsection{Perturbative Yang-Mills and Potential Interactions} Let $\phi_i$ stand for a field (indexed by $i$) in any free field theory, of any spin. What happens to the horizon algebra upon adding interactions? In general, the addition of arbitrary terms to the Lagrangian will change both the commutation relations and the value of the null stress-energy tensor $T_{kk}$. But for certain special kinds of interactions, the null algebra may remain unaffected. In particular, at least at the level of formal perturbation theory, the horizon fields $\phi_i$ do not care about the addition of an arbitrary potential term $V(\phi)$ to the Lagrangian. In order to be a potential, $V$ must depend only on the fields and the metric, not field derivatives or the Riemann tensor. The general horizon commutator can be written as \begin{equation} [\phi_i ,\, \Pi^i] = \frac{i\hbar}{2} \delta^{D-2}(y_1 - y_2) \delta(\lambda_1 - \lambda_2), \end{equation} where the conjugate momentum to the field in the null direction is given by \begin{equation}\label{conpi} \Pi^i = -\frac{ \partial \mathcal{L} }{\partial \nabla^a \phi_i } k^a, \end{equation} and the commutator is replaced with an anticommutator for fermionic fields. $V$ does not depend on any derivatives of the field: \begin{equation} \frac{ \partial V}{\partial \nabla^a \phi_i } = 0, \end{equation} so the momentum $\Pi^i$ is the same as in the free theory. Since the horizon algebra is generated by the free field operators subject to the above commutation relation, the horizon algebra $\mathcal{A}(H)$ is unaffected by the perturbation. A similar result holds for Yang-Mills interactions. The Yang-Mills Lagrangian coupled to spinors and scalars is \begin{equation} \mathcal{L} = -\frac{1}{4} F_{ab} F^{ab} - \frac{1}{2} \nabla_a \Phi \nabla^a \Phi + \gamma^{ABi} \Psi_A \nabla_i \Psi_B, \end{equation} where $F_{ab} = \nabla_a A_b - \nabla_b A_a$. Because $\nabla_a$ is the covariant derivative, there are cubic boson interactions which depend on the $\nabla^k$ derivative, of the form $A^a A_k \nabla^k A_a$ and $A_k \Phi \nabla^k \Phi$. However, these interactions both depend on $A_k$, which vanishes in null gauge (which was used to obtain the horizon algebra in section \ref{phot}). The spinor interactions do not depend on $\nabla_k$. So Yang-Mills interactions also do not affect $\mathcal{A}(H)$, as a special consequence of gauge symmetry. Because the horizon algebra is the same, the generator of null translations $T_{kk}$ must also be the same. Since for minimally coupled theories the canonical stress-tensor and the gravitational stress-tensor of matter are the same up to boundary terms at infinity \cite{fursaev99}, this means that the formula for the area $A$ in terms of $T_{kk}$ is the same. Also, the (translation-invariant) vacuum state $|0\rangle$ of the interacting field theory is the same as the free field vacuum, up to zero modes \cite{burkardt96}. This is because, unlike spatial surfaces, null surfaces have a kinematic momentum operator $p_k$ which is required to be positive.\footnote{In the case of spacelike surfaces, the interacting vacuum cannot even lie in the Fock space of the free vacuum \cite{EF05}.} Since everything in $\mathcal{A}(H)$ is exactly the same as in the free case, at the level of formal perturbation theory the entire proof goes through without depending in any way on the interactions. However, this entire discussion needs to be taken with a large grain of salt, because it assumes that the interactions in the Lagrangian can be treated as a finite perturbation. Once loop corrections are taken into account, there will be divergences which have to be absorbed into the coupling constants. Even if one starts with an interaction potential $V(\phi)$ which seems not to have any harmful derivative couplings in it, renormalization will typically produce derivative couplings which will affect the commutation relations. For example, a field strength renormalization of the propagator term will change the overall coefficient of the commutation relation. This field strength renormalization will usually be infinite---except when the theory is superrenormalizable. So for e.g. Yang-Mills or $\Phi^4$ in $D = 3$, the above arguments suggest that the horizon algebra should be unaffected. This however, has not been shown rigorously at the nonperturbative level. For marginally renormalizable or nonrenormalizable theories, the horizon algebra might be deformed, or it might not exist at all. Nevertheless, null quantization methods have been useful for ($D=4$) QCD calculations \cite{burkardt96}, notwithstanding the fact that it may not be rigorously justified. In the case of spacelike hypersurfaces, there is a series of theorems \cite{powers67} which show that for any quantum field theory which is reducible to bosons and fermions satisfying the equal time canonical (anti-)commutation relations (ETCCR), the theory must be free unless the interactions are sufficiently weak in the ultraviolet. Superrenormalizable theories do obey the ETCCR, nonrenormalizable theories cannot obey the ETCCR (even if they can be defined using a UV fixed point), while the status of marginally renormalizable theories is unclear. The problem arises because of infinite renormalization of the fields. Thus there exist at least some QFT's which do not satisfy the equal time ETCCR. One \emph{possible} interpretation of this result is that the ``equal time'' is at fault, and it is necessary to smear the fields in time as well as in space in order to get a well defined operator. This probably would mean that such fields are not well defined when smeared on a null surface either. However, it could still be that there exist a different set of fields which do not obey canonical commutation relations, and can be defined on the horizon algebra. \subsection{Conformal Field Theories}\label{nonpert} So do nonperturbatively interacting QFT's really have a horizon algebra? One can get some insight by studying conformal field theory (CFT). Any physically consistent QFT must have good ultraviolet behavior as length scales are taken to zero. The conventional wisdom is that this happens if and only if the theory approaches an ultraviolet fixed point of the renormalization group flow. At short distances, the theory is therefore scale invariant. All known scale invariant QFT's are also conformally invariant, so let us ask whether CFT's have a null surface formalism. Since the near-horizon limit is a type of ultraviolet limit, it seems probable that a QFT has a null surface formulation if and only if the scaling limit CFT does. The situation is very different for 1+1 CFT's (which have an infinite conformal group) and higher dimensional CFT's (which have a finite conformal group). \paragraph{\textbf{1+1 CFT}.} In the case of 1+1 CFT's, there always exists a nontrivial algebra of observables $\mathcal{A}(H)$ on the horizon (i.e. on a lightray), which is simply the algebra of the left-moving chiral fields. To see this, we remind the reader of some facts about 1+1 CFT's (from e.g. \cite{ginsparg89}). The operators of a CFT fall into infinite dimensional representations of the conformal algebra associated with the theory's central charges $c$ and $\tilde{c}$. These representations are classified by the weight spectrum of primary operators $(h,\,\tilde{h})$, which specify the weight of the primary operator in the representation with respect to left and right dilations. Descendants of these operators have weights given by the primary operators plus integers. The algebra of operators which are well-defined on the horizon is simply the algebra of left-moving chiral operators (i.e. the algebra generated by quasi-primary operators weight $(h,\,0)$). Such fields do not depend on the $u$ coordinate and therefore must be localizable to the horizon. (On the other hand, the two two-point function of a non-chiral operator diverges when the two points are null separated on the horizon, so such operators cannot be smeared in one null direction alone.) Since the identity operator has weight $(0,\,0)$, there is always an infinite sequence of such operators, including the null stress-energy $T_{kk}$ of weight $(2,\,0)$. Thus there is always an infinite nontrivial horizon algebra $\mathcal{A}(H)$, which includes the generators of the conformal group itself. We now examine whether this horizon algebra obeys the necessary axioms described in section \ref{sym} for the proof of the GSL. Ultralocality is trivial in 1+1 dimensions, since there is only one horizon generator. Lorentz Symmetry and Stability hold by virtue of the normal QFT axioms.\footnote{Although the discussion in this subsection is entirely about QFT on a fixed background spacetime, the reader may wonder why one would want to consider a 1+1 CFT's for a matter sector given that GR is topological in 2 dimensions. The answer is that the proof given in section \ref{arg} is equally applicable to 2d dilaton gravity, in which the dilaton plays the role of the ``area''.} The only tricky point is Determinism, which requires the exterior of the horizon to be determined by $\mathcal{A}(H)$ and $\mathcal{A}(\mathcal{I}^+)$. In the case of a chiral CFT which breaks into independent left-moving and right-moving sectors, Determinism is obvious. In the case of a non-chiral theory, the only new issue is that there may be superselection constraints relating the left-moving and right-moving fields. For example, in the theory of a free fermion, it is possible to introduce a ``twist operator'' with weight $(1/16,\,1/16)$, but one cannot view this operator as a product of two operators with weight $(1/16,\,0)$ and $(0,\,1/16)$ without destroying modular invariance \cite{ginsparg89}. Thus there might be nontrivial constraints relating $\mathcal{A}(H)$ with $\mathcal{A}(\mathcal{I}^+)$. However, a non-chiral CFT can be extended into a chiral CFT simply by ignoring these superselection constraints and making the left-moving and right-moving sectors independent. This can ruin modular invariance, but modular invariance was not needed for the proof of the GSL. By the operator-state correspondence, it will also increase the number of states of the theory, but it does not affect the vacuum state $\sigma$, and one can simply choose the state $\rho$ to satisfy the constraints of the non-chiral CFT. \paragraph{\textbf{Higher dimensional CFT.}} In higher dimensional interacting CFT's, a local field will no longer obey the free wave equation. This means that it must have a nonzero anomalous dimension $\eta$. For example, a primary scalar field in $D$ dimensions will have a dimension $\Delta = (D - 2)/2 + \eta$, with $\eta > 0$ due to the unitarity bound. Such fields do not form operators when smeared on the horizon alone. This can be seen from evaluating the square of the smeared field using the spectral decomposition of the operator: \begin{equation} \langle \Phi(f)^2 \rangle \propto \int_{p^2 < 0} d^{D}p\,H(p_0) \frac{\tilde{f}^2(p_v, p_y)}{(-p^2)^{1 - \eta}}, \end{equation} where $\tilde{f}$ is the Fourier transform of the smearing integral on the horizon. This expression is the analogue of Eq. (\ref{spectral})), but now the integral is performed over all timelike momenta $p^2 < 0$. Because of the smearing, the integral is dominated by momenta which point nearly in the $p_u$ direction. Since $p^2 = p_y^2 - p_u p_v$, the integral falls off in the $p_u$ direction like $p_u^{\eta - 1}$. This is divergent for all permitted values of $\eta$. Consequently no operator can be defined. Unlike the free case, it is no longer possible to improve the situation by taking $\nabla_v$ derivatives, since the $p_u$ and $p_v$ directions are no longer related by the null mass shell condition. Similar arguments rule out operators formed from interacting fields with spin $\phi_I$, where $I$ transforms in a spin-$s$ irrep. Let the conjugate field be written $\phi^*_{I^\prime}$. In this case it is necessary (but not always sufficient) to satisfy the unitary bound that the primary have weight $\Delta = (D - 2)/2 + s + \eta$ for an $\eta > 0$ \cite{mack}. The absolute square of the field smeared on the horizon looks like: \begin{equation} \langle \phi(f)_I \phi^* (f)_I^\prime \rangle \propto \int_{p^2 < 0} d^{D}p\,H(p_0) \epsilon_{II^\prime}(p) \frac{\tilde{f}^2(p_v, p_y)}{(-p^2)^{1 - s - \eta}}, \end{equation} where $\epsilon_{II^\prime}(p)$ is the scalar product of the spins $I$ and $I^\prime$ in the little group $SO(D - 1)$ that preserves the momentum $p$. At fixed $p_v$ and large $p_u$, $\epsilon$ can scale like $p_u^{2x}$ where $-s \le x \le s$ depends on the weight of the particular polarization under Lorentz boosts. This integral is still divergent. So it is also impossible to construct $\mathcal{A}(H)$ from fields of higher spin. Nevertheless, this does not entirely rule out the possibility that there might be a nontrivial horizon algebra $\mathcal{A}(H)$, so long as it is constructed from operators that do not come from smearing local fields. As an analogy, there exist CFT's in which fields cannot be defined by smearing on a $D - 1$ dimensional spacelike surface $\Sigma$.\footnote{This can be seen by doing a spectral decomposition of a primary scalar field with $\eta \ge 1/2$.} Nevertheless, one can still define a local algebra on an incomplete spatial surface $\Sigma$ by means of the Hodge duality $\mathcal{A}(\Sigma) = \mathcal{A}^\prime(\Sigma^\prime)$, i.e. by defining $\mathcal{A}(\Sigma)$ to include any operator which commutes with all observables that are spacelike separated from $\Sigma$. It may be that some similar trick can be used to define the observables on a null surface. A possible argument that $\mathcal{A}(H)$ should exist is that in a CFT there is no distinction between finite and infinite distances. Consequently, one can apply a Weyl rescaling $g_{ab} \to \Omega^2(x) g_{ab}$ with the property that the affine distance to the horizon becomes infinite. Because curvature has mass dimension $2$, this also should lead to the scaling away of any curvature effects. The existence of an algebra on the horizon is now equivalent to the existence of final scattering observables for particles travelling into this new, nearly flat asymptotic region. This converts the ultraviolet problem of null restriction to the infrared problem of final scattering states. However, because a CFT has no mass gap, there are long range interactions, and the asymptotic states might not form a Fock space, due to the possibility of creating an infinite number of soft massless particles. In order to apply the proof of the GSL in section \ref{arg}, one would need to show that despite the existence of these long range forces, the final scattering algebra can be decomposed into a part associated with $H$ and a part associated with $\mathcal{I}^+$: \begin{equation} \mathcal{A}(H\,\cup\,\mathcal{I}^+) = \mathcal{A}(H) \otimes \mathcal{A}(\mathcal{I}^+)), \end{equation} and also show that $\mathcal{A}(H)$ obeys the other three axioms: Ultralocality, Local Lorentz Invariance, and Stability. If there are any QFT's in which the algebra $\mathcal{A}(H)$ does not exist, extending the proof would presumably require a more delicate near-horizon limit. One would have to show that a small smearing of fields out from the horizon does not break the symmetry group of the horizon sufficiently to spoil the proof. \subsection{Higher Curvature and Nonminimal Coupling}\label{nonmin} Further generalization of the proof is necessary when the gravity theory goes beyond the Einstein theory, either because the matter fields are nonminimally coupled, or because there are higher curvature terms in the gravitational Lagrangian. In general, the presence of such terms will not only change the metric field equations, but also lead to the addition of extra terms in the horizon entropy $S_\mathrm{H}$. These corrections can be calculated for stationary black holes by means of the Wald Noether charge method \cite{WI94}; however, there are certain ambiguities which arise for the case of dynamically evolving horizons. Except for some special cases like $f(R)$ gravity (which can be related by field redefinitions to scalar fields minimally coupled to general relativity \cite{2ndlaw}) it is unknown whether such theories even obey a classical second law, let alone a generalized one. For example, it appears that the Wald entropy can decrease when black holes merge in Lovelock gravity \cite{lovelock}. Although the present work is restricted to the Einstein theory, some insight into these problems might be gained by analyzing the structure of horizon observables in non-Einstein theories. The reason why the GSL holds on black holes in general relativity is that $\mathcal{A}(H)$ is small enough to have lots of symmetry (Local Lorentz Invariance) and yet large enough to contain all the information falling across the horizon (Determinism). In general, alternative gravities will require $\mathcal{A}(H)$ to depend on additional information besides the metric and affine parameter on the horizon, e.g. curvature components. If this additional information breaks the ability to translate each horizon generator independently, this may account for the failure of the second law in these theories. Another reason why theories may fail to obey the second law is if the theory permits negative energy excitations, violating the Stability axiom. On the other hand, if a horizon field theory for matter and gravitons can be found which still obeys all four axioms used in section \ref{arg}, this is auspicious for the GSL. It might be that the ambiguities in the Wald Noether charge can be fixed by requiring that $S_\mathrm{H}$ depend only on quantities measurable in $\mathcal{A}(H)$ itself. Suppose that this were done. Then the GSL might be shown by the following argument: First we need an analogue of Eq. (\ref{Tint}), relating the horizon entropy to the boost energy falling across the horizon: \begin{equation}\label{Sint} S_\mathrm{H}(\Lambda) = S_\mathrm{H}(+\infty) - \frac{2\pi}{\hbar} \int_\Lambda^\infty \langle T_{kk} \rangle \,(\lambda - \Lambda) \,d\lambda\,d^{D-2}y. \end{equation} But the Wald Noether charge method shows that this is true in any classical diffeomorphism invariant theory when $T_{kk}$ is interpreted as a \emph{canonical} stress-energy current \cite{WI94}. (The ``gravitational'' stress energy tensor defined by varying with respect to the metric is not very meaningful at this level of generality, because it is not invariant under field redefinitions of the metric). Wald's argument is classical, so in order to use Eq. (\ref{Sint}), one would have to show that it survives a semiclassical quantization of the matter fields. Since the canonical stress-energy tensor generates diffeomorphisms, one can also rewrite Eq. (\ref{Sint}) in terms of $K(\Lambda)$, the generator of boost symmetries about a horizon slice with $\lambda = \Lambda$: \begin{equation} S_\mathrm{H}(\Lambda) = C - 8\pi G\,\langle K(\Lambda) \rangle. \end{equation} Since the canonical stress-energy tensor is the generator $K$ of boost symmetries, by the Bisongano-Wichmann theorem, the quantum fields should be in a thermal state with respect to $K$. Assuming that a non-Einstein gravity theory satisfies each of the criteria described above, it too should obey a semiclassical GSL. \small \subsection*{Acknowledgments} I am grateful for discussions with Ted Jacobson, Rafael Sorkin, Rob Myers, William Donnelly, and Sudipta Sarkar, and for comments by Renaud Parentani. Supported in part by the National Science Foundation grants PHY-0601800 and PHY-0903572, the Maryland Center for Fundamental Physics, the Simons Foundation, the Perimeter Institute, and the Institute for Gravitation and the Cosmos at Penn State.
\section{Introduction} The coincidence of the kernel with the nucleolus -- that is, the kernel consists of a single point -- is only known for some classes of transferable utility games. In particular, it was established by~\citet{MPSh:72} that for the class of convex games -- introduced by~\citet{Shapley:71} -- the kernel and the nucleolus coincide. Recently,~\citet{getraf:12} were able to extend this result to the class of zero-monotonic almost-convex games. However, for the class of average-convex games, there is only some evidence that both solution concepts coalesce. In order to advance our understanding about TU games and game classes which possess an unique pre-kernel element, we propose an alternative approach to investigate this issue while applying results and techniques recently provided in the book by~\citet{mei:13}. There, it was shown that the pre-kernel of the grand coalition can be characterized by a finite union of solution sets of a family of quadratic and convex functions (Theorem 7.3.1). This dual representation of the pre-kernel is based on a Fenchel-Moreau generalized conjugation of the characteristic function. This generalized conjugation was introduced by~\citet{mart:96}, which he called the indirect function. Immediately thereafter, it was~\citet{mes:97} who proved that the pre-kernel can be derived from an over-determined system of non-linear equations. This over-determined system of non-linear equations is equivalent to a minimization problem, whose set of global minima is equal to the pre-kernel set. However, an explicit structural form of the objective function that would allow a better and more comprehensive understanding of the pre-kernel set could not be performed. The characterization of the pre-kernel set by a finite union of solution sets was possible due to a partition of the domain of the objective function into a finite number of payoff sets. From each payoff vector contained into a particular payoff set the same quadratic and convex function is induced. The collection of all these functions on the domain composes the objective function from which a pre-kernel element can be singled out. Moreover, each payoff set creates a linear mapping that maps payoff vectors into a vector subspace of balanced excesses. Equivalent payoff sets which reflects the same underlying bargaining situation produce the same vector subspace. The vector of balanced excesses generated by a pre-kernel point is contained into the vector subspace spanned by the basis vectors derived from the payoff set that contains this pre-kernel element. In contrast, the vectors of unbalanced excesses induced from the minima of a quadratic function do not belong to their proper vector subspace. An orthogonal projection maps these vectors on this vector subspace of the space of unbalanced excesses (cf.~\citet[Chap. 5-7]{mei:13}). From this structure a replication result of a pre-kernel point can be attained. This is due that from the payoff set that contains the selected pre-kernel element, and which satisfies in addition the non-empty interior condition, a null space in the game space can be identified that allows a variation within the game parameter without affecting the pre-kernel properties of this payoff vector. Even though the values of the maximum surpluses have been varied, the set of most effective coalitions remains unaltered by the parameter change. Hence, a set of related games can be determined, which are linear independent, and possess the selected pre-kernel element of the default game as well as a pre-kernel point (cf.~\citet[Sect. 7.6]{mei:13}). In the sequel of this paper, we will establish that the set of related games, which are derived from a default game exhibiting a singleton pre-kernel, must also possess the same unique pre-kernel, and therefore coincides with the pre-nucleolus. Notice, that these games need not necessarily be convex, average-convex, totally balanced, or zero-monotonic. They could belong to different subclasses of games, however, they must satisfy the non-empty interior condition. Moreover, we show that the pre-kernel correspondence in the game space restricted to the convex hull that is constituted by the extreme points, which are specified by the default and related games, is single-valued, and therefore continuous. The structure of the paper is organized as follows: In the Section~\ref{sec:prel} we introduce some basic notations and definitions to investigate the coincidence of the pre-kernel with the pre-nucleolus. Section~\ref{sec:dprk} provides the concept of the indirect function and gives a dual pre-kernel representation in terms of a solution set. In the next step, the notion of lexicographically smallest most effective coalitions is introduced in order to identify payoff equivalence classes on the domain of the objective function from which a pre-kernel element can be determined. Moreover, relevant concepts from~\citet{mei:13} are reconsidered. Section~\ref{sec:siva} studies the uniqueness of the pre-kernel for related games. However, Section~\ref{sec:lhc} investigates the continuity of the pre-kernel correspondence. In Section~\ref{sec:prspn} some sufficient conditions are worked out under which the pre-nucleolus of a default game can preserve the pre-nucleolus property for related games. A few final remarks close the paper. \section{Some Preliminaries} \label{sec:prel} A cooperative game with transferable utility is a pair $\langle N,v \rangle $, where $N$ is the non-empty finite player set $N := \{1,2, \ldots, n\}$, and $v$ is the characteristic function $v: 2^{N} \rightarrow \mathbb{R}$ with $v(\emptyset):=0$. A player $i$ is an element of $N$, and a coalition $S$ is an element of the power set of $2^{N}$. The real number $v(S) \in \mathbb{R}$ is called the value or worth of a coalition $S \in 2^{N}$. Let $S$ be a coalition, the number of members in $S$ will be denoted by $s:=|S|$. We assume throughout that $v(N) > 0$ and $n \ge 2$ is valid. In addition, we identify a cooperative game by the vector $v := (v(S))_{S \subseteq N} \in \mathcal{G}^{n} = \mathbb{R}^{2^{n}}$, if no confusion can arise. Finally, the relevant game space for our investigation is defined by $\mathcal{G}(N) := \{v \in \mathcal{G}^{n}\,\arrowvert\, v(\emptyset) = 0 \land v(N) > 0\}$. If $\mathbf{x} \in \mathbb{R}^{n}$, we apply $x(S) := \sum_{k \in S}\, x_{k}$ for every $S \in 2^{N}$ with $x(\emptyset):=0$. The set of vectors $\mathbf{x} \in \mathbb{R}^{n}$ which satisfies the efficiency principle $v(N) = x(N)$ is called the {\bfseries pre-imputation set} and it is defined by \begin{equation} \label{eq:pre-imp} \mathcal{I}^{\,0}(v):= \left\{\mathbf{x} \in \mathbb{R}^{n} \;\arrowvert\, x(N) = v(N) \right\}, \end{equation} where an element $\mathbf{x} \in \mathcal{I}^{\,0}(v)$ is called a pre-imputation. Given a vector $\mathbf{x} \in \mathcal{I}^{\,0}(v)$, we define the {\bfseries excess} of coalition $S$ with respect to the pre-imputation $\mathbf{x}$ in the game $\langle N,v \rangle $ by \begin{equation} \label{eq:exc} e^{v}(S,\mathbf{x}):= v(S) - x(S). \end{equation} Take a game $v \in \mathcal{G}^{n}$. For any pair of players $i,j \in N, i\neq j$, the {\bfseries maximum surplus} of player $i$ over player $j$ with respect to any pre-imputation $\mathbf{x} \in \mathcal{I}^{\,0}(v)$ is given by the maximum excess at $\mathbf{x}$ over the set of coalitions containing player $i$ but not player $j$, thus\begin{equation} \label{eq:maxexc} s_{ij}(\mathbf{x},v):= \max_{S \in \mathcal{G}_{ij}} e^{v}(S,\mathbf{x}) \qquad\text{where}\; \mathcal{G}_{ij}:= \{S \;\arrowvert\; i \in S\; \text{and}\; j \notin S \}. \end{equation} The set of all pre-imputations $\mathbf{x} \in \mathcal{I}^{\,0}(v)$ that balances the maximum surpluses for each distinct pair of players $i,j \in N, i\neq j$ is called the~\hypertarget{hyp:prk}{{\bfseries pre-kernel}} of the game $v$, and is defined by \begin{equation} \label{eq:prek} \mathcal{P\text{\itshape r}K}(v) := \left\{ \mathbf{x} \in \mathcal{I}^{\,0}(v)\; \arrowvert\; s_{ij}(\mathbf{x},v) = s_{ji}(\mathbf{x},v) \quad\text{for all}\; i,j \in N, i\neq j \right\}. \end{equation} In order to define the pre-nucleolus of a game $v \in \mathcal{G}^{n}$, take any $\mathbf{x} \in \mathbb{R}^{n}$ to define a $2^{n}$-tuple vector $\theta(\mathbf{x})$ whose components are the excesses $e^{v}(S,\mathbf{x})$ of the $2^{n}$ coalitions $S \subseteq N$, arranged in decreasing order, that is, \begin{equation} \label{eq:compl_vec} \theta_{i}(\mathbf{x}):=e^{v}(S_{i},\mathbf{x}) \ge e^{v}(S_{j},\mathbf{x}) =:\theta_{j}(\mathbf{x}) \qquad\text{if}\qquad 1 \le i \le j \le 2^{n}. \end{equation} Ordering the so-called complaint or dissatisfaction vectors $\theta(\mathbf{x})$ for all $\mathbf{x} \in \mathbb{R}^{n}$ by the lexicographic order $\le_{L}$ on $\mathbb{R}^{n}$, we shall write \begin{equation} \theta(\mathbf{x}) <_{L} \theta(\mathbf{y}) \qquad\text{if}\;\exists\;\text{an integer}\; 1 \le k \le 2^{n}, \end{equation} such that $\theta_{i}(\mathbf{x}) = \theta_{i}(\mathbf{y})$ for $1 \le i < k$ and $\theta_{k}(\mathbf{x}) < \theta_{k}(\mathbf{y})$. Furthermore, we write $\theta(\mathbf{x}) \le_{L} \theta(\mathbf{y})$ if either $\theta(\mathbf{x}) <_{L} \theta(\mathbf{y})$ or $\theta(\mathbf{x}) = \theta(\mathbf{y})$. Now the pre-nucleolus $\mathcal{P\text{\itshape r}N}(v)$ over the pre-imputations set $\mathcal{I}^{\,0}(v)$ is defined by \begin{equation} \label{eq:prn_sol} \mathcal{P\text{\itshape r}N}(v) = \left\{\mathbf{x} \in \mathcal{I}^{\,0}(v)\; \arrowvert\; \theta(\mathbf{x}) \le_{L} \theta(\mathbf{y}) \;\forall\; \mathbf{y} \in \mathcal{I}^{\,0}(v) \right\}. \end{equation} The {\bfseries pre-nucleolus} of any game $v \in \mathcal{G}^{n}$ is non-empty as well as unique, and it is referred to as $\nu(v)$ if the game context is clear from the contents or $\nu(N,v)$ otherwise. \section{A Dual Pre-Kernel Representation} \label{sec:dprk} The concept of a Fenchel-Moreau generalized conjugation -- also known as the indirect function of a characteristic function game -- was introduced by~\citet{mart:96}, and provides the same information as the $n$-person cooperative game with transferable utility under consideration. This approach was successfully applied in~\citet{mei:13} to give a dual representation of the pre-kernel solution of TU games by means of solution sets of a family of quadratic objective functions. In this section, we review some crucial results extensively studied in~\citet[Chap.~5 \&~6]{mei:13} as the building blocks to investigate the single-valuedness of the pre-kernel correspondence. The {\bfseries convex conjugate} or {\bfseries Fenchel transform} $f^{*}: \mathbb{R}^{n} \to \overline{\mathbb{R}}$ (where $\overline{\mathbb{R}} := \mathbb{R} \cup \{ \pm\;\infty\}$) of a convex function $f: \mathbb{R}^{n} \to \overline{\mathbb{R}}$ (cf.~\citet[Section 12]{Rocka:70}) is defined by \begin{equation*} f^{*}(\mathbf{x}^{\,*}) = \sup_{\mathbf{x} \in \mathbb{R}^{n}} \{\langle\; \mathbf{x}^{\,*}, \mathbf{x} \;\rangle - f(\mathbf{x})\} \qquad \forall \mathbf{x}^{\,*} \in \mathbb{R}^{n}. \end{equation*} Observe that the Fenchel transform $f^{*}$ is the point-wise supremum of affine functions $p(\mathbf{x}^{\,*}) = \langle\; \mathbf{x}, \mathbf{x}^{\,*} \;\rangle - \mu$ such that $(\mathbf{x},\mu) \in (\CMcal{C} \times \mathbb{R}) \subseteq (\mathbb{R}^{n} \times \mathbb{R})$, whereas $\CMcal{C}$ is a convex set. Thus, the Fenchel transform $f^{*}$ is again a convex function. We can generalize the definition of a Fenchel transform (cf.~\citet{mart:96}) by introducing a fixed non-empty subset $\CMcal{K}$ of $\mathbb{R}^{n}$, then the conjugate of a function $f: \CMcal{K} \to \overline{\mathbb{R}}$ is $f^{c}: \mathbb{R}^{n} \to \overline{\mathbb{R}}$, given by \begin{equation*} f^{c}(\mathbf{x}^{\,*}) = \sup_{\mathbf{x} \in \CMcal{K}} \{\langle\; \mathbf{x}^{\,*}, \mathbf{x} \;\rangle - f(\mathbf{x})\} \qquad \forall \mathbf{x}^{\,*} \in \mathbb{R}^{n}, \end{equation*} which is also known as the {\bfseries Fenchel-Moreau conjugation}. A vector $\mathbf{x}^{\,*}$ is said to be a subgradient of a convex function $f$ at a point $\mathbf{x}$, if \begin{equation*} f(\mathbf{z}) \ge f(\mathbf{x}) + \langle\; \mathbf{x}^{\,*}, \mathbf{z} - \mathbf{x} \;\rangle \qquad\forall \mathbf{z} \in \mathbb{R}^{n}. \end{equation*} The set of all subgradients of $f$ at $\mathbf{x}$ is called the subdifferentiable of $f$ at $\mathbf{x}$ and it is defined by \begin{equation*} \partial f(\mathbf{x}):= \{ \mathbf{x}^{\,*} \in \mathbb{R}^{n}\;\arrowvert\; f(\mathbf{z}) \ge f(\mathbf{x}) + \langle\; \mathbf{x}^{\,*}, \mathbf{z} - \mathbf{x} \;\rangle \quad (\forall \mathbf{z} \in \mathbb{R}^{n})\}. \end{equation*} The set of all subgradients $\partial f(\mathbf{x})$ is a closed convex set, which could be empty or may consist of just one point. The multivalued mapping $\partial f: \mathbf{x} \mapsto \partial f(\mathbf{x})$ is called the subdifferential of $f$. \begin{theorem}[\citet{mart:96}] \label{th:mart7} The indirect function $\pi: \mathbb{R}^{n} \to \mathbb{R}$ of any $n$-person TU game is a non-increasing polyhedral convex function such that \begin{itemize} \item[(i)] $\partial{\pi(\mathbf{x})}{} \cap \{-1, 0\}^{n} \neq \emptyset \qquad\forall \mathbf{x} \in \mathbb{R}^{n}$, \item[(ii)] $ \{-1,0\}^{n} \subset \bigcup_{\mathbf{x} \in \mathbb{R}^{n}} \partial{\pi(\mathbf{x})}{}$, and \item[(iii)] $\min_{\mathbf{x} \in \mathbb{R}^{n}}\; \pi(\mathbf{x}) = 0$. \end{itemize} Conversely, if $\pi: \mathbb{R}^{n} \to \mathbb{R}$ satisfies $(i)$-$(iii)$ then there exists an unique $n$-person TU game $\langle N,v \rangle$ having $\pi$ as its indirect function, its characteristic function is given by \begin{equation} \label{eq:mart15} v(S) = \min_{\mathbf{x} \in \mathbb{R}^{n}}\bigg\{\pi(\mathbf{x}) + \sum_{k \in S}\; x_{k}\bigg\} \qquad\forall\; S \subset N. \end{equation} \end{theorem} According to the above result, the associated {\bfseries indirect function} $\pi: \mathbb{R}^{n}\to \mathbb{R}_{+}$ is given by: $$ \pi(\mathbf{x}) = \max_{S \subseteq N}\, \bigg\{v(S) - \sum_{k \in S}\;x_{k}\bigg\},$$ for all $\mathbf{x}\in\mathbb{R}^{n}$. A characterization of the pre-kernel in terms of the indirect function is due to~\citet{mes:97}. Here, we present this representation in its most general form, although we restrict ourselves to the trivial coalition structure $\mathcal{B}=\{N\}$. The pre-imputation that comprises the possibility of compensation between a pair of players $i, j \in N, i \neq j$, is denoted as $\mathbf{x}^{\;i,j,\delta} = (x^{\;i,j,\delta}_{k})_{k \in N}\in \mathcal{I}^{\,0}(v)$, with $\delta \ge 0$, which is given by \begin{equation*} \mathbf{x}^{\;i,j,\delta}_{N\backslash\{i,j\}} = \mathbf{x}_{N\backslash\{\;i,j\}},\; x^{i,j,\delta}_{i} = x_{i} - \delta\quad\text{and}\quad x^{\;i,j,\delta}_{j} = x_{j} + \delta. \end{equation*} \begin{proposition}[\citet{mes:97,mei:13}] \label{prop:mese1} For a TU game with indirect function $\pi$, a pre-imputation $\mathbf{x} \in \mathcal{I}^{\,0}(v)$ is in the pre-kernel of $\langle N,v \rangle$ for the coalition structure $\mathcal{B} = \{B_{1}, \ldots, B_{l} \}$, $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v,\mathcal{B})$, if, and only if, for every $k \in \{1,2, \ldots, l \}$, every $i,j \in B_{k},\; i < j$, and some $\delta \ge \delta_{1}(v,\mathbf{x})$, one gets \begin{equation*} \pi(\mathbf{x}^{\;i,j,\delta}) = \pi(\mathbf{x}^{\;j,i,\delta}). \end{equation*} whereas $\delta_{1}(\mathbf{x},v) := \max_{k \in N, S \subset N\backslash\{k\}}\; |v(S \cup \{k\}) - v(S) - x_{k}|$. \end{proposition} \citet{mes:97} was the first who recognized that based on the result of Proposition~\ref{prop:mese1} a pre-kernel element can be derived as a solution of an over-determined system of non-linear equations. For the trivial coalition structure $\mathcal{B} = \{N\}$ the over-determined system of non-linear equations is given by \begin{equation} \label{eq:fij} \begin{cases} f_{ij}(\mathbf{x}) = 0 & \forall i,j \in N, i < j\\[.5em] f_{0}(\mathbf{x}) = 0 \end{cases} \end{equation} where, for some $\delta \ge \delta_{1}(\mathbf{x},v)$, \begin{equation*} f_{ij}(\mathbf{x}) := \pi(\mathbf{x}^{\;i,j,\delta}) - \pi(\mathbf{x}^{\;j,i,\delta}) \qquad\forall i,j \in N,i<j,\tag{\ref{eq:fij}-a} \end{equation*} and \begin{equation*} f_{0}(\mathbf{x}) := \sum_{k \in N}\; x_{k} - v(N).\tag{\ref{eq:fij}-b} \end{equation*} To any over-determined system an equivalent minimization problem is associated such that the set of global minima coincides with the solution set of the system (cf.~\citet[Sec. 5.3]{mei:13}). The solution set of such a minimization problem is the set of values for $\mathbf{x}$ which minimizes the following function \begin{equation} \label{eq:objfh} h(\mathbf{x}) := \sum_{\substack{i,j \in N\\ i < j}}\; (f_{ij}(\mathbf{x}))^2 + (f_{0}(\mathbf{x}))^2 \ge 0 \qquad\;\forall\,\mathbf{x} \in \mathbb{R}^{n}. \end{equation} As we will notice in the sequel, this optimization problem is equivalent to a least squares adjustment. For further details see~\citet[Chap. 6]{mei:13}. From the existence of the pre-kernel and objective function $h$ of type~\eqref{eq:objfh}, we get the following relation: \begin{corollary}[\citet{mei:13}] \label{cor:rep} For a TU game $\langle N,v \rangle$ with indirect function $\pi$, it holds that \begin{equation} \label{eq:prkbyh} h(\mathbf{x}) = \sum_{\substack{i,j \in N \\ i < j}}\; (f_{ij}(\mathbf{x}))^2 + (f_{0}(\mathbf{x}))^2 = \min_{\mathbf{y} \in \mathcal{I}^{0}(v)}\; h(\mathbf{y}) = 0, \end{equation} if, and only if, $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v)$. \end{corollary} \begin{proof} To establish the equivalence between the pre-kernel set and the set of global minima, we have to notice that in view of Theorem~\ref{th:mart7} $\min_{\mathbf{y}} h = 0$ is in force. Now, we prove necessity while taking a pre-kernel element, i.e.~$\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v)$, then the efficiency property is satisfied with $f_{0}(\mathbf{x}) = 0$ and the maximum surpluses $s_{ij}(\mathbf{x}, v)$ must be balanced for each distinct pair of players $i,j$, implying that $f_{ij}(\mathbf{x}) = 0$ for all $i,j \in N, i < j$ and therefore $h(\mathbf{x}) = 0$. Thus, we are getting $\mathbf{x} \in M(h)$. To prove sufficiency, assume that $\mathbf{x} \in M(h)$, then $h(\mathbf{x}) = 0$ with the implication that the efficiency property $f_{0}(\mathbf{x}) = 0$ and $f_{ij}(\mathbf{x}) = 0$ must be valid for all $i,j \in N, i < j$. This means that the difference $f_{ij}(\mathbf{x}) = (\pi(\mathbf{x}^{i,j,\delta}) - \pi(\mathbf{x}^{j,i,\delta}))$ is equalized for each distinct pair of indices $i,j \in N, i < j$. Thus, $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v)$. It turns out that the minimum set coincides with the pre-kernel, i.e., we have: \begin{equation} \label{eq:prk} M(h) = \{\mathbf{x} \in \mathcal{I}^{\,0}(v)\,\arrowvert\; h(\mathbf{x}) = 0 \} = \mathcal{P\text{\itshape r}K}(v), \end{equation} with this argument we are done. \end{proof} To understand the structural form of the objective function $h$, we will first identify equivalence relations on its domain. To start with, we define the set of {\bfseries most effective} or {\bfseries significant coalitions} for each pair of players $i,j \in N, i \neq j$ at the payoff vector $\mathbf{x}$ by \begin{equation} \label{eq:bsc_ij} \mathcal{C}_{ij}(\mathbf{x}):=\{S \in \mathcal{G}_{ij}\,\arrowvert\, s_{ij}(\mathbf{x},v) = e^{v}(S,\mathbf{x}) \}. \end{equation} When we gather for all pair of players $i,j \in N, i \neq j$ all these coalitions that support the claim of a specific player over some other players, we have to consider the concept of the collection of most effective or significant coalitions w.r.t. $\mathbf{x}$, which we define as in~\citet[p. 315]{MPSh:79} by \begin{equation} \label{eq:bsc} \mathcal{C}(\mathbf{x}) := \bigcup_{\substack{i,j \in N \\ i \neq j} } \; \mathcal{C}_{ij}(\mathbf{x}). \end{equation} Notice that the set $\mathcal{C}_{ij}(\mathbf{x})$ for all $i,j \in N, i \neq j$ does not have cardinality one, which is required to identify a partition on the domain of function $h$. Now let us choose for each pair $i,j \in N, i \neq j$ a descending ordering on the set of most effective coalitions in accordance with their size, and within such a collection of most effective coalitions having smallest size the lexicographical minimum is singled out, then we obtain the required uniqueness to partition the domain of $h$. This set is denoted by $\mathcal{S}_{ij}(\mathbf{x})$ for all pairs $i,j \in N, i \neq j$, and gathering all these collections we are able to specify the set of lexicographically smallest most effective coalitions w.r.t. $\mathbf{x}$ through \begin{equation} \label{eq:mec} \mathcal{S}(\mathbf{x}) := \{ \mathcal{S}_{ij}(\mathbf{x}) \,\arrowvert\, i,j \in N, i \neq j \}. \end{equation} This set will be indicated in short as the set of {\bfseries lexicographically smallest coalitions} or just more succinctly {\bfseries most effective coalitions} whenever no confusion can arise. Notice that this set is never empty and can uniquely be identified. This implies that the cardinality of this set is equal to $n \cdot (n-1)$. In the following we will observe that from these type of sets equivalence relations on the domain $dom\, h$ can be identified. To see this, consider the correspondence $\mathcal{S}$ on $dom\, h$ and two different vectors, say $\mathbf{x}$ and $\vec{\gamma}$, then both vectors are said to be equivalent w.r.t. the binary relation $\sim$ if, and only if, they induce the same set of lexicographically smallest coalitions, that is, $\mathbf{x} \sim \vec{\gamma}$ if, and only if, $\mathcal{S}(\mathbf{x}) = \mathcal{S}(\vec{\gamma})$. In case that the binary relation $\sim$ is reflexive, symmetric and transitive, then it is an {\bfseries equivalence relation} and it induces {\bfseries equivalence classes} $[\vec{\gamma}]$ on $dom\, h$ which we define through $[ \vec{\gamma} ] := \{ \mathbf{x} \in dom\;h \;\arrowvert \mathbf{x} \sim \vec{\gamma}\}$. Thus, if $\mathbf{x} \sim \vec{\gamma}$, then $[\mathbf{x}] = [\vec{\gamma}]$, and if $\mathbf{x} \nsim \vec{\gamma}$, then $[\mathbf{x}] \cap [\vec{\gamma}] = \emptyset$. This implies that whenever the binary relation $\sim$ induces equivalence classes $[\vec{\gamma}]$ on $dom\, h$, then it partitions the domain $dom\, h$ of the function $h$. The resulting collection of equivalence classes $[\vec{\gamma}]$ on $dom\, h$ is called the quotient of $dom\, h$ modulo $\sim$, and we denote this collection by $dom\, h/\sim$. We indicate this set as an equivalence class whenever the context is clear, otherwise we apply the term payoff set or payoff equivalence class. \begin{proposition}[\citet{mei:13}] \label{prop:eq_rel} The binary relation $\sim$ on the set $dom\, h$ defined by $\mathbf{x} \sim \vec{\gamma} \iff \mathcal{S}(\mathbf{x}) = \mathcal{S}(\vec{\gamma})$ is an equivalence relation, which forms a partition of the set $dom\, h$ by the collection of equivalence classes $\{[\vec{\gamma}_{k}]\}_{k \in J}$, where $J$ is an arbitrary index set. Furthermore, for all $k \in J$, the induced equivalence class $[\vec{\gamma}_{k}]$ is a convex set . \end{proposition} \begin{proof} For a proof see~\citet[p. 59]{mei:13}. \end{proof} The cardinality of the collection of the payoff equivalence classes induced by a TU game is finite (cf.~\citet[Proposition 5.4.2.]{mei:13}). Furthermore, on each payoff equivalence class $[\vec{\gamma}]$ from the $dom\, h$ an unique quadratic and convex function can be identified. Therefore, there must be a finite composite of these functions that constitutes the objective function $h$. In order to construct such a quadratic and convex function suppose that $\vec{\gamma} \in [\vec{\gamma}]$. From this vector we attain the collection of most effective coalitions $\mathcal{S}(\vec{\gamma})$ in accordance with Proposition~\ref{prop:eq_rel}. Then observe that the differences in the values between a pair $\{i,j\}$ of players are defined by $\alpha_{ij} := (v(S_{ij}) - v(S_{ji})) \in \mathbb{R}$ for all $i,j \in N,\, i < j$, and $\alpha_{0} := v(N) > 0$ w.r.t. $\mathcal{S}(\vec{\gamma})$. All of these $q$-components compose the $q$-coordinates of a payoff independent vector $\vec{\alpha}$, with $q = \binom{n}{2} +1$. A vector that reflects the degree of unbalancedness of excesses for all pair of players, is denoted by $\vec{\xi} \in \mathbb{R}^{q}$, that is a $q$-column vector, which is given by \begin{equation} \label{eq:unb_exc} \begin{split} \xi_{ij} & := e^{v}(S_{ij},\vec{\gamma}) - e^{v}(S_{ji},\vec{\gamma}) = v(S_{ij}) - \gamma(S_{ij}) - v(S_{ji}) + \gamma(S_{ji}) \quad\forall \, i,j \in N,\, i < j, \allowdisplaybreaks\\ & = v(S_{ij}) - v(S_{ji}) + \gamma(S_{ji}) - \gamma(S_{ij}) = \alpha_{ij} + \gamma(S_{ji}) - \gamma(S_{ij}) \quad\forall \, i,j \in N,\, i < j, \allowdisplaybreaks\\ \xi_{0} & := v(N) - \gamma(N) = \alpha_{0} - \gamma(N). \end{split} \end{equation} In view of Proposition~\ref{prop:eq_rel}, all vectors contained in the equivalence~class $[\vec{\gamma}]$ induce the same set $\mathcal{S}(\vec{\gamma})$, and it holds \begin{equation} \label{eq:xi_zet} \xi_{ij} := e^{v}(S_{ij},\vec{\gamma}) - e^{v}(S_{ji},\vec{\gamma}) = s_{ij}(\vec{\gamma},v) - s_{ji}(\vec{\gamma},v) =: \zeta_{ij} \quad\forall \, i,j \in N,\, i < j. \end{equation} The payoff dependent configurations $\vec{\xi}$ and $\vec{\zeta}$ having the following interrelationship outside its equivalence class: $\vec{\xi} \neq \vec{\zeta}$ for all $\mathbf{y} \in [\vec{\gamma}]^{c}$. Moreover, equation~\eqref{eq:xi_zet} does not necessarily mean that for $\vec{\gamma}^{\,\prime}, \vec{\gamma}^{*} \in [\vec{\gamma}],\, \vec{\gamma}^{\,\prime} \neq\vec{\gamma}^{*} $, it holds $\vec{\xi}^{\,\prime} = \vec{\xi}^{*}$. Hence, the vector of (un)balanced excesses $\vec{\xi}$ is only equal with the vector of (un)balanced maximum surpluses $\vec{\zeta}$ if the corresponding pre-imputation $\vec{\gamma} $ is drawn from its proper equivalence class $[\vec{\gamma}]$. In addition, we write for sake of simplicity that $\mathbf{E}_{ij}:= (\mathbf{1}_{S_{ji}} - \mathbf{1}_{S_{ij}}) \in \mathbb{R}^{n}, \;\forall i,j \in N, i < j$, and $\mathbf{E}_{0} := - \mathbf{1}_{N} \in \mathbb{R}^{n}$. Combining these $q$-column vectors, we can construct an $(n \times q)$-matrix in $\mathbb{R}^{n \times q}$ referred to as $\mathbf{E}$, and which is given by \begin{equation} \label{eq:matE} \mathbf{E} := [\mathbf{E}_{1,2}, \ldots ,\mathbf{E}_{n-1,n},\mathbf{E}_{0}] \in \mathbb{R}^{^{n \times q}}. \end{equation} \begin{proposition}[Quadratic Function] \label{prop:quad} Let $\langle N,v \rangle$ be a TU game with indirect function $\pi$, then an arbitrary vector $\vec{\gamma}$ in the domain of $h$, i.e. $\vec{\gamma} \in dom\, h$, induces a quadratic function: \begin{equation} \label{eq:objf2} h_{\gamma}(\mathbf{x}) = (1/2) \cdot \langle\; \mathbf{x},\mathbf{Q} \,\mathbf{x} \;\rangle + \langle\; \mathbf{x}, \mathbf{a} \;\rangle + \mathbf{\alpha} \qquad \mathbf{x} \in dom\, h, \end{equation} where $\mathbf{a}$ is a column vector of coefficients, $\alpha$ is a scalar and $\mathbf{Q}$ is a symmetric ($n \times n$)-matrix with integer coefficients taken from the interval $[-n \cdot (n-1), n \cdot (n-1)]$. \end{proposition} \begin{proof} The proof is given in~\citet[pp.~66-68]{mei:13}. \end{proof} By the above discussion, the objective function $h$ and the quadratic as well as convex function $h_{\gamma}$ of type~\eqref{eq:objf2} coincide on the payoff set $[\vec{\gamma}]$ (cf.~\citet[Lemma 6.2.2]{mei:13}). However, on the complement $[\vec{\gamma}]^{c}$ it holds $h \not= h_{\gamma}$. Moreover, in view of \citet[Proposition 6.2.2]{mei:13} function $h$ is composed of a finite family of quadratic and convex functions of type~\eqref{eq:objf2}. \begin{proposition}[Least Squares] \label{prop:eqrep} A quadratic function $h_{\gamma}$ given by equation~\eqref{eq:objf2} is equivalent to \begin{equation} \label{eq:eqrep} \langle\, \vec{\alpha} + \mathbf{E}^{\top}\; \mathbf{x}, \vec{\alpha} + \mathbf{E}^{\top}\; \mathbf{x}\,\rangle = \Arrowvert\, \vec{\alpha} + \mathbf{E}^{\top}\; \mathbf{x}\,\Arrowvert^{2}. \end{equation} Therefore, the matrix $\mathbf{Q} \in \mathbb{R}^{n^2}$ can also be expressed as $\mathbf{Q} = 2 \cdot \mathbf{E} \; \mathbf{E}^{\top}$, and the column vector $\mathbf{a}$ as $2 \cdot \mathbf{E} \; \vec{\alpha} \in \mathbb{R}^{n}$. Finally, the scalar $\alpha$ is given by $\Arrowvert \vec{\alpha} \Arrowvert^2$, where $\mathbf{E} \in \mathbb{R}^{n \times q}, \mathbf{E}^{\top} \in \mathbb{R}^{q \times n}$ and $\vec{\alpha} \in \mathbb{R}^q$. \end{proposition} \begin{proof} The proof can be found in~\citet[pp.~70-71]{mei:13}. \end{proof} Realize that the transpose of a vector or a matrix is denoted by the symbols $\mathbf{x}^{\top}$, and $\mathbf{Q}^{\top}$ respectively. \begin{lemma}[\citet{mei:13}] \label{lem:Etx_Pa} Let $\mathbf{x}, \vec{\gamma} \in dom\, h, \mathbf{x} = \vec{\gamma} + \mathbf{z} $ and let $\vec{\gamma}$ induces the matrices $\mathbf{E} \in \mathbb{R}^{n \times q}, \mathbf{E}^{\top} \in \mathbb{R}^{q \times n}$ determined by formula~\eqref{eq:matE}, and $\vec{\alpha}, \vec{\xi} \in \mathbb{R}^q$ as in equation~\eqref{eq:unb_exc}. If $\mathbf{x} \in M(h_{\gamma}) $, then \begin{enumerate} \item $- \mathbf{E}^{\top} \, \mathbf{x} = \mathbf{P}\, \vec{\alpha} $. \item $\mathbf{E}^{\top} \, \vec{\gamma} = \mathbf{P}\, (\vec{\xi} - \vec{\alpha}) = (\vec{\xi} - \vec{\alpha})$. \item $- \mathbf{E}^{\top} \, \mathbf{z} = \mathbf{P}\, \vec{\xi}$. \end{enumerate} In addition, let $q := \binom{n}{2} + 1$. The matrix $\mathbf{P}\in \mathbb{R}^{q^2}$ is either equal to $2 \cdot \mathbf{E}^{\top}\, \mathbf{Q}^{-1} \mathbf{E}$, if the matrix $\mathbf{Q} \in \mathbb{R}^{n^2}$ is non-singular, or it is equal to $2 \cdot \mathbf{E}^{\top}\, \mathbf{Q}^{\dagger} \mathbf{E}$, if the matrix $\mathbf{Q}$ is singular. Furthermore, it holds for the matrix $\mathbf{P}$ that $\mathbf{P} \neq \mathbf{I}_{q}$ and $\text{rank} \, \mathbf{P} \le n$. \end{lemma} \begin{proof} The proof is given in~\citet[pp.~80-81]{mei:13}. \end{proof} Notice that $\mathbf{Q}^{\dagger}$ is the {\bfseries Moore-Penrose} or {\bfseries pseudo-inverse} matrix of matrix $\mathbf{Q}$, if matrix $\mathbf{Q}$ is singular. This matrix is unique according to the following properties: (1) general condition, i.e. $\mathbf{Q}\,\mathbf{Q}^{\dagger}\,\mathbf{Q} = \mathbf{Q}$, (2) reflexive, i.e. $\mathbf{Q}^{\dagger}\,\mathbf{Q}\,\mathbf{Q}^{\dagger} = \mathbf{Q}^{\dagger}$, (3) normalized, i.e. $(\mathbf{Q}\,\mathbf{Q}^{\dagger})^{\top} = \mathbf{Q}^{\dagger}\,\mathbf{Q}$, and finally (4) reversed normalized, i.e. $(\mathbf{Q}^{\dagger}\,\mathbf{Q})^{\top} = \mathbf{Q}\,\mathbf{Q}^{\dagger}$. \begin{proposition}[Orthogonal Projection Operator] \label{prop:orth_mat} Matrix $\mathbf{P}$ is idempotent and self-adjoint, i.e. $\mathbf{P}$ is an orthogonal projection operator. \end{proposition} \begin{proof} The proof can be found in~\citet[p.~86]{mei:13}. \end{proof} \begin{lemma}[\citet{mei:13}.] \label{lem:spV1} Let $\mathcal{E}$ be a subspace of $\mathbb{R}^{q}$ with basis $\{\mathbf{e}_{1}, \ldots, \mathbf{e}_{m}\}$ derived from the linear independent vectors of matrix $\mathbf{E}^{\top}$ having rank $m$, with $m \le n$, and let $\{\mathbf{w}_{1}, \ldots, \mathbf{w}_{q-m}\}$ be a basis of $\mathcal{W}:=\mathcal{E}^{\perp}$. In addition, define matrix $E^{\top}:= [\mathbf{e}_{1}, \ldots, \mathbf{e}_{m}] \in \mathbb{R}^{q \times m}$, and matrix $W^{\top}:= [\mathbf{w}_{1}, \ldots, \mathbf{w}_{q-m}] \in \mathbb{R}^{q \times (q-m)}$, then for any $\vec{\beta}\in \mathbb{R}^{q}$ it holds \begin{enumerate} \item $\vec{\beta}=[E^{\top}\; W^{\top}] \cdot \mathbf{c}$ where $\mathbf{c} \in \mathbb{R}^{q}$ is a coefficient vector, and \item the matrix $[E^{\top}\; W^{\top}] \in \mathbb{R}^{q \times q}$ is invertible, that is, we have \begin{equation*} [E^{\top}\; W^{\top}]^{-1} = \begin{bmatrix}[l] (E\,E^{\top})^{-1}\,E \\ (W\,W^{\top})^{-1}\,W \end{bmatrix}. \end{equation*} \end{enumerate} \end{lemma} \begin{proof} For a proof see~\citet[pp.~90-91]{mei:13}. \end{proof} Notice that $\mathcal{E}$ can be interpreted as indicating a vector subspace of balanced excesses. A pre-imputation will be mapped into its proper vector subspace of balanced excesses $\mathcal{E}$, i.e. the vector subspace induced by the pre-imputation. However, the corresponding vector of (un)balanced excesses generated by this pre-imputation is an element of this vector subspace of balanced excesses, if the pre-imputation is also a pre-kernel point. Hence, the vector of balanced excesses coincides with the vector of balanced maximum surpluses. This is a consequence of Lemma~\ref{lem:Etx_Pa} or see Proposition 8.4.1 in~\citet{mei:13}. Otherwise, this vector of unbalanced excesses will be mapped by the orthogonal projection $\mathbf{P}$ on $\mathcal{E}$. More information about the properties of this kind of vector subspace can be found in~\citet[pp.~87-113~and~138-168]{mei:13}. \begin{proposition}[Positive General Linear Group] \label{prop:GLG} Let $\{\mathbf{e}_{1}, \ldots, \mathbf{e}_{m}\}$ as well as $\{\mathbf{e}^{1}_{1}, \ldots, \mathbf{e}^{1}_{m}\}$ be two ordered bases of the subspace $\mathcal{E}$ derived from the payoff sets $[\vec{\gamma}]$ and $[\vec{\gamma}_{1}]$, respectively. In addition, define the associated basis matrices $E^{\top},E^{\top}_{1} \in \mathbb{R}^{q \times m}$ as in Lemma~\ref{lem:spV1}, then the unique transition matrix $X \in \mathbb{R}^{m^{2}}$ such that $E^{\top}_{1} = E^{\top} \,X$ is given, is an element of the positive general linear group, that is $X \in \text{GL}^{+}(m)$. \end{proposition} \begin{proof} The proof can be found in~\citet[p.~101]{mei:13}. \end{proof} Proposition~\ref{prop:GLG} denotes two payoff sets $[\vec{\gamma}]$ and $[\vec{\gamma}_{1}]$ as equivalent, if there exists a transition matrix $X$ from the positive general linear group, that is $X \in \text{GL}^{+}(m)$, such that $E^{\top}_{1} = E^{\top} \,X$ is in force. Notice that the transition matrix $X$ must be unique (cf.~\citet[p. 102]{mei:13}). The underlying group action (cf.~\citet[Corollary 6.6.1]{mei:13}) can be interpreted that a bargaining situation is transformed into an equivalent bargaining situation. For a thorough discussion of a group action onto the set of all ordered bases, the interested reader should consult~\citet[Sect. 6.6]{mei:13}. The vector space $\mathbb{R}^{q}$ is an orthogonal decomposition by the subspaces $\mathcal{E}$ and $\mathcal{N}_{\mathbf{E}}$. We denote in the sequel a basis of the orthogonal complement of space $\mathcal{E}$ by $\{\mathbf{w}_{1}, \ldots, \mathbf{w}_{q-m}\}$. This subspace of $\mathbb{R}^{q}$ is identified by $\mathcal{W}:= \mathcal{N}_{\mathbf{E}} = \mathcal{E}^{\perp}$. In addition, we have $\mathbf{P}\,\mathbf{w}_{k} = \mathbf{0}$ for all $k \in \{1,\ldots, q-m\}$. Thus, we can obtain the following corollary \begin{corollary}[\citet{mei:13}] \label{cor:innp} If $\vec{\gamma}$ induces the matrices $\mathbf{E} \in \mathbb{R}^{n \times q},\mathbf{E}^{\top} \in \mathbb{R}^{q \times n}$ determined by formula~\eqref{eq:matE}, then with respect to the Euclidean inner product, getting \begin{enumerate} \item $\mathbb{R}^{q} = \mathcal{E} \oplus \mathcal{W} = \mathcal{E} \oplus \mathcal{E}^{\perp}$. \end{enumerate} \end{corollary} A consequence of the orthogonal projection method presented by the next theorem and corollary is that every payoff vector belonging to the intersection of the minimum set of function $h_{\gamma}$ and its payoff equivalence class $[\vec{\gamma}]$ is a pre-kernel element. This due to $h_{\gamma} = h$ on $[\vec{\gamma}]$. \begin{theorem}[Orthogonal Projection Method] \label{thm:pmt_prk} Let $\vec{\gamma}_{k} \in [\vec{\gamma}]$ for $k = 1,2,3$. If $\vec{\gamma}_{2} \in M(h_{\gamma}) $ and $\vec{\gamma}_{k} \notin M(h_{\gamma})$ for $k = 1,3 $, then $\vec{\zeta}_{2} = \vec{\xi}_{2} = \mathbf{0}$, and consequently $\vec{\gamma}_{2} \in \mathcal{P\text{\itshape r}K}(v)$. \end{theorem} \begin{proof} For a proof see~\citet[pp.~109-111]{mei:13}. \end{proof} \begin{corollary}[\citet{mei:13}] \label{cor:xi_prk_2} Let be $[\vec{\gamma}]$ an equivalence class of dimension $3 \le m \le n$, and $\mathbf{x} \in M(h_{\gamma}) \cap [\vec{\gamma}] $, then $\vec{\alpha} = \mathbf{P}\,\vec{\alpha}$, and consequently $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v)$. \end{corollary} \section{The Uniqueness of the Pre-Kernel} \label{sec:siva} To study the uniqueness of the pre-kernel solution of a related TU game derived from a pre-kernel element of a default game, we need to know: (1) if the linear mapping of a pre-kernel element into a specific vector subspace of balanced excesses $\mathcal{E}$ consists of a single point, and (2) that there can not exist any other non-transversal vector subspace of balanced excesses $\mathcal{E}_{1}$ in which a linear transformation of pre-kernel element can be mapped. (3) It must be shown that the pre-kernel coincides with the pre-nucleolus of the set of related games, otherwise, it is obvious that there must exist at least a second pre-kernel point, namely the pre-nucleolus. For conducting this line of investigation some additional concepts are needed. In a first step we introduce the definition of a {\bfseries unanimity game}, which is indicated as: $\mathbf{u}_{T}(S):=1$, if $T \subseteq S$, otherwise $\mathbf{u}_{T}(S):=0$, whereas $T \subseteq N, T \neq \emptyset$. The collection of all unanimity games forms a {\bfseries unanimity/game basis}. A formula to express the coordinates of this basis is given by \begin{equation*} v = \sum_{\substack{T \subset N, \\ T \neq \emptyset}}\, \lambda^{v}_{T}\, \mathbf{u}_{T} \iff \lambda^{v}_{T} = \sum_{\substack{S \subset T, \\ S \neq \emptyset}}\, (-1)^{t-s} \cdot v(S), \end{equation*} if $\langle N,v \rangle$, where $\arrowvert S \arrowvert = s $, and $\arrowvert T \arrowvert = t$. A coordinate $\lambda^{v}_{T}$ is said to be an unanimity coordinate of game $\langle N,v \rangle$, and vector $\lambda^{v}$ is called the unanimity coordinates of game $\langle N,v \rangle$. Notice that we assume here that the game is defined in $\mathbb{R}^{2^{n}-1}$ rather than $\mathbb{R}^{2^{n}}$, since we want to write for sake of convenience the {\bfseries game basis} in matrix form without a column and row of zeros. Thus we write \begin{equation*} v = \sum_{\substack{T \subset N, \\ T \neq \emptyset}}\, \lambda^{v}_{T}\, \mathbf{u}_{T} = [\mathbf{u}_{\{1\}}, \ldots , \mathbf{u}_{\{N\}}] \, \lambda^{v} = \boldsymbol{\EuScript{U}}\; \lambda^{v} \end{equation*} where the unanimity basis $\boldsymbol{\EuScript{U}}$ is in $\mathbb{R}^{p^{\prime} \times p^{\prime}}$ with $p^{\prime}=2^{n}-1$. In addition, define the {\bfseries unity games (Dirac games)} $\mathbf{1}^{T}$ for all $T \subseteq N$ as: $\mathbf{1}^{T}(S):=1$, if $T=S$, otherwise $\mathbf{1}^{T}(S):=0$. In the next step, we select a payoff vector $\vec{\gamma}$, which also determines its payoff set $[\vec{\gamma}]$. With regard to Proposition~\ref{prop:eq_rel}, this vector induces in addition a set of lexicographically smallest most effective coalitions indicated by $\mathcal{S}(\vec{\gamma})$. Implying that we get the configuration $\vec{\alpha}$ by the $q$-coordinates $\alpha_{ij} := (v(S_{ij}) - v(S_{ji})) \in \mathbb{R} $ for all $i,j \in N, i < j $, and $\alpha_{0} := v(N)$. Furthermore, we can also define a set of vectors as the differences of unity games w.r.t. the set of lexicographically smallest most effective coalitions, which is given by \begin{equation} \label{eq:mat_V} \mathbf{v}_{ij} := \mathbf{1}^{S_{ij}} - \mathbf{1}^{S_{ji}} \quad\text{for}\; S_{ij},S_{ji} \in \mathcal{S}(\vec{\gamma}) \quad\text{and}\quad \mathbf{v}_{0} := \mathbf{1}^{N}, \end{equation} whereas $\mathbf{v}_{ij}, \mathbf{v}_{0} \in \mathbb{R}^{p^{\prime}}$ for all $i,j \in N, i < j$. With these column vectors, we can identify matrix $\boldsymbol{\mathcal{V}}:= [\mathbf{v}_{1,2}, \ldots ,\mathbf{v}_{n-1,n},\mathbf{v}_{0}] \in \mathbb{R}^{p^{\prime} \times q}$. Then we obtain $\vec{\alpha} = \boldsymbol{\mathcal{V}}^{\top}\, v$ with $v \in \mathbb{R}^{p^\prime{}}$ due to the removed empty set. Moreover, by the measure $y(S):= \sum_{k \in S}\,{y}_{k}$ for all $\emptyset \neq S \subseteq N$, we extend every payoff vector $\mathbf{y}$ to a vector $\overline{\mathbf{y}} \in \mathbb{R}^{p^{\prime}}$, and define the excess vector at $\mathbf{y}$ by $\overline{e}_{\mathbf{y}} := v - \overline{\mathbf{y}} \in \mathbb{R}^{p^{\prime}}$, then we get $\vec{\xi}_{\mathbf{y}} = \boldsymbol{\mathcal{V}}^{\top}\, \overline{e}_{\mathbf{y}}$. From matrix $\boldsymbol{\mathcal{V}}^{\top}$, we can also derive an orthogonal projection $\mathbf{P}_{\mathcal{V}}$ specified by $\boldsymbol{\mathcal{V}}^{\top}\,(\boldsymbol{\mathcal{V}}^{\top})^{\dagger} \in \mathbb{R}^{q \times q}$ such that $\mathbb{R}^{q} = \mathcal{V} \oplus \mathcal{V}^{\perp}$ is valid, i.e.~the rows of matrix $\boldsymbol{\mathcal{V}}^{\top}$ are a spanning system of the vector subspace $\mathcal{V} \subseteq \mathbb{R}^{q \times q}$, thus $\mathcal{V}:=span\{\mathbf{v}^{\top}_{1,2}, \ldots, \mathbf{v}^{\top}_{n-1,n},\mathbf{v}^{\top}_{0}\}$. Vector subspace $\mathcal{V}$ reflects the power of the set of lexicographically smallest most effective coalitions. In contrast, vector subspace $\mathcal{E}$ reflects the ascribed unbalancedness in the coalition power w.r.t. the bilateral bargaining situation attained at $\vec{\gamma}$ through $\mathcal{S}(\vec{\gamma})$. The next results show how these vector subspaces are intertwined. \begin{lemma}[\citet{mei:13}] \label{lem:inl_vp} Let $\mathbf{E}^{\top} \in \mathbb{R}^{q \times n}$ be defined as in Equation~\eqref{eq:matE}, $\boldsymbol{\mathcal{V}}^{\top}\in \mathbb{R}^{q \times p^{\prime}}$ as by Equation~\eqref{eq:mat_V}, then there exists a matrix $\mathbf{Z}^{\top} \in \mathbb{R}^{p^{\prime} \times n}$ such that $\mathbf{E}^{\top} = \boldsymbol{\mathcal{V}}^{\top}\,\mathbf{Z}^{\top}$ if, and only if, $\mathcal{R}_{\mathbf{E}^{\top}} \subseteq \mathcal{R}_{\boldsymbol{\mathcal{V}}^{\top}}$, that is, $\mathcal{E} \subseteq \mathcal{V}$. \end{lemma} \begin{proof} The proof is given in~\citet[p.~141]{mei:13}. \end{proof} Notice that the minimal rank of matrix $\boldsymbol{\mathcal{V}}^{\top}$ is bounded by $\mathbf{E}^{\top}$ which is equal to $m < n$ with the consequence that we get in this case $\mathcal{V} = \mathcal{E}$. However, the maximal rank is equal to $q$, and then $\mathcal{V} = \mathbb{R}^{q}$ (cf.~\citet[Corollary 7.4.1]{mei:13}). \begin{lemma}[\citet{mei:13}] \label{lem:vsp_al} Let $\vec{\alpha}, \vec{\xi} \in \mathbb{R}^q$ as in Equation~\eqref{eq:unb_exc}, then the following relations are satisfied on the vector space $\mathcal{V}$: \begin{enumerate} \item $\mathbf{P}_{\mathcal{V}}\,\vec{\alpha} = \vec{\alpha} \in \mathcal{V}$ \item $\mathbf{P}_{\mathcal{V}}\,\vec{\xi} = \vec{\xi} \in \mathcal{V}$ \item $\mathbf{P}_{\mathcal{V}}\,(\vec{\xi} - \vec{\alpha}) = (\vec{\xi} - \vec{\alpha}) \in \mathcal{V}$ \item $\mathbf{P}_{\mathcal{V}}\, \mathbf{E}^{\top} = \mathbf{P}\,\mathbf{E}^{\top} = \mathbf{E}^{\top}$, hence $\mathcal{E} \subseteq \mathcal{V}$ \item $\mathbf{E}\,\mathbf{P}_{\mathcal{V}} = \mathbf{E}\,\mathbf{P} = \mathbf{E}$, hence $\mathcal{R}_{\mathbf{E}} \subseteq \mathcal{V}$. \end{enumerate} \end{lemma} \begin{proof} For a proof see~\citet[p.~142]{mei:13}. \end{proof} It was worked out by~\citet[Sect. 7.6]{mei:13} that a pre-kernel element of a specific game can be replicated as a pre-kernel element of a related game whenever the non-empty interior property of the payoff set, in which the pre-kernel element of default game is located, is satisfied. In this case, a full dimensional ellipsoid can be inscribed from which some bounds can be specified within the game parameter can be varied without destroying the pre-kernel properties of the payoff vector of the default game. These bounds specify a redistribution of the bargaining power among coalitions while supporting the selected pre-imputation still as a pre-kernel point. Although the values of the maximum excesses have been changed by the parameter variation, the set of lexicographically smallest most significant coalitions remains unaffected. \begin{lemma}[\citet{mei:13}] \label{lem:repl_min} If $\mathbf{x} \in M(h^{v}_{\gamma})$, then $\mathbf{x} \in M(h^{v^{\mu}}_{\gamma})$ for all $\mu \in \mathbb{R}$, where $v^{\mu} := \boldsymbol{\EuScript{U}}(\lambda^{v} + \mu \Delta)$ and $\mathbf{0} \neq \Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}}=\{\Delta \in \mathbb{R}^{p^{\,\prime}} \;\arrowvert\; \boldsymbol{\EuScript{W}}\Delta = \mathbf{0}\}$, where $\boldsymbol{\EuScript{W}} := \boldsymbol{\mathcal{V}}^{\top}\, \boldsymbol{\EuScript{U}} \in \mathbb{R}^{q \times p^{\,\prime}}$. \end{lemma} \begin{proof} Let $\mathbf{x}$ be a minimizer of function $h^{v}_{\gamma}$ under game $v$, then $\mathbf{x}$ remains a minimizer for a function $h^{v^{\mu}}_{\gamma}$ induced by game $v^{\mu}$ whenever $\mathbf{Q}\, \mathbf{x}= -2\, \mathbf{E}\,\vec{\alpha}= - \mathbf{a}$ remains valid. Since the payoff vector has induced the matrices $\mathbf{Q}, \mathbf{E}$ and matrix $\boldsymbol{\mathcal{V}}$ defined by $ [\mathbf{v}_{1,2}, \ldots ,\mathbf{v}_{n-1,n},\mathbf{v}_{0}]$, where the vectors are defined as by formula~\eqref{eq:mat_V}. We simply have to prove that the configuration $\vec{\alpha}$ remains invariant against an appropriate change in the game parameter. Observing that matrix $\boldsymbol{\EuScript{W}}$ has a rank equal to or smaller than $q = \binom{n}{2} +1$, say $m\le q$, then the null space of matrix $\boldsymbol{\EuScript{W}}$ has rank of $p^{\prime}-m$, thus $\mathcal{N}_{\boldsymbol{\EuScript{W}}}\neq \{\emptyset\}$. But then exists some $\mathbf{0} \neq \Delta \in \mathbb{R}^{p^{\prime}}$ s.t. $\Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}}$ and $v^{\mu} = \boldsymbol{\EuScript{U}}(\lambda^{v} + \mu \Delta)$ for $\mu \in \mathbb{R}\backslash\{\mathbf{0}\}$, getting \begin{equation*} \boldsymbol{\EuScript{W}}\, \, \lambda^{v^{\mu}} = \boldsymbol{\EuScript{W}}\,(\lambda^{v} + \mu \Delta)= \boldsymbol{\mathcal{V}}^{\top}\,(v + \mu v^{\Delta})= \boldsymbol{\mathcal{V}}^{\top}\,v = \vec{\alpha}, \end{equation*} whereas $\boldsymbol{\EuScript{W}}\, \Delta = \boldsymbol{\mathcal{V}}^{\top}\,v^{\Delta} = \mathbf{0}$ with $v^{\Delta}:=\boldsymbol{\EuScript{U}}\,\Delta$. This argument proves that the configuration $\vec{\alpha}$ remains invariant against a change in the game parameter space by $v^{\Delta} \neq \mathbf{0}$. This implies that the payoff vector $\mathbf{x}$ is also a minimizer for function $h^{v^{\mu}}_{\gamma}$ under game $v^{\mu}$. \end{proof} \begin{lemma}[\citet{mei:13}] \label{lem:repl_prk} If $[\vec{\gamma}]$ has non-empty interior and $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v) \subset [\vec{\gamma}]$, then there exists some critical bounds given by \begin{equation} \label{eq:crit_bds} \delta^{\varepsilon}_{ij}(\mathbf{x})= \frac{\pm\sqrt{\bar{c}}}{\Arrowvert \mathbf{E}^{\top} (\mathbf{1}_{j}-\mathbf{1}_{i}) \Arrowvert} \neq 0 \qquad \forall i,j \in N, i \neq j, \end{equation} with $\bar{c}>0$ and $\Arrowvert \mathbf{E}^{\top} (\mathbf{1}_{j}-\mathbf{1}_{i}) \Arrowvert > 0$. \end{lemma} \begin{proof} Define a set $\varepsilon := \{\mathbf{y}\, \arrowvert h^{v}_{\gamma}(\mathbf{y}) \le \bar{c} \} \subset [\vec{\gamma}]$, whereas $h^{v}_{\gamma}(\mathbf{y})=(1/2) \cdot \langle\; \mathbf{y},\mathbf{Q} \,\mathbf{y} \;\rangle + \langle\; \mathbf{y}, \mathbf{a} \;\rangle + \mathbf{\alpha}$. By assumption the payoff set $[\vec{\gamma}]$ has non-empty interior, we can say that $\varepsilon$ is the ellipsoid of maximum volume obtained by equation~\eqref{eq:objf2} that lies inside of the convex payoff set $[\vec{\gamma}]$. This ellipsoid must have a strictly positive volume, since the payoff equivalence class $[\vec{\gamma}]$ has non-empty interior, hence we conclude that $\bar{c}>0$. Of course, the set $\varepsilon$ is a convex subset of the convex set $[\vec{\gamma}]$, therefore $h^{v}=h^{v}_{\gamma}$ on $\varepsilon$. Moreover, the solution set $M(h^{v}_{\gamma})$ is a subset of the ellipsoid $\varepsilon$, which is the smallest non-empty ellipsoid of the form~\eqref{eq:objf2}, i.e., it is its center in view of Theorem~\ref{thm:pmt_prk}. By our supposition $\mathcal{P\text{\itshape r}K}(v) \subset [\vec{\gamma}]$, we conclude that $M(h^{v})=M(h^{v}_{\gamma})= \mathcal{P\text{\itshape r}K}(v)$ must be satisfied. In the next step similar to~\citet{MPSh:79}, we define some critical numbers $\delta^{\varepsilon}_{ij}(\mathbf{x}) \in \mathbb{R}$ s.t. \begin{equation} \label{eq:crit_num} \delta^{\varepsilon}_{ij}(\mathbf{x}):=\max\,\{\delta \in \mathbb{R}\,\arrowvert\, \mathbf{x}^{\;i,j,\delta}= \mathbf{x} - \delta\,\mathbf{1}_{i} + \delta\,\mathbf{1}_{j} \in \varepsilon \} \qquad \forall\, i,j \in N, i\neq j. \end{equation} That is, the number $\delta^{\varepsilon}_{ij}(\mathbf{x})$ is the maximum amount that can be transferred from $i$ to $j$ while remaining in the ellipsoid $\varepsilon$. This number is well defined for convex sets having non-empty interior. In addition, observe that $\mathbf{x}^{\;i,j,\delta^{\varepsilon}}= \mathbf{x} - \delta^{\varepsilon}_{ij}(\mathbf{x})\,\mathbf{1}_{i} + \delta^{\varepsilon}_{ij}(\mathbf{x})\,\mathbf{1}_{j}$ is an unique boundary point of the ellipsoid $\varepsilon$ of type~\eqref{eq:objf2} with maximum volume. Having specified by the point $\mathbf{x}^{\;i,j,\delta^{\varepsilon}}$ a boundary point, getting \begin{equation*} \begin{split} & h^{v}(\mathbf{x}^{\;i,j,\delta^{\varepsilon}}) =h^{v}_{\gamma}(\mathbf{x}^{\;i,j,\delta^{\varepsilon}})=\bar{c} > 0 \Longleftrightarrow\\ & \Arrowvert \mathbf{E}^{\top}\; \mathbf{x}^{\;i,j,\delta^{\varepsilon}} + \vec{\alpha}\Arrowvert^2 =\bar{c} \Longleftrightarrow \Arrowvert \mathbf{E}^{\top}\; \mathbf{x} + \vec{\alpha} + \delta^{\varepsilon}_{ij}(\mathbf{x}) \, \mathbf{E}^{\top} (\mathbf{1}_{j}-\mathbf{1}_{i})\Arrowvert^2 =\bar{c} \Longleftrightarrow \\ & \Arrowvert \mathbf{E}^{\top}\; \mathbf{x} + \vec{\alpha} \Arrowvert^2 + 2 \cdot \delta^{\varepsilon}_{ij}(\mathbf{x}) \, \langle\,\mathbf{E}^{\top}\; \mathbf{x} + \vec{\alpha}, \mathbf{E}^{\top} (\mathbf{1}_{j}-\mathbf{1}_{i})\,\rangle + (\delta^{\varepsilon}_{ij}(\mathbf{x}))^{2}\,\Arrowvert \mathbf{E}^{\top} (\mathbf{1}_{j}-\mathbf{1}_{i})\Arrowvert^2 =\bar{c} \Longleftrightarrow \\ & (\delta^{\varepsilon}_{ij}(\mathbf{x}))^{2}\,\Arrowvert \mathbf{E}^{\top} (\mathbf{1}_{j}-\mathbf{1}_{i})\Arrowvert^2 =\bar{c} \qquad \forall\, i,j \in N, i\neq j. \end{split} \end{equation*} The last conclusion follows, since by assumption we have $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v)$, which is equivalent to $h^{v}(\mathbf{x})=h^{v}_{\gamma}(\mathbf{x})= 0$, and therefore we obtain $\mathbf{E}^{\top}\; \mathbf{x} + \vec{\alpha}=\mathbf{0}$. In addition, the volume of the ellipsoid $\varepsilon$ is strictly positive such that $\bar{c}>0$, this result implies that $(\delta^{\varepsilon}_{ij}(\mathbf{x}))^{2}$ as well as $\Arrowvert \mathbf{E}^{\top} (\mathbf{1}_{j}-\mathbf{1}_{i})\Arrowvert^{2}$ must also be strictly positive. Therefore, we get finally~\eqref{eq:crit_bds}. \end{proof} \begin{theorem}[\citet{mei:13}] \label{thm:repl_prk} If $[\vec{\gamma}]$ has non-empty interior and $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v) \subset [\vec{\gamma}]$, then $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v^{\mu})$ for all $\mu\cdot v^{\Delta} \in [-\mathsf{C},\mathsf{C}]^{p^{\prime}}$, where $v^{\mu} = v + \mu\cdot v^{\Delta} \in \mathbb{R}^{p^{\prime}}$, $\mu \in \mathbb{R}$ \begin{equation} \label{eq:crt_bds} \mathsf{C} :=\min_{i,j \in N, i \neq j}\bigg\{\bigg\arrowvert \frac{\pm\sqrt{\bar{c}}}{\Arrowvert \mathbf{E}^{\top} (\mathbf{1}_{j}-\mathbf{1}_{i}) \Arrowvert}\bigg\arrowvert\bigg\}, \end{equation} and $\mathbf{0} \neq \Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}}=\{\Delta \in \mathbb{R}^{p^{\,\prime}} \;\arrowvert\; \boldsymbol{\EuScript{W}}\Delta = \mathbf{0}\}$ with matrix $\boldsymbol{\EuScript{W}} := \boldsymbol{\mathcal{V}}^{\top}\, \boldsymbol{\EuScript{U}}$. \end{theorem} \begin{proof} By Lemma~\ref{lem:repl_prk} $\mathbf{x}^{\;i,j,\delta^{\varepsilon}} \in \varepsilon \subset [\vec{\gamma}]$, is an unique boundary point of the ellipsoid $\varepsilon$ of type~\eqref{eq:objf2} with maximum volume. We conclude that either (1) $s_{ij}(\mathbf{x}^{\;i,j,\delta^{\varepsilon}}) = s_{ij}(\mathbf{x}) + \delta^{\varepsilon}_{ij}(\mathbf{x})$ if $ S \in \mathcal{G}_{ij}$, or (2) $s_{ji}(\mathbf{x}^{\;i,j,\delta^{\varepsilon}}) = s_{ji}(\mathbf{x}) - \delta^{\varepsilon}_{ij}(\mathbf{x})$ if $S \in \mathcal{G}_{ji}$, or otherwise (3) $s_{ij}(\mathbf{x}^{\;i,j,\delta^{\varepsilon}}) = s_{ij}(\mathbf{x})$ is satisfied. Moreover, let $v,v^{\mu},v^{\Delta} \in \mathbb{R}^{p^{\prime}}$ and recall that $v^{\mu} = \boldsymbol{\EuScript{U}}(\lambda^{v} + \mu \Delta)$ with $\mathbf{0} \neq \Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}}$. Then it holds $v^{\mu}(S) = v(S) + \mu\cdot v^{\Delta}(S)$ for all $S \in 2^{n}\backslash\{\emptyset\} $. In the next step, extend the pre-kernel element $\mathbf{x}$ to a vector $\overline{\mathbf{x}}$ by the measure $x(S):=\sum_{k \in S}\,x_{k}$ for all $S \in 2^{n}\backslash\{\emptyset\} $, then define the excess vector by $\overline{e}:=v-\overline{\mathbf{x}}$. Due to these definitions, we obtain for $\vec{\xi}^{\,v^{\mu}}$ at $\mathbf{x}$: \begin{equation*} \vec{\xi}^{\,v^{\mu}}=\boldsymbol{\mathcal{V}}^{\top}\,\overline{e}^{\mu}= \boldsymbol{\mathcal{V}}^{\top}\,(v^{\mu}-\overline{\mathbf{x}})=\boldsymbol{\mathcal{V}}^{\top}\,( v -\overline{\mathbf{x}} + \mu \cdot v^{\Delta}) = \boldsymbol{\mathcal{V}}^{\top}\,(v-\overline{\mathbf{x}}) = \boldsymbol{\mathcal{V}}^{\top}\,\overline{e} = \vec{\xi} = \mathbf{0}. \end{equation*} By Lemma~\ref{lem:repl_min}, the system of excesses remains balanced for all $\mu \in \mathbb{R}$. However, the system of maximum surpluses remains invariant on a hypercube specified by the critical values of the ellipsoid $\varepsilon$. Thus, for appropriate values of $\mu$ the expression $\mu\cdot v^{\Delta}(S)$ belongs to the non-empty interval $[-\mathsf{C},\mathsf{C}]$ for $S \in 2^{n}\backslash\{\emptyset\}$. This interval specifies the range in which the game parameter can vary without having any impact on the set of most effective coalition given by $\mathcal{S}(\mathbf{x})$. Thus, the coalitions $\mathcal{S}(\mathbf{x})$ still have maximum surpluses for games defined by $v^{\mu} = \boldsymbol{\EuScript{U}}(\lambda^{v} + \mu \Delta)$ for all $\mu \boldsymbol{\EuScript{U}}\,\Delta = \mu \cdot v^{\Delta} \in [-\mathsf{C},\mathsf{C}]^{p^{\prime}}$. Hence the pre-kernel solution $\mathbf{x}$ is invariant against a change in the hypercube $[-\mathsf{C},\mathsf{C}]^{p^{\prime}}$. The conclusion follows. \end{proof} \citet[Sec. 7.6]{mei:13} has shown by some examples that the specified bounds by Theorem~\ref{thm:repl_prk} are not tight, in the sense that pre-kernel points belonging to the relative interior of a payoff set can also be the object of a replication. However, pre-kernel elements which are located on the relative boundary of a payoff set are probably not replicable. Therefore, there must exist a more general rule to reproduce a pre-kernel element for a related game $v^{\mu}$. In the course of our discussion, we establish that the single pre-kernel element of a default game which is an interior point of a payoff set is also the singleton pre-kernel of the derived related games. In a first step, we show that there exists an unique linear transformation of the pre-kernel point of a related game into the vector subspace of balanced excesses $\mathcal{E}$. This means, there is no other pre-kernel element in a payoff equivalence class that belongs to the same set of ordered bases, i.e.~reflecting an equivalent bargaining situation with a division of the proceeds of mutual cooperation in accordance with the pre-kernel solution. Secondly, we prove that there can not exist any other vector subspace of balanced excesses $\mathcal{E}_{1}$ non-transversal to $\mathcal{E}$ in which a pre-kernel vector can be mapped by a linear transformation. That is, there exists no other non-equivalent payoff set in which an other pre-kernel point can be located. \begin{lemma}[\citet{mei:13}] \label{lem:pivET} Let $\vec{\gamma}$ induces matrix $\mathbf{E}$, then \begin{equation*} (\mathbf{E}^{\top})^{\dagger} = 2\cdot\mathbf{Q}^{\dagger}\mathbf{E} \in \mathbb{R}^{n \times q}. \end{equation*} \end{lemma} \begin{proof} Remind from Lemma~\ref{lem:Etx_Pa} that $\mathbf{P} = 2 \cdot \mathbf{E}^{\top}\, \mathbf{Q}^{\dagger} \mathbf{E}$ holds. In addition, note that we have the following relation $\mathbf{Q}^{\dagger}\mathbf{Q} = (\mathbf{E}^{\top})^{\dagger}\,\mathbf{E}^{\top}$ which is an orthogonal projection onto $\mathcal{R}_{\mathbf{E}}$. Then attaining \begin{equation*} \begin{split} 2\cdot\mathbf{Q}^{\dagger}\mathbf{E} & = 2\cdot\mathbf{Q}^{\dagger}\mathbf{Q}\mathbf{Q}^{\dagger}\mathbf{E}=2\cdot(\mathbf{E}^{\top})^{\dagger}\mathbf{E}^{\top}\mathbf{Q}^{\dagger}\mathbf{E} \\ & =(\mathbf{E}^{\top})^{\dagger}(2\cdot\mathbf{E}^{\top}\mathbf{Q}^{\dagger}\mathbf{E})= (\mathbf{E}^{\top})^{\dagger}\mathbf{P} = (\mathbf{E}^{\top})^{\dagger}. \end{split} \end{equation*} The last equality follows from Lemma~\ref{lem:vsp_al}. This argument terminates the proof. \end{proof} Notice that in the sequel $\text{SO}(n)$ denotes the special orthogonal group, whereas $\text{GL}^{+}(n)$ denotes the positive general linear group (cf.~\citet[pp.~99-109]{mei:13}). \begin{proposition} \label{prop:uniqpk01} Let $E^{\top}_{1} = E^{\top}\,X$ with $X \in \text{SO}(n)$, that is $[\vec{\gamma}] \thicksim [\vec{\gamma}_{1}]$. In addition, assume that the payoff equivalence class $[\vec{\gamma}]$ induced from TU game $\langle\, N, v\, \rangle$ has non-empty interior such that $\{\mathbf{x}\} = \mathcal{P \text{\itshape r}K}(v) \subset [\vec{\gamma}]$ is satisfied, then there exists no other pre-kernel element in payoff equivalence class $[\vec{\gamma}_{1}]$ for a related TU game $\langle\, N, v^{\mu}\, \rangle$, where $v^{\mu} = v + \mu\cdot v^{\Delta} \in \mathbb{R}^{p^{\prime}}$, as defined by Lemma~\ref{lem:repl_min}. \end{proposition} \begin{proof} By the way of contradiction suppose that $\mathbf{x},\mathbf{y} \in \mathcal{P\text{\itshape r}K}(v^{\mu})$ with $\mathbf{y} \in [\vec{\gamma}_{1}]$ is valid. Then we get \begin{equation*} h^{v^{\mu}}(\mathbf{x}) = h_{\gamma}^{v^{\mu}}(\mathbf{x}) = \Arrowvert \mathbf{E}^{\top}\; \mathbf{x} + \vec{\alpha} \Arrowvert^2 = 0 \quad \text{and}\quad h^{v^{\mu}}(\mathbf{y}) = h_{\gamma_{1}}^{v^{\mu}}(\mathbf{y}) = \Arrowvert \mathbf{E}_{1}^{\top}\; \mathbf{y} + \vec{\alpha}_{1} \Arrowvert^2 = 0, \end{equation*} implying that \begin{equation} \label{eq:mapV} \mathbf{P}\,\vec{\alpha} = \vec{\alpha} \in \mathcal{E} \qquad\text{and}\qquad \mathbf{P}\,\vec{\alpha}_{1} = \vec{\alpha}_{1} \in \mathcal{E}. \end{equation} Moreover, we have $E^{\top}_{1} = E^{\top}\,X$ with $X \in \text{SO}(n)$, then $\mathcal{E} \subseteq \mathcal{V} \cap \mathcal{V}_{1}$ in accordance with Lemma 7.4.1 by~\citet{mei:13}. Now assume that $\vec{\alpha}_{1} = \boldsymbol{\mathcal{V}}^{\top}_{1}\,v^{\mu}$ holds with $\mathcal{V}_{1} \subseteq \mathcal{V}$. The latter supposition implies $\boldsymbol{\mathcal{V}}^{\top}_{1} = \mathbf{P}_{\mathcal{V}}\,\boldsymbol{\mathcal{V}}^{\top}_{1}$, since for every $\vec{\beta} \in \mathcal{V}$ we get $\vec{\beta} = \mathbf{P}_{\mathcal{V}}\,\vec{\beta}$ (cf.~Remark 6.5.1 ~\citet{mei:13}). According to $\mathcal{V}_{1} \subseteq \mathcal{V}$ it also holds $\mathcal{N}_{\boldsymbol{\EuScript{W}_{1}}} \supseteq \mathcal{N}_{\boldsymbol{\EuScript{W}}}$. Our hypothesis $\mathbf{y} \in \mathcal{P\text{\itshape r}K}(v^{\mu})$ implies \begin{equation*} \mathbf{0} = \mathbf{E}_{1}^{\top}\; \mathbf{y} + \vec{\alpha}_{1} = \boldsymbol{\mathcal{V}}^{\top}_{1}\,\mathbf{Z}^{\top}\,\mathbf{y} + \boldsymbol{\mathcal{V}}^{\top}_{1}\,v^{\mu} = \boldsymbol{\mathcal{V}}^{\top}_{1}\,\mathbf{Z}^{\top}\; \mathbf{y} + \boldsymbol{\mathcal{V}}^{\top}_{1}\,(v + \mu\cdot v^{\Delta}) = \boldsymbol{\mathcal{V}}^{\top}_{1}\,(v - \overline{\mathbf{y}}), \end{equation*} whereas the vector of measures $\overline{\mathbf{y}}$ is expressed by $\overline{\mathbf{y}} = -\mathbf{Z}^{\top}\,\mathbf{y}$ (cf.~\citet[p. 141]{mei:13}). The result $\boldsymbol{\mathcal{V}}^{\top}_{1}\,(v - \overline{\mathbf{y}}) = \mathbf{0}$ yields to $\mathbf{y} \in \mathcal{P\text{\itshape r}K}(v)$, which is a contradiction. Therefore, we conclude that $\mathcal{V} \subset \mathcal{V}_{1}$ must be satisfied. In addition, from $\vec{\alpha}_{1} = \boldsymbol{\mathcal{V}}^{\top}_{1}\,v^{\mu}$ we attain $\mathbf{P}_{\mathcal{V}}\,\vec{\alpha}_{1} = \boldsymbol{\mathcal{V}}^{\top}\,(\boldsymbol{\mathcal{V}}^{\top})^{\dagger}\,\boldsymbol{\mathcal{V}}^{\top}_{1}\,v^{\mu} \neq \boldsymbol{\mathcal{V}}^{\top}_{1}\,v^{\mu} = \vec{\alpha}_{1}$ in accordance with $\mathbf{P}_{\mathcal{V}}\,\boldsymbol{\mathcal{V}}^{\top}_{1} \neq \boldsymbol{\mathcal{V}}^{\top}_{1}$, in fact, it holds $\mathcal{V} \subset \mathcal{V}_{1}$. Thus, we have $\mathbf{P}_{\mathcal{V}}\,\vec{\alpha}_{1} \notin \mathcal{V}$ contradicting that $\mathbf{P}_{\mathcal{V}}\,\vec{\alpha}_{1} = \vec{\alpha}_{1} \in \mathcal{E} \subseteq \mathcal{V} \subset \mathcal{V}_{1}$ holds true. From this, we conclude that $\vec{\alpha}_{1} = \boldsymbol{\mathcal{V}}^{\top}\,v^{\mu}$ must be in force. Furthermore, from~\eqref{eq:mapV} we have \begin{equation*} \mathbf{P}\,\vec{\alpha} - \vec{\alpha} = \mathbf{P}\,\vec{\alpha}_{1} - \vec{\alpha}_{1} = \mathbf{0} \in \mathcal{E} \Longleftrightarrow \mathbf{P}\,(\vec{\alpha} - \vec{\alpha}_{1}) = (\vec{\alpha} - \vec{\alpha}_{1}) \in \mathcal{E}. \end{equation*} Therefore, obtaining the equivalent expression \begin{equation*} \mathbf{E}^{\top}\;(X\,\mathbf{y} - \mathbf{x}) = (\vec{\alpha} - \vec{\alpha}_{1}) = \boldsymbol{\mathcal{V}}^{\top}\,v - \boldsymbol{\mathcal{V}}^{\top}\,(v + \mu\cdot v^{\Delta}) = \mathbf{0}, \end{equation*} then $\mathbf{x} = X\,\mathbf{y}$, since matrix $\mathbf{E}^{\top}$ has full rank due to $\{\mathbf{x}\} = \mathcal{P\text{\itshape r}K}(v)$. Furthermore, notice that \begin{equation*} \langle\,\mathbf{x},\mathbf{y}\,\rangle = \langle\,(\mathbf{E}^{\top})^{\dagger}\,\vec{\alpha},(\mathbf{E}_{1}^{\top})^{\dagger}\,\vec{\alpha}_{1}\,\rangle = \langle\,(\mathbf{E}^{\top})^{\dagger}\,\vec{\alpha},X^{-1}\,(\mathbf{E}^{\top})^{\dagger}\,\vec{\alpha}\,\rangle = \langle\,2\,\mathbf{Q}^{\dagger}\,\mathbf{E}\,\vec{\alpha},2\,X^{-1}\,\mathbf{Q}^{\dagger}\,\mathbf{E}\,\vec{\alpha}\,\rangle \neq \mathbf{0} \end{equation*} Matrix $\mathbf{E}^{\top}$ has full rank, and $\mathbf{Q}$ is symmetric and positive definite, hence $\mathbf{Q}^{\dagger} = \mathbf{Q}^{-1}$, and the above expression can equivalently be written as \begin{equation} \label{eq:simmat} \begin{split} \langle\,\mathbf{Q}^{\dagger}\,\mathbf{a},X^{-1}\,\mathbf{Q}^{\dagger}\,\mathbf{a} \,\rangle & = \langle\,\mathbf{Q}^{-1}\,\mathbf{a},X^{-1}\,\mathbf{Q}^{-1}\,\mathbf{a} \,\rangle = \langle\,\mathbf{a},\mathbf{Q}\,X^{-1}\mathbf{Q}^{-1}\,\mathbf{a} \,\rangle \\ & = \langle\,\mathbf{a},X_{1}\mathbf{a} \,\rangle = \langle\,\mathbf{a},\mathbf{a}_{1} \,\rangle \neq \mathbf{0}, \end{split} \end{equation} while using $\mathbf{a} = 2\,\mathbf{E}\,\vec{\alpha}$ from Proposition~\ref{prop:eqrep}, and with similar matrix $X_{1} = \mathbf{Q}\,X^{-1}\mathbf{Q}^{-1}$ as well as $\mathbf{a}_{1} = X_{1}\,\mathbf{a}$. According to $\mathbf{E}^{\top}_{1} = \mathbf{E}^{\top}\,X$ with $X \in \text{SO}(n)$, we can write $X = \mathbf{Q}^{-1}(2\,\mathbf{E}\, \mathbf{E}_{1}^{\top})$. But then \begin{equation*} X_{1} = \mathbf{Q}\,X^{-1}\mathbf{Q}^{-1} = \mathbf{Q}\, (2\,\mathbf{E}\,\mathbf{E}_{1}^{\top})^{-1}. \end{equation*} Since we have $X \in \text{SO}(n)$, it holds $X^{-1} = X^{\top}$ implying that \begin{equation*} X_{1}^{\top} = X^{-1} = (2\,\mathbf{E}\, \mathbf{E}_{1}^{\top})^{-1}\,\mathbf{Q} = (2\,\mathbf{E}\, \mathbf{E}_{1}^{\top}) \,\mathbf{Q}^{-1} = X^{\top} = X_{1}^{-1}, \end{equation*} which induces $X=\mathbf{Q}^{-1}\,(2\,\mathbf{E}\, \mathbf{E}_{1}^{\top}) = \mathbf{Q}\, (2\,\mathbf{E}\,\mathbf{E}_{1}^{\top})^{-1} = X_{1}$. Now, observe \begin{equation*} \begin{split} X_{1} & = \mathbf{Q}\,X^{-1}\mathbf{Q}^{-1} = \mathbf{Q}\,X^{\top}\mathbf{Q}^{-1} = \mathbf{Q}\,(2\,\mathbf{E}\,\mathbf{E}_{1}^{\top})\,\mathbf{Q}^{-1}\,\mathbf{Q}^{-1} \\ & = \mathbf{Q}\,(2\,\mathbf{E}\,\mathbf{E}^{\top}\,X)\,\mathbf{Q}^{-2} = \mathbf{Q}^{2}\,X\,\mathbf{Q}^{-2}, \end{split} \end{equation*} hence, we can conclude that $X = \mathbf{I}$ implying $X_{1} = \mathbf{I}$ as well. We infer that $\mathbf{x} = \mathbf{y}$ contradicting the assumption $\mathbf{x} \neq \mathbf{y}$ due to $\mathbf{x} \in [\vec{\gamma}]$, and $\mathbf{y} \in [\vec{\gamma}_{1}]$. With this argument we are done. \end{proof} \begin{proposition} \label{prop:uniqpk01b} Impose the same conditions as under Proposition~\ref{prop:uniqpk01} with the exception that $X \in \text{GL}^{+}(n)$, then there exists no other pre-kernel element in payoff equivalence class $[\vec{\gamma}_{1}]$ for a related TU game $\langle\, N, v^{\mu}\, \rangle$. \end{proposition} \begin{proof} By the proof of Proposition~\ref{prop:uniqpk01} the system of linear equations $\mathbf{E}^{\top}\,(X\,\mathbf{y} - \mathbf{x}) = \mathbf{0}$ is consistent, then we get $\mathbf{x} = X\,\mathbf{y}$ by the full rank of matrix $\mathbf{E}^{\top}$. By Equation~\ref{eq:simmat} we obtain similar matrix $X_{1} = \mathbf{Q}\,X^{-1}\mathbf{Q}^{-1}$, hence the matrix $X_{1}$ is in the same orbit (conjugacy class) as matrix $X^{-1}$, this implies that $E^{\top} = E^{\top}_{1} \,X^{-1} = E^{\top}_{1} \,X_{1}$ must be in force. But then $E^{\top} = E^{\top}\,X \,X_{1}$, which requires that $X \,X_{1} = \mathbf{I}$ must be satisfied in view of the uniqueness of the transition matrix $X \in \text{GL}^{+}(m)$ (cf.~\citet[p. 102]{mei:13}). In addition, we have $\mathbf{a}_{1} = X_{1}\,\mathbf{a}$ as well as $\mathbf{a}_{1} = 2\,\mathbf{E}_{1}\,\vec{\alpha} = X\,\mathbf{a}$. Therefore, we obtain $X\,\mathbf{a}_{1} = \mathbf{a} = X^{2}\,\mathbf{a}$. From this we draw the conclusion in connection with the uniqueness of the transition matrix $X$ that $X = \mathbf{I}$ is valid. Hence, $\mathbf{x} = \mathbf{y}$ as required. \end{proof} \begin{proposition} \label{prop:uniqpk02} Assume $[\vec{\gamma}] \nsim [\vec{\gamma}_{1}]$, and that the payoff equivalence class $[\vec{\gamma}]$ induced from TU game $\langle\, N, v\, \rangle$ has non-empty interior such that $\{\mathbf{x}\} = \mathcal{P \text{\itshape r}K}(v) \subset [\vec{\gamma}]$ is satisfied, then there exists no other pre-kernel element in payoff equivalence class $[\vec{\gamma}_{1}]$ for a related TU game $\langle\, N, v^{\mu}\, \rangle$, where $v^{\mu} = v + \mu\cdot v^{\Delta} \in \mathbb{R}^{p^{\prime}}$, as defined by Lemma~\ref{lem:repl_min}. \end{proposition} \begin{proof} We have to establish that there is no other element $\mathbf{y} \in \mathcal{P\text{\itshape r}K}(v^{\mu})$ such that $\mathbf{y} \in [\vec{\gamma}_{1}]$ is valid, whereas $\mathbf{y} \notin \mathcal{P\text{\itshape r}K}(v)$ in accordance with the uniqueness of the pre-kernel for game $v$. In view of Theorem~\ref{thm:repl_prk} the pre-kernel $\{\mathbf{x}\} = \mathcal{P\text{\itshape r}K}(v)$ of game $\langle\, N, v\, \rangle$ is also a pre-kernel element of the related game $\langle\, N, v^{\mu}\, \rangle$, i.e. $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v^{\mu})$ with $\mathbf{x} \in [\vec{\gamma}]$ due to Corollary~\ref{cor:innp}. Extend the payoff element $\mathbf{y}$ to a vector $\overline{\mathbf{y}}$ by the measure $y(S):=\sum_{k \in S}\,y_{k}$ for all $S \in 2^{n}\backslash\{\emptyset\} $, then define the excess vector by $\overline{e}^{\mu}:=v^{\mu}-\overline{\mathbf{y}}$. Moreover, compute the vector of (un)balanced excesses $\vec{\xi}^{\,v^{\mu}}$ at $\mathbf{y}$ for game $v^{\mu}$ by $\boldsymbol{\mathcal{V}}_{1}^{\top}\,\overline{e}^{\mu}$. This vector is also the vector of (un)balanced maximum surpluses, since $\mathbf{y} \in [\vec{\gamma}_{1}]$, and therefore $h^{\,v^{\mu}}=h^{\,v^{\mu}}_{\gamma_{1}}$ on $[\vec{\gamma}_{1}]$ in view of Lemma 6.2.2 by~\citet{mei:13}. Notice that in order to have a pre-kernel element at $\mathbf{y}$ for the related game $v^{\mu}$ it must hold $\vec{\xi}^{\,v^{\mu}} = \mathbf{0}$. In addition, by hypothesis $[\vec{\gamma}] \nsim [\vec{\gamma}_{1}]$, it must hold $\mathbf{E}^{\top} = \boldsymbol{\mathcal{V}}^{\top}\,\mathbf{Z}^{\top}$ and $\mathbf{E}_{1}^{\top} = \boldsymbol{\mathcal{V}}_{1}^{\top}\,\mathbf{Z}^{\top}$ in view of Lemma~\ref{lem:inl_vp}, thus $E^{\top}_{1} \not= E^{\top}\,X$ for all $X \in \text{GL}^{+}(n)$. This implies that we derive the corresponding matrices $\boldsymbol{\EuScript{W}} := \boldsymbol{\mathcal{V}}^{\top}\, \boldsymbol{\EuScript{U}}$ and $\boldsymbol{\EuScript{W}}_{1} := \boldsymbol{\mathcal{V}}_{1}^{\top}\, \boldsymbol{\EuScript{U}}$, respectively. We have to consider two cases, namely $\Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}} \cap \mathcal{N}_{\boldsymbol{\EuScript{W}_{1}}}$ and $\Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}} \backslash \mathcal{N}_{\boldsymbol{\EuScript{W}_{1}}}$. \begin{enumerate} \item Suppose $\Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}} \cap \mathcal{N}_{\boldsymbol{\EuScript{W}_{1}}}$, then we get \begin{equation*} \vec{\xi}^{\,v^{\mu}}=\boldsymbol{\mathcal{V}}_{1}^{\top}\,\overline{e}^{\mu}= \boldsymbol{\mathcal{V}}_{1}^{\top}\,(v^{\mu}-\overline{\mathbf{y}})=\boldsymbol{\mathcal{V}}_{1}^{\top}\,( v -\overline{\mathbf{y}} + \mu \cdot v^{\Delta}) = \boldsymbol{\mathcal{V}}_{1}^{\top}\,(v-\overline{\mathbf{y}}) = \boldsymbol{\mathcal{V}}_{1}^{\top}\,\overline{e} = \vec{\xi}^{\,v} \not= \mathbf{0}. \end{equation*} Observe that $\vec{\xi}^{\,v} = \boldsymbol{\mathcal{V}}_{1}^{\top}\,(v-\overline{\mathbf{y}}) \not= \mathbf{0}$, since vector $\mathbf{y} \in [\vec{\gamma}_{1}]$ is not a pre-kernel element of game $v$. \item Now suppose $\Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}} \backslash \mathcal{N}_{\boldsymbol{\EuScript{W}_{1}}}$, then \begin{equation*} \vec{\xi}^{\,v^{\mu}}=\boldsymbol{\mathcal{V}}_{1}^{\top}\,\overline{e}^{\mu}= \boldsymbol{\mathcal{V}}_{1}^{\top}\,(v^{\mu}-\overline{\mathbf{y}})=\boldsymbol{\mathcal{V}}_{1}^{\top}\,( v -\overline{\mathbf{y}} + \mu \cdot v^{\Delta}) = \boldsymbol{\mathcal{V}}_{1}^{\top}\,\overline{e} + \mu \cdot \boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta} = \vec{\xi}^{\,v} + \mu \cdot \boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta}\not= \mathbf{0}. \end{equation*} Since, we have $\boldsymbol{\mathcal{V}}_{1}^{\top}\,(v-\overline{\mathbf{y}}) \not= \mathbf{0}$ as well as $\boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta} \not= \mathbf{0}$, and $\boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta}$ can not be expressed by $-\boldsymbol{\mathcal{V}}_{1}^{\top}\,(v-\overline{\mathbf{y}})$ in accordance with our hypothesis. To see this, suppose that the vector $\Delta$ is expressible in this way, then it must hold $\Delta = -\frac{1}{\mu}\,(\boldsymbol{\EuScript{W}}_{1})^{\dagger}\,\vec{\xi}^{\,v}$. However, this implies \begin{equation*} \boldsymbol{\EuScript{W}}\,\Delta = - \frac{1}{\mu}\,\boldsymbol{\EuScript{W}}\,(\boldsymbol{\EuScript{W}}_{1})^{\dagger}\,\vec{\xi}^{\,v} = - \frac{1}{\mu}\,(\boldsymbol{\mathcal{V}}^{\top}\,\boldsymbol{\EuScript{U}})\,(\boldsymbol{\mathcal{V}}_{1}^{\top}\, \boldsymbol{\EuScript{U}})^{\dagger}\,\vec{\xi}^{\,v} = - \frac{1}{\mu}\,\boldsymbol{\mathcal{V}}^{\top}\,(\boldsymbol{\mathcal{V}}_{1}^{\top})^{\dagger}\,\vec{\xi}^{\,v} \not=\mathbf{0}. \end{equation*} \end{enumerate} This argument terminates the proof. \end{proof} To complete our uniqueness investigation, we need to establish that the single pre-kernel element of the default game also preserves the pre-nucleolus property for the related games, otherwise we can be sure that there must exist at least a second pre-kernel point for the related game different form the first one. For doing so, we introduce the following set: \begin{definition} For every $\mathbf{x} \in \mathbb{R}^{n}$, and $\psi \in \mathbb{R}$ define the set \begin{equation} \label{eq:dset} \mathcal{D}^{v}(\psi,\mathbf{x}) := \left\{S \subseteq N \,\arrowvert\, e^{v}(S,\mathbf{x}) \ge \psi \right\}, \end{equation} \end{definition} \noindent and let $\mathcal{B}=\{S_{1},\ldots, S_{m}\}$ be a collection of non-empty sets of $N$. We denote the collection $\mathcal{B}$ as balanced whenever there exist positive numbers $w_{S}$ for all $S \in \mathcal{B}$ such that we have $\sum_{S \in \mathcal{B}}\, w_{S}\mathbf{1}_{S} = 1_{N}$. The numbers $w_{S}$ are called weights for the balanced collection $\mathcal{B}$ and $\mathbf{1}_{S}$ is the {\bfseries indicator function} or {\bfseries characteristic vector} $\mathbf{1}_{S}:N \mapsto \{0,1\}$ given by $\mathbf{1}_{S}(k):=1$ if $k \in S$, otherwise $\mathbf{1}_{S}(k):=0$. A characterization of the pre-nucleolus in terms of balanced collections is due to~\citet{kohlb:71}. \begin{theorem} \label{thm:kohlb} Let $\langle\, N, v\, \rangle$ be a TU game and let be $\mathbf{x} \in \mathcal{I}^{\,0}(v)$. Then $\mathbf{x} = \nu(N,v) $ if, and only if, for every $\psi \in \mathbb{R}, \mathcal{D}^{v}(\psi,\mathbf{x})\not= \emptyset$ implies that $\mathcal{D}^{v}(\psi,\mathbf{x})$ is a balanced collection over N. \end{theorem} \begin{theorem} \label{thm:prnmu} Let $\langle\, N, v\, \rangle$ be a TU game that has a singleton pre-kernel such that $\{\mathbf{x}\} = \mathcal{P\text{\itshape r}K}(v) \subset [\vec{\gamma}]$, and let $\langle\, N, v^{\mu}\, \rangle$ be a related game of $v$ derived from $\mathbf{x}$, then $\mathbf{x} = \nu(N,v^{\,\mu})$, whereas the payoff equivalence class $[\vec{\gamma}]$ has non-empty interior. \end{theorem} \begin{proof} By our hypothesis, $\mathbf{x} = \nu(N,v)$ is an interior point of an inscribed ellipsoid with maximum volume $\varepsilon := \{\mathbf{y}^{\prime}\, \arrowvert h^{v}_{\gamma}(\mathbf{y}^{\prime}) \le \bar{c} \} \subset [\vec{\gamma}]$, whereas $h_{\gamma}^{v}$ is of type~\eqref{eq:objf2} and $\bar{c} > 0$ (cf.~Lemma~\ref{lem:repl_prk}). This implies by Theorem~\ref{thm:repl_prk} that this point is also a pre-kernel point of game $v^{\mu}$, there is no change in set of lexicographically smallest most effective coalitions $\mathcal{S}(\mathbf{x})$ under $v^{\mu}$. The min-max excess value $\psi^{*}$ obtained by iteratively solving the LP (6.4-6.7) of~\citet[p.~332]{MPSh:79} for game $v$ is smaller than the maximum surpluses derived from $\mathcal{S}(\mathbf{x})$, this implies that there exists a $\bar{\psi} \ge \psi^{*}$ s.t. $\mathcal{S}(\mathbf{x}) \subseteq \mathcal{D}^{v}(\bar{\psi},\mathbf{x})$, that is, it satisfies Property I of~\citet{kohlb:71}. Moreover, matrix $\mathbf{E}^{\top}$ induced from $\mathcal{S}(\mathbf{x})$ has full rank, therefore, the column vectors of matrix $\mathbf{E}^{\top}$ are a spanning system of $\mathbb{R}^{n}$. Hence, we get $span\,\{\mathbf{1}_{S}\,\arrowvert\, S \in \mathcal{S}(\mathbf{x}) \} = \mathbb{R}^{n}$, which implies that the corresponding matrix $[\mathbf{1}_{S}]_{S \in \mathcal{S}(\mathbf{x})}$ must have rank $n$, therefore collection $\mathcal{S}(\mathbf{x})$ is balanced. In addition, we can choose the largest $\psi \in \mathbb{R}$ s.t. $\emptyset \not= \mathcal{D}^{v}(\psi,\mathbf{x}) \subseteq \mathcal{S}(\mathbf{x})$ is valid, which is a balanced set. Furthermore, we have $\mu\cdot v^{\Delta} \in [-\mathsf{C},\mathsf{C}]^{p^{\prime}}$. Since $\mathsf{C} > 0$, the set $\mathcal{D}^{v}(\psi - 2\,\mathsf{C},\mathbf{x}) \not=\emptyset$ is balanced as well. Now observe that $e^{v}(S,\mathbf{x}) -\mathsf{C} \le e^{v}(S,\mathbf{x}) + \mu\cdot v^{\Delta}(S) \le e^{v}(S,\mathbf{x}) + \mathsf{C}$ for all $S \subseteq N$. This implies $\mathcal{D}^{v}(\psi,\mathbf{x}) \subseteq \mathcal{S}(\mathbf{x}) \subseteq \mathcal{D}^{v^{\mu}}(\psi - \mathsf{C},\mathbf{x}) \subseteq \mathcal{D}^{v}(\psi - 2\,\mathsf{C},\mathbf{x})$, hence, $ \mathcal{D}^{v^{\mu}}(\psi - \mathsf{C},\mathbf{x})$ is balanced. Let $c \in [-\mathsf{C},\mathsf{C}]$, and from the observation $\lim_{c \uparrow 0}\, \mathcal{D}^{v^{\mu}}(\psi + c,\mathbf{x}) = \mathcal{D}^{v^{\mu}}(\psi,\mathbf{x}) \supseteq \mathcal{D}^{v}(\psi,\mathbf{x})$, we draw the conclusion $\mathbf{x} = \nu(N,v^{\,\mu})$. \end{proof} \begin{theorem} \label{thm:siva1} Assume that the payoff equivalence class $[\vec{\gamma}]$ induced from TU game $\langle\, N, v\, \rangle$ has non-empty interior. In addition, assume that game $\langle\, N, v\, \rangle$ has a singleton pre-kernel such that $\{\mathbf{x}\} = \mathcal{P\text{\itshape r}K}(v) \subset [\vec{\gamma}]$ is satisfied, then the pre-kernel $\mathcal{P\text{\itshape r}K}(v^{\mu})$ of a related TU game $\langle\, N, v^{\mu}\, \rangle$, as defined by Lemma~\ref{lem:repl_min}, consists of a single point, which is given by $\{\mathbf{x}\} = \mathcal{P\text{\itshape r}K}(v^{\mu})$. \end{theorem} \begin{proof} This result follows from Theorems~\ref{thm:repl_prk},~\ref{thm:prnmu}, and Propositions~\ref{prop:uniqpk01b},~\ref{prop:uniqpk02}. \end{proof} \begin{example} \label{exp:uniqPk} In order to illuminate the foregoing discussion of replicating a pre-kernel element consider a four person average-convex but non-convex game that is specified by \begin{equation*} \begin{split} v(N) & = 16, v(\{1,2,3\}) = v(\{1,2,4\}) = v(\{1,3,4\}) = 8, \allowdisplaybreaks \\ v(\{1,3\}) & = 4, v(\{1,4\}) = 1, v(\{1,2\}) = 16/3, v(S) = 0 \;\text{otherwise}, \end{split} \end{equation*} \noindent with $N=\{1,2,3,4\}$. The pre-kernel coalesces with the pre-nucleolus, which is given by the point: $\nu(v) = \mathcal{P\text{\itshape r}K}(v)= (44/9,4,32/9,32/9)$. Obviously, the set $\mathcal{S}(\nu(v))=\{\{2\},\{3\},\{4\},\{1,2\},\{1,3,4\}\}$ is balanced, form this set a boundary vector $\vec{b}=(4,32/9,32/9,80/9,12)$ is obtained by $\nu(v)(S)$ for $S \in \mathcal{S}(\nu(v))$. Define matrix $\mathbf{A}$ by $[\mathbf{1}_{S}]_{S \in \mathcal{S}(\nu(v))}$, then the solution of the system $\mathbf{A}\,\mathbf{x} = \vec{b}$ reproduces the pre-nucleolus. Moreover, this imputation is even an interior point, thus the non-empty interior condition is valid. Hence, by Theorem~\ref{thm:repl_prk} a redistribution of the bargaining power among coalitions can be attained while supporting the imputation $(44/9,4,32/9,32/9)$ still as a pre-kernel element for a set of related games. In order to get a null space $\mathcal{N}_{\boldsymbol{\EuScript{W}}}$ with maximum dimension we set the parameter $\mu$ to $0.9$. In this case, the rank of matrix $\boldsymbol{\EuScript{W}}$ must be equal to $4$, and we could derive at most $11$-linear independent games which replicate the point $(44/9,4,32/9,32/9)$ as a pre-kernel element. Theorem~\ref{thm:siva1} even states that this point is also the sole pre-kernel point, hence the pre-kernel coincide with the pre-nucleolus for these games (see Table~\ref{tab:rpl_acvt1}). Notice that non of these $11$-linear independent related games is average-convex. Only two games, namely $v_{1}$ and $v_{3}$ are zero-monotonic and super-additive. Nevertheless, all games have a non-empty core and are semi-convex. The cores of the games have between $16$ and $24$-vertices, and have volumes that range from approximately $80$ to $127$ percent of the default core. TU game $v_{2}$ has the smallest and $v_{3}$ the largest core.\footnote{The example can be reproduced while using our MATLAB toolbox {\itshape MatTuGames}~\citeyear{mei:11}. The results can also be verified with our Mathematica package {\itshape TuGames}~\citeyear{mei:10a}.}\hfill$\#$ \end{example} \input{table-appendix01.tex} \section{On the Continuity of the Pre-Kernel} \label{sec:lhc} In the previous section, we have established uniqueness on the set of related games. Here, we generalize these results while showing that even on the convex hull comprising the default and related games in the game space, the pre-kernel must be unique and is identical with the point specified by the default game. Furthermore, the pre-kernel correspondence restricted on this convex subset in the game space must be single-valued, and therefore continuous. Recall that the relevant game space is defined through $\mathcal{G}(N) := \{v \in \mathcal{G}^{n}\,\arrowvert\, v(\emptyset) = 0 \land v(N) > 0\}$, and \begin{equation*} \mathcal{G}_{\mu,v}^{n}:=\left\{v^{\mu} \in \mathcal{G}(N)\, \arrowvert\, \mu\cdot v^{\Delta} \in [-\mathsf{C},\mathsf{C}]^{p^{\prime}} \right\}. \end{equation*} This set is the translate of a convex set by $v$, which is also convex and non-empty with dimension $p^{\prime}-m^{\prime}$, if matrix $\boldsymbol{\EuScript{W}}$ has rank $m^{\prime} \le q < p^{\prime}$. Then we can construct a convex set in the game space $\mathcal{G}(N)$ by taking the convex hull of game $v$ and the convex set $\mathcal{G}_{\mu,v}^{n}$, thus \begin{equation*} \mathcal{G}_{c}^{n} := conv\; \{v,\mathcal{G}_{\mu,v}^{n}\}. \end{equation*} \begin{theorem} \label{thm:siva2} The pre-kernel $\mathcal{P\text{\itshape r}K}(v^{\mu^{*}})$ of game $v^{\mu^{*}}$ belonging to $\mathcal{G}_{c}^{n}$ is a singleton, and is equal to $\{\mathbf{x}\} = \mathcal{P\text{\itshape r}K}(v)$. \end{theorem} \begin{proof} Let be $\{\mathbf{x}\} = \mathcal{P\text{\itshape r}K}(v)$ for game $v$. Take a convex combination of games in $\mathcal{G}_{c}^{n}$, hence \begin{equation*} v^{\mu^{*}} = \sum_{k=1}^{m} t_{k}\cdot v_{k}^{\mu} + t_{m+1}\cdot v = \sum_{k=1}^{m} t_{k}\cdot (v + \mu\cdot v_{k}^{\Delta}) + t_{m+1} \cdot v = v + \mu \sum_{k=1}^{m} t_{k}\cdot v_{k}^{\Delta} + \mu\;t_{m+1} \cdot \mathbf{0} = v + \mu \cdot v^{\Delta^{*}}, \end{equation*} with $v^{\Delta^{*}} := \sum_{k=1}^{m} t_{k}\cdot v_{k}^{\Delta} + t_{m+1} \cdot \mathbf{0}$, where $0 \le t_{k} \le 1, \forall k \in \{1,2, \ldots ,m+1\}$, and $\sum_{k=1}^{m+1} t_{k} =1$. Then $\mu\,v^{\Delta^{*}} \in [-\mathsf{C},\mathsf{C}]^{p^{\prime}}$, thus the set of lexicographically smallest coalitions $\mathcal{S}(\mathbf{x})$ does not change. By Theorem~\ref{thm:repl_prk} the vector $\{\mathbf{x}\} = \mathcal{P\text{\itshape r}K}(v)$ is also a pre-kernel element of game $v^{\mu^{*}}$. But then by Theorem~\ref{thm:siva1} the pre-kernel of game $v^{\mu^{*}}$ consists of a single point, therefore $\{\mathbf{x}\} = \mathcal{P\text{\itshape r}K}(v^{\mu^{*}})$. \end{proof} \begin{example} \label{exp:siva1} To see that even on the convex hull $\mathcal{G}_{c}^{4}$, which is constituted by the default and related games of Table~\ref{tab:rpl_acvt1}, a particular TU game has the same singleton pre-kernel, we choose the following vector of scalars $\vec{t}=(1,3,8,1,2,4,3,5,7,9,2,3)/48$ such that $\sum_{k=1}^{12} t_k=1$ is given to construct by the convex combination of games presented by Table~\ref{tab:rpl_acvt1} a TU game $v^{\mu^{*}}$ that reproduces the imputation $(44/9,4,32/9,32/9)$ as its unique pre-kernel. The TU game $v^{\mu^{*}}$ on this convex hull in the game space that replicates this pre-kernel is listed through Table~\ref{tab:siva1}: \begin{center} \begin{threeparttable} {\footnotesize \setlength{\tabcolsep}{.3cm} \caption{A TU Game $v^{\mu^{*}}$ on $\mathcal{G}_{c}^{4}$ with the same singleton Pre-Kernel as $v$~\tnote{a,b}} \begin{tabular}[c]{c c c c c c c c} \hline $S$ & $v^{\mu^{*}}\!(S)$ & $S$ & $v^{\mu^{*}}\!(S)$ & $S$ & $v^{\mu^{*}}\!(S)$ & $S$ & $v^{\mu^{*}}\!(S)$ \\[.1em] \hline\hline $\{1\}$ & $-1/23$ & $\{1,2\}$& $134/25$ & $\{2,4\}$ & $173/1125$ & $\{1,3,4\}$& $576/71$ \\[.1em] $\{2\}$ & $8/71$ & $\{1,3\}$& $530/137$ & $\{3,4\}$ & $19/144$ & $\{2,3,4\}$& $15/232$ \\[.1em] $\{3\}$ & $2/75$ & $\{1,4\}$& $179/178$ & $\{1,2,3\}$ & $1436/187$ & $N$ & $16$ \\[.1em] $\{4\}$ & $2/75$ & $\{2,3\}$& $-8/157$ & $\{1,2,4\}$ & $1946/239$ & & \\[.1em] \hline\hline \end{tabular} \label{tab:siva1} \begin{tablenotes} \item[a] Pre-Kernel and Pre-Nucleolus: $(44/9,4,32/9,32/9)$ \item[b] Note: Computation performed with MatTuGames. \end{tablenotes} } \end{threeparttable} \end{center} This game is neither average-convex nor zero-monotonic, however, it is again semi-convex and has a rather large core with a core volume of $97$ percent w.r.t. the core of the average-convex game, and $20$ vertices in contrast to $16$ vertices respectively. \hfill$\#$ \end{example} Let $\mathcal{X}$ and $\mathcal{Y}$ be two metric spaces. A set-valued function or correspondence $\sigma$ of $\mathcal{X}$ into $\mathcal{Y}$ is a rule that assigns to every element $x \in \mathcal{X}$ a non-empty subset $\sigma(x) \subset \mathcal{Y}$. Given a correspondence $\sigma: \mathcal{X} \twoheadrightarrow \mathcal{Y}$, the corresponding graph of $\sigma$ is defined by \begin{equation} \label{eq:gr} Gr(\sigma) := \left\{(x,y) \in \mathcal{X} \times \mathcal{Y} \,\arrowvert\, y \in \sigma(x) \right\}. \end{equation} \begin{definition} \label{def:grcl} A set-valued function $\sigma : \mathcal{X} \twoheadrightarrow \mathcal{Y} $ is closed, if $Gr(\sigma)$ is a closed subset of $\mathcal{X} \times \mathcal{Y}$ \end{definition} The graph of the pre-kernel correspondence $\mathcal{P\text{\itshape r}K}$ is given by \begin{equation*} \begin{split} Gr(\mathcal{P\text{\itshape r}K}) := \left\{ (v,\mathbf{x})\,\arrowvert\, v \in \mathcal{G}^{n}, \mathbf{x} \in \mathcal{I}^{0}(v), \;\; s_{ij}(\mathbf{x},v) = s_{ji}(\mathbf{x},v) \quad\text{for all}\; i,j \in N, i\neq j \,\right\}. \end{split} \end{equation*} Similar, the graph of the solution set of function $h^{v}$ of type~\eqref{eq:objfh} is specified by \begin{equation*} \begin{split} Gr(M(h^{v})) & := \left\{ (v,\mathbf{x})\,\arrowvert\, v \in \mathcal{G}^{n}, \mathbf{x} \in \mathcal{I}^{0}(v), \;\; h^{v}(\mathbf{x}) = 0 \,\right\} \allowdisplaybreaks \\ & = \bigcup_{k \in \mathcal{J}^{\prime}}\; \left\{ (v,\mathbf{x})\,\arrowvert\, v \in \mathcal{G}^{n}, \mathbf{x} \in \overline{[\vec{\gamma}_{k}]}, \;\; h_{\gamma_{k}}^{v}(\mathbf{x}) = 0 \,\right\} = \bigcup_{k \in \mathcal{J}^{\prime}}\; Gr(M(h^{v}_{\gamma_{k}}, \overline{[\vec{\gamma}_{k}]})), \end{split} \end{equation*} with $ \mathcal{J}^{\prime} : = \{k \in \mathcal{J}\, \arrowvert\, g(\vec{\gamma}_{k}) = 0\}$. This graph is equal to the finite union of graphs of the restricted solution sets of quadratic and convex functions $h^{v}_{\gamma_{k}}$ of type~\eqref{eq:objf2}. The restriction of each solution set of function $h^{v}_{\gamma_{k}}$ to $\overline{[\vec{\gamma}_{k}]}$ is bounded, closed, and convex (cf.~\citet[Lemmata 7.1.3, 7.3.1]{mei:13}), hence each graph $Gr(M(h^{v}_{\gamma_{k}}, \overline{[\vec{\gamma}_{k}]}))$ from the finite index set $\mathcal{J}^{\prime}$ is bounded, closed and convex. \begin{proposition} \label{prop:eqgr} The following relations are satisfied between the above graphs: \begin{equation} \label{eq:eqgr} Gr(\mathcal{P\text{\itshape r}K}) = Gr(M(h^{v})) = \bigcup_{k \in \mathcal{J}^{\prime}}\; Gr(M(h^{v}_{\gamma_{k}}, \overline{[\vec{\gamma}_{k}]})). \end{equation} Hence, the pre-kernel correspondence $\mathcal{P\text{\itshape r}K}:\mathcal{G}(N) \twoheadrightarrow \mathbb{R}^{n}$ is closed and bounded. \end{proposition} \begin{proof} The equality of the graph of the pre-kernel and the solution set of function $h^{v}$ follows in accordance with Corollary~\ref{cor:rep}. Finally, the last equality is a consequence of Theorem 7.3.1 by~\citet{mei:13}. From this argument boundedness and closedness follows. \end{proof} \begin{definition} \label{def:uhc} The correspondence $\sigma: \mathcal{X} \twoheadrightarrow \mathcal{Y}$ is said to be upper hemi-continuous ({\bfseries uhc}) at $x$ if for every open set $\mathcal{O}$ containing $\sigma(x) \subseteq \mathcal{O}$ it exists an open set $\mathcal{Q} \subseteq \mathcal{Y}$ of $x$ such that $\sigma(x^{\prime}) \subseteq \mathcal{O}$ for every $x^{\prime} \in \mathcal{Q}$. The correspondence $\sigma$ is {\bfseries uhc}, if it is {\bfseries uhc} for each $x \in \mathcal{X}$. \end{definition} \begin{definition} \label{def:lhc} The correspondence $\sigma: \mathcal{X} \twoheadrightarrow \mathcal{Y}$ is said to be lower hemi-continuous ({\bfseries lhc}) at $x$ if for every open set $\mathcal{O}$ in $\mathcal{Y}$ with $\sigma(x) \cap \mathcal{O} \not= \emptyset$ it exists an open set $\mathcal{Q} \subseteq \mathcal{Y}$ of $x$ such that $\sigma(x^{\prime}) \cap \mathcal{O} \not= \emptyset$ for every $x^{\prime} \in \mathcal{Q}$. The correspondence $\sigma$ is {\bfseries lhc}, if it is {\bfseries lhc} for each $x \in \mathcal{X}$. \end{definition} \begin{lemma} \label{lem:lhc} Let $\mathcal{X}$ be a non-empty and convex polyhedral subset of $\mathbb{R}^{\tilde{p}}$, and $\mathcal{Y} \subseteq \mathbb{R}^{\tilde{n}}$. If $\sigma: \mathcal{X} \twoheadrightarrow \mathcal{Y}$ is a bounded correspondence with a convex graph, then $\sigma$ is lower hemi-continuous. \end{lemma} \begin{proof} For a proof see \citet[pp. 185-186]{pel_sud:07}. \end{proof} \begin{theorem} \label{thm:cont} The pre-kernel correspondence $\mathcal{P\text{\itshape r}K}:\mathcal{G}(N) \twoheadrightarrow \mathbb{R}^{n}$ is on $\mathcal{G}_{c}^{n}$ upper hemi-continuous as well as lower hemi-continuous, that is, continuous. \end{theorem} \begin{proof} The non-empty set $\mathcal{G}_{c}^{n}$ is a bounded polyhedral set, which is convex by construction. We draw from Proposition~\ref{prop:eqgr} the conclusion that the graph of the pre-kernel correspondence is bounded and closed. Form Theorem~\ref{thm:siva2} it follows $\arrowvert\, \mathcal{J}^{\prime}\, \arrowvert =1$ on $\mathcal{G}_{c}^{n}$, this implies that the graph of the pre-kernel correspondence is also convex on $\mathcal{G}_{c}^{n}$. The sufficient conditions of Lemma~\ref{lem:lhc} are satisfied, hence $\mathcal{P\text{\itshape r}K}$ is lower hemi-continuous on $\mathcal{G}_{c}^{n}$. It is known from Theorem 9.1.7.~by \citet{pel_sud:07} that $\mathcal{P\text{\itshape r}K}$ is upper hemi-continuous on $\mathcal{G}(N)$. Hence, on the restricted set $\mathcal{G}_{c}^{n}$, the set-valued function $\mathcal{P\text{\itshape r}K}$ is upper and lower hemi-continuous, and therefore continuous. Actual, it is a continuous function on $\mathcal{G}_{c}^{n}$ in view of $\arrowvert\, \mathcal{J}^{\prime}\, \arrowvert =1$. \end{proof} \begin{corollary} \label{cor:prkf} The pre-kernel correspondence $\mathcal{P\text{\itshape r}K}:\mathcal{G}(N) \twoheadrightarrow \mathbb{R}^{n}$ is on $\mathcal{G}_{c}^{n}$ single-valued and constant. \end{corollary} \begin{example} \label{exp:nlhc} To observe that on the restricted set $\mathcal{G}_{c}^{4}$ the pre-kernel correspondence $\mathcal{P\text{\itshape r}K}:\mathcal{G}(N) \twoheadrightarrow \mathbb{R}^{n}$ is single-valued and continuous, we exemplarily select a line segment in $\mathcal{G}_{c}^{4}$ to establish that all games on this segment have the same singleton pre-kernel. For this purpose, we resume Example~\ref{exp:uniqPk} and~\ref{exp:siva1}. Then we choose a vector of scalars $\vec{t}^{\epsilon}:=(1,3,8,1,2,4+\epsilon,3,5,7,9,2-\epsilon,3)/48$ with $t^{\epsilon}_{k} \ge 0$ for each $k$ such that $\sum_{k=0}^{11} t^{\epsilon}_k=1$ and $\epsilon \in [-2,2]$. Thus, we define the line segment in $\mathcal{G}_{c}^{4}$ through TU game $v^{\mu^{*}}$ from Example~\ref{exp:siva1} by \begin{equation*} \mathcal{G}_{c}^{4,l} := \bigg\{\sum_{k=0}^{11} t^{\epsilon}_{k}\cdot v_{k}^{\mu}\;\bigg\arrowvert v_{k}^{\mu} \in \mathcal{G}_{c}^{4}, \epsilon \in [-2,2] \bigg\}. \end{equation*} Therefore, for each game in the line segment $\mathcal{G}_{c}^{4,l}$, we can write \begin{equation*} \begin{split} v^{\epsilon} & := \sum_{k=1}^{11} t^{\epsilon}_{k}\cdot v_{k}^{\mu} + t^{\epsilon}_{0}\cdot v = \sum_{k=1}^{11} t_{k}\cdot v_{k}^{\mu} + t_{0}\cdot v + \frac{\epsilon}{48}\,(v_{6}^{\mu} - v_{11}^{\mu}) = v^{\mu^{*}} + \frac{\epsilon}{48}\,(v_{6}^{\mu} - v_{11}^{\mu})\allowdisplaybreaks\\ & = v + \mu \cdot v^{\Delta^{*}} + \frac{\epsilon\,\mu}{48}\,(v_{6}^{\Delta} - v_{11}^{\Delta}). \end{split} \end{equation*} We extend the pre-kernel element $\mathbf{x} = (44/9,4,32/9,32/9)$ to a vector $\overline{\mathbf{x}}$ in order to define the excess vector under game $v$ as $\overline{e}:=v-\overline{\mathbf{x}}$, and for game $v^{\epsilon}$ as $\overline{e}^{\,v^{\epsilon}}:=v^{\epsilon}-\overline{\mathbf{x}}$, respectively. According to these definitions, we get for $\vec{\zeta}^{v^{\epsilon}} = \vec{\xi}^{v^{\epsilon}}$ at $\mathbf{x}$ the following chain of equalities: \begin{equation*} \vec{\xi}^{v^{\epsilon}} = \boldsymbol{\mathcal{V}}^{\top}\,\overline{e}^{\,v^{\epsilon}} = \boldsymbol{\mathcal{V}}^{\top}\,\big(v-\overline{\mathbf{x}} + \mu \cdot v^{\Delta^{*}} + \frac{\epsilon\,\mu}{48}\,(v_{6}^{\Delta} - v_{11}^{\Delta})\big) = \boldsymbol{\mathcal{V}}^{\top}\,(v-\overline{\mathbf{x}}) = \boldsymbol{\mathcal{V}}^{\top}\,\overline{e} = \vec{\xi} = \vec{\zeta} = \mathbf{0}, \end{equation*} The last equality is satisfied, since $\mathbf{x}$ is the pre-kernel of game $v$. Recall that it holds $\mu\,v^{\Delta^{*}},\mu\,v_{6}^{\Delta},\mu\,v_{11}^{\Delta} \in [-\mathsf{C},\mathsf{C}]^{15}$, whereas $\boldsymbol{\mathcal{V}}^{\top} \,v^{\Delta^{*}} = \boldsymbol{\mathcal{V}}^{\top}\,v_{6}^{\Delta} = \boldsymbol{\mathcal{V}}^{\top}\,v_{11}^{\Delta} = \mathbf{0}$ is in force. Therefore, for each TU game $v^{\epsilon} \in \mathcal{G}_{c}^{4,l}$ we attain \begin{equation*} \mathcal{P\text{\itshape r}K}(v^{\epsilon}) = (44/9,4,32/9,32/9). \end{equation*} The pre-kernel correspondence $\mathcal{P\text{\itshape r}K}$ is a single-valued and constant mapping on $\mathcal{G}_{c}^{4,l}$. Hence its is continuous on the restriction $\mathcal{G}_{c}^{4,l}$, and due to Theorem~\ref{thm:cont} a fortiori on $\mathcal{G}_{c}^{4}$. \hfill$\#$ \end{example} \section{Preserving the Pre-Nucleolus Property} \label{sec:prspn} In this section we study some conditions under which a pre-nucleolus of a default game can preserve the pre-nucleolus property in order to generalize the above results in the sense to identify related games with an unique pre-kernel point even when the default game has not a single pre-kernel point. This question can only be addressed with limitation, since we are not able to make it explicit while giving only sufficient conditions under which the pre-kernel point must be at least disconnected, otherwise it must be a singleton. However, a great deal of our investigation is devoted to work out explicit conditions under which the pre-nucleolus of a default game will loose this property under a related game. For the next result remember that a balanced collection $\mathcal{B}$ is called minimal balanced, if it does not contain a proper balanced sub-collection. \begin{theorem} \label{thm:nprnmu} Let $\langle\, N, v\, \rangle$ be a TU game that has a non unique pre-kernel such that $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v)$, $\mathbf{y} = \nu(v)$ with $\mathbf{x},\mathbf{y} \in [\vec{\gamma}]_{v}$, and $\mathbf{x} \not= \mathbf{y}$ is satisfied. In addition, let $\langle\, N, v^{\mu}\, \rangle$ be a related game of $v$ with $\mu \not=0$ derived from $\mathbf{x}$ such that $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v^{\mu}) \cap [\vec{\gamma}]_{v^{\mu}}$, and $\mathbf{y} \not\in [\vec{\gamma}]_{v^{\mu}}$ holds. If the collection $\mathcal{S}^{v}(\mathbf{x})$ as well as its sub-collections are not balanced, \begin{enumerate} \item then $\mathbf{y} \not\in \mathcal{P\text{\itshape r}N}(v^{\mu})$. \item Moreover, if in addition $\mathbf{x} = \mathbf{y} \not\in [\vec{\gamma}]_{v^{\mu}}$, then $\mathbf{x} \not\in \mathcal{P\text{\itshape r}N}(v^{\mu})$. \end{enumerate} \end{theorem} \begin{proof} The proof starts with the first assertion. \begin{enumerate} \item By our hypothesis, $\mathbf{x}$ is a pre-kernel element of game $v$ and a related game $v^{\mu}$ that is derived from $\mathbf{x}$. There is no change in set of lexicographically smallest most effective coalitions $\mathcal{S}^{v}(\mathbf{x})$ under $v^{\mu}$ due to $\mathbf{x} \in [\vec{\gamma}]_{v^{\mu}}$, hence $\mathcal{S}^{v}(\mathbf{x}) = \mathcal{S}^{v^{\mu}}(\mathbf{x})$. Moreover, we have $\mu\cdot v^{\Delta} \in \mathbb{R}^{p^{\prime}}$. Furthermore, it holds $\mathbf{y} = \nu(v)$ by our assumption. Choose a balanced collection $\mathcal{B}$ that contains $\mathcal{S}^{v}(\mathbf{x})$ such that $\mathcal{B}$ is minimal. Then single out any $\psi \in \mathbb{R}$ such that the balanced set $\mathcal{D}^{v}(\psi,\mathbf{y})$ satisfies $\mathcal{S}^{v}(\mathbf{x}) \subseteq \mathcal{B} \subseteq \mathcal{D}^{v}(\psi,\mathbf{y}) \not=\emptyset$. Now choose $\epsilon > 0$ such that $\mathcal{D}^{v}(\psi,\mathbf{y}) = \mathcal{D}^{v}(\psi - 2\,\epsilon,\mathbf{y})$ is given. The set $\mathcal{D}^{v}(\psi - 2\,\epsilon,\mathbf{y})$ is balanced as well. Observe that due to $\mathbf{x} \in [\vec{\gamma}]_{v^{\mu}}$ we get $\mu\cdot v^{\Delta}(S) \le \epsilon $ for all $S \subset N$. However, it exists some coalitions $S \in \mathcal{S}^{v}(\mathbf{x})$ such that $e^{v}(S,\mathbf{y}) - \epsilon \not\le e^{v}(S,\mathbf{y}) + \mu\cdot v^{\Delta}(S)$ holds. Let $c \in [-\epsilon,\epsilon]$, now as $\lim_{c \uparrow 0}\, \mathcal{D}^{v^{\mu}}(\psi + c,\mathbf{y}) = \mathcal{D}^{v^{\mu}}(\psi,\mathbf{y})$ we have $\mathcal{D}^{v^{\mu}}(\psi,\mathbf{y}) \subseteq \mathcal{D}^{v}(\psi,\mathbf{y})$. Furthermore, we draw the conclusion that $\mathcal{S}^{v}(\mathbf{x}) \not\subseteq \mathcal{D}^{v^{\mu}}(\psi, \mathbf{y})$ is given due to $\mathcal{S}^{v}(\mathbf{x}) = \mathcal{S}^{v}(\mathbf{y}) \not= \mathcal{S}^{v^{\mu}}(\mathbf{y})$. Therefore, we obtain $\mathcal{D}^{v^{\mu}}(\psi,\mathbf{y}) \subset \mathcal {B} \subseteq \mathcal{D}^{v}(\psi - 2\,\epsilon,\mathbf{y})$. To see this, assume that $\mathcal{D}^{v^{\mu}}(\psi, \mathbf{y})$ is balanced, then we get $\mathcal{B} \subseteq \mathcal{D}^{v^{\mu}}(\psi, \mathbf{y})$, since $\mathcal{B}$ is minimal balanced. This implies $\mathcal{S}^{v}(\mathbf{x}) \subseteq \mathcal{D}^{v^{\mu}}(\psi, \mathbf{y})$. However, this contradicts $\mathcal{S}^{v}(\mathbf{x}) \not\subseteq \mathcal{D}^{v^{\mu}}(\psi, \mathbf{y})$. We conclude that $\mathcal{D}^{v^{\mu}}(\psi,\mathbf{y}) \subset \mathcal {B}$ must hold, but then the set $\mathcal{D}^{v^{\mu}}(\psi,\mathbf{y})$ can not be balanced. Hence, $\mathbf{y} \not\in \mathcal{P\text{\itshape r}N}(v^{\mu})$. \item Finally, if $\mathbf{x}=\mathbf{y}$, then $\mathbf{x}$ is the pre-nucleolus of game $v$, but it does not belong anymore to payoff equivalence class $[\vec{\gamma}]$ under $v^{\mu}$, that is, $[\vec{\gamma}]$ has shrunk. Therefore, $\mathcal{S}^{v}(\mathbf{x}) \not= \mathcal{S}^{v^{\mu}}(\mathbf{x}) $. Define from the set $\mathcal{S}^{v}(\mathbf{x})$ a minimal balanced collection $\mathcal{B}$ that contains $\mathcal{S}^{v}(\mathbf{x})$. In the next step, we can single out any $\psi \in \mathbb{R}$ such that the balanced set $\mathcal{D}^{v}(\psi,\mathbf{x})$ satisfies $\mathcal{S}^{v}(\mathbf{x}) \subseteq \mathcal{B} \subseteq \mathcal{D}^{v}(\psi,\mathbf{x}) \not=\emptyset$. In view of $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v^{\mu})$, it must exist an $\epsilon > 0$ within the maximum surpluses can be varied without effecting the pre-kernel property of $\mathbf{x}$ even when $\mathbf{x} \not\in [\vec{\gamma}]_{v^{\mu}}$, thus we have $\mu\cdot v^{\Delta}(S) \le \epsilon $ for all $S \subset N$. This implies that $\mathcal{D}^{v}(\psi,\mathbf{x}) \subseteq \mathcal{D}^{v}(\psi - 2\,\epsilon,\mathbf{x})$ is in force. The set $\mathcal{D}^{v}(\psi - 2\,\epsilon,\mathbf{x})$ is balanced as well. However, it exists some coalitions $S \in \mathcal{S}^{v}(\mathbf{x})$ such that $e^{v}(S,\mathbf{x}) - \epsilon \not\le e^{v}(S,\mathbf{x}) + \mu\cdot v^{\Delta}(S)$ is valid. Let $c \in [-\epsilon,\epsilon]$, now as $\lim_{c \uparrow 0}\, \mathcal{D}^{v^{\mu}}(\psi + c,\mathbf{x}) = \mathcal{D}^{v^{\mu}}(\psi,\mathbf{x})$ we have $\mathcal{D}^{v^{\mu}}(\psi,\mathbf{x}) \subseteq \mathcal{D}^{v}(\psi,\mathbf{x})$. Furthermore, we draw the conclusion that $\mathcal{S}^{v}(\mathbf{x}) \not\subseteq \mathcal{D}^{v^{\mu}}(\psi, \mathbf{x})$ is given due to $\mathcal{S}^{v}(\mathbf{x}) \not= \mathcal{S}^{v^{\mu}}(\mathbf{x})$. Therefore, we obtain $\mathcal{D}^{v^{\mu}}(\psi,\mathbf{x}) \subset \mathcal {B} \subseteq \mathcal{D}^{v}(\psi - 2\,\epsilon,\mathbf{x})$ by the same reasoning as under (1). Then the set $\mathcal{D}^{v^{\mu}}(\psi,\mathbf{x})$ can not be balanced. Hence, $\mathbf{x} \not\in \mathcal{P\text{\itshape r}N}(v^{\mu})$. \end{enumerate} \end{proof} \begin{theorem} \label{thm:nprnmu2} Let $\langle\, N, v\, \rangle$ be a TU game that has a non unique pre-kernel such that $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v) \cap [\vec{\gamma}]$, $\{\mathbf{y}\} = \mathcal{P\text{\itshape r}N}(v) \cap [\vec{\gamma}_{1}]$ is satisfied, and let $\langle\, N, v^{\mu}\, \rangle$ be a related game of $v$ with $\mu \not=0$ derived from $\mathbf{x}$ such that $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v^{\mu}) \cap [\vec{\gamma}]$ holds. If $\Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}} \backslash \mathcal{N}_{\boldsymbol{\EuScript{W}_{1}}}$, then $\mathbf{y} \not\in \mathcal{P\text{\itshape r}K}(v^{\mu})$ and a fortiori $\mathbf{y} \not\in \mathcal{P\text{\itshape r}N}(v^{\mu})$. \end{theorem} \begin{proof} From the payoff equivalence classes $[\vec{\gamma}]$ and $[\vec{\gamma}_{1}]$ we derive the corresponding matrices $\boldsymbol{\EuScript{W}} := \boldsymbol{\mathcal{V}}^{\top}\, \boldsymbol{\EuScript{U}}$ and $\boldsymbol{\EuScript{W}}_{1} := \boldsymbol{\mathcal{V}}_{1}^{\top}\, \boldsymbol{\EuScript{U}}$, respectively. By assumption, it is $\Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}} \backslash \mathcal{N}_{\boldsymbol{\EuScript{W}_{1}}}$ satisfied. From this argument, we can express the vector of unbalanced excesses $\vec{\xi}^{\,v^{\mu}}$ at $\mathbf{y}$ by \begin{equation*} \vec{\xi}^{\,v^{\mu}}=\boldsymbol{\mathcal{V}}_{1}^{\top}\,\overline{e}^{\mu}= \boldsymbol{\mathcal{V}}_{1}^{\top}\,(v^{\mu}-\overline{\mathbf{y}})=\boldsymbol{\mathcal{V}}_{1}^{\top}\,( v -\overline{\mathbf{y}} + \mu \cdot v^{\Delta}) = \vec{\xi}^{\,v} + \mu \cdot \boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta} = \mu \cdot \boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta}\not= \mathbf{0}. \end{equation*} Observe that $\vec{\xi}^{\,v} = \boldsymbol{\mathcal{V}}_{1}^{\top}\,(v-\overline{\mathbf{y}}) = \mathbf{0}$, since vector $\mathbf{y} \in [\vec{\gamma}_{1}]$ is a pre-kernel element of game $v$. However, due to $\Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}} \backslash \mathcal{N}_{\boldsymbol{\EuScript{W}_{1}}}$, we obtain $\boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta} \not= \mathbf{0}$, it follows that $\mathbf{y} \not\in \mathcal{P\text{\itshape r}K}(v^{\mu})$. The conclusion follows that $\mathbf{y} \not\in \mathcal{P\text{\itshape r}N}(v^{\mu})$ must hold. \end{proof} \begin{theorem} \label{thm:nprnmu3} Let $\langle\, N, v\, \rangle$ be a TU game that has a non unique pre-kernel such that $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v)\backslash \mathcal{P\text{\itshape r}N}(v)$ and $\mathbf{x} \in [\vec{\gamma}]$. If $\langle\, N, v^{\mu}\, \rangle$ is a related game of $v$ with $\mu \not=0$ derived from $\mathbf{x}$ such that $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v^{\mu}) \cap [\vec{\gamma}]$ holds, then $\mathbf{x} \not\in \mathcal{P\text{\itshape r}N}(v^{\mu})$. \end{theorem} \begin{proof} According to our assumption $\mathbf{x}$ is not the pre-nucleolus of game $v$, this implies that there exists some $\psi \in \mathbb{R}$ such that $\mathcal{D}^{v}(\psi,\mathbf{x}) \not= \emptyset$ is not balanced. Recall that the set of lexicographically smallest most effective coalitions $\mathcal{S}^{v}(\mathbf{x})$ has not changed under $v^{\mu}$, since $\mathbf{x}$ is a pre-kernel element of game $v^{\mu}$ which still belongs to the payoff equivalence class $[\vec{\gamma}]$. Then exists a bound $\epsilon > 0$ within the maximum surpluses can be varied without effecting the pre-kernel property of $\mathbf{x}$. Thus, we get $\mathcal{D}^{v}(\psi,\mathbf{x}) = \mathcal{D}^{v}(\psi - 2\,\epsilon,\mathbf{x}) \not= \emptyset$ is satisfied. Then $e^{v}(S,\mathbf{x}) - \epsilon \le e^{v}(S,\mathbf{x}) + \mu\cdot v^{\Delta}(S) \le e^{v}(S,\mathbf{x}) + \epsilon$ for all $S \subseteq N$, therefore, this implies $\mathcal{D}^{v^{\mu}}(\psi - \epsilon,\mathbf{x}) = \mathcal{D}^{v}(\psi,\mathbf{x})$. The set $\mathcal{D}^{v^{\mu}}(\psi - \epsilon,\mathbf{x})$ is not balanced, we conclude that $\mathbf{x} \not\in \mathcal{P\text{\itshape r}N}(v^{\mu})$. \end{proof} \begin{theorem} \label{thm:siva3} Assume that the payoff equivalence class $[\vec{\gamma}]$ induced from TU game $\langle\, N, v\, \rangle$ has non-empty interior. In addition, assume that the pre-kernel of game $\langle\, N, v\, \rangle$ constitutes a line segment such that $\mathbf{x} \in \mathcal{P\text{\itshape r}N}(v) \cap \partial\overline{[\vec{\gamma}]}$, $\mathcal{P\text{\itshape r}K}(v) \cap \overline{[\vec{\gamma}_{1}]}$, and $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v^{\mu}) \cap [\vec{\gamma}]$ is satisfied, then the pre-kernel $\mathcal{P\text{\itshape r}K}(v^{\mu})$ of a related TU game $\langle\, N, v^{\mu}\, \rangle$ with $\mu \not=0$ derived from $\mathbf{x}$ is at least disconnected, otherwise unique. \end{theorem} \begin{proof} In the fist step, we have simply to establish that for game $v^{\mu}$ the pre-imputations lying on the part of the line segment included in payoff equivalence class $[\vec{\gamma}_{1}]$ under game $v$ will loose their pre-kernel properties due to the change in the game parameter. In the second step, we have to show that the pre-nucleolus $\mathbf{x}$ under game $v$ is also the pre-nucleolus of the related game $v^{\mu}$. \begin{enumerate} \item First notice that the payoff equivalence class $[\vec{\gamma}]$ has full dimension in accordance with its non-empty interior condition. This implies that the vector $\mathbf{x}$ must be the sole pre-kernel element in $\overline{[\vec{\gamma}]}$ (cf. with the proof of Theorem 7.8.1 in~\citet{mei:13}). By our hypothesis, it is even a boundary point of the payoff equivalence class under game $v$. Moreover, it must hold $[\vec{\gamma}] \nsim [\vec{\gamma}_{1}]$, since the rank of the induced matrix $\mathbf{E}^{\top}$ is $n$, and that of $\mathbf{E}^{\top}_{1}$ is $n-1$, therefore, we have $E^{\top}_{1} \not= E^{\top}\,X$ for all $X \in \text{GL}^{+}(n)$. In the next step, we select an arbitrary pre-kernel element from $\mathcal{P\text{\itshape r}K}(v) \cap \overline{[\vec{\gamma}_{1}]}$, say $\mathbf{y}$. By hypothesis, there exists a related game $v^{\mu}$ of $v$ such that $\mathbf{x} \in \mathcal{P\text{\itshape r}K}(v^{\mu}) \cap [\vec{\gamma}]$ holds, that is, there is no change in matrix $\mathbf{E}$ and vector $\vec{\alpha}$ implying $h^{v^{\mu}}(\mathbf{x})=h_{\gamma}^{v^{\mu}}(\mathbf{x})=0$. This implies that for game $v^{\mu}$ the payoff equivalence class $[\vec{\gamma}]$ has been enlarged in such a way that we can inscribe an ellipsoid with maximum volume $\varepsilon := \{\mathbf{y}^{\prime}\, \arrowvert h^{v^{\mu}}_{\gamma}(\mathbf{y}^{\prime}) \le \bar{c} \}$, whereas $h_{\gamma}^{v^{\mu}}$ is of type~\eqref{eq:objf2} and $\bar{c} > 0$ (cf. Lemma~\ref{lem:repl_prk}.). It should be obvious that element $\mathbf{x}$ is an interior point of $\varepsilon$, since $\mathbf{x} = M(h_{\gamma}^{v^{\mu}}) \subset \varepsilon \subset [\vec{\gamma}]$. We single out a boundary point $\mathbf{x}^{\prime}$ in $\partial\overline{[\vec{\gamma}]}$ under game $v^{\mu}$ which was a pre-kernel element under game $v$, and satisfying after the parameter change the following properties: $\mathbf{x}^{\prime} \in \partial\overline{[\vec{\gamma}]} \cap \overline{[\vec{\gamma}_{1}]}$ with $\mathbf{x}^{\prime}=\mathbf{x} + \mathbf{z}$, and $\mathbf{z} \not= \mathbf{0}$. This is possible due to the fact that the equivalence class $[\vec{\gamma}]$ has been enlarged at the expense of equivalence class $[\vec{\gamma}_{1}]$, which has shrunk or shifted by the change in the game parameter. Observe now that two cases may happen, that is, either $\mathbf{x}^{\prime} \in \varepsilon$ or $\mathbf{x}^{\prime} \notin \varepsilon$. In the former case, we have $h_{\gamma}^{v^{\mu}}(\mathbf{x}^{\prime}) = h^{v^{\mu}}(\mathbf{x}^{\prime}) = h_{\gamma_{1}}^{v^{\mu}}(\mathbf{x}^{\prime}) = \bar{c} > 0$, and in the latter case, we have $h_{\gamma}^{v^{\mu}}(\mathbf{x}^{\prime}) = h^{v^{\mu}}(\mathbf{x}^{\prime}) = h_{\gamma_{1}}^{v^{\mu}}(\mathbf{x}^{\prime}) > \bar{c} > 0 = h^{v}(\mathbf{x}^{\prime}) = h_{\gamma_{1}}^{v}(\mathbf{x}^{\prime})$. From $h_{\gamma_{1}}^{v^{\mu}}(\mathbf{x}^{\prime}) > 0$, and notice that the vector of unbalanced excesses at $\mathbf{x}^{\prime}$ is denoted as $\vec{\xi}^{\,v^{\mu}}$, we derive the following relationship \begin{equation*} h_{\gamma_{1}}^{v^{\mu}}(\mathbf{x}^{\prime}) = \Arrowvert\,\vec{\xi}^{\,v^{\mu}} \,\Arrowvert^{2} = \Arrowvert\, \vec{\xi}^{\,v} + \mu \cdot \boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta}\,\Arrowvert^{2} = \Arrowvert\,\mu \cdot \boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta} \,\Arrowvert^{2} = \mu^{2} \cdot \Arrowvert\, \boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta} \,\Arrowvert^{2} > 0, \end{equation*} with $\mu \not=0$. Thus, we have $\boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta} \not= \mathbf{0}$, and therefore $\Delta \in \mathcal{N}_{\boldsymbol{\EuScript{W}}} \backslash \mathcal{N}_{\boldsymbol{\EuScript{W}_{1}}}$. Observe that $\vec{\xi}^{\,v} = \boldsymbol{\mathcal{V}}_{1}^{\top}\,(v-\overline{\mathbf{x}^{\prime}}) = \mathbf{0}$, since vector $\mathbf{x}^{\prime} \in \overline{[\vec{\gamma}_{1}]}$ is a pre-kernel element of game $v$. Take the vector $\mathbf{y} \in [\vec{\gamma}_{1}]$ from above that was on the line segment as vector $\mathbf{x}^{\prime}$ under game $v$ which constituted a part of the pre-kernel of game $v$, we conclude that $\mathbf{y} \not\in \mathcal{P\text{\itshape r}K}(v^{\mu})$ in accordance with $\boldsymbol{\mathcal{V}}_{1}^{\top}\,v^{\Delta} \not= \mathbf{0}$. \item By our hypothesis, $\mathbf{x}$ is the pre-nucleolus of game $v$, and an interior point of equivalence class $[\vec{\gamma}]$ of the related game $v^{\mu}$. Using a similar argument as under (1) we can inscribe an ellipsoid with maximum volume $\varepsilon$, whereas $h_{\gamma}^{v^{\mu}}$ is of type~\eqref{eq:objf2} and $\bar{c} > 0$. In view of the assumption that $\mathbf{x}$ is also pre-kernel element of game $v^{\mu}$, we can draw the conclusion that the set of lexicographically smallest most effective coalitions $\mathcal{S}(\mathbf{x})$ has not changed under $v^{\mu}$. But then, we have $\mu\cdot v^{\Delta} \in [-\mathsf{C},\mathsf{C}]^{p^{\prime}}$. In addition, there exists a $\bar{\psi} \ge \psi^{*}$ s.t. $\mathcal{S}(\mathbf{x}) \subseteq \mathcal{D}^{v}(\bar{\psi},\mathbf{x})$, that is, it satisfies Property I of~\citet{kohlb:71}. Moreover, matrix $\mathbf{E}^{\top}$ induced from $\mathcal{S}(\mathbf{x})$ has full rank, therefore, the column vectors of matrix $\mathbf{E}^{\top}$ are a spanning system of $\mathbb{R}^{n}$. Hence, we get $span\,\{\mathbf{1}_{S}\,\arrowvert\, S \in \mathcal{S}(\mathbf{x}) \} = \mathbb{R}^{n}$ as well, which implies that matrix $[\mathbf{1}_{S}]_{S \in \mathcal{S}(\mathbf{x})}$ has rank $n$, the collection $\mathcal{S}(\mathbf{x})$ must be balanced. In accordance with vector $\mathbf{x}$ as the pre-nucleolus of game $v$, we can choose the largest $\psi \in \mathbb{R}$ s.t. $\emptyset \not= \mathcal{D}^{v}(\psi,\mathbf{x}) \subseteq \mathcal{S}(\mathbf{x})$ is valid, which is a balanced set. Since $\mathsf{C} > 0$, the set $\mathcal{D}^{v}(\psi - 2\,\mathsf{C},\mathbf{x}) \not=\emptyset$ is balanced as well. Now observe that $e^{v}(S,\mathbf{x}) -\mathsf{C} \le e^{v}(S,\mathbf{x}) + \mu\cdot v^{\Delta}(S) \le e^{v}(S,\mathbf{x}) + \mathsf{C}$ for all $S \subseteq N$. This implies $\mathcal{D}^{v}(\psi,\mathbf{x}) \subseteq \mathcal{S}(\mathbf{x}) \subseteq \mathcal{D}^{v^{\mu}}(\psi - \mathsf{C},\mathbf{x}) \subseteq \mathcal{D}^{v}(\psi - 2\,\mathsf{C},\mathbf{x})$, hence, $ \mathcal{D}^{v^{\mu}}(\psi - \mathsf{C},\mathbf{x})$ is balanced. To conclude, let $c \in [-\mathsf{C},\mathsf{C}]$, and from the observation $\lim_{c \uparrow 0}\, \mathcal{D}^{v^{\mu}}(\psi + c,\mathbf{x}) = \mathcal{D}^{v^{\mu}}(\psi,\mathbf{x}) \supseteq \mathcal{D}^{v}(\psi,\mathbf{x}) $, we draw the implication $\mathbf{x} = \nu(N,v^{\,\mu})$. \end{enumerate} Finally, recall that the vector $\mathbf{x}$ is also the unique minimizer of function $h_{\gamma}^{v^{\mu}}$, which is an interior point of payoff equivalence class $[\vec{\gamma}]$, therefore the pre-kernel of the related game $v^{\mu}$ can not be connected. Otherwise the pre-kernel of the game consists of a single point. \end{proof} \begin{corollary} \label{cor:prnmu3} Let $\langle\, N, v\, \rangle$ be a TU game that has a non single-valued pre-kernel such that $\mathbf{x} \in \mathcal{P\text{\itshape r}N}(v) \cap \partial\overline{[\vec{\gamma}]}$ and let $\langle\, N, v^{\mu}\, \rangle$ be a related game of $v$ derived from $\mathbf{x}$, whereas $\mathbf{x} \in int\,[\vec{\gamma}]_{v^{\mu}}$, then $\mathbf{x} = \nu(N,v^{\,\mu})$. \end{corollary} \section{Concluding Remarks} \label{sec:rem} In this paper we have established that the set of related games derived from a default game with an unique pre-kernel must also possess this pre-kernel element as its single pre-kernel point. Moreover, we have shown that the pre-kernel correspondence in the game space restricted to the convex hull comprising the default and related games is single-valued and constant, and therefore continuous. Although, we could provide some sufficient conditions under which the pre-nucleolus of a default game -- whereas the pre-kernel constitutes a line segment -- induces at least a disconnected pre-kernel for the set of related games, it is, however, still an open question if it is possible to obtain from a game with a non-unique pre-kernel some related games that have an unique pre-kernel. In this respect, the knowledge of more general conditions that preserve the pre-nucleolus property is of particular interest. Even though, we have not provided a new set of game classes with a sole pre-kernel element, we nevertheless think that the presented approach is also very useful to bring forward our knowledge about the classes of transferable utility games where the pre-kernel coalesces with the pre-nucleolus. To answer this question, one need just to select boundary points of the convex cone of the class of convex games to enlarge the convex cone within the game space to identify game classes that allow for a singleton pre-kernel. \pagestyle{scrheadings} \chead{\empty} \footnotesize
\section{Introduction} The \textit{fractal galaxy distribution hypothesis} is an approach for the description of the large-scale structure of the Universe which assumes that this distribution is formed by a fractal system. This approach characterizes the system by means of its key feature, the \textit{fractal dimension} $D$, which is basically a way of quantifying the irregularity of the distribution \citep{mandelbrot83}. In the context of the large-scale structure of the Universe, $D$ essentially measures galactic clustering sparsity or, complementary, the dominance of voids. Values of $D$ smaller than 3, the topological dimension where the fractal structure is embedded, means irregular patterns in the structure. The smaller the values of $D$, the more sparse, or void dominated, is the galactic clustering. The determination of the possible fractal properties of a given galaxy distribution with data gathered from a galactic survey is the subject of \textit{fractal analysis}, where standard techniques of fractal geometry are applied to a given galaxy distribution dataset in order to calculate the fractal dimension \citep{pietronero87}. There are two ways in which this analysis can be performed: the single fractal approach or the multifractal one \citep{coleman92,ribeiro98}. Describing a fractal system by means of a \textit{single fractal dimension} is the simplest approach, since it basically reduces the quantification of the irregular patterns within the system by means of a unique value for $D$. The single fractal approach is also capable of describing more complex distributions, since a system can exhibit different values of $D$ at different distance ranges, that is, different scaling ranges may possess different single values for $D$. This situation means a succession of single fractal systems at different data ranges \citep{sylos98}. Differently from the single fractal approach, the \textit{multifractal} one characterizes the fractal system by several fractal dimensions in the same scaling range, that is, a whole spectrum of dimensions whose maximum value corresponds to the single fractal dimension the structure would have if it were treated as a single fractal. The multifractal approach is applied when quantities like galactic luminosity or mass have a distribution, that is, they range between very different values. Hence, this is a generalization of $D$ that includes such distributions and the maximum value of the multifractal spectrum corresponds to the single fractal dimension the system would have if the studied quantity did not range, that is, as if it did not exhibit a multifractal pattern \citep{gabrielli2005}. The fractal analysis of galaxy surveys whose redshift measurements are bigger than $z\approx0.1-0.3$ cannot be done without considering relativistic effects. This is because $D$ is determined from plots of volume density vs.\ distance, which means volume-limited samples, and when one determines galaxy distances at those ranges relativistic effects become strong enough so that one is faced not with one distance value, but several different ones whose difference increases as $z$ increases. That happens because relativistic cosmological models do not possess a single distance definition \citep{ellis71,ellis2007, holanda2010} and, therefore, different distance values can be assigned for a galaxy with an empirically determined value of $z$, and the range where those differences become significant also depends on the chosen cosmological model. As a result, relativistic effects cannot be neglected when performing fractal analysis of galaxy distributions whose redshift data are gathered beyond those redshift ranges \citep{juracy2008}. In addition, as a consequence of relativistic effects even spatially homogeneous cosmological models like the standard Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) will present observational inhomogeneities because observations are performed along the past light cone and at high redshift ranges distance measures required in the determination of volume densities will necessarily depart the local spatially homogeneous hypersurfaces of these models \citep{ribeiro92b, ribeiro95,ribeiro2001b}. \textit{Fractal cosmology}, that is, modelling the large-scale structure of the Universe by assuming that the galaxy distribution forms a fractal system, is an old subject previously known in the literature as \textit{hierarchical cosmology}. Discussions regarding the possible hierarchical structuring of the Universe go as far back as the beginning of the 20th century \citep{charlier08,charlier22,einstein22,selety22, amoroso29}, and attempts to theoretically describe and empirically characterize this hierarchical galaxy structure followed suit in later decades \citep{carpenter38,vaucouleurs60,vaucouleurs70,wertz70,wertz71, haggerty72}. The appearance of fractal geometry in the 1980s showed that the hierarchical galaxy structure discussed in these earlier studies has essentially the same features of galaxy distribution models that considered this distribution as a fractal system \citep{mandelbrot83, pietronero87}. In fact, the basic theoretical framework of these early hierarchical models turns out to lead to the same expressions as the ones originated from a fractal galaxy distribution model \citep{ribeiro94, ribeiro98}. Newtonian cosmology was used in earlier hierarchical cosmology models \citep{wertz70,wertz71,haggerty72}, as well as in more recent fractal cosmology ones \citep{elcio99,elcio2004}. Relativistic cosmological models, both spatially homogeneous and inhomogeneous, which considered a fractal system embedded in a 4-dimensional spacetime along the observer's past light cone, producing then relativistic fractal cosmologies, were developed later \citep{ribeiro92a,ribeiro93, emr2001,ribeiro2001a,ribeiro2001b,ribeiro2005}. More recently, several authors discussed relativistic cosmological models with fractal features either theoretically \citep{mureika2004,mureika2007,sylos2011,felipe2013, hossie2018,sadri2018,cosmai2019,jawad2019} or by attempting to apply fractal features to different observational scenarios using Newtonian or relativistic models \citep{jones88,martinez90,pan2000,gaite2007, stahl2016,raj2019,bruno2020}. Of particular interest to this paper is the study presented by \citet[from now on CS2015]{gabriela}, which stands on the middle ground between the two previously mentioned types of works in the sense that it developed both theoretical tools capable of characterizing a fractal system at high redshift by fully considering relativistic effects, and also performed data analysis of high redshift galaxy data in order to actually measure the single fractal dimension of the structure at different scale ranges having deep redshift values, that is, far from our present spatial foliation of spacetime as described in a 3+1 formalism of general relativity. This study concluded that for $z\lesssim1.3-1.9$ the average single fractal dimension that considered all distance definitions used in the paper resulted in $D=1.4^{\ssty +0.7}_{\ssty-0.6}$, whereas for redshift values higher than this approximate threshold they obtained $D=0.5^{\ssty +1.2}_{\ssty-0.4}$. This paper aims at applying the fractal analysis methodology used by CS2015 to a different galaxy sample, but with two major differences. First, the fractal analysis is applied to the UltraVISTA galaxy survey, which has measured redshift values of about 220k galaxies, a considerably larger galaxy sample than the FORS Deep Field dataset of 5.5k galaxies used in CS2015. Second, samples are obtained directly from measured redshift data instead of the indirect luminosity function methodology employed by CS2015. A graph of absolute magnitudes in terms of redshifts shows that the UltraVISTA galaxies scale with redshift bins, providing then a volume-limited subsample appropriate for a fractal analysis. Nevertheless the whole survey data was also subject to fractal analysis for comparison purposes. As a consequence of these different approaches, the results obtained here are clearly improved once compared to the ones achieved by CS2015, namely, a better defined threshold for low and high scaling ranges, smaller uncertainties and results more in line with each other considering all cosmological distance definitions used in both studies. Our calculations showed that summing up all results and uncertainties obtained with the employed distance definitions we concluded that both the subsample and entire survey data of the UltraVISTA catalog can be well characterized as a fractal galaxy distribution system possessing two consecutive scaling ranges with the following single fractal dimensions. The subsample resulted in $D=\left(1.58\pm0.20 \right)$ for $z<1$, and $D=\left(0.59\pm0.28\right)$ for $1\le z\le4$, whereas the complete sample yielded $D=\left(1.63\pm0.20\right)$ for $0.1<z<1$, and $D=\left(0.52\pm0.29\right)$ for $1\le z\le 6$. The plan of the paper is as follows. Section \ref{fractal-cos} develops standard tools of fractal geometry necessary for modelling the large-scale galaxy distribution as a fractal system in both Newtonian and relativistic frameworks, comprising review material extensively discussed and developed elsewhere plus a few additional remarks. This is included here for a self-contained presentation. Section \ref{fractal-analysis} describes the observational details of the UltraVISTA galaxy survey relevant to this work, and discusses the data handling required for the application of fractal tools to this specific dataset. Section \ref{results} presents the results of the fractal analysis of the UltraVISTA galaxy distribution. Discussions and conclusions are the subject of Section~\ref{conclusion}. \section{Fractal cosmology}\lb{fractal-cos} It has been known since Mandelbrot's (\citeyear{mandelbrot83}) original studies that fractal systems are characterized by power-laws. In fact, the early hierarchic cosmological models were connected to fractals exactly because galaxy density distribution data showed power-law features \citep[e.g.,][]{vaucouleurs70}. Hence, the early definitions and concepts used in the hierarchical cosmologies are the appropriate ones to start with. As it turns out these quantities form a set of very simple concepts and definitions able to characterize single-fractal-dimension galaxy distributions. Moreover, they are easily and straightforwardly adapted to a relativistic setting, albeit with some limitations. \subsection{Newtonian hierarchical (fractal) cosmology} Let $\Vobs$ be the \textit{observational volume} defined by the expression below, \be \Vobs=\frac{4}{3} \pi {(\dobs)}^3, \lb{vobs} \ee where $\dobs$ is the \textit{observational distance}. The \textit{observed volume density} $\gobs^\ast$ is defined as follows, \be \gobs^\ast=\frac{\Nobs}{\Vobs}, \lb{gobs-ast} \ee where $\Nobs$ is the \textit{observed cumulative number counts} of cosmological sources, that is, galaxies. Clearly $\gobs^\ast$ gives the number of sources per unit of observational volume out to a distance $\dobs$. The \textit{key hypothesis} underlining the hierarchical (fractal) galaxy distribution relates the cumulative number counts of observed cosmological sources and the observational distance by a phenomenological equation called the \textit{number-distance relation} \citep{wertz70,pietronero87}, whose expression yields, \be \Nobs=B \, {(\dobs)}^D, \lb{Nobs} \ee where $B$ is a positive constant and $D$ is the fractal dimension. This expression forms the basic hypothesis of the \textit{Pietronero-Wertz hierarchical (fractal) cosmology} \citep[and references therein]{ribeiro94,ribeiro98}. Note that since $\Nobs$ is a cumulative quantity, if beyond a certain distance there are no longer galaxies then $\Nobs$ no longer increases with $\dobs$. If instead objects are still detected and counted then it continues to increase. Observational biases may possibly affect its rate of growth, leading to an intermittent behavior, however, $\Nobs$ must grow or remain constant, and thus the exponent in Eq.\ (\ref{Nobs}) must be positive or zero. One may also define a second density in this context, the \textit{ observed differential density} $\gobs$. Its expression may be written as follows \citep{wertz70,wertz71}, \be \gobs=\frac{1}{4 \pi {(\dobs)}^2} \frac{\df N}{\dd(\dobs)}. \lb{gobs} \ee From this definition it is clear that $\gobs$ gives the rate of growth in number counts, or more exactly in their density, as one moves along the observational distance $\dobs$. Substituting Eqs.\ (\ref{vobs}) and (\ref{Nobs}) into Eqs.\ (\ref{gobs-ast}) and (\ref{gobs}) we respectively reach at two forms of the \textit{De Vaucouleurs density power-law} \citep{pietronero87, ribeiro94}, \begin{equation} \gobs^\ast = \frac{3B}{4\pi}{(\dobs)}^{D-3}, \lb{gstar3} \end{equation} \begin{equation} \gobs = \frac{DB}{4\pi}{(\dobs)}^{D-3}. \lb{gama3} \end{equation} Thus, if the observed galaxy distribution behaves as a fractal system with $D<3$, that is, if it follows the number-distance relation (\ref{Nobs}), both observational densities above decay as power-laws. If $D=3$ the distribution is said to be \textit{observationally homogeneous}, as both densities become constant and distance independent. Note that the two power-laws above allow the empirical determination of different single fractal dimensions in two or more scaling ranges dependent on the observational distance. The ratio between the two forms of the De Vaucouleurs density power-law yields, \begin{equation} \frac{\gobs}{\gobs^\ast}=\frac{D}{3}. \label{directD} \end{equation} For an observationally homogeneous galaxy distribution this ratio must be equal to one, whereas an irregular distribution forming a single fractal system will have $0 \leq \left( \gobs \big. \big/ \gobs^\ast \right)<1$. There are two important remarks to be made regarding the densities defined by Eqs.\ (\ref{gobs-ast}) and (\ref{gobs}). First, both $\gobs^\ast$ and $\gobs$ are radial quantities and, therefore, should not be understood in statistical sense because they do not average all points against all points. Second, although both quantities are in principle equally applicable to cosmological objects, as pointed out by \citet[Secs.\ 4.2.1, 4.2.2]{gabriela} $\gobs$ is unsuitable for high redshift measures because the term $\dd\Nobs/\dd(\dobs)$ in Eq.\ (\ref{gobs}) increases, reaches a maximum and then decreases, affecting the exponent at high values of $z$ in such a way as producing spurious negative values for $D$ at such ranges. As noted above, negative values for $D$ are not possible due to the very definition of the number-distance relation (\ref{Nobs}) and, hence, $\gobs$ can only be safely used at relatively low redshift values, that is, $z\lesssim1$ or at ranges smaller than the observed galaxy distribution histogram of galaxy numbers per redshift bins reaches its maximum. So, for the reasons exposed above, from now on we shall only consider the volume density $\gobs^\ast$ in our calculations, since this density is not contaminated by spurious effects at $z>1$. \subsection{Relativistic fractal cosmology} The expressions above can be applied as such in Newtonian cosmologies, but as far as relativistic cosmological models are concerned two important conceptual issues must be considered which alter the expressions above in specific ways. First, in relativistic cosmology observations are located along the observer's past light cone. This means that even spatially homogeneous cosmologies like the FLRW will \textit{not} produce observationally constant volume densities at high redshift values because it is theoretically impossible to expect that the observed volume density will become constant even at moderate redshift values in FLRW cosmologies \citep[Sec.\ 2.1]{gabriela}. The basic point here is that observational and spatial homogeneities are different concepts in relativistic cosmology, so it is theoretically possible to have a cosmological-principle-obeying spatially homogeneous cosmological model exhibiting observational inhomogeneity, as extensively discussed elsewhere \citep{ribeiro92b,ribeiro94,ribeiro95, ribeiro2001b,ribeiro2005,juracy2008}. Second, both $\gobs^\ast$ and $\gobs$ are \textit{average} densities defined in the fractal cosmology context, and, therefore, should \textit{not} be confused with the local density appearing on the right hand side of Einstein equations. Moreover, densities in fractal cosmology are defined in terms of observational distances, which means that at high redshift $\dobs$ will have different values for each distance definition at the same redshift value $z$. In other words, as distance in relativistic cosmology is a concept not uniquely defined \citep{ellis71,ellis2007,holanda2010} we need to replace $\dobs$ for $d_i$ in the equations above, where the index will indicate the observed distance definition chosen to be calculated with specific redshift values. In this case the applicable distance definitions are the \textit{redshift distance} $\dz$, \textit{luminosity distance} $\dl$ and \textit{galaxy area distance} $\dg$, also known as \textit{transverse comoving distance}. The last two are connected by the Etherington reciprocity law \citep{etherington33,ellis2007} which reads as follows, \be \dl=(1+z)\,\dg. \lb{eth} \ee The redshift distance yields, \be \dz=\frac{c \, z}{H_0}, \lb{red} \ee where $c$ is the light speed and $H_0$ is the Hubble constant. This definition of $\dz$ is, of course, only valid in the FLRW metric. \citet{vinicius2007} and \citet{iribarrem2012a} showed that within the FLRW cosmology the densities defined by both $\dl$ and $\dz$ have empirical power-law properties, the same applying for $\dg$. Another distance measure that can be defined in this context is the \textit{angular diameter distance} $\da$, also known simply as \textit{area distance}, which is also connected to the quantities above by the reciprocity law, also known in the literature as the cosmic \textit{distance duality relation} \citep{holanda2010,holanda2011,holanda2012,zheng2020}, which reads as follows, \be \dl={(1+z)}^2\da. \lb{eth2} \ee However, \textit{densities} defined with $\da$ have the odd behavior of increasing as $z$ increases, making this distance unsuitable to use in the context of a fractal analysis of the galaxy distribution \citep{ribeiro2001b,ribeiro2005,juracy2008}. Bearing these points in mind, the expressions above become applicable to relativistic cosmology models once they are rewritten as below, \be \dobs=d_i, \lb{dists} \ee \be \Vobs=V_i=\frac{4}{3} \pi {(d_i)}^3, \lb{vi} \ee \be \Nobs=N_i=B_i \, {(d_i)}^{D_i}, \lb{Nobs_i} \ee \be \gobs^\ast=\gamma^\ast_i =\frac{N_i}{V_i}=\frac{3B_i}{4\pi} {(d_i)}^{D_i-3}, \lb{gobs-ast_i} \ee where $i=({\ssty L}$, ${\sty z}$, ${\ssty G}$) according to the distance definition used to calculate the volume density. The proportionally constant $B_i$ of the number-distance relation will, therefore, be attached to each specific distance definition, this being also true for the fractal dimension $D_i$, because $N_i$ is counted considering the limits of each distance definition. Hence, for a given $z$ each $d_i$ yields its respective $V_i$, $N_i$, $B_i$ and $D_i$, which means that all quantities become attached to a certain distance definition. As final points, first it is important to mention that although the fractal analysis discussed above can be performed using $\dl$ in any cosmological model, the same is not true for $\dz$ because its definition in Eq.\ (\ref{red}) is restricted to FLRW cosmologies. Regarding $\dg$, it has been previously shown that calculating the volume density in the Einstein-de Sitter cosmology using $\dg$ results in $\gamg^\ast=\mbox{constant}$ \citep[pp.\ 1718, 1723, 1724]{ribeiro2001b}, which seemed to indicate $\dg$ as an unsuitable distance definition to be used in fractal analysis. Nevertheless, later works showed that such behavior is cosmology dependent, not being valid in other FLRW models \citetext{\citealp{ vinicius2007}, fig.\ 7; \citealp{iribarrem2012a}, figs.\ 2-5; \citealp{gabriela}, figs.\ 3-4}. This finding justifies the inclusion of $\gamg^\ast$ in the present study. Second, the concepts above indicate that reasoning that regards the fractal approach to the galaxy distribution only at low redshift ranges are not applicable to the analysis performed in this paper. As the light cone is a relativistic concept, confusion arises if one does not acknowledge the difference between spatial and observational volume densities. It is in the latter spacetime locus that astronomy is made and where fractality in the sense of this work may be detected. Hence, observed fractal features appear only by correctly manipulating the FLRW observational quantities along the observer's past light cone at ranges where the null cone effects become relevant as far as distance definitions are concerned \citep{ribeiro95, ribeiro2001b,ribeiro2005,juracy2008}. These effects form the very core of the present analysis. \section{Fractal analysis}\lb{fractal-analysis} As this study seeks to empirically ascertain whether or not the fractal galaxy distribution hypothesis holds at very large-scales of the observed Universe, we chose to perform a fractal analysis with the data provided by the UltraVISTA galaxy survey, since it contains hundreds of thousands of galaxies with measured redshifts. Let us present next some details about this survey and how the fractal analysis was performed in their data. \subsection{The UltraVISTA galaxy survey}\lb{uvista} Our data are based on the first data release of the UltraVISTA galaxy survey, which is centered on the COSMOS field \citep{cosmos2007} with an effective area of $1.5\,\mbox{deg}^{\,2}$. Observations were made in 4 near infrared filters, $Y$, $J$, $H$ and $K_{\mathrm{S}}$, described in \citet{ultra1}. Photometric redshifts were calculated by \citet{ultra11} applying the SED fitting technique to 29 bands, the ones from UltraVISTA and a complementary set of broad and narrow bands from other surveys encompassing the ultraviolet, optical, near infrared and mid infrared regimes. The initial dataset consisted in a $K_{\mathrm{S}}$-band selected ($K_{\mathrm{S}}<24$) sample of about 220k galaxies in the redshift range of $0.1\le z\le6$. Although the sample was originally divided into quiescent and star forming galaxies, such grouping is unimportant for the purposes of this study and, hence, all galaxies of both types were included in the subsample selection and analysis performed here. Fig.\ \ref{histz} shows the UltraVISTA survey's galaxy numbers distribution in terms of redshift. Figs.\ \ref{histra} and \ref{histd} respectively show the galaxy numbers distribution in terms of right ascension and declination. \begin{figure} \includegraphics[width=\columnwidth]{histz.eps} \caption{Histogram showing the galaxy distribution numbers in terms of redshift for the UltraVISTA survey dataset studied here. This graph has $\Delta z=0.03$ as the redshift bins' size.} \label{histz} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{histra.eps} \caption{Histogram showing the galaxy distribution numbers in terms of right ascension (deg) for the UltraVISTA survey dataset studied here.} \label{histra} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{histd.eps} \caption{Histogram showing the galaxy distribution numbers in terms of declination (deg) for the UltraVISTA survey dataset studied here.} \label{histd} \end{figure} \subsection{Data selection}\lb{dataselec} The fractal analytical tools discussed above require the establishment of volume limited samples. However, galaxy surveys are limited by apparent magnitude, so some methodology is needed to ascertain that reduced subsamples follow increasing redshift bins in order to render the reduced galaxy data distributed over limited volume bins. One way of doing this is by plotting the galaxies' absolute magnitudes in terms of their respective measured redshifts and then selecting galaxies below a certain absolute magnitude threshold defined by the limiting apparent magnitude of the survey. This can be done by using the usual expression below, \be M=m-5\log \dl (z) -25, \lb{magabs1} \ee where $M$ is the absolute magnitude, $m$ is the apparent magnitude and $\dl$ is given in Mpc. Next one needs to choose the apparent magnitude threshold $m$ and its passband, as well as verifying if the resulting data is indeed, at least generally, distributed along the measured redshift bins, and then possibly establishing a redshift window for the final subsample distribution. The UltraVISTA survey furnished absolute magnitudes calculated in the $NUV$, $B$, $R$ and $J$ passbands. The $J$-band is the best choice among those for our purposes here, since this near infrared filter is less affected by dust extinction. Fig.\ \ref{hist-number-mags} shows an histogram of the UltraVISTA galaxy numbers in terms of apparent magnitudes in the $K_{\mathrm{S}}$ and $J$ passbands. Clearly the number distribution peaks at the apparent magnitude value 24 for both wavebands, a fact that led us to choose $J=24$ as our apparent magnitude threshold. Therefore, Eq.\ (\ref{magabs1}) can be rewritten according to the expression below, which provides the dividing line between the selected and unselected galaxies. \begin{figure} \includegraphics[width=\columnwidth]{hist-number_mags.eps} \caption{UltraVISTA galaxy numbers plotted in terms of apparent magnitudes in both the $J$ and $K_{\mathrm{S}}$ passbands. The number distribution for both bandwidths peaks at apparent magnitudes equal to 24 indicated by the vertical line.} \label{hist-number-mags} \end{figure} \be M_J=24-5\log \dl (z) -25. \lb{magabs2} \ee Fig.\ \ref{mvsz} shows a plot of the absolute magnitudes of 219300 UltraVISTA galaxies in the $J$-band against their respective measured redshifts. Green filled circles are those whose absolute magnitudes provided by the survey are smaller than $M_J$ in Eq.\ (\ref{magabs2}), and are then inside the reduced subsample, whereas the open gray circles are outside this threshold. We noticed that the absolute magnitude cut based on the $J$-band corresponds to a volume-limited subsample because the data generally follow the redshift bins of increasing values. In addition, we also noticed that there are few galaxies at the tail of the distribution, so our subsample suffered a further cut at $z=4$. Hence, we end up with a \textit{reduced UltraVISTA subsample} of 166566 galaxies cut by $J$-band absolute magnitudes $M_J$ and limited up to $z=4$. The remaining 52734 galaxies outside this subsample, that is, below the dividing line in Fig.\ \ref{mvsz} and having $z>4$, were disregarded. \begin{figure} \includegraphics[width=\columnwidth]{mag-vs-z.eps} \caption{Absolute magnitudes in the $J$-band vs.\ redshift for all galaxies of the UltraVISTA survey. Green filled circles are galaxies having $M_J$ smaller than the blue line cut given by Eq.\ (\ref{magabs2}), whereas open gray circles have bigger values for $M_J$.} \label{mvsz} \end{figure} \subsection{Data analysis}\lb{data-analysis} To obtain the observational distances $d_i\,(i={\ssty G}$, ${\ssty L}$, ${\sty z})$ from the calculated photometric redshift values one needs to choose a cosmological model. We assumed the FLRW cosmology with $\Omega_{m_{\ssty 0}}=0.3$, $\Omega_{\Lambda_{\ssty 0}} = 0.7$ and $H_0=70 \; \mbox{km} \; {\mbox{s}}^{-1} \; {\mbox{Mpc}}^{-1}$. The next steps were the establishment of the minimum redshift value $z_{\ssty 0}$ from where to start the analysis, the respective minimum distances $d_{i_0}=d_{i_0}(z_{\ssty 0})$, and the incremental distance interval $\Delta d_i$. The data sorting process was initiated by counting the number of observed galaxies $N_{i_{\ssty \rm 1}}$ in the first interval $d_{i_1}=d_{i_0}+\Delta d_i$ and calculating the respective volume density $\gamma_{i_1}^\ast$. This first interval defined the first bin. Then, the size of the bin was increased by $\Delta d_i$ and values for $N_{i_{\ssty \rm 2}}$ and $\gamma_{i_2}^\ast$ were obtained at the distance $d_{i_2}=d_{i_0}+2\Delta d_i$. This algorithm was repeated $n$ times until the last, and farthest, group of galaxies were included and all relevant quantities were also counted and calculated. Different bin size increments $\Delta d_i$ were tested for each distance definition in order to find out whether or not that would affect the results. This test turned out negative, which means that the obtained results are independent of bin size increment. We then chose $\Delta d_i=200$~Mpc, value which was applied to all calculations and provided in the end a very reasonable amount of data points for each quantity from where simple linear regression analyses were able to be performed. The final step was the determination of the fractal dimension itself. If the galaxy distribution really formed a fractal system, according to Eq.\ (\ref{gobs-ast_i}) the graphs of $\gamma_i^\ast$ vs.\ $d_i$ would behave as decaying power-law curves, and whose linear fit slopes in log-log plots allow for the fractal dimensions $D_i$ of the distribution to be straightforwardly determined. \section{Results}\lb{results} \subsection{Reduced subsample} Graphs for log-log values of $\gamma_i^\ast$ vs.\ $d_i$ showed that to a good approximation the reduced UltraVISTA galaxy survey subsample sorted according to the criteria set at Sec.\ \ref{dataselec} conforms with what is predicted as if the galaxy distribution does form a fractal system. Moreover, two power-law decaying regions were observed in the data, for $z<1$ and $z>1$, producing different single fractal dimensions. Figs.\ \ref{gammaLdL-r} to \ref{gammaGdG-r} present the results for all distance definitions adopted here, from where it can be concluded that for $z<1$ the fractal dimension is in the range $1.38-1.78$, whereas for $1\le z\le4$ the resulting range is significantly smaller, $0.31-0.87$. Table \ref{tab1} collects all results. \begin{figure} \includegraphics[width=\columnwidth]{dr1dlagosto2020.eps} \caption{Graph showing the log-log results for $\gaml^\ast$ vs.\ $\dl$ obtained with the \textit{reduced} UltraVISTA galaxy redshift survey dataset (see Sec.\ \ref{dataselec}). The dotted line is the straight line fit for galaxies having $z<1$, whereas the dashed line is for those with $z>1$. The error area is in gray. According to Eq.\ (\ref{gobs-ast_i}) the fractal dimensions obtained from these data are $D_{\ssty L}=(1.40\pm0.02)$ for $z<1$ and $D_{\ssty L}=(0.32 \pm0.01)$ for $1\le z\le 4$.} \label{gammaLdL-r} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{dr1dzagosto2020.eps} \caption{Graph showing the log-log results for $\gamz^\ast$ vs.\ $\dz$ obtained with the \textit{reduced} UltraVISTA galaxy redshift survey dataset (see Sec.\ \ref{dataselec}). The dotted line is the straight line fit for galaxies having $z<1$, whereas the dashed line is for those with $z>1$. The error area is in gray. According to Eq.\ (\ref{gobs-ast_i}) the fractal dimensions obtained from these data are $D_{\ssty z}=(1.61\pm0.02)$ for $z<1$ and $D_{\ssty z}=(0.38 \pm0.02)$ for $1\le z\le 4$.} \label{gammaZdZ-r} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{dr1dgagosto2020.eps} \caption{Graph showing the log-log results for $\gamg^\ast$ vs.\ $\dg$ obtained with the \textit{reduced} UltraVISTA galaxy redshift survey dataset (see Sec.\ \ref{dataselec}). The dotted line is the straight line fit for galaxies having $z<1$, whereas the dashed line is for those with $z>1$. The error area is in gray. According to Eq.\ (\ref{gobs-ast_i}) the fractal dimensions obtained from these data are $D_{\ssty G}=(1.75\pm0.03)$ for $z<1$ and $D_{\ssty G}=(0.81 \pm0.06)$ for $1\le z\le 4$.} \label{gammaGdG-r} \end{figure} \begin{table} \caption{Results in two redshift scales of the UltraVISTA galaxy survey fractal analysis in the \textit{reduced} subsample (see Sec.\ \ref{dataselec}). The single fractal dimensions $D_{\ssty L}$, $D_{\ssty z}$ and $D_{\ssty G}$ were obtained from this galaxy distribution respectively using the luminosity distance $\dl$, redshift distance $\dz$ and galaxy area distance (transverse comoving distance) $\dg$.} \label{tab1} \begin{center} \begin{tabular}{cccc} \hline & $D_{\ssty L}$ & $D_{\ssty z}$ & $D_{\ssty G}$\\ \hline $z<1$ & $1.40\pm0.02$ & $1.61\pm0.02$ & $1.75\pm0.03$\\ $1\le z\le 4$ & $0.32\pm0.01$ & $0.38\pm0.02$ & $0.81\pm0.06$\\ \hline \end{tabular} \end{center} \end{table} In summary, the results above indicate that the UltraVISTA galaxy survey provided a galaxy distribution subsample dataset that can be described as a fractal system with the following two consecutive and approximate single fractal dimensions values: $D\,(z<1)=(1.58 \pm0.20)$ and $D\,(1\le z\le4)=(0.59\pm0.28)$. The possible reasons as why the fractal dimension is so much reduced at the deepest range $z>1$ will be discussed below. \subsection{Complete (unselected) survey data} The fractal analysis of the UltraVISTA galaxy subsample, as defined in Sec.\ \ref{dataselec}, were based in the plot of absolute magnitudes in the $J$-band in terms of the measured redshifts of the galaxies shown in Fig.\ \ref{mvsz}. However, this plot also shows that even the galaxies outside the absolute magnitude cut are also distributed along increasing redshift bins, a fact that suggests that the whole sample may also be volume-limited. Hence, it is interesting to apply the same fractal methodology developed above not only to the subsample, but also to the whole survey data in order to compare the results. Figs.\ \ref{gammaLdL}-\ref{gammaGdG} show the results of the fractal analysis of the complete UltraVISTA survey data. It is clear that the galaxy distribution also presents fractal features in two regions below and above the redshift value $z=1$. The corresponding single fractal dimensions for each distance definitions adopted here were found to lie in the range $1.42-1.83$ for $z<1$, whereas for $1\le z\le6$ the dimension is significantly smaller, in the range $0.23-0.81$. Table \ref{tab2} collects these results and a comparison with the ones for the reduced subsample presented in Table \ref{tab1} indicated results very much alike, although their respective uncertainties do not overlap. \begin{figure} \includegraphics[width=\columnwidth]{dr1dl-07-05-2020.eps} \caption{Graph showing the log-log results for $\gaml^\ast$ vs.\ $\dl$ obtained with the complete UltraVISTA galaxy survey dataset. The dotted line is the straight line fit for galaxies having $z<1$, whereas the dashed line is for those with $z>1$. The error area is in gray. According to Eq.\ (\ref{gobs-ast_i}) the fractal dimensions obtained from these data are $D_{\ssty L}=(1.44\pm0.02)$ for $z<1$ and $D_{\ssty L}=(0.24 \pm0.01)$ for $1\le z\le 6$.} \label{gammaLdL} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{dr1dz-07-05-2020.eps} \caption{Graph showing the log-log results for $\gamz^\ast$ vs.\ $\dz$ obtained with the complete UltraVISTA galaxy survey dataset. The dotted line is the straight line fit for galaxies having $z<1$, whereas the dashed line is for those with $z>1$. The error area is in gray. According to Eq.\ (\ref{gobs-ast_i}) the fractal dimensions obtained from these data are $D_{\ssty z}=(1.65\pm0.02)$ for $z<1$ and $D_{\ssty z}=(0.30 \pm0.02)$ for $1\le z\le 6$.} \label{gammaZdZ} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{dr1dg-07-05-2020.eps} \caption{Graph showing the log-log results for $\gamg^\ast$ vs.\ $\dg$ obtained with the complete UltraVISTA galaxy survey dataset. The dotted line is the straight line fit for galaxies having $z<1$, whereas the dashed line is for those with $z>1$. The error area is in gray. According to Eq.\ (\ref{gobs-ast_i}) the fractal dimensions obtained from these data are $D_{\ssty G}=(1.80\pm0.03)$ for $z<1$ and $D_{\ssty G}=(0.75 \pm0.06)$ for $1\le z\le 6$.} \label{gammaGdG} \end{figure} So, it seems that the whole UltraVISTA galaxy survey can also be described as a fractal system having two consecutive single fractal dimensions: $D\,(z<1)=(1.63\pm0.20)$ and $D\,(1\le z\le6) =(0.52\pm0.29)$. \begin{table} \caption{Results in two redshift scales of the UltraVISTA galaxy survey fractal analysis in the \textit{complete} sample. The single fractal dimensions $D_{\ssty L}$, $D_{\ssty z}$ and $D_{\ssty G}$ were obtained from this galaxy distribution respectively using the luminosity distance $\dl$, redshift distance $\dz$ and galaxy area distance (transverse comoving distance) $\dg$.} \label{tab2} \begin{center} \begin{tabular}{cccc} \hline & $D_{\ssty L}$ & $D_{\ssty z}$ & $D_{\ssty G}$\\ \hline $z<1$ & $1.44\pm0.02$ & $1.65\pm0.02$ & $1.80\pm0.03$\\ $1\le z\le 6$ & $0.24\pm0.01$ & $0.30\pm0.02$ & $0.75\pm0.06$\\ \hline \end{tabular} \end{center} \end{table} \section{Conclusions}\lb{conclusion} This paper sought to empirically test if the large-scale galaxy distribution can be described as a fractal system. Tools originally developed for Newtonian hierarchical cosmology were extended and applied to relativistic cosmological models in order to possibly describe galaxy fractal structures by means of single fractal dimensions at deep redshift values. These tools were applied to the UltraVISTA galaxy survey dataset comprising 220k objects spanning the redshift interval of $0.1\le z\le 6$. A reduced subsample of the survey was established by plotting the galaxies' absolute magnitudes in the $J$-band against their respective redshifts and setting an absolute magnitude cut with the apparent magnitude of $J=24$, as well as a redshift cut at $z=4$. Since this subsample showed that its galaxies followed increasing redshift bins, they were considered as effectively being a volume-limited distribution. Fractal analysis of the reduced subsample was carried out using the standard $\Lambda$CDM relativistic cosmological model. As relativistic cosmologies have several definitions of observed distance, only three distinct ones were used here, namely the luminosity distance $\dl$, redshift distance $\dz$ and galaxy area distance $\dg$, also known as transverse comoving distance. The use of several cosmological distance measures is due to the fact that relativistic effects become strong enough for redshift ranges larger than $z\gtrsim 0.3$, so these distance definitions produce different results for the same redshift value at those ranges. An algorithm for sorting the data according to the analytical developments required for testing an observed fractal structure as discussed here was also detailed. The results indicate that the UltraVISTA subsample has two consecutive redshift ranges behaving as single fractal structures. For $z<1$ the derived fractal dimension is well approximated by $D=(1.58\pm0.20)$, whereas for $1\le z\le4$ this dimension decreased to $D=(0.59\pm0.28)$. For comparison, the same fractal analysis was also carried out in the complete survey data, yielding as well two consecutive redshift ranges characterized as single fractal systems: $D=(1.63\pm0.20)$ for $z<1$ and $D=(0.52\pm0.29)$ in the range $1\le z\le6$. Both results, from the reduced and complete data, are consistent with those found by \citet{gabriela}, although here the conclusions were reached by a different, and simpler, methodology, and by applying this methodology to a numerically much larger galaxy sample in both cases. The most obvious question regarding these results is why there is such a significant decrease in the fractal dimension for redshift values larger than the unity. Conceivably this might be due to an observational bias caused by the simple fact that many galaxies located beyond $z=1$ are not being detected, decreasing then the observed galaxy clustering and, therefore, the associated fractal dimension at those scales. One might also consider the possibility of a bias in the galaxy number counts due to the small angular area of this survey. This means that its observational area might not provide a representative measurement of the entire sky distribution. Moreover it is important to consider the choice of observational field. If the size of the observed area is small and contains galaxy clusters, some galaxy concentration peaks can lead to higher fractal dimensions at low $z$. It is also conceivable that the data used to obtain our results suffer from a detection bias related to the cumulative measure $\Nobs$. Unbiassing this quantity might depend on comparing observed $\Nobs$ data with simulations derived from cosmological models' subsets and halo occupation distribution models \citep{hod}. One must also remember that the UltraVISTA photometric redshifts were obtained using SED fitting analysis, which means that the errors in the values are entirely dependent on the input assumptions used to derive these photo-$z$, and, therefore, it is not clear how the use of different types of photometric modeling might affect our results. This possible error source may be mitigated by the use of restricted galaxy samples, say, containing only luminous red galaxies such that photometric redshift uncertainties are minimized \citep{redmagic,redmagic2}. Another possible source of data uncertainty that might affect our results is the tension among the several measures of the Hubble constant. Here we have assumed the standard FLRW cosmological model with $H_0=70\;\mbox{km}\;{\mbox{s}}^{-1}{\mbox{Mpc}}^{-1}$, nevertheless different Hubble constant predictions range this quantity by about $\pm4\;\mbox{km}\;{\mbox{s}}^{-1}{\mbox{Mpc}}^{-1}$. Relativistic cosmological models are known to be highly nonlinear, so how much this uncertainty in the Hubble constant might affect our results, if at all, cannot be quantified beforehand. So, to ascertain the possibly negligible, or otherwise, impact of this uncertainty in our results can only be concluded by actually carrying out the calculations with this parameter range and performing comparisons. Aside from possible observational biases and yet unknown impact due to error sources, one might also attribute the decrease in the fractal dimension to a real physical effect. Perhaps, galaxy evolution dynamics is at play in causing such a decrease in the sense that there might be indeed much less galaxies at high $z$, meaning that the Universe was void dominated at those epochs since galaxies were much more sparsely distributed and in smaller numbers. Only further work with different high-$z$ galaxy samples and in different regions of the sky may clarify this issue. \section*{Acknowledgments} We are grateful to the referee for useful comments. S.T.\ thanks the Universidade Federal do Rio de Janeiro for a PIBIC scholarship. A.R.L.\ acknowledges Brazil's Federal Funding Agency CNPq for the financial support with a PCI fellowship. \bibliographystyle{h}
\section{Introduction} Object detection and tracking have become one of the most important tasks in autonomous vehicles (AV). Recent development of deep learning methods has dramatically boosted the performance of object understanding and tracking in autonomous driving applications thanks to the availability of public datasets. Far apart from prior video tracking datasets collected via single or stereo cameras, e.g., KITTI \cite{geiger2012we}, recent public datasets and their defined tracking problems have become more realistic with multiple cameras in autonomous vehicles. They usually have a full set of camera sensors that aim to create a 360$^\circ$ surround view and provide more redundancy as backup, i.e. more overlapping field-of-views. There are some popular large-scale tracking datasets with multiple sensor setup, such as nuScenes \cite{caesar2020nuscenes}, Waymo \cite{sun2019scalability}, Lyft \cite{skeete2018level}, or Argoverse \cite{chang2019argoverse}. They have a lot more data than KITTI ranging from multiple surrounding cameras, LiDAR, radars and GPS. Having enormous data as in recent public datasets helps to improve deep learning based 3D object detection. However, it also poses more challenging problems in practice, such as maintaining high accuracy and latency performance in variety points of views and environments. In addition, Multiple Object Tracking (MOT) is usually employed together with 3D object detection to track objects and maintain stability of prediction across video frames. In order to handle multiple views, a common approach to Multi-Camera Multiple Object Tracking (MC-MOT) \cite{cai2014exploring, chen2016equalized} is to firstly apply an MOT approach on each camera independently, i.e. single camera tracking (SCT), then link local tracklets across cameras together via global matching steps based on Re-ID features. However, this approach creates more errors, i.e. fragmented local tracklets, and more computation since the data association and the matching steps will perform multiple times both locally and globally. Therefore, using SCT multiple times is not the optimal option. In addition, it is unable to handle scenarios when the detector fails to detect objects from one of the cameras as shown in Fig. \ref{fig:failed_detection_case}. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figs/Failed_cases.png}% \caption{First row: the object detector and tracking method DEFT \cite{Chaabane2021deft} fails to detect partial objects in one camera but can detect in another camera, Second row: The detector fails to detect objects in both cameras. Green arrow indicates true positive detection sample, red arrows indicate false negative detection samples.} \label{fig:failed_detection_case} \end{figure} Therefore, this work proposes to formulate MC-MOT problem as a \emph{global association graph} in a $360^{\circ}$ view using an object detection as the inputs instead of SCT trajectories. Our proposed MC-MOT approach not only models object motion but also the appearance of each tracked object. We encode both location and appearance features in the node embeddings of the proposed graph where the nodes corresponding to each tracked object are updated and added to the graph over time. In addition, we adopt the new self-attention and cross-attention layers to decode motion and location, then propagate them across camera systems via 3D-to-2D transformation. \noindent \textbf{Contributions of this Work. } The main contributions of this work can be summarized as follows. A new MC-MOT framework is firstly introduced where a global graph is constructed with \textit{nodes} containing both appearance and motion features of the tracked objects and \textit{the weighted edges} between tracked objects or nodes. The edge weights are computed based on the similarity in appearance and location between two tracked objects or nodes. Secondly, we present a new \emph{Auto-regressive Graph Transformer network} including a self-attention layer to transform appearance features and cross-attention to predict the motion features of objects. This network can help to obtain a more robust node embedding to maintain accurate tracking when objects are on side views of cameras. Then, we further post-process the prediction results with motion propagation and node merging modules. Finally, the proposed framework will be evaluated with a comprehensive evaluation criterion to demonstrate its robustness compared against previous MC-MOT frameworks. The proposed method even helps to improve the detection accuracy of a standard 3D object detector on the nuScenes benchmark. \section{Related Work} MOT problem on AVs has recently received a lot of attention from the research community. There is an increasing amount of research work targeting trajectory estimation on moving sensors \cite{weng2020ab3dmot, chiu2020probabilistic} or combining appearance information to determine object IDs \cite{zhou2019objects, zhou2020tracking, Hu3DT19}. \paragraph{Tracking using Motion Model} Weng et al. \cite{weng2020ab3dmot} propose a simple yet effective baseline that utilizes classic state estimator Kalman Filter for 3D bounding boxes. They can be obtained not only from a LiDAR point cloud object detector \cite{Shi_2019_CVPR, 2019arXiv190809492Z, qi2016pointnet, qi2017pointnetplusplus, zhou2017voxelnet} but also from an image-based object detector \cite{Ren17CVPR, zhou2019objects, Simonelli_2019_ICCV, Hu3DT19}. Chiu et al. \cite{chiu2020probabilistic} improves the Kalman Filter tracking system by measuring the Mahalanobis distance between the predicted states and observations. This method is promisingly reliable in filtering outliers and handling both partially and fully occluded objects. \paragraph{Tracking using Appearance Model} Zhou et al.'s approaches \cite{zhou2019objects, zhou2020tracking} are widely used in single camera tracking problems. By treating objects as points, these approaches simplify the tracking procedure that is usually a combination of many expensive steps from detection to assigning object ID. Simonelli et al. \cite{Simonelli_2019_ICCV} introduce a novel disentangling transformation for detection loss and a self-supervised term for bounding boxes confidence score. Hu et al. \cite{Hu3DT19} try to estimate robust 3D box information from 2D images then adopt 3D box-reordering and LSTM as a motion module to link objects across frames. \paragraph{Tracking using Hybrid Approaches} Chaabane et al. \cite{Chaabane2021deft} train the object detection and the object association task simultaneously by adding a feature extractor and a matching head after object detector. Besides, an LSTM is used as a motion prediction module as an alternative to Kalman Filter. Similarly, Yin et al. \cite{yin2021center} follow the same process, but perform feature extraction on point cloud maps. \paragraph{Tracking using Modern Approaches} Graph Neural Network, Self-Attention, and Transformer \cite{vaswani2017attention} introduce a new learning-from-context paradigm. It recently has attracted considerable attention from the research community because of its promising performance in a wide range from Natural Language Processing \cite{ott2018scaling, devlin2019bert, Radford2018ImprovingLU, liu2019roberta} to Computer Vision \cite{dosovitskiy2020, carion2020endtoend, wang2020endtoend, ramachandran2019standalone, touvron2021training, zhu2020deformable} tasks. Currently, there are none of these methods applied in MC-MOT on autonomous vehicles but it is worthy to name a few SCT-MOT approaches \cite{Gao_2019_CVPR, Chu_2017_ICCV, sun2020transtrack, meinhardt2021trackformer, Zhu_2018_ECCV, Weng2020_GNN3DMOT, Weng2020_GNNTrkForecast}. Weng et al. \cite{Weng2020_GNN3DMOT} propose the first feature interaction method that leverages Graph Neural Network to individually adapt an object feature to another object features. Meinhardt et al. \cite{meinhardt2021trackformer} propose a new tracking-by-attention paradigm besides existing tracking-by-regression, tracking-by-detection and tracking-by-segmentation to deal with occlusions and reason out tracker's spatio-temporal correspondences. Sun et al. \cite{Zhu_2018_ECCV} utilize Query-Key mechanism to perform joint-detection-and-tracking, disentangle complex components in previous tracking systems. \section{Our Proposed Method} In this section, we first overview our proposed 3D object tracking pipeline where we construct and maintain a Global Graph with the Graph Transformer Networks in Subsection \ref{sec:problem_formulation}. Then, Subsection \ref{sec:attn_dyn_graph} will detail the structure of Graph Transformer Networks and how it is used to model appearance and motion of tracked objects. Finally, Subsection \ref{sec:training_gtn} describes how we train the Graph Transformer Networks. \subsection{MC-MOT via Global Graph Constructing} \label{sec:problem_formulation} Given $C$ cameras, denoted by the set $\mathcal{C}=\{c_1,\dots, c_C\}$, they are used to perceive surrounding environment of a vehicle. In MC-MOT, we assume each camera attached with an off-the-shelf 3D object detector to provide initial location of objects in real-world coordinates. In this work, KM3D \cite{2009.00764} is used to provide 3D object location and features but it can be replaced by any other 3D object detectors. In the previous MC-MOT approaches \cite{cai2014exploring} \cite{chen2016equalized}, \cite{Zhang2017MultiTargetMT} \cite{Qian_2020_CVPR_Workshops}, the methods depend on tracking results of an MOT algorithm on each camera independently. There is no mechanism to model the relationship between cameras while they have a strong relations. Instead, our proposed MC-MOT take detection results directly from the detectors and match with current tracked objects using an auto-regressive approach by taking the cameras relation into consideration. In our approach, a single graph is constructed and maintained across time by graph transformer networks (detailed in Sec. \ref{sec:attn_dyn_graph}). At time step $t$, our MC-MOT framework receives detection outcomes $ \mathcal{O}_c^{(t)} = \{ \mathbf{o}_{i,c}^{(t)}\}$ generated by a 3D object detector from all synchronized camera inputs. The detected $i$-th object $\mathbf{o}_{i, c}^{(t)}$ contains its location in 3D $\mathbf{l}_{i, c}^{(t)}$ and its features $\mathbf{f}_{i, c}^{(t)}$. Then, our MC-MOT framework will update and maintain a set of tracked objects, called tracklets $\mathcal{T}_c^{(t)} = \{\mathbf{tr}^{(t)}_{k,c}\}$, based on detected objects at time step $t$ and previous tracklets at time step $t-1$. Each $\mathbf{tr}^{(t)}_{k,c}$ is a vector with 3D location and features of the corresponding tracked object. This set of tracklets are represented by a global graph $\mathcal{G}^{(t)} = (\mathcal{V}^{(t)}, \mathcal{E}^{(t)})$, where the vertex set $\mathcal{V}^{(t)}$ contains \emph{all the tracklets $\mathcal{T}_c^{(t)}$} tracked up to time $t$ and the edge set $\mathcal{E}^{(t)}$ contains \textit{geometry distance} between two tracklets. In this way, $\mathcal{G}^{(t)}$ can be obtained using graph transformer networks from a joint set of $N_{\mathcal{T}}$ nodes of the previous graph $\mathcal{G}^{(t-1)}$ and $N_{\mathcal{O}}$ new nodes formed by current detections $ \mathcal{O}_c^{(t)}$s. The changes in the global graph from frame-to-frame are likely adding new nodes as new objects are detected or removing old nodes as tracklets are out of view. This step is done by graph link prediction using a Softmax classifier similar to \cite{quach2021dyglip}. Next, we will discuss how the transformer decoder can be employed to update the embedding features for each node with self-attention layer and how to predict tracked objects' motion via cross-attention layer. \begin{figure*} \centering \includegraphics[width=0.85\linewidth]{figs/overall_framework.png} \caption{The proposed framework via Graph Transformer Networks. For every new detected object, we calculate new graph feature described in Sub-sec. \ref{ssec:graph_attention} and \ref{ssec:graph_transformer}. Then, we perform motion propagation and node merging operators that include the removing and the adding nodes in the graph via link prediction in Sub-sec. \ref{ssec:graph_propagation} and \ref{ssec:node_merging}.} \label{fig:framework} \end{figure*} \subsection{Auto-Regressive Graph Transformer Networks} \label{sec:attn_dyn_graph} In this section, we introduce Graph Transformer Networks (GTN) to transform and update node embeddings by attending to other nodes for robust appearance and motion modeling. First, the building blocks of this GTN, i.e. graph self-attention layer and graph cross-attention layer, are presented in Sub-sec. \ref{ssec:graph_attention} and \ref{ssec:graph_transformer}, respectively. Then, we perform motion propagation and node merging operators that include the removing and the adding nodes in the graph via link prediction in Sub-sec. \ref{ssec:graph_propagation} and \ref{ssec:node_merging}, respectively. \subsubsection{Graph Self-Attention Layer for Appearance Modeling} \label{ssec:graph_attention} Each node $k \in \mathcal{V}^{(t)}$ in the graph $\mathcal{G}^{(t)}$ contains the object's 3D location $\mathbf{l}_{k, c}^{(t)}$ and its feature embedding $\mathbf{f}_{k, c}^{(t)}$, i.e. Re-ID features. The Re-ID features are provided by KM3D \cite{2009.00764} as its outputs together with 3D box predictions. To consider the effects of cameras on appearance features, the self-attention layer takes the input node features as the concatenation of embedding features with camera and location encoding as $ \mathbf{h}^l_k = \{ \mathbf{f}_{k, c}^{(t)} | \mathbf{c} | \mathbf{l}_{k, c}^{(t)} \} \in \mathbb{R}^{D_E}$ , where $ l=0 $ only applied for the input of the first layer, $\mathbf{f}_{k, c}^{(t)} \in \mathbb{R}^{D_F}$, $ \mathbf{c} \in \mathbb{R}^{D_C}$ and $ \mathbf{l}_{k, c}^{(t)} \in \mathbb{R}^{3}$. We use pre-computed camera and location encoding to concat with the node features before the first layer, similar to how positional encodings are added in the original Transformer \cite{vaswani2017attention}. Then, the self-attention layer provides the output embeddings as $ \mathbf{h}^{l+1}_k $ for layer $l$. This output can be used as the input for the next layer if there is more than one self-attention layer. In order to further improve pairwise attention scores as in \cite{vaswani2017attention}, we incorporate pairwise edge features by multiplying them together. In summary, the output of the self-attention layer is computed as follows, \begin{eqnarray} \label{eqn:str_att_layer} \footnotesize \mathbf{h'}^{l+1}_{k} = \mathbf{O}_h^l \overset{H}{\underset{i=1}{\Vert}}\left( \sum_{j \in \mathcal{V}^{(t)}} \mathbf{w}^{i,l}_{kj} \mathbf{V}^{i,l} \mathbf{h}^l_k \right) \\ \mathbf{e'}^{l+1}_{kj} = \mathbf{O}_e^l \overset{H}{\underset{i=1}{\Vert}}\left( \sum_{j \in \mathcal{V}^{(t)}} \mathbf{w'}^{i,l}_{kj} \right) \\ \mathbf{w}^{i,l}_{kj} = \text{softmax}_j ( \mathbf{w'}^{i,l}_{kj} ) \\ \mathbf{w'}^{i,l}_{kj} = \left( \frac{\mathbf{Q}^{i,l} \mathbf{h}^l_k \cdot \mathbf{K}^{i,l} \mathbf{h}^l_j }{\sqrt{D_h}} \right) \cdot \mathbf{E}^{i,l} \mathbf{e}^{l}_{kj} \end{eqnarray} where $\mathbf{w}^{i,l}_{kj}$ are the attention coefficients for the $i$-th attention head, $\Vert$ is the feature vector concatenation operation, $\mathbf{Q}^{i,l}, \mathbf{K}^{i,l}, \mathbf{V}^{i,l}, \mathbf{E}^{i,l} \in \mathbb{R}^{D_Z \times D_{E}}$ denote the ``queries", ``keys", ``values" linear projection matrices and node embedding, respectively, as defined in \cite{vaswani2017attention} and $D_Z$ is the output feature dimension. $H$ denotes number of attention head in multi-head attention setting. The outputs $ \mathbf{h}^{l+1}_{k} $ and $\mathbf{e}^{l+1}_{kj}$ are then passed through feed forward layers with residual connections and normalization layers (see Fig. \ref{fig:framework}), defined as follows. \begin{eqnarray} \mathbf{h''}^{l+1}_{k} = \text{norm} \left( \mathbf{h'}^{l+1}_{k} + \mathbf{h}^{l}_{k} \right) \\ \mathbf{h'''}^{l+1}_{k} = \text{FFN}^l_h \left( \mathbf{h''}^{l+1}_{k} \right) \\ \mathbf{h}^{l+1}_{k} = \text{norm} \left( \mathbf{h''}^{l+1}_{k} + \mathbf{h'''}^{l+1}_{k} \right) \end{eqnarray} where $\mathbf{h''}^{l+1}_{k}$ and $\mathbf{h'''}^{l+1}_{k}$ denote the outputs of intermediate layers. FFN is the feed forward layers. \begin{eqnarray} \mathbf{e''}^{l+1}_{kj} = \text{norm} \left( \mathbf{e'}^{l+1}_{kj} + \mathbf{e}^{l}_{kj} \right) \\ \mathbf{e'''}^{l+1}_{kj} = \text{FFN}^l_e \left( \mathbf{e''}^{l+1}_{kj} \right) \\ \mathbf{e}^{l+1}_{kj} = \text{norm} \left( \mathbf{e''}^{l+1}_{kj} + \mathbf{e'''}^{l+1}_{kj} \right) \end{eqnarray} where $\mathbf{e''}^{l+1}_{k}$ and $\mathbf{e'''}^{l+1}_{k}$ denote the outputs of intermediate layers. \subsubsection{Graph Transformer Layer for Motion Modeling} \label{ssec:graph_transformer} In this section, we demonstrate how tracked objects in tracklet nodes are used as queries while newly detected objects are used as keys and values in our proposed transformer layer. This layer perform a cross-attention mechanism instead of self-attention mechanism where queries are different from keys. The input of this layer are the output node embedding from previous self-attention layers and the output of this layer are new tracklet nodes for the current frame $t$. It takes an object feature from previous frames as input query instead. This inherited object feature conveys the appearance and location information of previously seen objects, so this layer could well locate the position of the corresponding object on the current frame and output “tracking boxes”. This design helps to capture the attention on current frame detection features and previous frame track queries, to continuously update the representation of object identity and location in each track query embedding. We first put together all detected objects as $X_{\mathcal{O}} \in \mathbb{R}^{N_{\mathcal{O}} \times D_Z}$ and all tracked objects as $X_{\mathcal{T}} \in \mathbb{R}^{N_{\mathcal{T}} \times D_Z}$. Then the $l$-th output of the multi-head cross attention layer is defined as \begin{eqnarray} \label{eqn:cross_att_layer} \small \mathbf{z}^{l}_{k} = \mathbf{O}_z^l \overset{H}{\underset{i=1}{\Vert}}\left( \sum_{j \in \mathbf{X}_{\mathcal{O}}} \mathbf{W}^{i,l}_{kj} \mathbf{V}^{i,l} \mathbf{X}^T_{\mathcal{T}}[k] \right) \\ \mathbf{W}^{i,l}_{kj} = \text{softmax}_j \left( \frac{\mathbf{Q}^{i,l} \mathbf{X}^T_{\mathcal{T}}[k] \cdot \mathbf{K}^{i,l} \mathbf{X}^T_{\mathcal{O}}[j] }{\sqrt{D_h}} \right) \end{eqnarray} where $\mathbf{Q}^{i,l}, \mathbf{K}^{i,l}, \mathbf{V}^{i,l} \in \mathbb{R}^{D_E \times D_{Z}}$, are the ``queries", ``keys" and ``values" linear projection matrices, respectively, as defined in \cite{vaswani2017attention} and $D_Z$ is the output feature dimension. Similar to attention layer, we can stack multiple cross-attention layers together. Then we get the final output to pass through FFN to provide final set of new node embeddings including location and class predictions for frame $t$. \subsubsection{Cross-Camera Motion Propagation}\label{ssec:graph_propagation} In this section, we provide a more detailed formulation on how to obtain Re-ID features of the detected objects from camera $c_k$ to camera $c_j$. First, we compute the transformation matrix to transform 3D object locations to 2D/image coordinates. This transformation which is composed of a transformation from camera-to-world for camera $c_k$, a transformation from world-to-camera for camera $c_j$, and a transformation from camera-to-image for camera $c_j$, is defined as follows. \begin{equation} \mathbf{M}_{kj} = \mathbf{M}_{I_j} * \mathbf{M}_{E_j} * \mathbf{M}^{-1}_{E_k} \end{equation} where $\mathbf{M}_{E_j}$ and $\mathbf{M}^{-1}_{E_k}$ are the extrinsic camera matrix for camera $c_k$ to camera $c_j$, respectively. $\mathbf{M}_{I_j}$ is the intrinsic camera matrix for camera $c_j$. Note that we only consider two adjacent cameras $c_k$ and $c_j$ where they have a certain amount of overlapping views. Then, we use the transformed 2D/image location to extract the re-id features at the corresponding location on the image. Finally, we update the existing node or add a new node for all the tracked objects $\mathbf{tr}^{(t)}_{k,c_j}$. \subsubsection{Node Merging via Edge Scoring} \label{ssec:node_merging} After having transformed node and edge features, we train a fully connected layer and a softmax layer as a classifier to determine the similarity between two nodes as previously proposed in \cite{quach2021dyglip}. The classifier produces a probability score $s \in [0, 1]$. The higher the score is, the more likely the two nodes are linked. Then we remove detection nodes that have a low class score which indicates that the detection is matched with an existing tracklet. We also merge nodes that have high similarity scores that have the same camera encoding, i.e. detected within single camera and update edge weights as the similarities among tracklet nodes to indicate the same target ID from different cameras. These necessary steps are similar to a non-maximum suppression (NMS) applied to trajectory for post-processing although cross-attention layer help spatially discriminate almost identical track query embeddings merging to the same target ID. \subsection{Processing Flow} In this section, we briefly summarize the pipeline of our proposed graph transformer networks to predict tracklet motion, motion propagation and node merging in Algorithm \ref{alg:DyGLIP}. \begin{algorithm}[th] \caption{The process pipeline for global graph constructing, motion prediction, propagation \& node merging} \label{alg:DyGLIP} \begin{algorithmic}[1] \STATE Init $t\gets 0$ /* Time */, $V \gets \emptyset$ \WHILE{$t < t_{\max}$} \STATE Obtain the set of detected objects $\mathcal{O}_c^{(t)}$ from 3D object detector \cite{2009.00764} in all cameras. \FOR{$\mathbf{o}_{k, c}^{(t)} \in \mathcal{O}_c^{(t)}$} \STATE $\mathcal{V}^{(t)} \gets \mathcal{V}^{(t-1)} \cup \mathbf{o}_{k, c}^{(t)}$ /* Add new nodes to graph */ \STATE /* Use the vector $\{ \mathbf{f}_{k, c}^{(t)} | \mathbf{c} | \mathbf{l}_{k, c}^{(t)} \}$ as node features. */ \ENDFOR \FOR{$k \in \mathcal{V}^{(t)}$} \STATE Obtain new node embedding $\mathbf{h'}_{k}$ /* Section 3.2.1 */ \ENDFOR \STATE Obtain new set of nodes $\mathcal{V}'^{(t)}$ with location and classification of tracked objects $\mathbf{tr}^{(t)}_{k,c}$ via motion modeling /* Section 3.2.2 */ \FOR{$c \in C$} \STATE Propagate the location of $\mathbf{tr}^{(t)}_{c}$ to adjacent cameras /* Section 3.2.3 */ \ENDFOR \FOR{$v_i \in \mathcal{V}'^{(t)}$} \STATE Obtain edge scoring to the remaining nodes and node merging /* Section 3.2.4 */ \STATE Assign ID based on edge scores. \ENDFOR \STATE $t \gets t + 1$ \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Model Training} \label{sec:training_gtn} In this section, we present how to train our proposed graph transformer networks, including self-attention and cross-attention layers. \paragraph{Training Data.} We train our proposed method on a large-scale dataset, i.e. nuScenes, training set with 750 scenes of 20s each and use its validation set for our ablation study. The ground truth 3D bounding boxes and the extracted ReID features from the pre-trained models in \cite{zhou2019osnet, Qian_2020_CVPR_Workshops} were used together as the inputs for training GTN. Each training sample contains a chunk size of two consecutive frames from a training sequence. \paragraph{Training Loss.} Our framework can be trained with two adjacent frames by optimizing for detections and tracklets prediction at frame $t$, given previous frame tracklets. Our joint objective function include \textit{learning node embedding} capturing both structural information from the graph, \textit{computing weighted linking score} between two nodes in the graph and \textit{learning to predict tracklets motion}. For \textit{learning node embedding}, we measure binary cross-entropy loss $ \mathcal{L}_{emb} $ between nodes that belong to the same objects for the model to output similar feature embeddings. \begin{equation} \footnotesize \begin{split} \mathcal{L}_{emb}(v_k) = & \sum_{v_j \in \mathcal{N}_b^{(t)}(v_k)} -\log \left( \sigma \left( < e'_{v_k} , e'_{v_j} > \right) \right) \\ & - w_g \sum_{v_i \in \mathcal{N}_g^{(t)}(v_k)} \log \left( 1 - \sigma \left( < e'_{v_k} , e'_{v_i} > \right) \right) \\ \end{split} \end{equation} where $ <\cdot> $ is the inner production between two vectors, $\sigma$ is Sigmoid activation function, $\mathcal{N}_b^{(t)}(v_k)$ is the set of fixed-length random walk neighbor nodes of $v_k$ at time step $t$, $\mathcal{N}_g^{(t)}(v_k)$ is a negative samples of $v_i$ for time step $t$, $\mathcal{N}_a^{(t)}(v_k) = \mathcal{N}_b^{(t)}(v_k) \cup \mathcal{N}_g^{(t)}(v_k)$ and $w_g$, negative sampling ratio, is an adjustable hyper-parameter to balance the positive and negative samples. For edge scoring, we use a cross-entropy loss function $ \mathcal{L}_c (e_{kj}) $ based on measurement features to ensure the score between two nodes that are connected is higher than other nodes. For \textit{learning to predict tracklets motion}, we set prediction loss to measure the set of predictions for $N_{\mathcal{O}}$ detections and $N_{\mathcal{T}}$ tracklets comparing with ground truth objects in terms of classification and location (bounding boxes). Set-based loss produces an optimal bipartite matching between $N_{\mathcal{O}}$ detections and ground truth objects while $N_{\mathcal{T}}$ tracklets will be matched with boxes from previous frames. The matching cost is defined as follows. \begin{equation} \mathcal{L}_{set} = \overset{N_{\mathcal{O}} + N_{\mathcal{T}}}{\underset{i=1}{\sum}} \left( \lambda_{cls} \mathcal{L}_{cls} + \lambda_{box} \mathcal{L}_{box} + \lambda_{iou} \mathcal{L}_{iou} \right) \end{equation} where $ \lambda_{cls}, \lambda_{box}$ and $\lambda_{iou}$ are combination weighting parameters for each component losses. $\mathcal{L}_{cls}$ is the cross-entropy loss between prediction classification and ground truth category labels. $\mathcal{L}_{box}$ and $\mathcal{L}_{iou}$ are the $\ell_1$ loss and the generalized intersection over union (IoU) \cite{rezatofighi2019generalized} for 3D bounding boxes. Finally, we have the total loss defined as \begin{equation} \mathcal{L}_{total} = \mathcal{L}_{emb} + \mathcal{L}_{c} + \mathcal{L}_{set} \end{equation} \begin{table}[!t] \small \centering \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Method} & \textbf{mATE} $\downarrow$ & \textbf{mASE} $\downarrow$ & \textbf{mAOE} $\downarrow$ & \textbf{mAVE} $\downarrow$ \\ \hline 3D KF \cite{weng2020ab3dmot} & 0.8153 & 0.5155 & 0.7382 & 1.6186 \\ LSTM \cite{Chaabane2021deft} & 0.8041 & 0.4548 & 0.6744 & 1.6139 \\ \hline \textbf{Ours} & \textbf{0.5132} & \textbf{0.4388} & \textbf{0.3677} & \textbf{1.2189} \\ \hline \end{tabular} \caption{Motion Errors comparison for different motion modeling} \label{tab:motion_errors} \end{table} \section{Experimental Results} In this Section, we detail the benchmark dataset and metrics in Subsection \ref{ssec:data_metrics}. Then, the setups for all experiments and the ablation study will be presented in Subsections \ref{ssec:exp_setup} and \ref{ssec:ablation_study} respectively. The comparisons with the State-of-the-Art (SOTA) methods will be detailed in Subsection \ref{ssec:compare_results} on a large-scale Tracking Challenge, i.e. nuScenes Vision Track. \subsection{Benchmark Dataset and Metrics} \label{ssec:data_metrics} \subsubsection{Dataset} \paragraph{nuScenes} \cite{caesar2020nuscenes} is one of the large-scale datasets for Autonomous Driving with 3D object annotations. It contains 1,000 videos of 20-second shots in a setup of 6 cameras, i.e. 3 front and 3 rear ones, with a total of 1.4M images. It also provides 1.4M manually annotated 3D bounding boxes of 23 object classes based on LiDAR data. This dataset is an official split of 700, 150 and 150 videos for training, validation and testing, respectively. \subsubsection{Metrics} The proposed method is evaluated using both detection and tracking metrics described in \cite{caesar2020nuscenes}. \paragraph{Detection Metrics.} A commonly used metric, i.e. \textit{Mean Average Precision (mAP)}, is defined as a match using a 2D center distance on the ground plane instead of intersection over union cost for nuScenes detection challenges. Similarly, other motion-related metrics are also defined in nuScenes, such as \textit{Average Translation Error (ATE)} measuring Euclidean center distance in 2D in meters, \textit{Average Scale Error (ASE)} computing as $1 - IOU$ after aligning centers and orientation, \textit{Average Orientation Error (AOE)} measuring by the smallest yaw angle difference between prediction and ground-truth in radians, \textit{Average Velocity Error (AVE)} measuring the absolute velocity error in $m/s$ and \textit{Average Attribute Error (AAE)} computing as $1 - acc$, where $acc$ is the attribute classification accuracy. \begin{figure*}[!t] \centering \includegraphics[width=0.8\linewidth]{figs/compare_tracking_cropped.png} \caption{Our proposed method (top) can recognize a positive tracking case compare with a MC-MOT system which has no object's correlations linking module (i.e. DEFT) for all cameras (bottom). Green arrows indicate true positive tracking samples, red arrows indicate false negative tracking samples. Best viewed in color and zoom in.} \label{fig:compare_tracking} \end{figure*} Last but not least, we also use the \textit{nuScenes Detection Score (NDS)} that is based on a simple additive weighting of the mean of all other metrics above, including \textit{mAP}, \textit{mATE}, \textit{mASE}, \textit{mAOE}, \textit{mAVE} and \textit{mAAE}. \paragraph{Tracking Metrics.} The tracking performance is measured using the popular \textit{CLEAR MOT} metrics \cite{bernardin2008evaluating} including \textit{MOTA}, \textit{MOTP}, ID switch (\textit{IDS}), mostly tracked (\textit{MT}), mostly lost (\textit{ML}), fragmented (\textit{FRAG}). Similar to nuScenes, we use two accumulated metrics introduced in \cite{weng2020ab3dmot} as the main metrics, including the average over the MOTA metric (\textit{Average MOTA (AMOTA)}) and the average over the MOTP metric (\textit{Average MOTP (AMOTP)}). \subsection{Experiments Setup} \label{ssec:exp_setup} The proposed graph transformer networks module is trained with two consecutive frames where the graph $ \{ \mathcal{G}^{(t - 1)} \} $ in the previous time step is used to predict new graph $ \mathcal{G}^{(t)} $ at time step $t$. Then, Mini-batch (chunk of two) gradient descent is employed with Adam optimizer to learn all the parameters in the attention layers. \subsection{Ablation Study} \label{ssec:ablation_study} In this section, we present some experiments to ablate the effect of each component of the proposed framework. Particularly, this section aims to demonstrate the followings: 1. better motion modeling with cross-attention layer in GTN; 2. the role of architecture choice of graph transformer networks. \begin{table}[!t] \footnotesize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \textbf{Structures} & \textbf{mATE} $\downarrow$ & \textbf{mASE} $\downarrow$ & \textbf{mAOE} $\downarrow$ & \textbf{mAVE} $\downarrow$ \\ \hline Self-attn 1-layer & 0.812 & 0.298 & 0.820 & \textbf{1.187} \\ Self-attn 2-layer & 0.785 & \textbf{0.286} & 0.703 & 1.284 \\ \textbf{Self-attn 3-layer} & \textbf{0.750} & 0.293 & \textbf{0.485} & 1.432 \\ \hline Cross-attn 1-layer & 0.824 & 0.293 & 0.866 & 1.281 \\ Cross-attn 2-layer & 0.772 & \textbf{0.279} & 0.670 & 1.287 \\ \textbf{Cross-attn 3-layer} & \textbf{0.513} & 0.439 & \textbf{0.368} & \textbf{1.219} \\ \hline \end{tabular} \caption{Ablation study on different configuration for self-attention and cross-attention layers.} \label{tab:attn_structures} \end{table} \begin{figure*}[!t] \centering \includegraphics[width=0.8\linewidth]{figs/compare_detection_cropped.png} \caption{Our proposed method (top) can recover a false negative detection case compared with a MC-MOT system which runs independently on each camera (i.e. DEFT) (bottom). Green arrows indicate true positive detection samples, red arrows indicate false negative detection samples. Best viewed in color and zoom in.} \label{fig:compare_detection} \end{figure*} \begin{table*}[!t] \small \centering \resizebox{1.0\textwidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Method} & \textbf{Glo. Assoc.} & \textbf{AMOTA} & \textbf{AMOTP} & \textbf{MOTAR} & \textbf{MOTA} $\uparrow$ & \textbf{MOTP} $\downarrow$ & \textbf{RECALL} $\uparrow$ & \textbf{MT} $\uparrow$ & \textbf{ML} $\downarrow$ & \textbf{IDS} $\downarrow$ & \textbf{FRAG} $\downarrow$ \\ \hline MonoDIS \cite{Simonelli_2019_ICCV} & \xmark & 0.045 & 1.793 & 0.202 & 0.047 & 0.927 & 0.293 & 395 & 3961 & 6872 & 3229 \\ CenterTrack \cite{zhou2020tracking} & \xmark & 0.068 & 1.543 & 0.349 & 0.061 & 0.778 & 0.222 & 524 & 4378 & 2673 & 1882 \\ DEFT \cite{Chaabane2021deft} & \xmark & 0.213 & 1.532 & 0.49 & 0.183 & 0.805 & 0.4 & 1591 & 2552 & 5560 & 2721 \\ QD-3DT \cite{Hu2021QD3DT} & \xmark & \textbf{0.242} & 1.518 & \textbf{0.58} & \textbf{0.218} & 0.81 & 0.399 & 1600 & 2307 & 5646 & 2592 \\ \hline \textbf{Ours} & \cmark & 0.24 & \textbf{1.52} & 0.568 & 0.197 & \textbf{0.832} & \textbf{0.453} & \textbf{1643} & \textbf{2162} & \textbf{1362} & \textbf{1462} \\ \hline \end{tabular} } \caption{Comparison of 3D tracking performance on the nuScenes validation set for Vision Track challenge. \textbf{Glo. Assoc.} indicates method linking object IDs across all cameras} \label{tab:nuscene_track_results} \end{table*} \begin{table*}[!t] \centering \resizebox{0.75\textwidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \textbf{Method} & \textbf{mAP} $\uparrow$ & \textbf{NDS} $\uparrow$ & \textbf{mATE} $\downarrow$ & \textbf{mASE} $\downarrow$ & \textbf{mAOE} $\downarrow$ & \textbf{mAVE} $\downarrow$ & \textbf{mAAE} $\downarrow$ \\ \hline MonoDIS \cite{Simonelli_2019_ICCV} & 0.2976 & 0.3685 & 0.7661 & 0.2695 & \textbf{0.5839} & 1.3619 & 0.184 \\ MonoDIS \cite{Simonelli_2019_ICCV} + \textbf{Our MP + NM} & \textbf{0.3019} & \textbf{0.3893} & \textbf{0.6558} & \textbf{0.2410} & 0.6787 & \textbf{1.3209} & \textbf{0.184} \\ \hline \hline CenterNet \cite{zhou2019objects} & 0.3027 & 0.3262 & 0.7152 & 0.2635 & \textbf{0.6158} & 1.4254 & 0.6567 \\ CenterNet \cite{zhou2019objects} + \textbf{Our MP + NM} & \textbf{0.3487} & \textbf{0.4016} & \textbf{0.5417} & \textbf{0.2023} & 0.6317 & \textbf{1.3094} & \textbf{0.6567} \\ \hline \hline KM3D \cite{2009.00764} & 0.2763 & 0.3201 & 0.7495 & 0.2927 & 0.4851 & \textbf{1.4322} & 0.6535 \\ KM3D \cite{2009.00764} + \textbf{Our MP + NM} & \textbf{0.3503} & \textbf{0.4117} & \textbf{0.6998} & \textbf{0.2323} & \textbf{0.1861} & 1.8341 & \textbf{0.5166} \\ \hline \end{tabular} } \caption{Comparison of 3D object detectors with and without using our motion propagation (MP) and node merging (NM) modules in terms of detection metrics on the nuScenes validation set for Vision Detection challenge} \label{tab:nuscene_detection_results} \end{table*} \paragraph{The Role of Motion Model} In this experiment, we evaluate the effectiveness of different motion modeling methods on detection performance. We use the locations predicted by motion models to compare with ground truth locations in terms of motion-related metrics. In such way, we can evaluate how good the motion model capturing and predicting the motion of tracked objects. We compare with two other commonly used motion models, i.e. 3D Kalman Filter \cite{weng2020ab3dmot} and LSTM \cite{Chaabane2021deft}. As shown in Table \ref{tab:motion_errors}, our GTN gives better results than a classical object state prediction technique, i.e. 3D Kalman Filter used in \cite{weng2020ab3dmot} and a deep learning based technique, i.e. LSTM module, used in \cite{Chaabane2021deft}. \paragraph{The Configuration for Graph Transformer Networks} We conduct additional ablation studies to evaluate the effects on configuration of the attention modules in GTN, including the number of attention layers. Table \ref{tab:attn_structures} shows the performance of our proposed framework in terms of detection metrics using various configuration of the attention modules. We change the number of layer for self-attention and the cross-attention layers independently. We use a fixed number of layers, i.e. 2, for self-attention and the cross-attention layers while changing the other, respectively. \subsection{Comparison against The State-of-the-Art Methods} \label{ssec:compare_results} In this section, we first compare our proposed framework with other vision-based (without using LiDAR or RADAR information) tracking approaches, which are the top in nuScenes vision only tracking challenge leaderboard. Then we conduct an experiment to demonstrate that using tracked 3D bounding boxes from our tracking framework can actually improve the detection metrics. \paragraph{Comparison against Tracking Methods on Tracking Metrics} This experiment compares our proposed method with other vision-based methods, including MonoDIS \cite{Simonelli_2019_ICCV}, CenterTrack \cite{zhou2020tracking} and DEFT \cite{Chaabane2021deft}, QD-3DT \cite{Hu2021QD3DT} which are the top/winner of nuScenes vision only tracking challenge. As we can see in Table \ref{tab:nuscene_track_results}, our method decreases error rates compared to top approaches, i.e. DEFT, in most of the metrics. Fig. \ref{fig:compare_tracking} illustrates the key factor that help improve the tracking performance is that we perform appearance matching across cameras in addition to motion modeling. It shows that our proposed method (top) can assign object ID globally between cameras compared with DEFT \cite{Chaabane2021deft} (bottom). \paragraph{Comparison against Detection Methods on Detection Metrics} Table \ref{tab:nuscene_detection_results} demonstrates that the combination of object detector and our motion propagation (MP) and node merging (NM) modules achieves the better results than original object detector. In this experiment, we compare three different 3D object detectors, including KM3D \cite{2009.00764}, MonoDIS \cite{Simonelli_2019_ICCV} and CenterNet \cite{zhou2019objects}. The best result achieves with the combination of KM3D object detector \cite{2009.00764} and our MP+NM modules since it is guided by global decoded locations from our transformation procedure as described in \ref{ssec:graph_propagation}. Fig. \ref{fig:compare_detection} illustrates the improvement on detector fail cases with the help from our tracking framework. \section{Conclusions} This paper has introduced a new global association graph model to solve the MC-MOT problem for AV. The proposed framework can learn to perform tracking frame-by-frame in an end-to-end manner starting from detections to motion prediction and global association tracklets with detections. These tasks are enhanced with self-attention and cross-attention layers so that the proposed graph can capture both structural and motion across cameras. The experiments show performance improvements in a large-scale dataset in AV in terms of vision-based detection and tracking accuracy. \section*{Acknowledgment} This material is based upon work supported in part by the US NSF Data Science, Data Analytics that are Robust and Trusted (DART) and NSF WVAR-CRESH Grant. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Let $(M,g)$ be a Riemannian manifold of real dimension $n$. We use $\nabla$ to denote the Levi-Civita connection. Recently, Chen-He \cite{CH18} introduced the following function space \begin{equation*} \ti{\mathcal{H}} = \{\varphi\in C^{\infty}(M)~|~\Delta\varphi-b|\nabla\varphi|^{2}+a(x)>0\}, \end{equation*} where $b$ is a nonnegative constant and $a(x)$ is a positive smooth function on $M$. For any $u_{0},u_{1}\in\ti{\mathcal{H}}$, they also introduced the fully nonlinear equation \begin{equation}\label{Chen-He equation} u_{tt}(\Delta u-b|\nabla u|^{2}+a(x))-|\nabla u_{t}|^{2} = f, \end{equation} with boundary condition \begin{equation}\label{Boundary condition 1} u(\cdot,0) = u_{0}, \,\,\, u(\cdot,1) = u_{1}, \end{equation} where $f$ is a nonnegative function on $M\times[0,1]$. In \cite{CH18}, Chen-He solved the equation (\ref{Chen-He equation}) with uniform weak $C^{2}$ estimates, which also hold for the degenerate case (see also \cite{He08}). When $b=0$, $a=1$ and $f=0$, (\ref{Chen-He equation}) becomes the geodesic equation in the space of volume forms on $(M,g)$. More specifically, in \cite{Donaldson10}, Donaldson introduced a Weil-Peterson type metric on the space of volume forms (normalized) on any Riemannian manifold with fixed total volume. We write $\mathcal{H}$ for this infinite dimensional space, which can be parameterized by the space of smooth functions \begin{equation*} \{\varphi\in C^{\infty}(M)~|~1+\Delta\varphi>0\}. \end{equation*} For any $\varphi\in\mathcal{H}$, the tangent space $T_{\varphi}\mathcal{H}$ is $C^{\infty}(M)$. And the metric is defined by \begin{equation*} \|\delta\varphi\|_{\varphi}^{2} = \int_{M}|\delta\varphi|^{2}(1+\Delta\varphi)dV_{g} \,\,\,\, \text{for $\delta\varphi\in T_{\varphi}\mathcal{H}$.} \end{equation*} For a path $\Phi:[0,1]\rightarrow\mathcal{H}$, the energy function is given by \begin{equation*} E(\Phi) = \int_{0}^{1}\int_{M}|\dot{\Phi}|^{2}(1+\Delta\Phi)dV_{g} \end{equation*} and the geodesic equation is \begin{equation}\label{Geodesic equation} \Phi_{tt}(1+\Delta\Phi)-|\nabla\Phi_{t}|^{2} = 0, \end{equation} with boundary condition \begin{equation*} \Phi(\cdot,0) = \varphi_{0}, \,\,\, \Phi(\cdot,1) = \varphi_{1}, \end{equation*} where $\varphi_{0},\varphi_{1}\in\mathcal{H}$. To solve this equation, for any $\varepsilon>0$, Donaldson \cite{Donaldson10} introduced the following perturbed geodesic equation \begin{equation}\label{Perturbed geodesic equation} (\Phi_{\varepsilon})_{tt}(1+\Delta\Phi_{\varepsilon})-|\nabla(\Phi_{\varepsilon})_{t}|^{2} = \varepsilon, \end{equation} with boundary condition \begin{equation}\label{Boundary condition 2} \Phi_{\varepsilon}(\cdot,0) = \varphi_{0}, \,\,\, \Phi_{\varepsilon}(\cdot,1) = \varphi_{1}. \end{equation} In \cite{CH11}, Chen-He solved this perturbed geodesic equation and proved weak $C^{2}$ estimate which is independent of $\varepsilon$. Let $\varepsilon\rightarrow0$. Chen-He proved that there is a unique weak geodesic $\Phi$ connecting $\varphi_{0}$ and $\varphi_{1}$, and that the quantities $\sup_{M\times[0,1]}|\Phi|$, $\sup_{M\times[0,1]}|\Phi_{t}|$, $\sup_{M\times[0,1]}|\nabla\Phi|$, $\sup_{M\times[0,1]}|\Phi_{tt}|$, $\sup_{M\times[0,1]}|\nabla\Phi_{t}|$, $\sup_{M\times[0,1]}|\Delta\Phi|$ are all bounded (see \cite[Theorem 1.2, Corollary 5.3]{CH11}). By the boundary condition (\ref{Boundary condition 2}), the quantity $\sup_{\partial(M\times[0,1])}|\nabla^{2}\Phi|$ is also bounded. Hence, $\Phi$ is $C^{1,\alpha}$ for any $\alpha\in(0,1)$. In general, it is well known that the weak geodesic $\Phi$ is not $C^{2}$. Actually, in complex dimension $1$, (\ref{Geodesic equation}) becomes the geodesic equation in the space of K\"{a}hler metrics. And there are many examples which show that in general the weak geodesic in the space of K\"{a}hler metrics is not $C^{2}$ (see \cite{LV13,DL12,Darvas14}). Recently, Chu-Tosatti-Weinkove \cite{CTW17} proved the $C^{1,1}$ regularity of geodesics in the space of K\"{a}hler metrics. Hence, for (\ref{Perturbed geodesic equation}), it was expected that $\sup_{M\times[0,1]}|\nabla^{2}\Phi_{\varepsilon}|\leq C$, where $C$ is independent of $\varepsilon$. This implies that the weak geodesic $\Phi$ is $C^{1,1}$. In this paper, we prove the $C^{1,1}$ regularity of geodesics in the space of volume forms. \begin{theorem}\label{C11 regularity of geodesics} Let $(M,g)$ be a compact $n$-dimensional Riemannian manifold. For any two points $\varphi_{0},\varphi_{1}\in\mathcal{H}$, the weak geodesic $\Phi$ connecting them is $C^{1,1}$. \end{theorem} As alluded to above, Theorem \ref{C11 regularity of geodesics} is a consequence of \cite[Theorem 1.2]{CH11} and the $C^{1,1}$ estimate for (\ref{Perturbed geodesic equation}). More generally, for (\ref{Chen-He equation}), Chen-He expected that $\sup_{M\times[0,1]}|\nabla^{2}u|$ is bounded (see \cite[Remark 2.15]{CH18}). We prove the following $C^{1,1}$ estimate, which confirms what Chen-He suggested. \begin{theorem}\label{Main estimate} Let $(M,g)$ be a compact $n$-dimensional Riemannian manifold. Suppose that $f$ is a positive smooth function on $M\times[0,1]$. For any smooth solution $u$ of (\ref{Chen-He equation}) satisfying \begin{equation*} u(\cdot,t) \in \ti{\mathcal{H}} ~\text{~for $t\in [0,1]$}, \end{equation*} there exists a constant $C$ depending only on $\sup_{M\times[0,1]}|\nabla u|$, $\sup_{M\times[0,1]}|u_{tt}|$, $\sup_{M\times[0,1]}|\Delta u|$, $\sup_{M\times[0,1]}f$, $\sup_{M\times[0,1]}|\nabla(f^{\frac{1}{2}})|$, $\sup_{M\times[0,1]}|\nabla^{2}(f^{\frac{1}{2}})|$, $u_{0}$, $u_{1}$, $a$, $b$ and $(M,g)$, such that \begin{equation}\label{C11 estimate} \sup_{M\times[0,1]}|\nabla^{2}u| \leq C. \end{equation} \end{theorem} Combining this $C^{1,1}$ estimate, \cite[Theorem 1.1]{CH18} and the approximation argument, we obtain the following corollary. \begin{corollary}\label{Corollary} Let $(M,g)$ be a compact $n$-dimensional Riemannian manifold. Suppose that $f$ is a nonnegative function on $M$ such that \begin{equation*} \sup_{M\times[0,1]}\left(f+|(f^{\frac{1}{2}})_{t}|+|\nabla(f^{\frac{1}{2}})|+|f_{tt}|+|\nabla^{2}(f^{\frac{1}{2}})|\right) \leq C \end{equation*} for a constant $C$. Then the Dirichlet problem (\ref{Chen-He equation}) has a $C^{1,1}$ solution. \end{corollary} We note that (\ref{Chen-He equation}) also covers the Gursky-Streets equation when $k=1$ (see \cite{GS16}). Thus, Corollary \ref{Corollary} shows the existence of $C^{1,1}$ solutions to the Gursky-Streets equation ($k=1$). \section{Proof of Theorem \ref{Main estimate}} We use the same notations as in \cite{CH18}. For $r=(r_{0},r_{1},\cdots,r_{n+1})$, we write \begin{equation*} Q(r) = r_{0}r_{1}-\sum_{i=2}^{n+1}r_{i}^{2} \text{~and~} G(r) = \log Q(r). \end{equation*} We denote the first and second derivatives of $Q$ and $G$ by \begin{equation*} Q^{i} = \frac{\partial Q}{\partial r_{i}}, Q^{i,j} = \frac{\partial^{2}Q}{\partial r_{i}\partial r_{j}}, G^{i} = \frac{\partial G}{\partial r_{i}}, G^{i,j} = \frac{\partial^{2}G}{\partial r_{i}\partial r_{j}}. \end{equation*} For any point $x_{0}\in M$. Let $\{e_{i}\}_{i=1}^{n}$ be a local orthonormal frame in a neighborhood of $x_{0}$. In this paper, the subscripts of a function always denote the covariant derivatives. If we write $r=(u_{tt},B_{u},u_{ti})$ and $B_{u}=\Delta u-b|\nabla u|^{2}+a(x)$, then (\ref{Chen-He equation}) can be written as \begin{equation}\label{Chen-He equation 1} Q(r) = Q(u_{tt}, B_{u},u_{ti}) = u_{tt}B_{u}-|\nabla u_{t}|^{2} = f. \end{equation} Since $f>0$ and $u(\cdot,t)\in\ti{\mathcal{H}}$ for $t\in [0,1]$, we have $u_{tt}>0$ and $B_{u}>0$. By \cite[(2.8)]{CH18}, the linearized operator of $Q$ is given by \begin{equation}\label{Definition of dQ} dQ(\psi) = u_{tt}\left(\Delta\psi-2b(\nabla u,\nabla\psi)\right)+B_{u}\psi_{tt}-2(\nabla u_{t},\nabla\psi_{t}), \end{equation} where $(\cdot,\cdot)$ denotes the inner product. Clearly, the equation (\ref{Chen-He equation 1}) is elliptic. Now we are in a position to prove Theorem \ref{Main estimate}. \begin{proof}[Proof of Theorem \ref{Main estimate}] Let $\lambda_{1}(\nabla^{2}u)$ be the largest eigenvalue of $\nabla^{2}u$. It is clear that \begin{equation}\label{Main estimate equation 1} |\nabla^{2}u| \leq C|\Delta u|+C\max\left(\lambda_{1}(\nabla^{2}u),0\right). \end{equation} To prove Theorem \ref{Main estimate}, it suffices to prove $\sup_{M\times[0,1]}\lambda_{1}(\nabla^{2}u)\leq C$. Hence, we consider the following quantity \begin{equation*} H(x,t,\xi) = u_{\xi\xi}+|\nabla u|^{2}+At^{2}, \end{equation*} for $(x,t)\in M\times[0,1]$, $\xi\in T_{x}M$ a unit vector and $A$ a constant to be determined later. Let $(x_{0},t_{0},\xi_{0})$ be the maximum point of $H$. Without loss of generality, we assume that $(x_{0},t_{0})\notin\partial(M\times[0,1])$. Otherwise, by the boundary condition (\ref{Boundary condition 1}), we obtain (\ref{C11 estimate}) directly. We choose a local orthonormal frame $\{e_{i}\}_{i=1}^{n}$ near $x_{0}$ such that \begin{equation*} e_{1}(x_{0}) = \xi_{0}. \end{equation*} In a neighborhood of $(x_{0},t_{0})$, we define a new quantity by \begin{equation*} \ti{H}(x,t) = H(x,t,e_{1}) = u_{11}+|\nabla u|^{2}+At^{2}. \end{equation*} Clearly, $\ti{H}$ still achieves its maximum at $(x_{0},t_{0})$. To prove Theorem \ref{Main estimate}, it suffices to prove $u_{11}(x_{0},t_{0})\leq C$. By the maximum principle and (\ref{Definition of dQ}), at $(x_{0},t_{0})$, we have \begin{equation}\label{Main estimate equation 2} 0 \geq dQ(\ti{H}) = dQ(u_{11})+dQ(|\nabla u|^{2})+2AB_{u}, \end{equation} where $B_{u}=\Delta u-b|\nabla u|^{2}+a(x)$. From now on, all the calculations will be carried out at $(x_{0},t_{0})$. For the first term of (\ref{Main estimate equation 2}), using (\ref{Definition of dQ}), we compute \begin{equation}\label{Main estimate equation 3} dQ(u_{11}) = u_{tt}\left(\Delta(u_{11})-2b(\nabla u,\nabla u_{11})\right)+B_{u}u_{11tt}-2(\nabla u_{t},\nabla u_{11t}). \end{equation} Applying $\nabla_{e_{1}}\nabla_{e_{1}}$ to the equation $G(r)=\log f$ (the logarithm of (\ref{Chen-He equation 1})) and using the concavity of $G$ (see \cite{Donaldson10,CH11,CH18}), we see that \begin{equation}\label{Main estimate equation 8} G^{i}(r_{i})_{11} = -G^{i,j}(r_{i})_{1}(r_{j})_{1}+\frac{f_{11}}{f}-\frac{|f_{1}|^{2}}{f^{2}} \geq \frac{f_{11}}{f}-\frac{|f_{1}|^{2}}{f^{2}}, \end{equation} where $r=(u_{tt},B_{u},\nabla_{i}u_{t})$. To obtain a lower bound for $G^{i}(r_{i})_{11}$, we need the following lemma. \begin{lemma}[Lemma 3.1 of \cite{Blocki03}]\label{Lemma} Let $\Omega$ be a domain in $\mathbf{R}^{n}$ and let $\psi\in C^{1,1}(\ov{\Omega})$ be nonnegative. Then $\sqrt{\psi}\in C^{0,1}(\Omega)$ and \begin{equation*} |(D\sqrt{\psi})(x)| \leq \max\left\{ \frac{|D\psi(x)|}{2\textrm{dist}(x,\partial\Omega)},\frac{1+\sup_{\Omega}\lambda_{\textrm{max}}(D^{2}\psi)}{2} \right\} \end{equation*} for almost all $x\in\Omega$. \end{lemma} Using $\partial M=\emptyset$ and Lemma \ref{Lemma} (taking $\psi=f^{\frac{1}{2}}$), we obtain \begin{equation*} |\nabla f^{\frac{1}{4}}| \leq C|\nabla(f^{\frac{1}{2}})|+C|\nabla^{2}(f^{\frac{1}{2}})|+C, \end{equation*} which implies \begin{equation*} |\nabla f|^{2} \leq Cf^{\frac{3}{2}}. \end{equation*} Combining this with (\ref{Main estimate equation 8}), it is clear that \begin{equation*} G^{i}(r_{i})_{11} \geq \frac{2(f^{\frac{1}{2}})_{11}}{f^{\frac{1}{2}}}-\frac{|f_{1}|^{2}}{2f^{2}} \geq -\frac{2|\nabla^{2}(f^{\frac{1}{2}})|}{f^{\frac{1}{2}}}-\frac{|\nabla f|^{2}}{2f^{2}} \geq -\frac{C}{f^{\frac{1}{2}}}. \end{equation*} Recalling that $G(r)=\log Q(r)$ and $Q(r)=f$ (see (\ref{Chen-He equation 1})), it follows that \begin{equation}\label{Main estimate equation 4} Q^{i}(r_{i})_{11} = Q(r)G^{i}(r_{i})_{11} = fG^{i}(r_{i})_{11} \geq -C\sqrt{f}. \end{equation} By the commutation formula for covariant derivatives, $r=(u_{tt},B_{u},u_{ti})$, $B_{u}=\Delta u-b|\nabla u|^{2}+a(x)$, $u_{tt}>0$ and $b\geq0$, it is clear that \begin{equation}\label{Main estimate equation 5} \begin{split} & Q^{i}(r_{i})_{11} \\ = {} & u_{tt}(B_{u})_{11}+B_{u}u_{tt11}-2\sum_{i=1}^{n}u_{ti}u_{ti11} \\ = {} & u_{tt}\left((\Delta u)_{11}-b(|\nabla u|^{2})_{11}+a_{11}\right)+B_{u}u_{tt11}-2\sum_{i=1}^{n}u_{ti}u_{ti11} \\[2mm] \leq {} & u_{tt}\left(\Delta(u_{11})+C|\nabla^{2}u|\right)\\[1mm] & -bu_{tt}\left(\sum_{i=1}^{n}|u_{i1}|^{2}+2(\nabla u,\nabla u_{11})-C|\nabla u|^{2}\right) \\[1mm] & +u_{tt}a_{11}+B_{u}u_{11tt}-2(\nabla u_{11t},\nabla u_{t})+C|\nabla u_{t}|^{2} \\[3mm] \leq {} & dQ(u_{11})+Cu_{tt}(|\nabla^{2}u|+1)+C|\nabla u_{t}|^{2}, \\[1mm] \end{split} \end{equation} where we used (\ref{Main estimate equation 3}) in the last inequality. Combining (\ref{Main estimate equation 4}) and (\ref{Main estimate equation 5}), we obtain \begin{equation}\label{Main estimate equation 6} dQ(u_{11}) \geq -Cu_{tt}(|\nabla^{2}u|+1)-C|\nabla u_{t}|^{2}-C\sqrt{f}. \end{equation} For the second term of (\ref{Main estimate equation 2}), by \cite[Proposition 2.9]{CH18}, we have \begin{equation}\label{Main estimate equation 11} \begin{split} dQ(|\nabla u|^{2}) = {} & 2u_{tt}\left(\textrm{Ric}(\nabla u,\nabla u)-(\nabla u,\nabla a)\right)+2(\nabla f,\nabla u) \\ & +2u_{tt}|\nabla^{2}u|^{2}+2B_{u}|\nabla u_{t}|^{2}-4\sum_{i,j=1}^{n}u_{ti}u_{tj}u_{ij}. \end{split} \end{equation} For the reader's convenience, we give a proof of (\ref{Main estimate equation 11}) here. Using (\ref{Definition of dQ}), we compute \begin{equation}\label{Main estimate equation 12} \begin{split} dQ(|\nabla u|^{2}) = {} & u_{tt}\left(\Delta(|\nabla u|^{2})-2b(\nabla u,\nabla(|\nabla u|^{2}))\right) \\[2.5mm] & +B_{u}(|\nabla u|^{2})_{tt}-2\left(\nabla u_{t},\nabla(|\nabla u|^{2})_{t}\right) \\[2.5mm] = {} & 2u_{tt}\left(|\nabla^{2}u|^{2}+(\nabla u,\Delta\nabla u)+\textrm{Ric}(\nabla u,\nabla u)\right) \\[2.5mm] & -2bu_{tt}\left(\nabla u,\nabla(|\nabla u|^{2})\right)+2B_{u}(\nabla u,\nabla u_{tt})+2B_{u}|\nabla u_{t}|^{2} \\ & -2\left(\nabla u,\nabla(|\nabla u_{t}|^{2})\right)-4\sum_{i,j=1}^{n}u_{ti}u_{tj}u_{ij}, \end{split} \end{equation} where for the second equality, we used \begin{equation*} \begin{split} \left(\nabla u_{t},\nabla(|\nabla u|^{2})_{t}\right) = {} & 2\sum_{i,j=1}^{n}u_{ti}u_{j}u_{jti}+2\sum_{i,j=1}^{n}u_{ti}u_{ji}u_{jt} \\ = {} & \left(\nabla u,\nabla(|\nabla u_{t}|^{2})\right)+2\sum_{i,j=1}^{n}u_{ti}u_{tj}u_{ij}. \end{split} \end{equation*} Taking derivative of the equation (\ref{Chen-He equation 1}), it is clear that \begin{equation*} u_{tt}\left(\nabla\Delta u-b\nabla(|\nabla u|^{2})+\nabla a\right)+B_{u}\nabla u_{tt}-\nabla(|\nabla u_{t}|^{2}) = \nabla f, \end{equation*} which implies \begin{equation}\label{Main estimate equation 13} \begin{split} 2(\nabla u,\nabla f)-2u_{tt}(\nabla u,\nabla a) = {} & 2u_{tt}(\nabla u,\nabla\Delta u)-2bu_{tt}(\nabla u,\nabla(|\nabla u|^{2})) \\ & +2B_{u}(\nabla u,\nabla u_{tt})-2(\nabla u,\nabla(|\nabla u_{t}|^{2})). \end{split} \end{equation} Combining (\ref{Main estimate equation 12}) with (\ref{Main estimate equation 13}), we obtain (\ref{Main estimate equation 11}). Using (\ref{Main estimate equation 11}) and $u_{tt}>0$, we have \begin{equation}\label{Main estimate equation 9} \begin{split} dQ(|\nabla u|^{2}) \geq {} & -Cu_{tt}-C|\nabla f|+2u_{tt}|\nabla^{2}u|^{2}+2B_{u}|\nabla u_{t}|^{2} \\[2mm] & -4n^{2}|\nabla u_{t}|^{2}|\nabla^{2}u|, \\[1mm] \end{split} \end{equation} Recalling the equation (\ref{Chen-He equation 1}) and $f>0$, we have \begin{equation*} |\nabla u_{t}|=\sqrt{u_{tt}B_{u}-f}\leq\sqrt{u_{tt}B_{u}}, \end{equation*} which implies \begin{equation}\label{Main estimate equation 10} \begin{split} 4n^{2}|\nabla u_{t}|^{2}|\nabla^{2}u| \leq {} & 4n^{2}(\sqrt{u_{tt}}|\nabla^{2}u|)(\sqrt{B_{u}}|\nabla u_{t}|) \\ \leq {} & u_{tt}|\nabla^{2}u|^{2}+4n^{4}B_{u}|\nabla u_{t}|^{2}. \end{split} \end{equation} Combining (\ref{Main estimate equation 9}) and (\ref{Main estimate equation 10}), it follows that \begin{equation}\label{Main estimate equation 7} \begin{split} dQ(|\nabla u|^{2}) \geq {} & -Cu_{tt}-C|\nabla f|+u_{tt}|\nabla^{2}u|^{2}-CB_{u}|\nabla u_{t}|^{2} \\ \geq {} & -Cu_{tt}-Cf^{\frac{1}{2}}|\nabla(f^{\frac{1}{2}})|+u_{tt}|\nabla^{2}u|^{2}-CB_{u}|\nabla u_{t}|^{2} \\ \geq {} & u_{tt}(|\nabla^{2}u|^{2}-C)-C|\nabla u_{t}|^{2}-C\sqrt{f}, \end{split} \end{equation} where we used $B_{u}\leq C$ in the last inequality. Substituting (\ref{Main estimate equation 6}) and (\ref{Main estimate equation 7}) into (\ref{Main estimate equation 2}), at $(x_{0},t_{0})$, we obtain \begin{equation}\label{Main estimate equation 14} 0 \geq u_{tt}(|\nabla^{2}u|^{2}-C|\nabla^{2}u|-C)-C|\nabla u_{t}|^{2}-C\sqrt{f}+2AB_{u}. \end{equation} From the equation (\ref{Chen-He equation 1}) and $|B_{u}|+|u_{tt}|\leq C$, we have \begin{equation}\label{Main estimate equation 15} C|\nabla u_{t}|^{2}+C\sqrt{f} \leq Cu_{tt}B_{u}+C\sqrt{u_{tt}B_{u}} \leq C\sqrt{u_{tt}B_{u}} \leq CB_{u}+Cu_{tt}. \end{equation} Substituting (\ref{Main estimate equation 15}) into (\ref{Main estimate equation 14}), it follows that \begin{equation*} 0 \geq u_{tt}(|\nabla^{2}u|^{2}-C|\nabla^{2}u|-C)+(2A-C)B_{u}. \end{equation*} Since $u_{tt}>0$ and $B_{u}>0$, after choosing $A$ sufficiently large, we obtain $u_{11}(x_{0},t_{0})\leq C$, as desired. \end{proof}
\section{Headline of first appendix section} \end{document} \section{Analysis}\label{sec:analysis} As a starting point of our investigation, in Section~\ref{subsec:level1}, we quantify the overall market share of Bitcoin mining pools by attributing Bitcoin blocks to known mining pools. Next, in Section~\ref{subsec:level2}, we identify individual miners and investigate revenue streams, obtaining a more accurate picture of the distribution of mining rewards within pools. Finally, in Section~\ref{subsec:cross-mining}, we investigate the economic relationships between pools and other actors in the Bitcoin ecosystem. The analysis results in the rest of this paper are based on Bitcoin blocks 0-556400 \xspace(3 Jan. 2009 - 31 Dec. 2018). \input{sections/analysis_level1} \input{sections/analysis_level2} \input{sections/analysis_cross_pool} \subsection{Mining Reward Distribution}\label{subsec:level2} In the previous section, we saw how mining shares are distributed among pools and how they evolved over time. Before we can further investigate revenue streams within and across mining pools, we need to understand how pool operators split the block reward among individual miners and identify as many payout transactions as possible. From now on, we will focus our analysis on three pools (BTC.com, AntPool and ViaBTC) that held the majority of mining power at the beginning of 2018. Since payout distribution schemes also change over time, we further limit our study to a four-week observation period ranging from block 510,000 (2018-02-19) to 514,032 (2018-03-18) when each major pool almost always followed a distinctive and stable payout pattern (see Figure~\ref{fig:patterns}). This allows us to identify payout transactions while reducing the number of false positive transactions that do not represent payments to individual miners. To verify that the identified payout transactions are indeed within reasonable bounds, we compare the amount of BTC paid out via these transactions with the amount of BTC received by the pool in the same period, as described in detail in Section~\ref{subsec:cross-mining}. We identify transactions from mining pools to individual miners using the following, pool-specific heuristics: \begin{figure} \centering \qquad \subfloat[1][BTC.com payout pattern.]{\label{fig:btccom_flow} \includegraphics[width=0.6\textwidth]{figures/flow_BTCcom_example1.png}} \subfloat[2][ViaBTC payout pattern.]{\label{fig:viabtc_flow} \includegraphics[width=0.4\textwidth]{figures/flow_ViaBTC_example2.png}} \subfloat[3][AntPool payout pattern.]{\label{fig:antpool_flow} \includegraphics[width=0.65\textwidth]{figures/flow_AntPool_example1.png}} \caption{Payout patterns observed in the time period between block 510,000 and 514,032. In gray: reward addresses, in red: addresses performing payout transactions, in blue: pool members, in green: change addresses. Rounded squares are coinbases of blocks mined by the pool. The size of the nodes indicates the differences in received BTC per transaction per address. In BTC.com (Figure~\ref{fig:btccom_flow}) payout transactions are performed by one single collector address. In ViaBTC (Figure~\ref{fig:viabtc_flow}), payout transactions are performed by a dozen of addresses (always changing), receiving 10 BTC each from one single reward address. In AntPool (Figure~\ref{fig:antpool_flow}), similarly to BTC.com, we have a payout chain that originates from always the same collector address, but continues with other change addresses used only once as input in payout transactions.}\label{fig:patterns}\end{figure} \begin{itemize} \item \textbf{BTC.com} received block rewards always in the same reward address denoted as $A_1$ in Figure~\ref{fig:btccom_flow} and used one collector address, denoted as $A_2$, for distributing mining rewards to pool members. That address was also used as change address in payout transactions and the payments continue in a chain-like fashion. Supposing that only this address was used to transfer funds to pool members, we selected all its payout transactions within the examined period, each having a large number of outputs (in the order of $10^3$) with relatively low amounts of revenues associated (few mBTC). \item \textbf{AntPool} also collected most of the mined coins using a single address. As in BTC.com, they have always been sent to one collector address responsible for payments ($A_2$ in Figure~\ref{fig:antpool_flow}), but, differently from BTC.com, the chain of payments continues with a series of change addresses that are never reused. A peculiar aspect of these payout transactions is that a significant percentage of them\footnote{The exact percentage and more details are provided in the Section~\ref{subsec:cross-mining}.} has 101 output addresses (change address included). Because of that, in order to identify the change address in a payout transaction, we investigate all output addresses that spent the received amount through another payout transaction with 101 output addresses. In the time period considered, it turns out that the change addresses were always the output with the largest sum of BTC. By following this series of change addresses and payout transactions, we identified several other addresses as being individual miners, i.e., members of the pool. \item \textbf{ViaBTC} followed a payout pattern similar to the one represented in Figure~\ref{fig:viabtc_flow}, where a dozen of addresses always received a sum of exactly 10 BTC, which was then spent in payout transactions, again with a few hundred outputs and small sums of BTC. Since the dozen of addresses was not fixed, we investigate all transactions that received 10 BTC from the ViaBTC reward address. We must report that, in this case, we could not consistently distinguish the pool's change address from the members' addresses. \end{itemize} \subsubsection{Identification of individual miners.} Having identified payout transactions, we can now extract Bitcoin addresses belonging to individual miners. Furthermore, we can partition these addresses into maximal subsets (clusters) that are likely to be controlled by the same actor using the well-known~\cite{reid2011anonymity,ron2013quantitative} and efficient~\cite{Harrigan:2016aa} multiple-input clustering heuristics. The underlying intuition is that if two addresses (i.e.: A and B) are used as inputs in the same transaction while one of these addresses along with another address (i.e: B and C) are used as inputs in another transaction, then the three addresses (A, B and C) must somehow be controlled by the same actor~\cite{meiklejohn2013fistful}, who conducted both transactions and therefore possesses the private keys corresponding to all three addresses. This heuristic fails when CoinJoin transactions~\footnote{\url{https://en.bitcoin.it/wiki/CoinJoin}} are taken into account because they combine payments from different spenders that do not necessarily represent one single entity. Aware of this problem, we filtered those transactions before applying the multiple input heuristics. Table \ref{miners_stats} provides summary statistics for each investigated mining pool: the number of blocks mined by each pool ($N_{B}$), the number of identified payout transactions ($N_{TX}$), the number of addresses belonging to individual miners ($N_{A}$) and the number of identified clusters ($N_{C}$) within each pool. In order to estimate the real-world coverage of our dataset, we compared the mining reward ($BTC_{M}$) associated to mined blocks with the payouts associated to individual miners' addresses ($BTC_{P}$). This shows that we were---within our observation period and provided that payouts happen regularly---able to identify 92\% of the individual miners in BTC.com, 30\% in AntPool and 75\% in ViaBTC. We hypothesize that the low percentage obtained for AntPool lies on the relatively strict filter criterion we applied on its payout pattern (transactions with exactly 101 outputs). We also investigated how often individual miners reused addresses within a pool and found that the median address reuse $\mu$ is much higher in BTC.com than in AntPool and ViaBTC. When we normalize the median address reuse by the number of identified payout addresses $\dfrac{\mu}{N_A}$, we see that AntPool is the outlier, which could mean that its members are more careful about privacy or are changing their payout addresses at a faster rate compared to the other pools. The fact that the number of blocks mined is greater than the number of payments we detected can be due to two reasons: pool managers distribute the mining rewards not at every mined block, but within a longer time period, combining payments to minimize the transaction fees and, as already noted above, because we didn't manage to find all payout transactions performed by pools. \setlength{\tabcolsep}{4pt} \begin{table} \centering \caption{Statistics of retrieved data within the observed period. $N_{B}$: number of blocks mined by the pool, $N_{TX}$: number of identified payout transactions, $N_{A}$: number of identified members' addresses, $N_{C}$: number of identified clusters, $BTC_{M}$: BTC mined by the pool, $BTC_{P}$: BTC paid to pool members (addresses), $\mu$: median value of address reuse.}\label{miners_stats} \begin{tabular}{@{}lrrrrrrrrrr@{}} \toprule Pool Name & $N_{B}$ & $N_{TX}$ & $N_{A}$ & $N_{C}$ & $BTC_{M}$ & $BTC_{P}$ & $\dfrac{BTC_{P}}{BTC_{M}}$ & $\mu$ & $\dfrac{\mu}{N_A}$ \\ \midrule BTC.com &1,020 &225 &20,444 &8,900 &13,059 &12,057 &92\% &20 &9.8 $\times10^{-4}$ \\ AntPool &617 &408 &14,166 &5,082 &7,887 &2,333 &30\% &2 &1.4 $\times10^{-4}$\\ ViaBTC &457 &104 &7,171 &3,121 &5,841 &4,284 &75\% &5 &7.0 $\times10^{-4}$\\ \bottomrule \end{tabular} \end{table} \subsubsection{Centralization of mining shares within pools.} Previously, in Section~\ref{subsec:level1}, we saw that the mining shares are centralized among a relatively small number of pools. However, little is known about the centralization of mining shares inside each pool. We gain insight into pool centralization by looking at the distribution of a pool's mining shares to identified clusters ($N_{C}$), which represent actors within each pool. Figure~\ref{fig:pool_centralization} shows the cumulative distribution of mining shares among members (clusters) for each pool. In order to investigate how the internal mining distribution changed over time, we expand our dataset (blocks 510,000 to 514,032) by payout transactions for BTC.com from block 550,000 (2018-11-14) to 554,032 (2018-12-16). We chose BTC.com because its payout transactions can easily be identified, as discussed before. In Figure~\ref{fig:pool_centralization} the blue and light-blue lines show the cumulative sum of BTC.com for both periods and it can be observed that the distribution of mining shares within that pool remained relatively stable over time. Although our dataset covers just a fraction of Bitcoin's overall mining activity, we notice that 50\% of the mining power in ViaBTC is controlled by 7 clusters, compared to 20 clusters for BTC.com and 15 clusters for AntPool. Despite this, if we compute the Gini coefficient on these shares we get 0.945 for BTC.com, 0.942 for ViaBTC and 0.938 for AntPool, which indicates that the distribution of mining shares is highly centralized within all investigated pools. We note and discuss later that clusters do not necessarily represent individuals but also larger actors such as exchanges or wallet providers. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figures/pool_centralization.pdf} \caption{Cumulative sum of mining shares over clusters (actors) for each pool (log-scale). Black-dotted lines highlight the number of clusters controlling 50\% of each pool.} \label{fig:pool_centralization} \end{figure} \subsection{Miners in the Bitcoin Ecosystem}\label{subsec:cross-mining} Having identified individual miners within pools by their Bitcoin addresses and cluster affiliation, we can now turn our focus on individual miners participating in several pools and lay out their economic relationships to other actors in the Bitcoin ecosystem within our observation period. \subsubsection{Cross-Pool Mining} If a single Bitcoin address receives payouts from several mining pools, we can assume that the individual miner holding that address conducts cross-pool mining. Table~\ref{table:crosspool_addresses} shows the number of addresses involved in cross-pool mining for each pair of pools, their respective clusters, as well as the total amount of BTC received by these entities from the pools. We notice that the BTC.com-AntPool pair is the one with the highest overlaps (addresses in common, clusters in common and BTC received), which might be a result of the common mining pool ownership as discussed in Section~\ref{subsec:level1}. However, it must be noted that these figures only represent the fraction of addresses that were reused across clusters; individual miners creating separate addresses for each of their mining activities are immune to multiple-input clustering and therefore not represented in this table. \begin{table} \centering \caption{Cross-pool mining at address level in the time period between block 510,000 and 514,032 ($\sim$ 4 weeks), including how much BTC from each pool has been received by those common addresses.} \label{table:crosspool_addresses} \begin{tabular}{@{}llllll@{}} \toprule Pool 1 & Pool 2 & \begin{tabular}[c]{@{}l@{}}Addresses \\ in common\end{tabular} & \begin{tabular}[c]{@{}l@{}}Clusters \\ in common\end{tabular} & \begin{tabular}[c]{@{}l@{}}BTC from\\ Pool 1\end{tabular} & \begin{tabular}[c]{@{}l@{}}BTC from\\ Pool 2\end{tabular} \\ \midrule BTC.com & AntPool & 537 (1.58\%) & 434 (3.2\%) & 664.3 (5.5\%) & 176.8 (7.6\%) \\ AntPool & ViaBTC & 115 (0.54\%) & 196 (2.4\%) & 11.1 (0.47\%) & 102.6 (2.4\%) \\ ViaBTC & BTC.com & 250 (0.91\%) & 267 (2.3\%) & 175.4 (4.1\%) & 174.1 (1.4\%) \\ \bottomrule \end{tabular} \end{table} Next, we uncover the actors behind the clusters reported in Table~\ref{table:crosspool_addresses} using publicly available tags from blockchain.info and walletexplorer.com. In Table~\ref{table:crosspool_entities} we report the main actors that have been receiving shares of block rewards from the three pools analyzed. For each entity-pool pair, the table shows the amount of BTC received by the entity from the pool, its share within the pool and the number of addresses associated with the cluster representing that actor. The majority of actors is \emph{Unknown}, which means that we could not find tags attributing the associated addresses to actors. Excluding this last set which together accounts for more than 65\% of the shares in each pool, we see that the mining rewards are distributed to cryptocurrency exchanges (E) and wallet providers (W). The top exchanges listed in Table~\ref{table:crosspool_entities} (Bixin, Huobi.com) hold, in combination, relatively strong mining shares within each pool (BTC.com: 20,51\%, AntPool: 16,43\%, ViaBTC: 24.02\%) and can, therefore, be regarded as major forces pushing the centralization of mining shares within pools, as reported before in Section~\ref{subsec:level2}. However, it must be noted that exchanges typically do not participate in mining activities themselves but host wallets of individual miners. Nevertheless, we point out that exchanges and wallet providers are usually operated by a single physical or legal entity and the ownership of assets is often unclear unless users withdraw cryptocurrency units into self-controlled hot or cold wallets~\cite{anderson2018redux}. Furthermore, we can also observe a geographical co-location of mining pool operators and payout services: the same top exchanges have (or had\footnote{\url{https://www.forbes.com/sites/kenrapoza/2017/11/02/cryptocurrency-exchanges-officially-dead-in-china/\#48116a6a2a83}}) strong ties with China, which is where the three observed mining pool operators are located. \setlength{\tabcolsep}{2pt} \begin{table} \caption{Cross-pool mining at a cluster level in the time period between block 510,000 and 514,032 ($\sim$ 4 weeks). For each unknown entity or known actor, it is shown the amount of BTC received by each pool, its share in the pool, the number of addresses linked to it and the type of service it offers (W: wallet provider, E: cryptocurrency exchange service, P: known mining pool, M: unknown mining entity).}\label{table:crosspool_entities} \scalebox{0.73}{ \begin{tabular}{@{}llrrrrrrrrrr@{}} \toprule & & \multicolumn{3}{c}{BTC.com} & \multicolumn{3}{c}{AntPool} & \multicolumn{3}{c}{ViaBTC} & \\ \cmidrule(lr){3-5} \cmidrule(lr){6-8} \cmidrule(lr){9-11} Entity/Actor & Service & BTC & \%BTC & \#Addr. & BTC & \%BTC & \#Addr. & BTC & \%BTC & \#Addr. & Total BTC \\ \midrule Unknown & ? & 8930.39 & 74.07 & 13286 & 1682.25 & 72.09 & 8888 & 2877.02 & 67.17 & 4845 & 13489.67 \\ Bixin & W+E+P & 1663.75 & 13.80 & 1061 & 241.28 & 10.34 & 546 & 795.36 & 18.57 & 476 & 2700.39 \\ Huobi.com & E & 808.64 & 6.71 & 964 & 142.04 & 6.09 & 759 & 225.50 & 5.27 & 322 & 1176.19 \\ Bittrex.com & E & 83.71 & 0.69 & 348 & 29.56 & 1.27 & 251 & 43.36 & 1.01 & 177 & 156.63 \\ Xapo.com & W & 26.96 & 0.22 & 94 & 70.75 & 3.03 & 64 & 5.79 & 0.14 & 33 & 103.50 \\ Poloniex.com & E & 42.65 & 0.35 & 381 & 11.52 & 0.49 & 268 & 19.97 & 0.47 & 139 & 74.15 \\ Luno.com & W+E & 36.59 & 0.30 & 258 & 4.06 & 0.17 & 104 & 4.39 & 0.10 & 60 & 45.04 \\ Bitstamp.net & E & 8.94 & 0.07 & 57 & 3.55 & 0.15 & 38 & 3.91 & 0.09 & 22 & 16.39 \\ Cryptonator.com & W+E & 5.75 & 0.05 & 80 & 0.70 & 0.03 & 41 & 2.70 & 0.06 & 33 & 9.15 \\ BitoEX.com & W & 5.09 & 0.04 & 23 & 1.12 & 0.05 & 35 & 2.19 & 0.05 & 4 & 8.39 \\ CoinHako.com & W+E & 3.59 & 0.03 & 4 & 0.29 & 0.01 & 3 & 0.24 & 0.01 & 2 & 4.12 \\ Bitcoin.de & E & 1.86 & 0.02 & 26 & 0.76 & 0.03 & 13 & 0.58 & 0.01 & 7 & 3.19 \\ \bottomrule \end{tabular} } \end{table} \subsubsection{Economic relationships in the Bitcoin ecosystem} Having identified and partly de-anonymized the actors that were mining with the three analyzed pools within our observation period, we can now illustrate the economic relationships among mining pools and other actors in the Bitcoin ecosystem. Figure~\ref{fig:payment_graph} shows the flow of mining rewards from mining pools to the clusters representing actors within pools. We selected the first 400 clusters\footnote{Sorted by received BTC. This number covers at least 85\% of the mining shares for each pool.} from each pool and grouped together clusters representing unknown entities in one node (1,118 in total). From our analysis, it is clear that the vast majority of mined coins go to unknown entities that we grouped into one \textit{Unknown} entry in Table~\ref{table:crosspool_entities}. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{figures/payments_graph_400.pdf} \caption{Flow of mining rewards from mining pools to their members. The strength of the arcs is scaled by payment volume, while the node size depends on the total amount of received (mined) BTC. In black: wallet services and exchanges, in gray: unknown entities. This plot covers the top 400 clusters from each mining pool sorted by received BTC. \textit{Unknown} entities (1118) were combined into one node.} \label{fig:payment_graph} \end{figure} \subsubsection{Inspecting the Unknown} We now focus on the 10 largest entities within the \emph{Unknown} by inspecting basic statistical properties of the underlying address clusters. In Table~\ref{table:crosspool_unknown_entities}, we report for each cluster its internal ID (assigned by GraphSense), the amount of BTC received by each cluster from each pool, the total revenues from mining, as well as the total amount of BTC received (as of April, 23rd 2018). When further inspecting the total number of addresses in those clusters we can observe that all of them consist of more than 30,000 addresses and one of them (cluster 324067473 with 11,534,706 addresses) is a so-called super-cluster~\cite{Harrigan:2016aa}. 9 clusters have been receiving a relatively large number of transactions for more than a year, 2 clusters (cluster 327539880 and 324067473) for more than 4 years. While not being verifiable without having attribution data, those statistics suggest that the ten largest unknown mining clusters also represent untagged exchange services or wallet providers. \begin{table}[] \centering \caption{Cross-pool mining of the ten largest unknown mining clusters sorted by total amount of BTC received by the three pools in the time period between block 510,000 and 514,032 ($\sim$ 4 weeks).}\label{table:crosspool_unknown_entities} \scalebox{0.9}{ \begin{tabular}{@{}rrrrrrrrr@{}} \toprule & \multicolumn{2}{c}{BTC.com} & \multicolumn{2}{c}{AntPool} & \multicolumn{2}{c}{ViaBTC} & & \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} Cluster ID & \multicolumn{1}{c}{BTC} & \multicolumn{1}{c}{\%BTC} & \multicolumn{1}{c}{BTC} & \multicolumn{1}{c}{\%BTC} & \multicolumn{1}{c}{BTC} & \multicolumn{1}{c}{\%BTC} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Mined\\BTC\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Total BTC\\Received\end{tabular}} \\ \midrule 327539880 & 409.34 & 3.40 & 122.10 & 5.23 & 258.55 & 6.04 & 789.99 & 521,939 \\ 324067473 & 295.02 & 2.45 & 90.44 & 3.88 & 189.15 & 4.42 & 574.61 & 3,756,583 \\ 350822682 & 244.77 & 2.03 & 9.29 & 0.40 & 182.92 & 4.27 & 436.98 & 110,566 \\ 350824718 & 244.67 & 2.03 & 65.65 & 2.81 & 46.20 & 1.08 & 356.52 & 112,680 \\ 333653856 & 153.02 & 1.27 & 54.02 & 2.31 & 83.60 & 1.95 & 290.63 & 130,680 \\ 372448840 & 181.10 & 1.50 & 33.64 & 1.44 & 55.73 & 1.30 & 270.48 & 882,713 \\ 234254928 & 93.31 & 0.77 & 27.18 & 1.16 & 58.68 & 1.37 & 179.17 & 905,101 \\ 249123673 & 15.63 & 0.13 & 0.40 & 0.02 & 107.23 & 2.50 & 123.26 & 6,812,938 \\ 349962609 & 8.67 & 0.07 & 39.01 & 1.67 & 19.74 & 0.46 & 67.41 & 1,173,892 \\ 311503667 & 38.94 & 0.32 & 7.47 & 0.32 & 7.77 & 0.18 & 54.18 & 486,338 \\ \bottomrule \end{tabular} } \end{table} \section{Background}\label{sec:background} In this section, we briefly introduce central notions used throughout this paper. While we do not attempt to give a complete introduction to the underlying technology of Bitcoin and permissionless cryptocurrencies, we direct the reader to existing literature, such as~\cite{nakamoto2008bitcoin,bonneau2015research,tschorsch2015bitcoin,narayanan2016bitcoin}. \subsection{Bitcoin Mining and Mining Pools} One of the key innovations of Bitcoin's Nakamoto consensus is the successful implementation of a random leader election process in a dynamically changing set of pseudonymous participants. Thereby, each node taking part in the consensus mechanism is required to solve a memoryless and non-invertible cryptographic puzzle, i.e., provide a Proof-of-Work (PoW). In Bitcoin, the latter is represented by a partial-preimage attack on the SHA-256 algorithm, whereby participants must brute-force the hash over the transactions defining the new state of the system and some additional information, including the reference to the last seen block and a random nonce. To become leader, participants, also referred to as \emph{miners}, must provide a hash which is not only valid but also fulfills the network's current difficulty requirements, i.e., lies below a specified target value. The difficulty is adjusted every 2016 blocks (est. two weeks), such that the average interval at which PoW solutions are found is approximately 10 minutes. Due to the structure of PoW, the time between each two found blocks in Bitcoin is exponentially distributed, yielding the number of found blocks per time period a Poisson process. The rate parameter is thereby defined by the ratio of PoW difficulty to the overall mining power present in the network. As such, \emph{solo miners} are expected to face a high variance of payouts, depending on their share of the overall mining power. Consequently, miners collude to form so-called \emph{mining pools}, where participants work together towards finding the next block and share rewards based on each miner's contribution and according to some reward distribution scheme. As such, since the appearance of the first mining pools in Bitcoin, the fraction of blocks generated by solo miners has steadily declined and is negligible today. The rising competition among mining pools has been shown to motivate adversarial strategies, such as denial-of-service attacks~\cite{johnson2014game,laszka2015bitcoin}. Other, less detectable, attack techniques such as \emph{block withholding}~\cite{eyal2015miner,luu2015power,courtois2014subversive,rosenfeld2011analysis} and \emph{spy mining}~\cite{sompolinsky2018bitcoin,eyal2014majority} have been observed and studied. A recent economic model suggests miners will refrain from attacking the underlying blockchain as long as the revenue received from honest behavior exceeds the one-off benefit yielded by attacks~\cite{Budish:2018aa}. \subsection{Reward Distribution in Mining Pools}\label{sec:background_payout_schemes} At the time of writing, major pools direct their mining revenue to publicly known addresses, specified in the output of a block's coinbase transaction. We refer to this address as \textit{reward address}. Furthermore, mining pools have been observed to usually reveal their identity by adding human-readable text or \emph{markers} to the coinbase. The reward address, often used for prolonged periods by a pool, can thereby be used (i) to directly distribute block rewards to pool members, which can be identified by a set of \textit{miners' addresses}, or (ii) to send the newly minted coins to a \textit{collector address}, which is used to hold larger amounts of BTC so as to perform payments to pool members when necessary. We refer to the transaction by which pool members receive their mining reward share as \textit{payout transaction}. Such transactions are often characterized by a large number of output addresses, each receiving a reward usually in the order of a few mBTC, as our findings show in Section~\ref{subsec:level2}. While the exact structures of payout schemes vary from mining pool to mining pool and also with time, the general principle is the same: mining pool operators distribute the template for the next to-be-generated block to participating miners and require the latter to submit PoW solutions meeting some minimal difficulty. These ``partial'' solutions are referred to as \emph{shares} and serve as a measure of each miner's contribution. Using this information, the mining pool operators distribute the revenue among participating miners, based on some pre-defined scheme. An overview of the main classes of reward distribution schemes employed by mining pools, as well as discussions on fairness, is provided in~\cite{rosenfeld2011analysis,schrijvers2016incentive}. \section{Discussion}\label{sec:discussion} We believe that the empirical analysis presented in this paper led to novel insights into the structure and behavior of Bitcoin mining pools. Our longitudinal analysis of mining pool market shares, which is based on a simple yet effective attribution scheme, outlines the need for open attribution data and agreed upon reproducible methodologies of how this attribution data is applied. Moreover, it highlights conflicts and gray spots in block attributions not visible in aggregated pie charts and confirms centralization tendencies in Bitcoin mining, as pointed out by previous research. It also confirms concerns targeting the recent domination of three to four mining pools, which could surpass the 50\% security threshold when combining their mining power. Additionally, we were able to trace payout transactions within a one-month observation period, found addresses belonging to individual miners and could group them into clusters representing actors in the Bitcoin ecosystem. We showed that the distribution of shares within analyzed mining pools is also highly centralized and that the majority of mining rewards distributed by a pool is received by a relatively small set of actors. We also saw that individual miners conduct cross-pool mining and that geographically co-located exchange services and wallet services hold large shares within pools and push towards centralization within mining pools. Plus, our dataset is openly available and our method is reproducible and could be extended over additional mining pools, longer observation periods and other cryptocurrencies. We are well aware that our approach has a number of limitations. First, we focus our in-depth analysis of mining pools on a restricted set of pools (the top three pools) and an observation period of one month. For AntPool in particular, we selected only payout transactions following the pattern 101 outputs, even though, by manual investigation we noticed that the chain of payments sometimes continued with a different amount of outputs. Improvements in this aspect are possible and are part of our future works. Other, non-investigated smaller pools could be less centralized and follow other payout patterns. We address this limitation, by binding our results to specific, in our opinion systemically relevant, mining pools. This ensures, that we are not claiming that we assess the entire set of mining pools in Bitcoin. Second, we know that the multiple-input heuristics we used for clustering addresses could lead to false positives (unrelated addresses are joined together) when transactions are tunneled through Mixing services (c.f.,~\cite{MoeserB2016JoinMeOnAMarket}) and false negatives (mining addresses do not belong to any cluster) when individual miners create new addresses for each single mining activity. However, we are confident of having avoided false positives by applying strict filtering criteria on mining pool payout patterns and ignoring CoinJoin transactions. Third, our approach is limited by the extent and quality of the attribution data (tags) available. Without this information, address clusters remain anonymous and inferences about their real-world nature are impossible. Nevertheless, we believe that such data will increasingly become available in the near future with the growing popularity of cryptocurrency analytics tools. Overall, our paper strengthens a line of recent research and community discussions that suggest skepticism and scrutiny on the decentralization of control in cryptocurrencies, which is often considered being the key feature distinguishing them from fiat currencies. To plausibly uphold this claim, mining pools as well as big players in the ecosystem like exchanges, have to find the sweet spot between acting more transparently to encourage public auditability and the privacy demands of their users. \section{Conclusions}\label{sec:conclusions} We present an empirical analysis of the distribution of mining shares within and across mining pools. Our investigation on the longitudinal evolution of mining pools confirms centralization among a relatively small number of mining pools, three to four controlling more than 50\% of the hash rate. Further inspection of the three largest mining pools has shown centralization tendencies also within those pools, where in each pool less than 18 pools members receive more than 50\% of the identified pool payouts. Examination of payments between those mining pools and actors representing individual miners has revealed cross-pool mining activity (both at address and cluster level), economic relationships between operators of geographically co-located mining pools, exchange services and wallet providers. Overall, our research supports previous findings and scrutinizes the decentralization property of cryptocurrencies. \section{Introduction}\label{sec:introduction} The distribution of mining power or \emph{hash rate} can be seen as a key indicator for the market shares of mining pools and represents a core security parameter in Bitcoin and other cryptocurrencies relying on Nakamoto consensu ~\cite{nakamoto2008bitcoin,garay2016bitcoin,rosenfeld2014doublespending,sompolinsky2016bitcoin}. An attacker in control of the majority of the network's hash rate is capable of manipulating the system at will, i.e., executing double spending attacks, prohibiting transactions from entering the blockchain and effectively rewriting the transaction history within computational limits. However, concentrations even below this barrier open up the possibility for effective selfish-mining attacks and its variants~\cite{eyal2014majority,nayak2015stubborn,sapirshtein2015optimal,gervais2016security}. Since the hash rate distribution is not directly observable in the public blockchain, it is currently estimated retroactively by attributing mined blocks to known mining pools and counting their relative frequency. Statistics published by popular analytics platforms\footnote{Including but not limited to~\url{https://blockchain.info/charts}} indicate that the overall mining power is concentrated among a relatively small number of pools, with BTC.com, ViaBTC and AntPool holding―or being close to hold―the majority, but none of them exceeding the 50\% limit. However, large miners and pools could have an incentive to conceal or obfuscate the actual extent of their mining power. If successful, this would allow to maximize market shares and profits without visibly harming the security and credibility of the underlying system, ideally maintaining the stability of revenue streams. Despite its relevance for security considerations, measurement studies on the level of mining-power centralization are still scarce. Recently, a study of miner centralization on the P2P network layer by Gencer et. al~\cite{gencer2018decentralization} found that over 50\% of the mining power has exclusively been shared by eight miners in Bitcoin and five miners in Ethereum for prolonged periods. In~\cite{ozisik2017estimation}, two novel methods to accurately estimate network hash rates are proposed, however, without providing empirical results. Judmayer et al. perform an analysis of mining power distribution in merge-mined cryptocurrencies, showing the latter exhibits vulnerabilities to centralization by large mining pools~\cite{judmayer2017merged}. A series of blog posts on the distribution of Bitcoin mining shares covering the period between early 2012 and late 2016~\cite{online:organofcorti} are consistent with the results reported by Gencer et. al~\cite{gencer2018decentralization}.\newline \noindent\textbf{Contributions.} In this paper we expand on existing research, aiming at shedding light on the distribution of mining rewards in Bitcoin. Specifically, we conduct an empirical analysis of mined blocks and mining reward payout patterns to present a clearer picture on how mining pools operate and how the mining reward assigned to operators is further distributed to individual miners enabling the analysis of monetary flows and economic relationships. We leverage attribution techniques described in~\cite{judmayer2017merged}, improve their accuracy by incorporating information from existing online analysis platforms~\cite{online:blockchaininfo,online:blocktrail} and apply multiple-input clustering~\cite{haslhofer2016bitcoin,github:graphsense} for identifying address clusters that are likely to be controlled by the same actor. Summarizing, our contributions are as follows: \begin{enumerate} \item We combined several sources of miner and pool attribution data and present a simple yet effective method for attributing blocks to mining pools. Using this method, we were able to attribute approximately 35K blocks more than \emph{blockchain.info} and analyzed the longitudinal evolution of the mining pool market share distribution in Bitcoin over a 5-year period. \item We investigate payout patterns of three mining pools that together exceeded the 50\% threshold (between the end of 2017 and mid-2018) and analyze how mining rewards are distributed from pools to individual miners within a four-week observation period. Our results show that in each of the three pools, a small number of address clusters ($\leq 20$) receives over 50\% of all payouts, suggesting a relatively strong centralization. \item We examine economic relationships of miners, reveal cross-pool mining activities and find that geographically co-located exchange services and wallet providers are major receivers of minted coins. \item We publicly release our findings on addresses, pools and mining entities, as well as the code to reproduce our results and improve the current knowledge about Bitcoin mining and involved actors at \url{https://github.com/MatteoRomiti/Deep_Dive_BTC_Mining_Pools}. \end{enumerate} \noindent\textbf{Outline.} First, in Section~\ref{sec:background}, we introduce the necessary background on Bitcoin mining and mining pools. Then we go on and present our empirical analysis and the results we obtained in Section~\ref{sec:analysis}. We finish by discussing our findings in Section~\ref{sec:discussion} and conclude in Section~\ref{sec:conclusions}. \section{Acknowledgments}\label{sec:acknowledgments} This work was funded by the Austrian Research Promotion Agency (FFG) through the projects VIRTCRIME, SESC and PR4DLT (Project IDs: 860672, 858561 and 864738), the competence center SBA-K1 (funded by COMET) and Blockchain.com. \subsection{Market Shares of Bitcoin Mining Pools}\label{subsec:level1} The first step of our analysis consists of attributing mined blocks to mining entities, i.e., pools and individual miners. Thereby, we leverage two main information sources: coinbase markers and coinbase transaction output addresses. We have seen that the \emph{coinbase field} of the coinbase transaction, is usually used by miners to place so-called \emph{coinbase markers}~\cite{judmayer2017merged} --- also called \emph{coinbase tags} or \emph{signatures} --- in order to claim blocks publicly. This enables mining pool members to monitor their respective pool activity and allows estimating the overall mining power of pools. Providing this information is also relevant for miners to publicly show that they support certain forks by setting the respective version bits using a signaling mechanism\footnote{\url{https://github.com/bitcoin/bips/blob/master/bip-0135.mediawiki}}. As already noted in~\cite{judmayer2017merged}, this information is not cryptographically secured and hence can easily be faked by the miner of a block. Therefore, it is reasonable to rely on the \emph{reward address} as a primary data source for block attribution, as modifications to the latter, e.g. to impersonate a different mining entity, result in the loss of the associated mining reward. In our observation period, only $ \percentMultiCBOut\% $ of all blocks exhibited more than one coinbase output address, from which most occurred in the early days of Bitcoin. We note that coinbase transactions, which have more than one output address, can theoretically contain ordinary payouts performed by the pool and therefore might not be directly mappable to one single entity directly. For example, P2Pool and Eligius use the coinbase transaction to payout shares of the block reward without an intermediary transaction. Although there are several online resources (e.g., \emph{blockchain.info}, \emph{btc.com}, \emph{blocktrail.com}, etc.) which provide aggregated charts regarding the shares miners hold in various cryptocurrencies, their exact methodology of how they attribute blocks to individual miners/pools often remains undisclosed, as for example with \emph{blocktrail.com}. If underlying attribution/mapping data is publicly available, it sometimes is outdated, as it is the case for \emph{blockchain.info}, or diverges between services, as it is the case for \emph{btc.com}. Data provided by \emph{btc.com} was forked from the information provided by \emph{blockchain.info} but tends to use different miner/pool names which makes unification of mapping information a highly manual task. To map blocks to mining entities we retrieve mapping information from the following sources and merge them into a single file: \begin{itemize} \item the official \emph{Blockchain.info} Github repository\footnote{\srcBlockchainInfo} that, according to its documentation, is the basis for this visualization \url{https://www.blockchain.com/pools}, \item the official \emph{BTC.com} Github repository\footnote{\srcBtcCom} that, according to its documentation, is the basis for this visualization \url{https://btc.com/stats/pool} \item mappings performed by \emph{Blocktrail.com}\footnote{\url{https://www.blocktrail.com/api}}. However, we do not have precise information about how Blocktrail attributed blocks and this API was closed while working on this paper. Therefore, we don't have Blocktrail mapping information for blocks after $ 514239 $. \item manually retrieved coinbase markers from coinbase transactions, \item multiple-input cluster information obtained from the GraphSense tool~\cite{github:graphsense}. \end{itemize} Given these mappings, our methodology for attributing blocks to mining entities is depicted in Figure~\ref{fig:attribution}. First, we unify the names used by different sources to indicate equality of entities in our mapping file. This ensures that each mining entity can be uniquely identified, regardless of the source used. For each block, we first check whether some mining entity is associated with the coinbase output address. Recall, we consider this the most reliable source of information for attribution in a block, as providing wrong data here would result in the miner giving away funds. If the reward address cannot be identified, e.g in cases where the reward address is not yet attributed to a mining entity or the coinbase transaction has multiple outputs, we proceed to check the coinbase marker. If a match is found and there is only one coinbase output address, this address is added to the list of reward addresses of the corresponding mining entity. Moreover, the respective mining entity is added to the list of attributions for this block, alongside the source(s) used for the attribution. \begin{figure} \centering \includegraphics[width=1\columnwidth]{figures/attribution_horizontalized} \caption{High level flow chart representing our attribution scheme.} \label{fig:attribution} \end{figure} If all sources attribute the same entity to a block, we call this mapping \emph{unique}. Only in $ \numberConflicts $ out of $ 556400 $ blocks ($ \percentConflicts\% $), we encountered conflicts, i.e., the block was attributed to two or more mining entities. Conflicts mostly occurred for blocks that were attributed to \emph{BTC.TOP} and \emph{CANOE} at the same time, as well as Waterhole and BTC.com, or Tang Pool and Bixin. Despite being just a small fraction of blocks, it shows that publicly available resources for Bitcoin analysis are not always precise and accurate. Sometimes even the information coming from a single source is inconsistent and would attribute a single block to different miners. For example, only using the pool information from \emph{Blockchain.info} blocks $ 482059 $ and $ 482221 $ are attributed to \emph{Waterhole} (based on address) as well as \emph{BTC.com} (based on coinbase). Other examples can be found in BTC pool information, which is used in our attribution scheme: block $ 524045 $, for instance, is attributed to \emph{BitcoinRussia} and \emph{Bitcoin-Ukraine}; $ 16 $ other blocks ($476422, 478113, 482242, 482614, 483576, \ldots$) are attributed to \emph{CANOE} and \emph{BTC.TOP}. Overall, with our method, we attributed more blocks than any other source considered alone. As an example, if we consider blocks from \attrStart~to \attrEnd~we attributed \attr~blocks, which is \attrMoreBlockchainInfo~($\sim$ \attrMoreBlockchainInfoBTC~BTC) more than it would be possible by only using pool information from \emph{blockchain.info}. A comparison of different attribution data sources is shown in Table~\ref{tbl:attr}, while the conflicts in our final attributed dataset are distributed as shown in Table~\ref{tbl:conflicts}. \begin{table} \centering \caption{Comparison of different attribution data sources for miners and pools from block 0 to $ 556400 $. The column \emph{direct use} refers to using the provided mapping information directly on every block without adding newly identified addresses by markers. The column \emph{our attribution} refers to the procedure described in Figure \ref{fig:attribution}. For \emph{blocktrail.com} we only have a list of blocks up to $ 514,239 $ with the respective associated attribution, therefore there cannot be any conflicts. \emph{Combined} contains all previously mentioned sources as well as GraphSense and manually identified coinbase markers.} \label{tbl:attr} \begin{tabular}{@{}lllll@{}} \toprule Source & Direct use & \begin{tabular}[c]{@{}l@{}}Direct use \\ conflicts\end{tabular} & Our attribution & \begin{tabular}[c]{@{}l@{}}Our attribution \\ conflicts\end{tabular} \\ \midrule blockchain.info & 334,416 & 2 & 339,597 & 5 \\ btc.com & 337,629 & 3 & 342,619 & 22 \\ blocktrail.com & 324,720 & - & - & - \\ combined & - & - & 375,381 & 684 \\ \bottomrule \end{tabular} \vspace{0.2cm} \end{table} \begin{table} \centering \caption{Conflicts in final attribution. The last conflict between F2Pool and BTCC Pool is probably due to a misattribution by \emph{blocktrail.com} as all other sources attribute the respective block to F2Pool.}\label{tbl:conflicts} \begin{tabular}{@{}llll@{}} \toprule Miner 1 & Miner 2 & Number of conflicts & Example blocks \\ \midrule BTC.TOP & CANOE & 338 & 516210, 516275, \ldots \\ Bixin & TangPool & 142 & 339210, 339284, \ldots \\ BTC.com & Waterhole & 113 & 478230, 478328, \ldots \\ BTC.TOP & WAYI.CN & 81 & 509073, 509100, \ldots \\ ViaBTC & Okminer & 5 & 510279, 523217, \\ Yourbtc & OzCoin & 3 & 159846, 159929, 159964 \\ BitcoinRussia & Bitcoin-Ukraine & 1 & 524045 \\ F2Pool & BTCC Pool & 1 & 482886 \\ \bottomrule \end{tabular} \vspace{0.2cm} \end{table} \subsubsection{Evolution of mining pool market shares.} Having attributed blocks to mining pools, we can now analyze how their shares have evolved over time. Figure~\ref{fig:stack_miners} shows the evolution of mining shares between \stackplotStart~and \stackplotEnd, aggregated in bins spanning $ 2,016 $ blocks, which corresponds to Bitcoin's difficulty adjustment period. The gray region represents small known pools or miners for which the sum of all its percentages is below $ 4\% $. It also indicates the $ 50\% $ mining power threshold (red line) and the Gini coefficient (black line) as a measure of market share distribution (between 0 and 1). The higher the Gini value, the stronger the inequality among market's participants. The evolution of the Gini coefficient shows peaks of mining power centralization around June 2014, April 2016 and January 2018, almost in a cyclical fashion (roughly a 22-month period). \begin{figure} \hspace*{-1.7cm}\includegraphics[width=1.2\textwidth]{figures/stackplot_periodLen_2016secs_end_554399_numPeriods_138_threshold_4_groupBy_miner.pdf} \caption{Evolution of mining pool market shares in Bitcoin between \stackplotStart~and \stackplotEnd. The red line indicates the $ 50\% $ security threshold, while the black line is the Gini coefficient as a measure of market share distribution.}\label{fig:stack_miners} \end{figure} Regarding the evolution of mining pools' market shares, we can observe that the distribution has changed over time. Pools dominating mining now (BTC.com, ViaBTC and BTC.TOP) did not exist in early 2016 and, vice versa, most of the largest pools in 2016 (e.g., BTCC, Bitfury and BW Pool) are now much smaller players. We can also observe that, from January until mid-2018, three pools combined held more than 50\% of the overall mining power. In particular, the first two (BTC.com and AntPool) are owned by Bitmain, the leader among mining hardware manufacturers. Another interesting observation is that the number of \emph{unknown} blocks which could not be attributed to a mining entity has increased lately to a level last observed 2015.
\section{Reduction -- Amortized to worst case} \label{sec:reduction-amortize} We start by providing the amortized to worst case reduction and prove Theorem \ref{thm:main-amortize}. It also serves as a warm up for the worst case to worst case reduction. Our key idea is to re-order current elements in reverse order of deletions, this ensures a deletion update takes only $O(\Gamma_{u})$ time by rewinding the computation. Of course, maintaining the reverse order could be expensive as the new-coming element could be deleted last (e.g. the sliding window model). Instead, we only maintain a partial reverse order -- we re-insert the last $O(2^i)$ elements in reverse order in every $2^{i}$ updates. This suffices to handle fully-dynamic updates and brings $O(\log (T))$ computation overhead on average. The reduction is formally depicted in Algorithm \ref{algo:red-amortize}. The current elements are distributed to $m+1 = \lceil\log_2 T\rceil + 1$ buckets $B_0, \ldots, B_{m}$. At the $t$-th update ($t\in [T]$), the algorithm first handles the update (insertion or deletion) on $B_0$ and it is guaranteed by our algorithm that a deletion occurs on $B_0$. The algorithm then rewinds the computation to $B = B_m \ldots B_{k(t) + 2}$ (Line \ref{line:rewind1}) and then {\sc Insert} elements of $B_0 \cup \cdots \cup B_{k(t) +1}$ in the reverse order (Line \ref{line:insert1}). Algorithm \ref{algo:red-amortize} re-arranges buckets as follows: It keeps buckets $B_{m}, \ldots , B_{k(t)+2}$ unchanged, it puts the youngest $[2^{i} :2^{i+1} - 1]$ elements in $B$ to bucket $B_i$ ($i \in [0: k(t)]$) and the remaining elements to $B_{k(t) + 1}$ (see Line \ref{line:sort} -- \ref{line:sort2}). \begin{algorithm}[!htbp] \caption{Reduction -- Amortized to worst case} \label{algo:red-amortize} \begin{algorithmic}[1] \State Initialize $B_i \leftarrow \emptyset $ ($i \in [0: m]$) \Comment{$m = \lceil \log_2 T\rceil$} \For{$t = 1,2, \ldots, T$} \State Rewind the computation to $B_{m}\ldots B_{k(t)+2}$ \label{line:rewind1} \State Add/remove the element in $B_0$ \Comment{Deletion is guaranteed to occur in $B_0$} \label{line:insert-deletion} \State $B \leftarrow B_0 \cup \cdots \cup B_{k(t)+1}$ \State $B_i \leftarrow B[2^{i}:2^{i+1} - 1]$ ($i \in [0:k(t)]$) \label{line:sort} \Comment{$B_i$ contains the youngest $[2^i: 2^{i+1}-1]$ elements} \State $B_{k(t)+1} \leftarrow B \setminus (B_0 \cup \cdots \cup B_{k(t)})$ \label{line:sort2} \State {\sc Insert} $B_{k(t)+1},\ldots, B_0$ \Comment{Elements are inserted in reverse order of deletion} \label{line:insert1} \EndFor \end{algorithmic} \end{algorithm} For any $t \in [T]$, let $E_t$ be all existing elements at the end of $t$-th update, and let $E_{t, r}$ be the youngest $r$ elements in $E_t$ (when there are less than $r$ elements in $E_t$, we take $E_{t, r} = E_t$). The following lemma formalizes the main invariant for Algorithm~\ref{algo:red-amortize}. \begin{lemma} \label{lem:size1} At the end of $t$-th update ($t\in [T]$), one has \begin{itemize} \item $|B_0 \cup \cdots \cup B_{k(t)+ 1}| \leq 2^{k(t)+2} + 2^{k(t)}$; \item $E_{t, 2^{i+1} - 1} \subseteq B_0 \cup \cdots \cup B_{i}$ for any $i \in [0: k(t)]$. \end{itemize} \end{lemma} \begin{proof} For the first claim, at the end of $(t - 2^{k(t)})$-th update, Line \ref{line:sort} of Algorithm \ref{algo:red-amortize} guarantees that $|B_{i}| \leq 2^{i}$ holds for any $i \in [0:k(t)+1] $ as $(t - 2^{k(t)})$ is a multiple of $2^{k(t)+1}$. Since there are at most $2^{k(t)}$ insertions between the $(t - 2^{k(t)})$-th and $t$-th update, we have \[ |B_0 \cup \cdots \cup B_{k(t) + 1}| \leq 2^{k(t)} + \sum_{i=0}^{k(t)+1}2^{i} \leq 2^{k(t)+2} + 2^{k(t)}. \] We prove the second claim by induction. The case of $t =1$ holds trivially and suppose the induction holds up to $t -1$. Then we know that \[ E_{t - 2^{k(t)}, 2^{k(t)+2} - 1} \subseteq B_0 \cup \cdots \cup B_{k(t) + 1} \] at the end of $(t - 2^{k(t)})$-th update as $(t - 2^{k(t)})$ is a multiple of $2^{k(t)+1}$. As a consequence, one has \[ E_{t - 2^{k(t)}}\setminus E_{t - 2^{k(t)}, 2^{k(t)+2} - 1} \cap E_{t, 2^{k(t)+ 1} - 1} = \emptyset \] as there are at most $2^{k(t)}$ deletions between the $(t - 2^{k(t)})$-th and $t$-th update, and $2^{k(t)+2} - 1 - 2^{k(t)} \geq 2^{k(t)+1} - 1$. At the same time, new elements arrive between the $(t - 2^{k(t)})$-th and $t$-th update and they are all contained in $B_0 \cup \cdots \cup B_{k(t)+1}$, hence, we conclude that $E_{t, 2^{k(t)+1} - 1} \subseteq B_0 \cup \cdots \cup B_{k(t)+1}$. By Line \ref{line:sort} of Algorithm \ref{algo:red-amortize}, we have $E_{t, 2^{i+1} - 1} \subseteq B_0 \cup \cdots \cup B_{i}$ for any $i \in [0: k(t)]$. We conclude the proof here. \end{proof} The correctness of Algorithm \ref{algo:red-amortize} follows immediately from the correctness of the incremental algorithm because the set $B_0$ always contains the youngest element. It remains to bound the amortized running time. \begin{lemma} \label{lem:amortize} The amortized update time of Algorithm \ref{algo:red-amortize} is at most $O(\Gamma_{u} \cdot \log (T))$. \end{lemma} \begin{proof} Line \ref{line:insert-deletion} takes constant time since $B_0$ has constant size. By Lemma \ref{lem:size1}, $|B_0 \cup \cdots \cup B_{k(t)+1}| \leq 2^{k(t)+2} + 2^{k(t)}$, the rewinding step (Line \ref{line:rewind1}) takes at most $O(2^{k(t)}\Gamma_u)$ time and we make $O(2^{k(t)})$ calls to {\sc Insert} at Line \ref{line:insert1}. The allocation step (Line \ref{line:sort}) takes no more than $O(2^{t(k)})$ time, since the bucket $B_{i}$ has already been sorted ($i \in [0:k(t)+1]$) and it remains to merge them. The total update time equals \[ \sum_{t=1}^{T} O(2^{k(t)} \Gamma_{u}) = \sum_{k=0}^{m}O(2^{k} \Gamma_{u}) \cdot T/2^{k} = O(\Gamma_{u}\cdot T \log (T)). \] We conclude the proof here. \end{proof} \begin{Remark}[Implementation of rewinding] We work in the RAM model and perform reversible computation. One simple way of implementing reversible computation (e.g. \cite{bennett1973logical}) is to write down the change to memory cell in every step. The forward computation time only slows down by a constant factor and the backward (rewind) computation time equals the forward computation time. In practice for specific problems there may be faster ways to implement rewinding. \end{Remark} \section{Application} \label{sec:application} We provide a few applications of our reduction. \subsection{Submodular maximization} \paragraph{Dynamic submodular maximization} In a submodular maximization problem, there is a ground set $N = [n]$ and a set function $f: N\rightarrow \mathbb{R}^{+}$. The function is said to be monotone if $f(A) \geq f(B)$ for any $B \subseteq A \subseteq N$ and it is said to be submodular if $f(A \cup \{u\}) - f(A) \leq f(B \cup \{u\}) - f(B)$ for any $B \subseteq A \subseteq N$ and element $u$. The task of submodular maximization under a cardinality constraint refers to $\max_{S \subseteq [n], |S| = k}f(S)$ for some parameter $1 \leq k \leq n$, and the task of submodular maximization under a matroid constraint $\mathcal{M}$ refers to $\max_{S \subseteq \mathcal{M}}f(S)$. Finally, in a dynamic submodular maximization problem, the ground set can be inserted and deleted, and the goal is to maintain a good solution set $S$. \cite{feldman2022streaming} provides an $0.3178$-approximation algorithm (with a matroid constraint) under streaming setting, and one can adapt it to a dynamic algorithm with worst case update time under incremental setting. \begin{theorem}[Adapt from \cite{feldman2022streaming}] For any $n,k > 0$, under the incremental update, there exists a dynamic algorithm that maintains an $0.3178$-approximate solution for monotone submodular maximization under a matroid constraint of rank $k$ and makes $\poly(k, \log n)$ queries per iteration. \end{theorem} The sliding window model is of interests to the community \cite{epasto2017submodular}. The algorithm of \cite{epasto2017submodular} maintains an $1/2$-approximation solution with polylogarithmic updates time for dynamic submodular maximization under cardinality constraints. Our reduction gives the first constant approximation algorithm for a matroid constraint. \begin{theorem}[Dynamic submodular maximization] For any $n, k > 0$, there exists a dynamic algorithm that achieves $0.3178$-approximation for the problem of submodular maximization under a matroid constraint using $\poly(k, \log n)$ queries per update under the sliding window model. \end{theorem} \subsection{Depth first search (DFS) tree} \paragraph{Dynamic DFS} Given an undirected graph $G = (V, E)$ with $|V| = n, |E| = m$, the task is to maintain a depth first search (DFS) tree under edge insertion/deletion. In the incremental model, \cite{baswana2016dynamic} obtains a dynamic algorithm with $O(n(\log n)^3)$ worst case update time, and it is improved to $O(n)$ by \cite{chen2018improved}. While in the fully dynamic model, the current best known algorithm \cite{baswana2016dynamic} has $\widetilde{O}(\sqrt{mn})$ update time. \begin{theorem}[\cite{baswana2016dynamic, chen2018improved}] Given a graph $G = (V, E)$, with $|V| = n$, $|E| = m$. There is a dynamic algorithm that maintains a DFS tree with $O(n)$ update time in the incremental model. \end{theorem} Using our reduction, one can immediately obtain \begin{theorem}[Dynamic DFS] Given a graph $G = (V, E)$, with $|V| = n$, $|E| = m$. There is a dynamic algorithm that maintains a DFS tree with $\widetilde{O}(n)$ worst case update time in the offline model. \end{theorem} \subsection{All-Pair Shortest-Paths (APSP)} \paragraph{Dynamic APSP} The APSP problem has been a central topic of graph algorithm. In a dynamic APSP problem, there is a undirected weighted graph $G = (V, E)$ ($|V| = n, |E| = m$) that subjects to edge insertion and deletion, The goal of the algorithm is to maintain an estimate $\delta(u, v)$ for every pair of node $u, v \in V$ that approximates the shortest path distance between $u$ and $v$. In the incremental setting, \cite{chen2020fast} obtains an $O(1)$-approximate algorithm with $n^{o(1)}$ worst case update time. \begin{theorem} [Theorem 3.1 in \cite{chen2020fast}] Let $G = (V, E)$ be a undirected weighted graph, there exists an incremental deterministic All-Pair Shortest-Paths algorithm that maintains $O(1)$ approximate shortest path in $n^{o(1)}$ worst case update time. \end{theorem} The offline model is of interest and it is already pointed out in \cite{chen2020fast} that their data structure can be adapted to the offline model (but in a problem specific way). With Theorem \ref{thm:main} in hand, we can recover Theorem 4.8 of \cite{chen2020fast}. \begin{theorem}[Dynamic APSP] Let $G = (V, E)$ be a undirected weighted graph, there exists a deterministic All-Pair Shortest-Paths algorithm that maintains $O(1)$ approximate shortest path in $n^{o(1)}$ worst case update time under the offline model. \end{theorem} \section{Impossibility of general reduction} \label{sec:impossible} We prove both conditions (worst case guarantee and known deletion order) are indeed necessary to obtain a black box reduction. \paragraph{Worst case guarantee is necessary} A black box reduction is generally impossible if one only has amortized guarantee of incremental model. An example is the dynamic submodular maximization problem, where unconditional lower bound is known. \begin{theorem} Let $T \geq 1$ be the total number of updates. There exists a problem such that it is possible to have a dynamic algorithm with amortized update time $\Gamma_u$ in the incremental model, but any algorithm in the deletions-look-ahead model takes at $\Omega\left(\frac{T}{\log^4 (T)}\right) \cdot \Gamma_{u}$ amortized update time. \end{theorem} \begin{proof} Let $[n]$ be the ground set and the total number of update be $T = O(n)$. By Theorem 1.3 of \cite{chen2022complexity}, there exists an algorithm with $O(\log(k/\epsilon)/\epsilon^2)$ amortized query complexity and maintains an $(1-1/e-\epsilon)$-approximate solution for dynamic submodular maximization. While in the fully dynamic model (with known deletion order), by Theorem 1.2 of \cite{chen2022complexity}, no algorithm could maintain an $0.584$-approximation with $o(n/k^3)$ amortized queries whenever $k = \Omega(\log n)$. Taking $k = C \log n$ for some constant $C > 0$ exhibits a $\Omega(T/\log^4 (T))$ separation. \end{proof} \paragraph{Known deletion order is necessary} If the deletion order is not known in advance, there exists a separation between the fully-dynamic and incremental model. \begin{theorem} Let $T \geq 1$ be the total number of updates. There exists a problem such that it is possible to have a dynamic algorithm with worst case update time $O(1)$ in the incremental model, but any algorithm in the fully dynamic model has amortized running time $\Omega\left(\frac{T}{\log (T)}\right)$. \end{theorem} \begin{proof} Let $N = [n]$ be the ground set element and $T = 2n$ be the total number of updates. We first formalize the oracle model. For any subset of elements $A \subseteq N$, let $S(A) \in S$ be the transcript on $A$ and $Q(A) \in \{0, 1\}$ be the answer to {\sc Query}. There exists an oracle $O: S \times N \rightarrow S \times \{0, 1\}$, it takes input of a transcript $S(A) \in S$ and an element $e \in N$, and returns the next transcript $S(A \cup \{e\}) \in S$ and the answer $Q(A \cup \{e\}) \in \{0, 1\}$, i.e., \[ O(S(A), e) = (S(A \cup \{e\}),Q(A \cup \{e\})). \] The oracle outputs empty when the input is invalid and we assume the oracle takes unit time. It is clear that it takes only $1$ oracle call per update in the incremental model. For the fully dynamic model, consider the update sequence of first inserting all elements in $N$ and delete them in random order. We prove $\Omega(n^2/\log n)$ oracle calls are necessary. For any $t \in [0: n/100\log n]$, let $N_{t}$ be the set of elements remain after deleting $4t \log n$ elements and $N_0 = N$. It suffices to prove the algorithm needs to make at least $\Omega(n)$ oracle queries between $N_t$ and $N_{t+1}$. Suppose the algorithm has the transcript $S(A_1), \ldots S(A_{\ell(t)})$ at the beginning of the $(n + 4t\log n)$-th update, where $A_i \subseteq [N_t]$ and $|A_i| \geq n/2$ ($i \in [\ell(t)]$). We know that $\ell(t) \leq n^2$. After deleting the next $4\log n$ elements (at random), the probability that $A_i \subseteq N_{t+1}$ is at most $1/n^4$. Taking an union bound, with probability at least $1-1/n^2$, none of the set satisfies $A_{i} \subseteq N_{t+1}$. Then we conclude that to get the transcript of $S(N_{t+1})$, the algorithm needs at least $|N_{t+1}| - n/2 = \Omega(n)$ queries. We conclude the proof here. \end{proof} \section{Introduction} \label{sec:intro} A dynamic algorithm is a data structure that maintains certain properties (e.g. shortest paths) of a ground set that is subject to a sequence of updates (e.g. insertions/deletions of edges), and the goal is to minimize the (total) update time. A {\em fully dynamic} algorithm supports both insertions and deletions of the ground set elements, while an {\em incremental} algorithm restricts the updates to be insertion-only and a {\em decremental} algorithm handles deletion operations only. A fully dynamic algorithm clearly benefits from handling more general updates, but the at the same time, it is expected to be much harder than incremental/decremental algorithms. Meanwhile, several existing works have exploited special structure (\cite{HK99, HLT01, BKS12, ADKKP16, BS80,OL81,OvL81}) or specific order of update sequence (\cite{AW14,HKNS15,KPP16}) and reduce fully dynamic to incremental or decremental algorithm. This motivates one to ask \begin{center} \textit{Can a generic reduction transform an incremental algorithm into one that handles both insertions and deletions? } \end{center} Perhaps surprisingly, we find that once the order of deletions of current elements is known to the algorithm (deletions-look-ahead), one can translate an incremental algorithm with {\em worst case guarantee} to a fully dynamic algorithm with worst case guarantee, with only polylogarithmic overhead. \begin{theorem}[Reduction, worst case to worst case] \label{thm:main} Let $T \geq 1$ be the total number of updates. Suppose there exists a dynamic algorithm in the incremental setting with query time $\Gamma_{q}$ and worst case update time $\Gamma_{u}$, then there is a dynamic algorithm for deletions-look-ahead setting with query time $\Gamma_{q}$ and worst case update time% \footnote{We believe that the update time can be improved at least to $O(\Gamma_{u} \log(T) \log\log(T ))$ at the expense of a more complicated algorithm.} $O(\Gamma_{u} \log^2 (T))$. \end{theorem} Our reduction requires the incremental algorithm to have worst case (rather than amortized) runtime guarantee, and most importantly, the relative order of deletions of current elements must be known.\footnote{Equivalently, one can assume there exists an oracle that outputs the deletion order of existing ground set elements.} The latter assumption is satisfied by well-studied models including sliding window (FIFO) model \cite{epasto2017submodular, epasto2022improved, datar2002maintaining,braverman2020near, woodruff2022tight}, where only most recent insertion updates are of interests, as well as offline/look-ahead model \cite{khanna1998certificates, sankowski2010fast, van2019dynamic, chen2020fast, AW14}, where the entire sequence of updates is known in advance. These models are of interests to both theoretical and empirical community \cite{hanauerrecent}. Furthermore, the sliding window model is often used as a benchmark for empirical investigation of fully dynamic algorithms (e.g.~\cite{wang2018location, lattanzi2020fully, henzinger2021random}). To complement our results, we also prove that both conditions are indeed indispensable for a black box reduction (see Section \ref{sec:impossible}). If one only aims for algorithms with amortized runtime guarantee, we have a reduction with improved update time. We believe it could be of independent interests due to its simplicity and could be beneficial for empirical implementation. \begin{theorem}[Reduction, amortized to worst case] \label{thm:main-amortize} Let $T \geq 1$ be the total number of updates. Suppose there exists a dynamic algorithm for {\em incremental setting} with query time $\Gamma_{q}$ and worst case update time $\Gamma_{u}$, then there is a dynamic algorithm for {\em deletions-look-ahead} setting with query time $\Gamma_{q}$ and amortized update time $O(\Gamma_{u} \cdot \log (T))$. \end{theorem} On the technical side, Theorem~\ref{thm:main-amortize} exploits the ideas of rewinding the incremental algorithm (e.g.~\cite{AW14,HKNS15,KPP16}), and in particular achieves only logarithmic overhead by rewinding $2^i$ insertions every $\Theta(2^i)$ updates (e.g.~\cite{HK99, HLT01, BKS12, ADKKP16}). To achieve the worst case guarantee, Theorem \ref{thm:main} additionally requires amortizing this rewinding of $2^i$ insertions over $\Theta(2^i)$ updates; this is a little trickier because we have to start re-inserting elements in advance before all the elements we would want to insert have arrived. We demonstrate the power of our reduction in Section \ref{sec:application} by providing applications on dynamic submodular maximization, dynamic Depth first search (DFS) tree and dynamic All-Pair Shortest-Paths (APSP). \paragraph{Related work} A systematic study of black box reduction for dynamic algorithms has been initiated by the seminal work of \cite{BS80}, which shows how to make a static data structure support insertions for ``decomposable search problem''. The ideas of rewinding incremental algorithm and the logarithmic scheduling have been studied in different problem specific context \cite{AW14,HKNS15,KPP16,HK99, HLT01, BKS12, ADKKP16,BS80,OL81,OvL81,DS91,Chan12}. Our deletion-look-ahead model has also been considered in the computational geometry literature, and it is one variant of the semi-online model (see \cite{DS91} for a detailed discussion). The work of \cite{Chan12} is closely related to us, it presents a worst-case to amortized-case reduction, when the exact deletion time is known. The result is (almost) equivalent to Theorem \ref{thm:main-amortize}.\footnote{We thank Timothy Chan for pointing out this connection after we published the first version.} It differs from Theorem \ref{thm:main} as our reduction is worst-case to worst-case. \paragraph{Notation} Let $[n] = \{1, 2, \ldots, n\}$ and $[n_1:n_2] = \{n_1, n_1+1, \ldots, n_2\}$. For any ground set elements $e_1$ and $e_2$, we say $e_1$ is {\em younger} than $e_2$ if $e_1$ would be deleted earlier than $e_2$. An element is of rank $r$ if it is the $r$-th youngest of the current elements. For an ordered set $A$, we use $A[n_1:n_2]$ to denote the youngest $n_1$-th to $n_2$-th elements of $A$. For any $t \in [T]$, let $k(t)$ be the largest integer such that $t$ is exactly a multiple of $2^{k(t)}$. In our pseudocode, {\sc Insert} refers to the insertion procedure of the incremental algorithm (hence taking $O(\Gamma_u)$ time), while adding/removing elements to a set $A$ operates only over the the set (hence taking $O(1)$ time). \section*{Acknowledgement} A.R. is grateful to Amir Abboud and Soheil Behnezhad for inspiring conversations. A.R. and B.P would like to thank Timothy Chan for explaining the interesting connections to related work from computational geometry. \bibliographystyle{alpha} \section{Reduction -- Worst case to worst case} \label{sec:worst-case} We next dedicate to prove Theorem \ref{thm:main}, which translates the worst case guarantee from the incremental model to the deletions-look-ahead model. The major difference with the amortized reduction is that one cannot re-order/re-insert a large block of elements at once. A natural idea is to prepare the re-order/re-insertion in advance and split the cost. This brings new challenges as (unknown) future insertions/deletions interleave with the preparation step and one does not know the exact set of elements beforehand. To resolve it, we maintain multiple threads, and each thread further divides the preparation step into epochs of geometrically decreasing size. The high-level idea is presented in Algorithm \ref{algo:red} with implementation details deferred to the proof of Lemma \ref{lem:worst}. Algorithm \ref{algo:red} maintains $m+1$ threads and $m+1$ buckets $B_0, \ldots, B_{m}$. During the execution of the algorithm, all existing elements are distributed over $B_0, \ldots, B_m$, and ideally, the $i$-th bucket $B_{i}$ should be of size $O(2^{i})$ and $B_0 \cup \cdots \cup B_{i}$ should contain the youngest $\Omega(2^{i})$ elements. This guarantees the insertion/deletion of an element can be resolved in $O(\Gamma_u)$ time since one only needs to re-insert elements in $B_0$. The crucial part is to maintain the ordered buckets $B_{0}, \ldots, B_{m}$, for which Algorithm \ref{algo:red} maintains $m$ threads; the $i$-th thread ($i \in [m]$) prepares the re-order/re-insertion ahead of $2^{i}$ updates. Precisely, the $i$-th thread re-starts every $2^{i+1}$ updates and operates over the upcoming $2^{i}$ updates (Line \ref{line:restart1} -- \ref{line:restart2}). It first rewinds the computation status to $B_m \ldots B_{i}$ (Line \ref{line:rewind}), which is prepared by the $(i + k(\tau))$-th thread and then re-inserts elements of $B_0 \cup \cdots \cup B_{i-1}$ (Line \ref{line:epoch} -- \ref{line:leftover}). Concretely, the re-insertion procedure is further divided into $i$ epochs, where epoch $j$ ($j \in [i-1]$) lasts for $2^{j}$ updates and epoch $0$ lasts for $2$ updates. Let $t(i, \tau, j) := 2^{i}\tau + \sum_{r= j+1}^{i-1}2^{r}$ denote the end of epoch $j+1$ in the $(\tau/2 + 1)$-th outer-for-loop-iteration of the $i$-th thread. During epoch $j$, the $i$-th thread leaves alone the youngest $2^{j+2}$ elements at the beginning of epoch $j$ to $B^{(i)}$ and {\sc Insert} the remaining elements $B_{j}^{(i)}$ over the upcoming $2^{j}$ updates (i.e. $[t(i, \tau, j)+1: t(i, \tau, j) + 2^j]$-th update, see Line \ref{line:insert}). Meanwhile, the set $B^{(i)}$ is updated and elements are added and removed (Line \ref{line:update}). Finally, at the end of $t$-th update ($t \in [T]$ and $k(t) \geq 1$), Algorithm \ref{algo:red} resets $B_{j}$ to $B_{j}^{(k(t))}$ for every $j \in [0:k(t)-1]$ (Line \ref{line:init2}) and this step takes $O(\log T)$ time as we only change the pointer of $B_{j}$. \begin{algorithm}[!htbp] \caption{\underline{Reduction -- Worst case to worst case}\\ $\triangleright$\ \text{Variables with superscript $^{(i)}$ internal to {\sc Thread}($i$)}\\ $\triangleright$\ \text{{\sc Thread}($i$) only uses information from bigger threads (aka {\sc Thread}($j$) for $j>i$})\\ $\triangleright$\ \text{In particular, {\sc Thread}($i$)'s {\sc Insert} and rewind do not affect the state seen by bigger threads}\\ $\triangleright$\ \text{The output of the algorithm is maintained by {\sc Thread}($0$)} } \label{algo:red} \begin{algorithmic}[1] \State Initialize $B_i \leftarrow \emptyset$ and run $\textsc{Thread}(i)$ ($i \in [0:m]$) \Comment{$m = \lceil \log_2 T \rceil$}\\ \Procedure{Thread}{$0$} \For{$t = 1,2, \ldots, T$} \State Add/remove the element in $B_0$ \Comment{$t$-th update (deletions guaranteed to be from $B_0$)}\label{line:delete} \State Rewind the computation to $B_{m}\ldots B_1$ \Comment{State prepared by {\sc Thread}($k(t-1)$)} \State \textsc{Insert} $B_0$ \label{line:back} \EndFor \EndProcedure \\ \Procedure{Thread}{$i$} \Comment{$i \in [1:m]$} \For{$\tau = 0,2, 4, \ldots, \lfloor T/2^i \rfloor$ } \label{line:restart1} \Comment{Restart every $2^{i+1}$ updates} \State $B^{(i)} \leftarrow B_{0} \cup \cdots \cup B_{i-1}$ \label{line:init1} \State Rewind the computation to $B_{m}\ldots B_{i}$ \label{line:rewind} \Comment{State prepared by {\sc Thread}($i+k(\tau)$)}\\ \For{$j = i-1,i-2,\ldots, 1$} \label{line:epoch} \Comment{$j$-th epoch, amortized over $2^{j}$ updates} \State $B_{j}^{(i)} \leftarrow B^{(i)}[2^{j+2}+1:]$ \label{line:define-Bj} \Comment{Oldest $\Theta(2^{j})$ elements in $B^{(i)}$} \State $\textsc{Insert}$ $B_{j}^{(i)}$ \label{line:insert} \State $B^{(i)}\leftarrow B^{(i)}\backslash B_{j}^{(i)}$ \label{line:remove-Bj \State Add/remove elements in $B^{(i)}$ \Comment{Updates $[t(i, \tau, j) + 1 : t(i, \tau, j) + 2^{j}]$} \label{line:update} \EndFor \State Add/remove elements of $B^{(i)}$ in the remaining 2 updates \Comment{Epoch $0$} \label{line:leftover-init} \State $B_0^{(i)} \leftarrow B^{(i)}$ \State $\textsc{Insert}$ $B_0^{(i)}$ \label{line:leftover}\\ \State $B_j \leftarrow B_{j}^{(i)}$ ($\forall j \in [0: i - 1]$) \label{line:init2} \State Do nothing for $2^{i}$ updates \EndFor \label{line:restart2} \EndProcedure \end{algorithmic} \end{algorithm} Recall $E_{t, r}$ denotes the youngest $r$ elements and $E_t$ denotes all elements at the end of $t$-th update. We use $B_{t, j}$ and $B_{t, j}^{(i)}$ to denote the status of $B_{j}$ and $B_{j}^{(i)}$ at the end of $t$-th update. We first formalize the main invariant for Algorithm~\ref{algo:red}. \begin{lemma} \label{lem:bucket_size} For any thread $i \in [m]$, outer-for-loop-iteration $\tau \in \{0,2,\ldots, T/2^i\}$, at the end of the $(2^{i}\tau)$-th update, we have \begin{itemize} \item $E_{2^{i}\tau, 2^{j+2}} \subseteq B_{2^{i}\tau, 0} \cup \cdots \cup B_{2^{i}\tau, j}$, \item $|B_{2^{i}\tau, j}| \leq 3 \cdot 2^{j+1}$ \end{itemize} holds for any $j \in [i-1]$. For $j = 0$, we have $|B_{2^{i}\tau, 0}| \leq 12$ and $E_{2^{i}\tau, 4} \subseteq B_{2^{i}\tau, 0}$. \end{lemma} \begin{proof} We prove the first bullet by an induction on $i$ (in the reverse order). The base case of $i = m$ holds trivially as all buckets are empty at the beginning. Suppose the induction holds up to the $(i+1)$-th thread. For any outer-for-loop-iteration $\tau \in \{2, \ldots, T/2^i\}$, at the end of $2^{i}\tau$-th update, the buckets $B_{0}, \ldots, B_{i-1}$ are reset by the $(i+k(\tau))$-th thread, hence, it suffices to prove $E_{2^{i}\tau, 2^{j+2}} \subseteq B_{2^{i}\tau,0}^{(i+k(\tau))}\cup \cdots \cup B_{2^{i}\tau, j}^{(i+k(\tau))}$ ($\forall j \in [i-1]$). By the inductive hypothesis of the $(i+k(\tau))$-th thread, we know that \[ E_{2^{i}\tau -2^{i + k(\tau)}, 2^{i+k(\tau) + 1}} \subseteq B_{2^{i}\tau -2^{i + k(\tau)}}^{(i + k(\tau))} = B_{2^{i}\tau -2^{i + k(\tau)}, 0} \cup \cdots \cup B_{2^{i}\tau -2^{i + k(\tau)}, i + k(\tau) - 1}, \] that is, the youngest $2^{i+k(\tau) + 1}$ are contained in $B^{(i + k(\tau))}$ initially. We prove the desired claim by contradiction and assume for some $j \in [0: i - 1]$, there exists an element $e$ such that $e \in E_{2^{i}\tau, 2^{j+2}}$ but $e\notin B_{2^{i}\tau, 0}^{(i+k(\tau))} \cup \cdots \cup B_{2^{i}\tau, j}^{(i + k(\tau))}$. This can only happen if (1) the element $e$ is inserted before epoch $j + 1$; and (2) it is removed from $B^{(i+k(\tau))}$ at some epoch $\gamma \geq j + 1$. The reason for (1) is that elements inserted on/after epoch $j+1$ would ultimately be included in $B_{2^{i}\tau, 0}^{(i+k(\tau))} \cup \cdots \cup B_{2^{i}\tau, j}^{(i + k(\tau))}$; the reason for (2) is similar. Since the element $e$ is removed from $B^{(i+k(\tau))}$ at epoch $\gamma$, we have that \[ e \notin E_{t(i + k(\tau), \tau/2^{k(\tau)} - 1, \gamma)} \setminus E_{t(i + k(\tau), \tau/2^{k(\tau)} -1, \gamma), 2^{\gamma + 2}}. \] There are at most $2 + \sum_{r=1}^{\gamma}2^{r} = 2^{\gamma + 1}$ deletions since epoch $\gamma$, the rank of $e$ can be improved to at most $2^{\gamma + 2} - 2^{\gamma + 1} = 2^{\gamma+1}$, hence we have $e \notin E_{2^{i}\tau} \setminus E_{2^i \tau, 2^{\gamma + 1}}$ and therefore $e \notin E_{2^{i}\tau} \setminus E_{2^i \tau, 2^{j + 2}}$ ($\gamma \geq j + 1$), this contradicts with the assumption. For the second bullet, for any $i \in [m]$ and $\tau \in \{2, \ldots, T/2^i\}$, consider the $(i+k(\tau))$-th thread. After executing Line~\ref{line:remove-Bj} in the $(j+1)$-th epoch of Algorithm \ref{algo:red}, we have that \[ |B^{(i+k(\tau))}_{t(i + k(\tau), \tau/2^{k(\tau)} - 2^{j+1}, j)} | \leq 2^{j + 3} \] For the rest of the epoch, there can be at most $2^{j+1}$ insertions, hence: \[ |B^{(i+k(\tau))}_{t(i + k(\tau), \tau/2^{k(\tau)}, j)} | \leq 2^{j + 3} + 2^{j+1} \] Finally, after executing Line~\ref{line:define-Bj} in the $j$-th epoch, we have: \[ |B^{(i+k(\tau))}_{t(i + k(\tau), \tau/2^{k(\tau)}, j), j}| \leq 2^{j + 3} + 2^{j+ 1} - 2^{j+2} = 3 \cdot 2^{j+1}. \] We have proved the first claim for $j \in [i-1]$, the case of $j = 0$ follows similarly \end{proof} We next bound the worst case update time. \begin{lemma} \label{lem:worst} The update time per operation is at most $O(\Gamma_{u}\cdot \log^2 (T))$. \end{lemma} \begin{proof} By Lemma \ref{lem:bucket_size}, the size of $B_0$ is $O(1)$ and it contains the youngest $2$ elements, hence the rewinding and {\sc Insert} step (Line \ref{line:back}) can be performed in $O(\Gamma_u)$ time per update. The major overhead comes from maintaining $m$ threads, and we bound the runtime of each thread separately. For any thread $i$ and outer-for-loop-iteration $\tau \in \{0,2,\ldots, T/2^{i}\}$, due to Line \ref{line:define-Bj} of Algorithm \ref{algo:red}, we have \[ |B_{t(i, \tau, j-1), j}^{(i)}| \leq |B_{t(i, \tau, j)}^{(i)}| \leq 2^{j+3} + 2^{j+1} \quad \forall j \in [i - 2] \] and by Lemma \ref{lem:bucket_size}, \[ |B_{t(i, \tau, i-2), i-1}^{(i)}| \leq |B_{2^{\tau}i, 0}\cup \cdots \cup B_{2^{\tau}i, i-1}| \leq \sum_{r=0}^{i-1}3\cdot 2^{r+1} \leq 3\cdot 2^{i+1}. \] We analyse the update time step by step. We first come to the rewinding step (Line \ref{line:rewind}). Unlike the amortized case, we cannot simply rewind by the reversible computation since we maintain multiple threads that need to access the state of the incremental algorithm with different sets of elements, in parallel. Instead, when we call {\sc Insert} of each block $B_{j}^{(i)}$, we maintain a dictionary that records the location/value of changed memory cell. The construction of dictionary only incurs constant overhead. By doing this, during the execution of Algorithm \ref{algo:red}, one can access any memory cell by looking up to at most $O(\log (T))$ dictionaries (note the lookup path is known to Algorithm \ref{algo:red}) and find the last time it has been changed. Naively, looking up the memory updates in each dictionary takes $O(\log (\Gamma_{u}T))$ time. This brings an $O((\log (T)\log (\Gamma_{u}T)))$ total overhead for every operation of {\sc Insert}. Except for this, Line \ref{line:rewind} essentially comes for free. A more careful implementation leads to only $O(\log (T))$ overhead. We maintain an additional data structure, which links each memory cell of the incremental algorithm to $m+1$ lists, where the $i$-th list records the changes made by the $i$-th thread in chronological order. The maintenance of the data structure slows down the forward computation of {\sc Insert} by a constant factor. At the same time, in order to search the content of a memory cell, we only need to search through the lists (note again the look-up path is known), which takes $O(1)$ time per list and $O(\log (T))$ in total. Hence, it brings an $O((\log (T))$ total overhead for every operation of {\sc Insert}. Algorithm \ref{algo:red} updates $B^{(i)}$ and $B_{j}^{(i)}$ at the beginning of epoch $j$ (Lines~\ref{line:define-Bj} and~\ref{line:remove-Bj}). We do not rewrite, but instead, we copy $B^{(i)}$ and $B_{j}^{(i)}$ to new memory cells. Since both sets are of size $O(2^{j})$, the copy operation can be done in the first $\frac{1}{4} \cdot 2^{j} $ updates during epoch $j$ and has $O(\log (T))$ cost per update using Binomial heap. Algorithm \ref{algo:red} calls at most $O(2^{j})$ times \textsc{Insert} during epoch $j$ (Line \ref{line:insert}). Since elements in $B_{j}^{(i)}$ are known at the beginning so these operations can be averaged over the following $3/4 \cdot 2^{j}$ updates of epoch $j$ and take $O(\Gamma_u \log (T))$ time per update. The set $B^{(i)}$ receives new elements as well as removes old elements in epoch $j$ (Line \ref{line:update}). We buffer the changes in the first $\frac{1}{4} \cdot 2^{j}$ updates (as the ``new'' set $B^{(i)}$ is not yet ready) and add/remove elements during the following $3/4 \cdot 2^{j}$ updates. The size of $B^{(i)}$ is $O(2^j)$ during epoch $j$, so the update cost is $O(\log (T))$ per update. Finally, we note (1) Lines \ref{line:leftover-init} -- \ref{line:leftover} takes\ only $O(\Gamma_u \log(T))$ time in total; (2) Line \ref{line:init1} can be done similarly to Line \ref{line:define-Bj}; (3) Line \ref{line:init2} resets $B_j$ ($j \in [0:i -1]$) by changing the pointer, so it incurs only $O(\log T)$ cost. Overall, Algorithm \ref{algo:red} has worst case update time $O(\Gamma_{u} \log (T))$ per thread and $O(\Gamma_u \cdot \log^2 (T))$ in total. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main}] The worst case guarantee has already been established in Lemma \ref{lem:worst}, it remains to prove the correctness of Algorithm \ref{algo:red}. By Lemma \ref{lem:bucket_size}, the youngest $2$ elements are always contained in $B_0$, hence insertions/deletions are operated correctly, i.e., the removal step (Line \ref{line:delete}) indeed removes element in $B_{0}$. It remains to prove each thread operates normally, i.e., for any thread $i$ and outer-for-loop-iteration $\tau$, the removal operation would only remove elements in $B^{(i)}$ during epoch $j$ ($j \in [i-1]$). It suffices to prove that $E_{t(i, \tau, j), 2^{j+2}} \in B^{(i)}_{t(i, \tau, j)}$. We prove by induction. This is true in epoch $i-1$ by Lemma \ref{lem:bucket_size}. Suppose it holds to epoch $j+1$, i.e., $E_{t(i, \tau, j+1), 2^{j+3}} \in B^{(i)}_{t(i, \tau, j+1)}$, since there are at most $2^{j+1}$ deletions in epoch $j+1$, we have that $E_{t(i, \tau, j+1), 2^{j+2}} \in B^{(i)}_{t(i, \tau, j)}$. We complete the proof here. \end{proof}
\section{Related Work} In this section, we will first review different approaches to action detection. We will then discuss recent works related to our approach, with a focus on \emph{action detection} and \emph{image generation}. \textcolor{black}{Before deep learning, several issues, such as view-independency \cite{Roh2010}, multi-modality \cite{Maeng2012}, and considering human identity \cite{Park2013}, have been challenges for action recognition task. For further information, we refer readers to \cite{poppe2010survey}.} \vspace{.3cm} \noindent\textbf{Action detection}: Action detection is a new field when compared to the action classification task. In recent work action classification, Ma et al. \cite{MA2017334} studied utilizing web images for better model generalization. Ijjina and Chalavadi \cite{IJJINA2016199} exploited a genetic algorithm to train CNN models efficiently. In the early stages of action detection, Hoai et al. \cite{hoai2011joint} performed joint segmentation and recognition on concatenated trimmed videos. Gaidon et al. \cite{gaidon2013temporal} localized actions temporally within the Coffee and Cigarettes dataset, which contains untrimmed video with two classes. Since then, Shou et al. \cite{shou2016temporal} have studied for action detection on untrimmed video datasets, such as THUMOS'14 \cite{THUMOS14} and ActivityNet \cite{caba2015activitynet}. With excellent results in the THUOMS'14 competition, Oneata et al. \cite{oneata2014lear} and Wang et al. \cite{wang2014action} employed improved dense trajectories (IDT) encoded by fisher vectors (FVs) and CNN features, as well as SVM classifiers based on sliding windows with some variations, such as window sizes, fusion methods, post-processing methods. Inspired by these works, Shou et al. \cite{shou2017cdc} proposed multi-stage CNNs that consider spatial-temporal information and a convolution-de-convolutional (CDC) network that performs spatial downsampling and temporal upsampling to capture abstractions for action semantics and temporal dynamics. Yuan et al. \cite{yuan2016temporal} captured temporal context by proposing a pyramid of score distribution features (PSDF) descriptor. Yeung et al. \cite{yeung2016end} proposed a frame-wise end-to-end framework with a reinforced learning. Modeling actions in a grammatical form through N-grams and latent Dirichlet allocation (LDA) was performed as another method of action detection \cite{richard2016temporal}. Meanwhile, some works considered action detection not only temporally but also spatially. Lan et al. \cite{lan2015action} and Weinzaepfel et al. \cite{weinzaepfel2015learning} proposed a spatio-temporal action localization (or parsing) method by representing mid-level action elements using a hierarchical structure and localizing actions with a spatio-temporal motion histogram (STMH) descriptor individually at the track level. Additionally, Lea et al. \cite{lea2016segmental} detected fine-grained actions in ego-centric video datasets using graphical models based on tracking hands and objects or using deep neural networks. While most of the methods mentioned above focus on offline settings, several methods performed online action detection by predicting an action's ending point using temporal priors or unsupervised learning. Recently, new datasets for online action detection have been proposed. Geest et al. \cite{de2016online} introduced an RGB-based \emph{TVSeries} dataset with baseline models using long short-term memory (LSTM) and a CNN. Li et al. \cite{li2016online} introduced a skeleton-based online action dataset (OAD) and proposed a joint classification regression method using LSTM. \vspace{.3cm} \noindent\textbf{Temporal sliding window-based detection}: It is crucial to consider the temporal contextual information of time series data, such as video and sound. In action detection literature, many methods extensively employed the temporal sliding window method, which is performed by moving a window of a specific size over time. To make the most of temporal contextual information, \cite{shou2016temporal, yuan2016temporal} used multi-scale sliding windows. Features are extracted using windows of various scales (e.g., 16, 32, or 64 frames), and the detection results of each scale are post-processed to generate a final prediction. However, this approach is unsuitable for online action detection because only a limited amount of information is available. In this paper, despite the success achieved through the use of multi-scale windows, we employ the single-scale window approach. \vspace{.3cm} \noindent\textbf{Image generation}: Several approaches have been proposed for generating target images from given images. Kingma et al. \cite{kingma2013auto} proposed a variational inference-based method by extending the auto-encoder structure. Dosovitskiy et al. \cite{7469347} introduced a method applying CNN structure for object-oriented image generation. Additionally, Radford et al. \cite{radford2015unsupervised} exploited generative adversarial networks (GANs) \cite{goodfellow2014generative}, which reconstruct distributions of data through discriminate networks and adversarial networks. Based on these approaches, several video frame generation approaches have been proposed. Finn et al. \cite{finn2016unsupervised} proposed a method that generates future frames through a combination of RNNs and auto-encoding (called a variational auto-encoder). Mathieu et al. \cite{mathieu2015deep} and Vondrick et al. \cite{vondrick2016generating} proposed video frame generation methods exploiting deep convolutional GANs \cite{radford2015unsupervised}, respectively. \textcolor{black}{Most recently, Villegas et al. \cite{villegas2017decomposing} proposed a motion-content network (MCnet) that considers temporal and spatial information separately, which resembles the way human brain processes temporal and spatial information \cite{Bulthoff2002}}. In this paper, in order to resolve the limited information issue in online settings, we adopt MCnet for generating future frames. \section{Method} In this section, we first overview the proposed framework. Then, we detail our proposed framework, including the architectures of the deep neural networks used in each component. We also elaborate on how to train the networks on a large-scale dataset. The objective of our framework is to detect actions in untrimmed video streams. The framework is composed of four deep networks: a proposal representation (PR) network that discriminates between actions and background scenes (Sec. \ref{sec_proposal}), action representation (AR) network that predicts the type and temporal order of an action (Sec. \ref{sec_classify}), future frame generation (F$^2$G) network that generates future video frames (Sec. \ref{sec_f2g}), and detection network that detects actions by receiving outputs from other networks (Sec. \ref{sec_det}). Fig. \ref{fig_frm} illustrates the pipeline of the proposed framework. Motivations for choosing these networks are as follows. Unlike the sole action classification task, action detection from untrimmed videos requires action representations dedicated to not only action itself but also background scenes. Intuitively, visual treats of background scenes and actions are different. Thus, in the proposed framework, we exploit two deep networks to solve two different tasks: one is to distinguish background scenes from actions, and the other is to classify actions of interest, e.g., twenty classes for THUMOS'14. Both networks have the same structures, 3D convolutional layers followed by fully connected layers (Fig. \ref{fig_proposal}), which have shown outstanding performance for the action classification. When it comes to online situations, as described in Sec. 1, the action localization task suffers from the short of information to solve the problem. In order to resolve this issue, we propose using the future frame generation network. The detection network is composed of LSTM layers to mode temporal correlations and capture temporal changes locally, such as motion features. \begin{figure*}[t!] \centering \includegraphics[trim=1cm 1.5cm 1cm 1.5cm, clip=true, width=.7\linewidth]{./figs/framework.pdf} \caption{Overview of the proposed framework. At the current time $t$, input frames ($I_{t-15}, ..., I_t$) are passed to both PR network (C3D$_{\textrm{PR}}$) and AR network (C3D$_{\textrm{AR}}$) to extract features $X^{PR}_{fc7}$ and $X^{AR}_{fc7}$ from fully connected layer 7 (fc$_7$) of each network. The input frames are also fed into F$^2$G network, which predicts future frames ($I_{t+1}, ..., I_{t+8}$). Then new input frames ($I_{t-7}, ..., I_{t+8}$) are also passed to PR and AR networks to extract features $\hat{X}^{PR}_{fc7}$ and $\hat{X}^{AR}_{fc7}$. All four features are concatenated and passed to the detection network, which emits the detection results by considering both current and (generated) future input frames.} \label{fig_frm} \end{figure*} \subsection{Detecting Action Candidate Spots} \label{sec_proposal} Untrimmed videos are an irregular combination of actions and background scenes. Under this situation, it is necessary that detecting candidate segments where an action is likely to occur, i.e., distinguishing between actions and background scenes. Towards this goal, we train a proposal representation (PR) network. The PR network takes a video segment as an input, which is acquired by a temporal sliding window of length $\tau$. The network is trained to classify binary classes -- \emph{background scene class and action scene class} -- via a 3DCNN whose final fully connected layer has two neurons. \vspace{.3cm} \noindent\textbf{Network architecture}: We employ a 3DCNN to consider spatial and temporal information simultaneously. Different from widely used conventional 2D CNNs, our 3DCNN learns motion context, which is an important clue in video analysis, by considering adding a time axis to the 2D image coordinate system. We adopt the 3DCNN architecture \cite{tran2015learning}: 8 convolutional layers, 5 pooling layers, and 2 fully connected layers, and an output layer with a softmax function. Details of the architecture are shown in Fig. \ref{fig_proposal} (top row). \begin{figure} \centering \includegraphics[trim=1.5cm 8.4cm 1.5cm 8.4cm, clip=true, width=.8\linewidth]{./figs/c3d_arch.pdf} \caption{The 3DCNN architecture used in the PR and AR networks.} \label{fig_proposal} \end{figure} \subsection{Learning Visual Traits from a Temporal Order} \label{sec_classify} The beginning and ending phases of all action instances in the same action class share an identical trait. E.g., assuming there are videos containing scenes where a person is throwing a baseball, the beginning phase of all videos would contain the person making a wind-up posture before throwing the ball, and the ending phase would contain the person leaning forward and lifting his leg back after throwing the ball. However, duration of these phases are different per instance (see Fig. \ref{fig_cls_ex}). In order to capture this trait, we design an action representation (AR) network considering actions as a set of temporally ordered subclasses. Specifically, we divide each action class into the beginning and ending phase classes, and train a 3DCNN to classify these subclasses. Learning temporally ordered subclasses allows the model to represent the time phase of an action using only visual information. When compared to methods that exploit the average length of each action as temporal priors to detect actions, e.g., \cite{li2016online}, our method detects only from a given input sequence. \vspace{.3cm} \noindent\textbf{Network architecture}: The AR network has the architecture identical to the PR network, except the last fully connected layer consists of $K \times 2$ neurons where $K$ is the number target classes, e.g., $K=20$ for THUMOS'14 and $K=200$ for ActivityNet. Details of the architecture of the AR network are shown in Fig. \ref{fig_proposal} (bottom row). \begin{figure} \centering \includegraphics[width=.5\linewidth]{./figs/cls_ex.png} \caption{Visualization of each phase of baseball pitching.} \label{fig_cls_ex} \end{figure} \subsection{Generating Future Frames} \label{sec_f2g} The major limitation in online settings is that only past and current information can be considered for decision. To overcome this limitation, we introduce a future frame generation (F$^2$G) network that generates future frames. In this paper, we design an F$^2$G network with the same architecture as the MCnet proposed in \cite{villegas2017decomposing}, which considers spatial and temporal information by modeling content and encoding motion, respectively. \vspace{.3cm} \noindent\textbf{Network architecture}: The F$^2$G network is an encoder-decoder network composed of two different networks: a content encoder with a CNN architecture and motion encoder with a convolutional LSTM architecture. Generated samples are illustrated in Fig. \ref{fig_f2g_ex}. We refer to \cite{villegas2017decomposing} for further details. \begin{figure} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=.6\linewidth]{./figs/f2g_ex.png} \caption{Illustration of future frame generation. (From left to right) the first four frames with green bounding boxes are input frames, and the next two frames with red bounding boxes are generated frames.} \label{fig_f2g_ex} \end{figure} \subsection{Detecting Actions by Modeling Temporal Correlations} \label{sec_det} To detect actions, to model temporal correlations and to capture temporal changes locally, such as motion features, are an essential factor. However, the PR and AR networks employing 3DCNN are lack of this ability. Therefore, we design our detection network by employing a recurrent neural network (RNN) that can model temporal correlations. The network takes the outputs from each fully connected layer (fc$_7$) of the PR and AR networks as input (see Fig. \ref{fig_det_ex}). The detection network uses the outputs of other networks to reflect the response (opinion) of each network (expert) for a given input data sample over time; it then derives final results by modeling temporal correlations from each network using the RNN. \begin{figure} \centering \includegraphics[width=.35\linewidth]{./figs/det_ex.png} \caption{Illustration of detection network. The outputs of final fully connected layers (not softmax layer) of C3D$_{PR}$ and C3D$_{AR}$ are concatenated and then fed to the LSTM network to detect action.} \label{fig_det_ex} \end{figure} \vspace{.3cm} \noindent\textbf{Network architecture}: There are various types of RNNs, such as long short-term memory (LSTM) and gated recurrent units (GRUs). In this paper, the detection network consists of a dropout layer with the probability 0.5, two LSTM layers, each of which has 128 states, a dropout layer with the probability 0.5, and a fully connected layer having $K + 1$ neurons, which correspond to $K$ action classes and one background class of a target dataset, e.g., $K = 20$ for THUMOS'14 and $K=200$ for ActivigyNet. Fig. \ref{fig_detection} shows details of the architecture of the detection network. \begin{figure} \centering \includegraphics[trim=14.15cm 9.45cm 7cm 9.05cm, clip=true, width=.32\linewidth]{./figs/lstm_arch2.pdf} \caption{The LSTM architecture used in the detection network.} \label{fig_detection} \end{figure} \subsection{Training} \label{sec_train} \noindent\textbf{PR and AR networks}: For training PR and AR networks, we first initialize the weights of all convolution layers (Conv1a to Conv5b) and the first fully connected layer (fc6) with the pre-trained 3DCNN network \cite{tran2015learning}. Then we fine-tune these networks with the target benchmark dataset, either THUMOS'14 or ActivityNet. To train the PR network, we modified labels as two: foreground (action) and background (non-action). To train the AR network, we divide each action class into two subclasses, beginning and ending, thus 40 classes for THUMOS'14 and 400 classes for ActivityNet. In experiments, we use SGD (stochastic gradient descent) optimization with a learning rate of 0.0001, momentum of 0.9, weight decay factor of 0.0005, and dropout probability of 0.5. We use the cross-entropy as the loss function to update the network weights. \begin{equation} \mathcal{L}_{softmax}=\frac{1}{N}\sum_{n}^{N}\sum_{k}^{K} -y^{k}log(\hat{y}^k), \end{equation} \noindent where $N$ is the total number of training samples, $y_{n}^k$ is the $k$-th element value of the ground truth probability of the $n$-th sample. $\hat{y}_{n}^k$ is the $k$-th element value of the predicted probability (network output) of the $n$-th sample. $K$ is the total number of classes: 2 for the PR network and the number of action classes of a target dataset for the AR network. \vspace{.3cm} \noindent\textbf{F$^2$G network}: We fine-tune the F$^2$G network on the action classes of a target dataset. The F$^2$G network uses the loss function composed of different sub-losses: an image loss and generator loss (we refer to \cite{villegas2017decomposing} for further details). In experiments, we fine-tune the network with a learning rate of 0.001. \vspace{.3cm} \noindent\textbf{Detection network}: We use SGD (stochastic gradient descent) optimization with a learning rate of 0.0001, momentum of 0.9, weight decay factor of 0.0005, and dropout probability of 0.5. We use the cross-entropy as the loss function to update the network weights. Additionally, we use classes weight to consider imbalanced instance number among classes as follows: \begin{equation} \omega_{k} = 1 - \frac{|S_k|}{2\hat{|S|}} \end{equation} \noindent where $|S_k|$ is the number of training instances of $k$-th class and $|\hat{S}|$ is the largest instance number among all classes. In experiments, we use RMSProp optimization with a learning rate of 0.0001 and a dropout probability of 0.5. We use the cross-entropy as the loss function. \subsection{Data Augmentation} \label{sec_dataaug} We introduce a video data augmentation technique for further improvement of the single-scale temporal sliding window method. There are some issues in both the single-scale window and multi-scale window methods. When using single-scale windows, they cannot capture areas that are important for representing an action if the length of the window is set improperly. When using multi-scale windows, they require post-processing techniques that are unsuitable for online settings. Therefore, we augment training data by varying the lengths of videos. This augmentation allows us to obtain effects similar to using multi-scale windows even though we only use single-scale windows. We conduct augmentation in two ways: increasing and decreasing. The former is to simulate a video clip being played faster by sampling some frames from the original video. The later is to simulate a video clip being played slower. This effect is performed by motion interpolation using the butterflow algorithm\footnote{More details for the butterflow algorithm, and its implementation can be found at https://github.com/dthpham/butterflow}. Motion interpolation is performed by rendering intermediate frames between two frames based on motion. Specifically, given two frames $I_A$ and $I_B$, this technique fills the space between these two frames with generated intermediate frames $I_{AB_1}$, $I_{AB_2}$, ..., $I_{AB_O}$, as shown in Fig. \ref{fig_motion_interpol}. In general, data augmentation helps a model to better generalize \cite{Krizhevsky_imagenet}. With the proposed data augmentation, a model sees training videos with different temporal resolutions. This augmentation mimics, to a certain extent, the effect when using a multi-scale window. As a multi-scale window would learn from having windows of different temporal scales as input during training time, with the augmented data, a single-scale window learns from temporal variations during training time in a different way. \begin{figure} \centering \includegraphics[trim=1cm 7cm 1cm 6.5cm, clip=true, width=.6\linewidth]{./figs/data_aug.pdf} \caption{Illustration of motion interpolation.} \label{fig_motion_interpol} \end{figure} \section{Experiments} We evaluated the proposed framework on two large benchmark datasets: THUMOS'14 \cite{THUMOS14} and ActivityNet \cite{caba2015activitynet}. We will first describe the experimental setup, compare the performance with other methods on two benchmark datasets, and then demonstrate the ablation study. Finally, we describe the limitations of the proposed framework. \subsection{Experimental Setup} \noindent\textbf{Implementation details}: The length of the temporal sliding window $\tau$ is 16. During the training phase, there is no overlap between sliding windows. In other words, sliding windows at time $t_1$ and $t_2$ do not have an intersection between them. In the test phase, we allow 50\% overlap between adjacent sliding windows. At time $t$, the PR and AR networks take 16 frames ($I_{t-15}, ..., I_{t}$) as an input. The F$^2$G network generates 8 future frames ($\hat{I}_{t+1}, ..., \hat{I}_{t+8}$) from the input frames. The generated 8 frames are concatenated with past input frames ($I_{t-7}, ..., I_t, \hat{I}_{t+1}, ..., \hat{I}_{t+8}$). Then these concatenated 16 frames are passed to the PR and AR networks (Fig. \ref{fig_frm}). The detection network receives a $4096\times4$ dimensional vector as an input which corresponds to the output of the PR network (4096$\times$2, 4096 from the input frames and 4096 from the generated frames concatenated with the second half of input frames) and AR network (4096$\times$2, same as the PR network). The PR and AR networks were trained with 50 epochs on each training and validation sets of THUMOS'14. The F$^2$G network was trained with 150k epochs on the training set. Training the detection network is a nontrivial part. We first train the network with 100 epochs on the training set, then 100 epochs on the validation set, and we repeat three times. The training batch is organized as 8 time steps, i.e., 8 sliding windows to allow the network to learn the long-term temporal relationship of input frames. In the test phase, a single sliding window is passed to the detection network. During training, we augment the data by increasing the speed by two and decreasing by half. \vspace{.3cm} \noindent\textbf{Evaluation metric}: We use interpolated average precision (AP) and mean average precision (mAP) to evaluate the performance of our model following the guidelines of the action detection task of THUMOS'14. A detection result is evaluated as a true positive when the overlapping intersection over union (IoU) between the predicted temporal range ${R}_{pred}$ and ground truth temporal range ${R}_{gt}$ is larger than an overlap threshold $\theta$. \begin{equation} IoU =\frac{{R}_{pred}\bigcap {R}_{gt}}{{R}_{pred}\bigcup {R}_{gt}}. \label{eq_map} \end{equation} \subsection{Experimental Results on THMOS'14} \noindent\textbf{Dataset}: We consider twenty classes of THUMOS'14 dataset for evaluation: BaseballPitch, BasketballDunk, Billiards, CleanAndJerk, CliffDiving, CricketBowling, CricketShot, Diving, FrisbeeCatch, GolfSwing, HammerThrow, HighJump, Javelin-Throw, LongJump, PoleVault, Shotput, SoccerPenalty, TennisSwing, Throw-Discus, and VolleyballSpiking. The training set consists of 2,765 trimmed videos that contain one action. The background set consists of 2,500 untrimmed videos that include actions in undefined categories. The validation and test sets consist of 1,010 untrimmed videos and 1,574 untrimmed videos, respectively, which contain more than one action instance, including backgrounds. \begin{figure} \centering \begin{adjustbox}{addcode={\begin{minipage}{\width}}{\caption{% Samples for 20 classes in THUMOS'14 dataset. }\end{minipage}}} \includegraphics[width=.6\linewidth]{./figs/dataset_fig.jpg}% \end{adjustbox} \label{fig_prsoposal} \end{figure} \vspace{.3cm} \noindent\textbf{Comparison to other methods}: We evaluate our model and compare its performance to various offline action detection methods: Wang et al. \cite{wang2014action}, Oneta et al. \cite{oneata2014lear}, Yeung et al. \cite{yeung2016end}, Richard et al. \cite{richard2016temporal}, Shou et al. \cite{shou2017cdc}, and Zhao et al. \cite{zhao2017temporal} using THUMOS'14 because there are no online methods reported on this dataset. As a baseline, we implemented a framework identical to the proposed method except F$^2$G is removed and trained without data augmentation. We report the performance with mAP metric (\ref{eq_map}) with the threshold $\theta$ ranging from 0.1 to 0.7. \begin{table} \caption{Performance of methods on THUMOS'14.} \centering \includegraphics[trim=5.3cm 6.3cm 5.3cm 6.3cm, clip=true, width=0.5\linewidth]{./figs/comparison_result.pdf} \label{cmp_table2} \end{table} Tab. \ref{cmp_table2} summarizes the performance of methods on THUMOS'14. Note that the proposed method was tested on the online setting scenario in which only past and current information are available for decision at a given moment, while other comparing methods were tested on the offline setting in which rich temporal context information is available. Taking this into consideration, the performance of our method is comparable to other offline setting methods. Compared to the baseline, the proposed method outperforms significantly, 0.06 higher mAP on average. Fig. \ref{fig_result3} visualizes results in some challenging test data instances on THUMOS'14. \begin{figure} \centering \subfigure[CleanAndJerk]{ \label{ret_tu1} \includegraphics[trim=1.6cm 2.9cm 1.6cm 2.9cm, clip=true, width=0.45\linewidth]{./figs/tu1.pdf} } \subfigure[BasketballDunk]{ \label{ret_tu2} \includegraphics[trim=1.5cm 3.0cm 1.6cm 3.0cm, clip=true, width=0.45\linewidth]{./figs/tu2.pdf} } \caption{\textcolor{black}{Detection results illustration on \emph{CleanAndJerk} (a) and \emph{BasketballDunk} (b) classes of THUMOS'14 test set. In the horizontal bar of each subfigure, the top row denotes the ground truth and the bottom row detection result. The color-shaded region (dark red) indicates the target class and the white region background. Images with the dark red boundary denote the class instance and those without the colored border background (best seen in color). In (a), paused moments of the person sitting down are labeled as background while the rest is CleanAndJerk in which the proposed method detects with some misalignment. In (b), the proposed method struggles to locate the background precisely. It is probably due to short of the temporal context available to decide.}} \label{fig_result3} \end{figure} \subsection{Experimental Results on ActivityNet} \noindent\textbf{Dataset}: We use the latest version 1.3 of ActivityNet dataset, which contains two hundred activity classes. The training set consists of 10,024 untrimmed videos that contain single activity instances. The validation set and test set consist of 4,926 and 5,044 untrimmed videos, respectively. In this experiment, we use the validation set for evaluation since the annotation of the test set is not available. \vspace{.3cm} \noindent\textbf{Comparison to other methods}: We evaluate our model and compare its performance to two offline action detection methods: Heilbron et al. \cite{Heilbron_2017_CVPR}, Shou et al. \cite{shou2017cdc}, and Zhao et al. \cite{zhao2017temporal} using the ActivityNet because there are no online methods reported on this dataset. As a baseline, we implemented a framework identical to the proposed method except F$^2$G is removed and trained without data augmentation. We report the performance with mAP metric (3) with the threshold $\theta$s: 0.5, 0.75, and 0.95 as in \cite{Heilbron_2017_CVPR}. \begin{table} \caption{Performance of methods on ActivityNet.} \centering \includegraphics[trim=9.4cm 7.7cm 9.4cm 7.65cm, clip=true, width=0.3\linewidth]{./figs/comparison_result_activitynet.pdf} \label{cmp_table_act} \end{table} Tab. \ref{cmp_table_act} summarizes the performance of methods on ActivityNet. Note that the proposed method was tested on the online setting scenario in which only past and current information are available for decision at a given moment, while other comparing methods were tested on the offline setting in which rich temporal context information is available. Taking this into consideration, the performance of our method is comparable to other offline setting methods. Compared to the baseline, the proposed method outperforms significantly, 0.06 higher mAP on average. Fig \ref{fig_result4} visualizes results in some challenging test data instances on ActivityNet. \begin{figure} \centering \subfigure[Knitting]{ \label{ret_act1} \includegraphics[trim=1.5cm 3.0cm 1.55cm 3.0cm, clip=true, width=0.45\linewidth]{./figs/act1.pdf} } \subfigure[Tango]{ \label{ret_act2} \includegraphics[trim=1.65cm 2.9cm 1.45cm 2.9cm, clip=true, width=0.45\linewidth]{./figs/act2.pdf} } \caption{\textcolor{black}{Detection results illustration on \emph{Knitting} (a) and \emph{Tango} (b) classes of ActivityNet validation set. In the horizontal bar of each subfigure, the top row denotes the ground truth and the bottom row detection result. The color-shaded region (dark red) indicates the target class and the white region background. Images with the dark red boundary denote the class instance and those without the colored border background (best seen in color). In (a), the proposed method successfully detects two class instances except for one instance of a short period. (b) illustrates the failure case: the proposed method does not locate the instances precisely. We postulate that it is due to the long sequence length of the input stream; thus the proposed method struggles to exploit temporal information.}} \label{fig_result4} \end{figure} \subsection{\textcolor{black}{Additional Analysis}} \begin{table} \caption{Per-frame labeling performance on THUMOS'14.} \centering \includegraphics[trim=11.4cm 8.6cm 11.4cm 8.45cm, clip=true, width=0.21\linewidth]{./figs/per_frame.pdf} \label{cmp_table_perframe} \end{table} \noindent\textbf{\textcolor{black}{Performance in online setting}}: \textcolor{black}{Since the proposed method works in the online setting, it is not straightforward to directly compare the performance with methods based on the offline setting. To our best knowledge, there is no metric for the online action detection performance. Thus, we evaluate the per-frame mAP performance as exploited by Shou et al. \cite{shou2017cdc}. Tab. \ref{cmp_table_perframe} shows the performance on THUMOS'14. The proposed method outperforms Shou et al. by 0.01.} \noindent\textbf{\textcolor{black}{Computational complexity}}: \textcolor{black}{We tested the proposed framework on a single NVIDIA Titan X GPU with 12GB memory. The speed of the proposed framework is around 9 frames per second (fps). Assuming that each C3D network in the proposed framework has three convolutional layers streams to deal with different temporal resolutions, similar to multi-scale methods, the fps of this configuration decreases to 7 fps, which implies that the proposed data augmentation allows less computational cost and efficient memory usage.} \subsection{\textcolor{black}{Ablation Study}} \begin{table} \caption{Performance of proposed components on THUMOS'14.} \centering \includegraphics[trim=5.4cm 6.8cm 5.4cm 7.65cm, clip=true, width=0.55\linewidth]{./figs/ablation_result.pdf} \label{table_result3} \end{table} \begin{table} \caption{Performance of proposed components on ActivityNet.} \centering \includegraphics[trim=9.4cm 6.8cm 9.4cm 7.65cm, clip=true, width=0.31\linewidth]{./figs/ablation_result_activitynet.pdf} \label{table_result3_act} \end{table} \textcolor{black}{We conduct additional experiments to analyze the impact of each model component by eliminating them one at a time. The experiments are conducted with six model setups: i) baseline, ii) without data augmentation (w/o Aug), iii) the C3D$_{AR}$ network connected after the C3D$_{PR}$ network (w/ CS), iv) without the F$^2$G network (w/o F$^2$G), v) the full model (Full), and vi) ground truth as future frame generation output (w/ F$^2$G GT). Tab. \ref{table_result3} summarizes the result on THUMOS'14 dataset and Tab. \ref{table_result3_act} the result on ActivityNet.} \vspace{.3cm} \noindent\textcolor{black}{\textbf{Baseline}: This setup is when the proposed framework consists of PR, AR, and Det networks. The performance is shown in the second row of Tab. \ref{table_result3} and Tab. \ref{table_result3_act}. Its performance is inferior to all other settings.} \vspace{.3cm} \noindent\textcolor{black}{\textbf{Data augmentation}: Without data augmentation for model training, the performance decreases by 0.01 on average on both datasets (the third row). This result indicates that with proper data augmentation, including the frame interpolation used in the proposed framework, the performance can improve more.} \vspace{.3cm} \noindent\textcolor{black}{\textbf{Proposal representation and action representation configuration}: This setup is to study which of two C3D network arrangements, in parallel or serial, is useful in the proposed framework. As exploited in \cite{shou2016temporal}, the C3D$_{AR}$ network only takes the input segment classified as action by the C3D$_{PR}$ network. In this setup, the performance decreases, on average, by 0.02 and 0.01 on THUMOS'14 and ActivityNet, respectively (the fourth row).} \vspace{.3cm} \noindent\textcolor{black}{\textbf{Future frame generation}: Exploiting future frame generation component increases the mAP, on average, by 0.05 on THUMOS'14 and 0.02 on ActivityNet, respectively (the fifth row). The performance gain is the most significant among other components. The result demonstrates that using the F$^2$G network allows our framework to consider more amount of information when compared to a situation without the F$^2$G. Thus, the limitations in the online setting is resolved to a certain extent.} \vspace{.3cm} \noindent\textcolor{black}{\textbf{Ground truth as future frame generation output}: To simulate this setup, we replace the output of the F$^2$G network by the ground truth frames. In other words, given an input sequence $(I_{t - 15}, ..., I_t)$ the F$^2$G generates future frames $(\hat{I}_{t + 1}, ..., \hat{I}_{t + 7})$; we replace generated future frames $(\hat{I}_{t + 1}, ..., \hat{I}_{t + 7})$ by the actual future frames $({I}_{t + 1}, ..., {I}_{t + 7})$ to simulate the aforementioned situation. The results are shown in the bottom row of Tab. \ref{table_result3} and Tab. \ref{table_result3_act}. Comparing to the performance of the `full' model (the sixth row), using the ground truth as outputs of the F$^2$G network (the bottom row) increases the mAP, on average, by 0.01 on THUMOS'14 and 0.006 on ActivityNet. This result indicates that improving future frame generation performance leads to detection performance increase.} \textcolor{black}{To summarize, as we argued in Sec. \ref{sec_intro}, for the online action detection scenario from video streams, a limited amount of information is a significant factor; using the F$^2$G network resolves this limitation by feeding predicted future input frames of a short period, eight frames in this paper, to the system so that more information is considered. Augmenting data also improves the detection performance, which means that making a model aware of variation of action duration is critical to a certain extent. Arranging two C3D networks in parallel, instead of connecting them in serial, is more effective in the proposed framework.} \subsection{Limitations} We demonstrated that the proposed method shows comparable performances on two benchmark datasets. However, there are several limitations that we summarize as follows. \begin{enumerate} \item[--] Computational complexity: The proposed framework exploits four deep neural networks, which costs roughly 174M parameters. The reason is due to the difficulties of the online action localization task, which requires several components to deal with the lack of available information, distinguishing actions from background scenes, and accurately localize the start and the end of an action class. Thus, we designed the proposed framework with four deep neural networks in each of which is dedicated to deal with the issues mentioned above. \item[--] Limited backpropagation during training: This limitation comes from the limited GPU resources to handle all four networks at the same time. As described in Sec. 3, each network of the proposed framework is trained separately, which implies that detection error at the final network LSTM$_{DET}$ does not backpropagate to the input layer of AR, PR, and F$^2$G networks, respectively. \item[--] Dependency on the F$^2$G network: We demonstrated that using generated future frames improves the online temporal action localization performance. However, the generation performance is not satisfactory when compared to the real ground truth frames. There is a large room for improving generation performance in which the proposed method depends. \item[--] Room for further improvement: As we mentioned above, the limitations of the proposed method mostly come from the hardware side. We expect that, with enough computational resources, training the proposed framework with proper backpropagation, the performance will improve to a certain extent. \end{enumerate} \section{Conclusion and Future work} \label{sec6} In this paper, we proposed a novel action detection framework to address the challenging problems of online action detection from untrimmed video streams. To resolve the limited information issue, we proposed to exploit a future frame generation network. To learn temporal order using only visual information without learning any temporal prior, such as duration of action, we reorganized action class as two temporally-ordered subclasses. To make the proposed framework generalize better, we augmented training video data by varying the duration of action. We demonstrated that the performance of the proposed framework is comparable with the offline setting methods on two benchmark datasets, THUMOS'14 and ActivityNet. Through the ablation study, we demonstrated that the F$^2$G network gives meaningful improvement. We believe that other time-series tasks, such as traffic flow prediction \cite{POLSON20171} and financial market analysis \cite{CHONG2017187}, can also be benefitted by using a future generation network. In the meanwhile, there are also several limitations. The dependency on the future frame generation network and computational complexity of the proposed framework need to be addressed for further improvement. As future work, we plan to design a more efficient feature extraction network so that the whole framework can learn with the same backpropagation error. We will also plan to formulate action detection as a multitask learning problem. \bibliographystyle{unsrt}
\section{INTRODUCTION} The dynamical process of synchronization in coupled chaotic systems has greatly influenced researchers because of the sensitive dependence of chaos on initial conditions \cite{Pecora1990,Pikovsky2003}. The high complexity and unpredictability prevailing in the dynamics of chaotic systems requires a complete understanding on the synchronization dynamics of coupled systems as it has potential applications in secure transmission of information signals \cite{Rulkov1992,Chua1992,Chua1993,Murali1993,Boccaletti2002,Chen2020}. Several higher and low-dimensional chaotic systems have been studied for synchronization and numerous electronic circuit systems have been analyzed for the application of chaos synchronization to secure communication \cite{Chua1992,Murali1993,Oppenheim1992,Murali1994,Murali1997,Koronovskii2009,Wu2019,Wang2019,Wang2019a}. The important requirement for signal transmission by chaos synchronization is that the coupled systems must exist in stable synchronized states over greater values of coupling strength. The existence of coupled chaotic systems in stable synchronized states is observed through the evaluation of the {\emph{Master Stability Function}} (MSF) \cite{Pecora1998} and the negative valued regions of MSF becomes a necessary condition for occurrence of synchronization. Recently, induced synchronization has been observed in coupled chaotic systems by Schr{\"o}der {\emph{et al.}} using the method of {\emph{transient uncoupling}} \cite{Schroder2015}. This method induces synchronization in coupled systems and enhances the stability of synchronization to greater values of coupling strength \cite{Schroder2016,Aditya2016,Ghosh2018}. Further, the effect of the size of the chaotic attractors with different Lyapunov dimension in enhancing synchronization stability is studied \cite{Sivaganesh2019}. However, the direction dependence of the method transient uncoupling in clipping the phase space of chaotic attractors of the response system in a {\emph{drive-response}} scenario is yet to be studied. This paper introduces a new approach to study the direction dependence of transient uncoupling i.e., {\emph{optimal uncoupling}} and for the identification of optimal directions of implementing clipping widths to achieve greater stable synchronization in coupled chaotic systems. The following are discussed in this article. The method of transient and optimal uncoupling are briefly discussed in Section \ref{sec:2} and in Section \ref{sec:3}, the implementation of the method of optimal uncoupling in enhancing synchronization in coupled chaotic systems is presented. \section{Transient and Optimal Uncoupling} \label{sec:2} The method of {\emph{transient uncoupling}} involves the clipping of phase space of the response system over the coordinate axis through the drive and response systems are unidirectionally coupled. The state equations of a $d$-dimensional chaotic system subjected to transient uncoupling driven by an identical chaotic drive system can be written as \begin{equation} {\bf{\dot x_2}} = {\bf{F(x_2)}} + \epsilon \chi_{A} (\bf{x_2}) \bf{G} \times (\bf{x_1} - \bf{x_2}) \label{eqn:1} \end{equation} where, $\epsilon,~\chi_A$ represent the coupling strength and transient uncoupling factor and {\bf{G}} is the coupling matrix. The terms ${\bf{x_{1},x_{2}}}$ represents the state vectors of the drive and response systems and the transient uncoupling factor $\chi_A$ representing the region of phase space $A$ where, $A \subseteq \mathbb{R}^d$, is written as \begin{equation} \chi_A = \begin{cases} 1 & \text{if ${\bf{x_2}} \in A$}\\ 0 & \text{if ${\bf{x_2}} \notin A$} \end{cases} \label{eqn:2} \end{equation} The phase space of the response system is clipped normal to the axis of the coordinate variable $({\bf{x}}_2)_i$ where, $i=1,2,...d$, that couples the drive and response systems with respect to a point $({\bf{x}}_{2}^*)_i$ to a width $\Delta$. The clipped region of phase space is given as \begin{equation} A_{\Delta} = \{ {\bf{x}}_2 \in \mathbb{R}^d : |({\bf{x}}_2)_i - ({\bf{x}}_{2}^*)_i| \le \Delta \} \label{eqn:3} \end{equation} However, the clipping of phase space of the response system has not to be always restricted to any one of the coordinate axis and it can have orientations $(\theta)$ with respect to the coordinate axes. The method of finding the optimal direction $(\theta^{*})$ of applying clipping width to obtain stable synchronization leads to the evaluation of the effectiveness of clipping fraction for which synchronization is observed in the coupled systems for a fixed value of coupling strength \cite{Schroder2015} and is given as \begin{equation} S(\theta) = \int_0^1 s(f,\theta) df, \label{eqn:4} \end{equation} where, the synchrony indicator $s(f,\theta)$ is \begin{equation} s(f,\theta) = \begin{cases} 1 & \text{if $\lambda^{\perp}_{max} < 0$}\\ 0 & \text{if $\lambda^{\perp}_{max} \ge 0$} \end{cases} \label{eqn:5} \end{equation} with $f$ being the temporal clipping fraction given as \begin{equation} f = \lim_{T \to \infty} \frac{1}{T} \int_0^T \chi_A ({\bf{x_2}}(t)) dt, \label{eqn:6} \end{equation} The {\emph{master stability function}} being the largest transverse Lyapunov exponent $\lambda^{\perp}_{max}$ is obtained to identify the stability of synchronized states in coupled chaotic systems \cite{Pecora1998,Pecora1997}. \section{Results and Discussion} \label{sec:3} We present in this section, the effect of optimal uncoupling in enhancing the stability of synchronized states and explain the novel approach in identifying the optimal directions for applying the clipping widths. The orientation of the clipping width ($\theta$) in the phase space of the response system is considered to vary in the anticlockwise direction with respect to the $x$ or $z$-axis of the corresponding attractor discussed. The clipping width $\Delta$ oriented at an angle $\theta$ radians, has components along both the coordinate axis. Hence, the region of phase space clipped must include clipping along both the axis which indirectly implies that both of the state variables representing the attractor in the phase space must be coupled to the respective variables of the drive system. The {\emph{R{\"o}ssler}} and the {\emph{Chua's circuit}} systems are studied in this paper using this new approach to identify the optimal directions of implementing clipping widths. \subsection{R{\"o}ssler system} \label{sec:2.3} The state equations of coupled {\emph{R{\"o}ssler}} systems \cite{Rossler1976} with the clipping of phase space along a particular direction can be written as \begin{subequations} \begin{eqnarray} \dot x_1 &=& -y_1 - z_1, \\ \dot y_1 &=& x_1 + a y_1,\\ \dot z_1 &=& b + (x_1 - c) z_1,\\ \dot x_2 &=& -y_2 - z_2 + \epsilon \chi_A (x_1 - x_2), \\ \dot y_2 &=& x_2 + a y_2+ \epsilon \chi_A (y_1 - y_2),\\ \dot z_2 &=& b + (x_2 - c) z_2, \end{eqnarray} \label{eqn:7} \end{subequations} where $x_{1,2},y_{1,2},z_{1,2}$ represents the state variables of the drive and response systems. Considering the deviation of the attractor along the $z$-axis is minimum \cite{Schroder2015}, the synchronization stability of the coupled {\emph{R{\"o}ssler}} systems corresponding to the orientation of the clipping widths in the $x-y$ plane can be explored. Hence, an orientation of the clipping width in the $x-y$ plane must have its components along the corresponding coordinates axes leading to the coupling of the systems through the $x$ and $y$ state variables. Eqs. \ref{eqn:7}(d) and \ref{eqn:7}(e) indicates that for clipping widths oriented with respect to the coordinate axes of the state space vectors ($x$ and $y$) i.e., for $\theta \neq 0, \pi$ and $\theta \neq \pi/2, 3\pi/2$, the systems are unidirectionally coupled through both the $x$ and $y$ variables by the factor $\epsilon \chi_A$. For clipping widths with orientations given by $\theta = 0, \pi$ or $\theta = \pi/2, 3\pi/2$, the systems are coupled through the state variables $x$ or $y$, respectively. \\ \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{rossler_phase} \caption{(Color Online) {\emph{R{\"o}ssler}} system: Clipping of phase space of the chaotic attractors corresponding to the response system (cyan) along with the attractor of the drive (green) in the $x_{1,2}-y_{1,2}$ phase planes obtained for the parameters $a=0.2,b=0.2,c=5.7$ and $\epsilon = 0$. $O(x^{*},y^{*})$ is the center of the chaotic attractor with $(x^{*},y^{*})$ = (1.2,-1.5) and $OA=\Delta$ is the clipping width. Clipping is implemented along a particular direction '$\theta$' from the $x-$coordinate axis in the $(x_{2}-y_{2})$ plane over a width of $2\Delta$. $\Delta_x=\Delta cos (\theta)$ and $\Delta_y=\Delta sin (\theta)$ represent the components of the vector $\Delta$ along the coordinate axes. The coupling strength is active only over the region of phase space of the response system within the red colored box indicating the intersection of the components $\Delta_x$ and $\Delta_y$ of the vector $\Delta$ along the corresponding axis.} \label{fig:1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{rossler_angle} \caption{(Color Online) {\emph{R{\"o}ssler}} system: (a) 3D plot indicating the variation of $\lambda^{\perp}_{max}$ with the orientation of clipping width $\theta$ and clipping fraction $\Delta^{'}$; (b) Variation of the effectiveness of clipping fraction $S(\theta)$ with orientation of clipping width $\theta$ indicating symmetry of the curve about the angle $\theta=\pi/2$. The optimal directions of clipping exists in the range $0.1667\pi \le \theta^{*} \le 0.3444\pi$ and $0.6556\pi \le \theta^{*} \le 0.8333\pi$, respectively.} \label{fig:2} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{rossler_1} \caption{(Color Online) {\emph{R{\"o}ssler}} system: Variation of $\lambda^{\perp}_{max}$ with clipping fraction ($\Delta^{'}$) for different orientations of the clipping width $\theta$ with the couping strength fixed at $\epsilon=10$. Variation of $\lambda^{\perp}_{max}$ with $\Delta^{'}$ for (a) $\theta=0$ i.e. $x$-coupling, indicates stable synchronization in the range $0.1445 \le \Delta^{'} \le 0.7739$; (b) $\theta=\pi/2$ i.e. $y$-coupling, indicates stable synchronization for $\Delta^{'} \ge 0.1023$; (c) Optimal direction of clipping width $\theta^{*}=\pi/4$ indicates stable synchronization for $\Delta^{'} \ge 0.163$; (d) Parameter regions for stable synchronization (gray colored) in the $\Delta^{'}-\epsilon$ plane.} \label{fig:3} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{rossler_optimal_phase_sync.eps} \caption{(Color Online) {\emph{R{\"o}ssler}} system: Phase-portraits of the drive (red) and response (yellow) systems in $(x_{1,2}-y_{1,2})$ planes indicating synchronization of the coupled systems within the clipped region (black dotted box) for the parameters (a) $\theta=0.1 \pi,~\epsilon = 10,~\Delta^{'}=0.6$ and (b) $\theta=0.25 \pi,~\epsilon = 10,~\Delta^{'}=0.6$, respectively.} \label{fig:4} \end{center} \end{figure} Figure \ref{fig:1} shows the implementation of the above said method of the orientation of clipping width in the phase space of the response system along a particular direction $\theta$. The point $O(x^{*},y^{*})$ is the center of the chaotic attractor and clipping of phase space is performed to a width of $\Delta~ (OA=\Delta)$ on both sides of the center leading to a total width of $2\Delta$. The component of the clipping width $\Delta$ along the $x$ and $y$ axes are represented as $\Delta_x$ and $\Delta_y$. The intersection of the components $\Delta_x$ and $\Delta_y$ on both sides of the coordinate axes gives rise to a region of phase space $2 \Delta_x . 2 \Delta_y$ of the response system, as indicated by the red colored dotted box in Fig. \ref{fig:1}, within which the coupling of the systems is valid. This region of phase space varies with the clipping width ($\Delta$) and its orientation ($\theta$) along the $x$-coordinate axis. For $\theta=0$ and $\theta=\pi$, the coupling between the systems is only through the $x$-variable and for $\theta=\pi/2$ and $\theta=3\pi/2$, the coupling is only through the $y$-variable. The fraction of phase space clipped along any orientation from the $x$-axis is given by the clipping fraction $\Delta^{'}=2\Delta_{x,y}/\Omega_{x,y}$ where, $\Omega_x$ and $\Omega_y$ represent the width of the chaotic attractor along the $x$ and $y$ axes, respectively. The chaotic attractor shown in Fig. \ref{fig:1} is obtained for the system parameters $a=0.2,~b=0.2,~c=5.7$ and has a Lyapunov dimension $L_d = 2.0139$.\\ The functional work steps involved in identifying the optimal directions of applying clipping widths to achieve stable synchronization is summarized as follows: \begin{enumerate} \item Fix the center of the chaotic attractor of the response system $({x^{*}_2}, {y^{*}_2})$ along the co-ordinate axis of the state variable coupled to the drive system. \item For a fixed value of $\theta$ and clipping width $\Delta$ identify the region of phase space within which the coupling strength is active by resolving the horizontal $(\Delta_{x} = \Delta~ cos \theta)$ and vertical component $(\Delta_{y} = \Delta ~sin \theta)$ of the vector $OA~(OA=\Delta)$ and estimate $\lambda^{\perp}_{max}$. \item Vary the clipping fraction $\Delta^{'}$ $(\Delta^{'}_x = 2 \Delta_x / \Omega_x,~\Delta^{'}_y = 2 \Delta_y / \Omega_y)$ in the range $0 \le \Delta^{'} \le 1$ in steps, identify the active phase-phase region of coupling strength to estimate $\lambda^{\perp}_{max}$ in each step and evaluate the effectiveness of clipping fraction $S(\theta)$ using the synchrony indicator $s(\theta)$ obtained for each step of clipping fraction. \item Evaluate $S(\theta)$ for each value of $\theta$ by varying $\theta$ in steps in the range $0 \le \theta \le \pi$ by repeating steps 2 and 3. \item Plot $S(\theta)$ obtained for the corresponding value of $\theta$ to find the optimal directions $\theta^{*}$. \end{enumerate} The stability of synchronization of the coupled {\emph{R{\"o}ssler}} systems can be analyzed by observing the MSF to identify the optimal direction of implementing the clipping width with respect to a particular coordinate axis. Fig. \ref{fig:2}(a) shows the variation of MSF as functions of the orientation of clipping width ($\theta$) and the clipping fraction ($\Delta^{'}$). The orientation of clipping width is varied from 0 to $\pi$ radians with respect to the $x$-coordinate axis of the response system. The 3D plot shown in Fig. \ref{fig:2}(a) shows the existence of certain range of optimal directions over which the coupled system confines to stable synchronized states. Fig. \ref{fig:2}(b) showing the variation of the effectiveness of clipping fraction $S(\theta)$ as a function of the orientation angle $\theta$ in the range $0 \le \theta \le \pi$ leads to some interesting results. Firstly, the curve is symmetrical on both sides about the orientation angle $\theta=\pi/2$. Hence, the clipping width can be sufficiently varied through the angle $0 \le \theta \le \pi/2$ to study the effectiveness of clipping orientation in coupled systems. Secondly, from Fig. \ref{fig:2}(b), it can be observed that the optimal direction $\theta^{*}$ over which the effectiveness of clipping fraction is observed and greater stable synchronized states is promised exists over a range of orientations of clipping widths. For the {\emph{R{\"o}ssler}} system, the optimal directions for stable synchronization is observed in the ranges $0.1667\pi \le \theta^{*} \le 0.3444\pi$ and $0.6556\pi \le \theta^{*} \le 0.8333\pi$, respectively. Figure \ref{fig:3} shows the variation of MSF ($\lambda^{\perp}_{max}$) with $\Delta^{'}$ for different orientations of the clipping width for a common value of the coupling strength $\epsilon=10$. Figure \ref{fig:3}(a) and \ref{fig:3}(b) showing the MSF variation with $\Delta^{'}$ for $\theta=0$ (x-coupling) and $\theta=\pi/2$ (y-coupling) indicates stable synchronized states in the range of clipping fractions $0.1145 \le \Delta^{'} \le 0.7739$ and $\Delta^{'} \ge 0.1023$, respectively. Figure \ref{fig:3}(c) shows the variation of $\lambda^{\perp}_{max}$ with $\Delta^{'}$ for the optimal direction $\theta^{*}=\pi/4$ indicating larger negative values of $\lambda^{\perp}_{max}$ for the region $\Delta^{'} \ge 0.163$. The parameter regions in the $\Delta^{'}-\epsilon$ plane indicating the negative valued regions of $\lambda^{\perp}_{max}$ for the optimal direction $\theta^{*}=\pi/4$ is shown in Fig. \ref{fig:3}(d). The synchronization of the response system with drive within the the clipped region of phase space obtained for certain values of $\theta$ is a shown in Fig. \ref{fig:4}. Figure \ref{fig:4}(a) and \ref{fig:4}(b) shows the phase-portraits in the $(x_{1,2}-y_{1,2})$ planes indicating the synchronization of dive (red) and response (yellow) systemswithin the clipped region of phase space (black dotted box), over which the coupling strength $\epsilon$ is active, for the parameters $\theta=0.1 \pi,~\epsilon=10,~\Delta^{'}=0.6$ and $\theta=0.25 \pi,~\epsilon=10,~\Delta^{'}=0.6$, respectively.\\ The method of optimal uncoupling presented above can be validated through its application to another chaotic system namely, the {\emph{Chua's}} circuit system. \begin{figure} \begin{center} \centering \includegraphics[width=1\textwidth]{chua_phase} \caption{(Color Online) {\emph{Chua's}} circuit system: Clipping of phase space of the chaotic attractors corresponding to the response system (cyan) along with the attractor of the drive (green) in the $z_{1,2}-y_{1,2}$ phase planes for $\epsilon = 0$. $O(z^{*},y^{*})$ is the center of the chaotic attractor with $(z^{*},y^{*})$ = (-0.116,~-0.002) and $OA=\Delta$ is the clipping width. Clipping is implemented along a particular direction '$\theta$' from the $z-$coordinate axis in the $(z_{2}-y_{2})$ plane over a width of $2\Delta$. $\Delta_z=\Delta~cos \theta$ and $\Delta_y=\Delta~sin \theta$ represent the components of the vector $\Delta$ along the coordinate axes. The coupling strength is active only over the region of phase space of the response system within the red colored box indicating the intersection of the components $\Delta_z$ and $\Delta_y$ of the vector $\Delta$ along the corresponding axis.} \label{fig:5} \end{center} \end{figure} \begin{figure} \begin{center} \centering \includegraphics[width=1\textwidth]{chua_angle} \caption{(Color Online) {\emph{Chua's}} circuit system: (a) 3D plot indicating the variation of $\lambda^{\perp}_{max}$ with the orientation of clipping width $\theta$ and clipping fraction $\Delta^{'}$; (b) Variation of the effectiveness of clipping fraction $S(\theta)$ with orientation of clipping width $\theta$ indicating symmetry of the curve about the angle $\theta=\pi/2$. The optimal directions of clipping exists in the range $0.1667\pi \le \theta^{*} \le 0.3111\pi$ and $0.6889\pi \le \theta^{*} \le 0.8333\pi$, respectively.} \label{fig:6} \end{center} \end{figure} \begin{figure} \begin{center} \centering \includegraphics[width=1\textwidth]{chua_1} \caption{(Color Online) {\emph{Chua's}} circuit system: Variation of $\lambda^{\perp}_{max}$ with clipping fraction ($\Delta^{'}$) for different orientations of the clipping width $\theta$ with the couping strength fixed at $\epsilon=5$. Variation of $\lambda^{\perp}_{max}$ with $\Delta^{'}$ for (a) $\theta=0$ i.e. $z$-coupling, indicates stable synchronization in the range $0.6766 \le \Delta^{'} \le 0.931$; (b) $\theta=\pi/2$ i.e. $y$-coupling, indicates stable synchronization for $\Delta^{'} \ge 0.4468$; (c) Optimal direction of clipping width $\theta^{*}=\pi/4$ indicates stable synchronization for $\Delta^{'} \ge 0.3295$; (d) Parameter regions for stable synchronization (gray colored) in the $\Delta^{'}-\epsilon$ plane.} \label{fig:7} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{chua_optimal_phase_sync.eps} \caption{(Color Online) {\emph{Chua's}} circuit system: Phase-portraits of the drive (red) and response (yellow) systems in $(x_{1,2}-y_{1,2})$ planes indicating synchronization of the coupled systems within the clipped region (black dotted box) for the parameters (a) $\theta=0.1 \pi,~\epsilon = 5,~\Delta^{'}=0.8$ and (b) $\theta=0.25 \pi,~\epsilon = 5,~\Delta^{'}=0.8$, respectively.} \label{fig:8} \end{center} \end{figure} \subsection{Chua's circuit system} \label{sec:2.1} The dynamical equations of the {\emph{Chua's circuit}} system \cite{Chua1992,Matsumoto1984,Matsumoto1985} with the drive and response systems coupled through the $y$ and $z$-variables is written as \begin{subequations} \begin{eqnarray} \dot x_1 &=& \alpha (y_1-x_1+f(x_1)), \\ \dot y_1 &=& x_1-y_1+z_1,\\ \dot z_1 &=& -\beta y_1 - \gamma z_1,\\ \dot x_2 &=& \alpha (y_2-x_2+f(x_2)), \\ \dot y_2 &=& x_2-y_2+z_2 + \epsilon \chi_A (y_1 - y_2),\\ \dot z_2 &=& -\beta y_2 - \gamma z_2+ \epsilon \chi_A (z_1 - z_2), \end{eqnarray} \label{eqn:8} \end{subequations} where $f(x_{1}), f(x_2)$ represent the three-segmented piecewise-linear function of the drive and response systems given as \begin{equation} f(x_{1,2}) = \begin{cases} -bx_{1,2}+(a-b) & \text{if $x_{1,2} > 1$}\\ -ax_{1,2} & \text{if $|x_{1,2}| < 1$}\\ -bx_{1,2}-(a-b) & \text{if $x_{1,2} < -1$} \end{cases} \label{eqn:9} \end{equation} where $x_{1,2},y_{1,2},z_{1,2}$ represents the state variables of the drive and response systems. Figure \ref{fig:5} shows the implementation of clipping width for a finite orientation $\theta$ about the $z$-axis in the phase space of the response system. The point $O(z^{*},y^{*})$ is the center of the chaotic attractor and the clipping of phase space is performed to a width of $\Delta~(OA=\Delta)$ on both sides of the center leading to a total width of $2\Delta$. The intersection of the components of clipping width $\Delta_z$ and $\Delta_y$ on both sides of the coordinate axes gives rise to a region of phase space $2 \Delta_z . 2 \Delta_y$ of the response system, as indicated by the red colored dotted box in Fig. \ref{fig:5}, within which the coupling strength is valid. The clipping fraction $\Delta^{'}=2\Delta_{z,y}/\Omega_{z,y}$ where, $\Omega_z$ and $\Omega_y$ represent the width of the chaotic attractor along the $z$ and $y$ axes, respectively. The chaotic attractor shown in Fig. \ref{fig:5} is obtained for the system parameters $\alpha=10,~\beta=14.87,~\gamma=0,~a=-1.55,~b=-0.68$ and has a Lyapunov dimension $L_d = 2.1192$.\\ The stability of synchronization of the coupled {\emph{Chua}} systems can be analyzed similar to the {\emph{R{\"o}ssler}} system to identify the optimal directions of implementing the clipping width. Fig. \ref{fig:6}(a) shows the variation of MSF as functions of the orientation of clipping width ($\theta$) and the clipping fraction ($\Delta^{'}$). Fig. \ref{fig:6}(b) showing the variation of the effectiveness of clipping fraction $S(\theta)$ with orientation $\theta$ in the range $0 \le \theta \le \pi$ indicates optimal directions ($\theta^{*}$) for stable synchronization in the ranges $0.1667\pi \le \theta^{*} \le 0.3111\pi$ and $0.6889\pi \le \theta^{*} \le 0.8333\pi$, respectively. Further, the curve is symmetrical on both sides about the angle $\theta=\pi/2$. Hence, it is confirmed that optimal directions of clipping width can be obtained by studying the effectiveness of clipping fraction over the range $0 \le \theta \le \pi/2$. Figure \ref{fig:7} shows the variation of $\lambda^{\perp}_{max}$ with $\Delta^{'}$ for different orientations of the clipping width for the coupling strength $\epsilon=5$. Figures \ref{fig:7}(a) and \ref{fig:7}(b) showing the MSF variation with $\Delta^{'}$ for $\theta=0$ (z-coupling) and $\theta=\pi/2$ (y-coupling) indicates stable synchronized states in the range $0.6766 \le \Delta^{'} \le 0.931$ and $\Delta^{'} \ge 0.4468$, respectively. Figure \ref{fig:7}(c) shows the variation of $\lambda^{\perp}_{max}$ with $\Delta^{'}$ for the optimal direction $\theta^{*}=\pi/4$ indicating larger negative values of $\lambda^{\perp}_{max}$ for $\Delta^{'} \ge 0.3295$. The parameter regions in the $\Delta^{'}-\epsilon$ plane indicating the negative valued regions of $\lambda^{\perp}_{max}$ for the optimal direction $\theta^{*}=\pi/4$ is shown in Fig. \ref{fig:7}(d). The synchronization of the drive (red) and response (yellow) systems within the clipped region of phase space (black dotted box) obtained in the $z_{1,2}-y_{1,2}$ phase planes for the parameters $\theta=0.1 \pi,~\epsilon = 5,~\Delta^{'}=0.8$ and $\theta=0.25 \pi,~\epsilon = 5,~\Delta^{'}=0.8$ are as shown in Fig. \ref{fig:8}(a) and \ref{fig:8}(b), respectively. Hence, the response system synchronize with the drive within the clipped region of phase space for suitable values of $\theta$ and $\Delta^{'}$. \section{Conclusion} We have reported in this paper, the implementation of a novel method in enhancing the stability of synchronization observed in coupled chaotic systems through optimal uncoupling. The orientation of the clipping width in the phase space of the attractor leads to the coupling of the systems through both of the state variables representing the phase space of the attractor. The optimal directions of implementing the clipping width to achieve stable synchronization is observed over certain ranges of orientation and the functional work steps for identifying the optimal directions are presented. The method presented in the paper reveals the sufficient directions of orientation that has to be studied to identify the optimal directions which has been confirmed through studies of the {\emph{R{\"o}ssler}} and {\emph{Chua's}} circuit systems. The present study leads to the implementation of the method of transient uncoupling to enhance the synchronization stability in coupled chaotic systems over any directions in the phase space of the attractors.
\section{Introduction} \label{sec:intro} \begin{deluxetable*}{ccccccc}[t!] \tablecaption{Stellar model atmosphere grids available with the first release of ExoTETHyS \label{tab:stellar_grids}} \tablecolumns{6} \tablenum{1} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{Geometry\tablenotemark{a}} & \colhead{Range $T_{\mbox{eff}}$(K)} & \colhead{Range $\log{g}$} & \colhead{Range $[M/H]$} & \colhead{Range $\lambda$ ($\mu$m)} & \colhead{Reference} } \startdata \texttt{ATLAS} & P-P & 3500-50000 & 0.0-5.0 & --5.0-1.0 & 0.009-160.0 & \citet{claret00} \\ \texttt{PHOENIX}\_2012\_13 & S1 & 3000-10000 & 0.0-6.0 & 0.0 & 0.25-10.0 & \citet{claret12,claret13} \\ \texttt{PHOENIX}\_2018 & S1 & 2300-12000 & 0.0-6.0 & 0.0 & 0.05-2.6 & \citet{claret18} \\ \enddata \tablenotetext{a}{Geometry types: P-P=plane-parallel; S1=spherical 1D} \end{deluxetable*} More than 3000 transiting exoplanets have been discovered in the last 20 years. The number of transiting exoplanets accounts for about three-quarters of the current exoplanet census\footnote[7]{source: \url{https://exoplanetarchive.ipac.caltech.edu}}, although this large fraction is due to targeted research programs rather than being a random sample from the exoplanet population. The success of the transit method is due to several contributing factors, including its ability to characterize them in great detail. A transit is revealed by a decrement in flux while the planet occults part of the stellar disk. The main observables are the transit depth and durations, leading to measurements of the exoplanet size, orbital semimajor axis and inclination, and stellar mean density \citep{seager03}. Transit spectroscopy is now routinely used to investigate the chemistry and physics of exoplanet atmospheres, through differences in transit depth of $\sim$10-100 parts per million (ppm) relative to the stellar flux at multiple wavelengths (e.g., \citealp{iyer16, sing16, tsiaras18}). Accurate modeling of the host star effects is mandatory to achieve the spectrophotometric precision required for characterizing the atmosphere of transiting exoplanets. The most prominent effect is stellar limb-darkening \citep{mandel02}, followed by magnetic activity \citep{ballerini12, zellem17}, granulation \citep{chiavassa17}, and, in some cases, rotational oblateness and gravity darkening \citep{howarth17}, and tidal deformations \citep{akinsanmi19, hellard19}. Among the nonstellar effects, the exoplanet nightside emission can also play a significant role \citep{kipping10, morello19}. The \texttt{ExoTETHyS} package is conceived as a toolbox for those who analyze the exoplanetary transits. The first release focuses on the tools for modeling the stellar limb-darkening effect, the importance of which is ubiquitous in transit observations, as well as in optical interferometry, microlensing, and eclipsing binary observations. Future versions of \texttt{ExoTETHyS} will include useful tools for modeling other effects, as well as for estimating their impact on specific observations, based on the astrophysical system parameters, the instrument passband, and the noise level. Accurate modeling of all of the aforementioned effects proved to be crucial in the analysis of several \emph{CoRoT} and \emph{Kepler} objects (e.g., \citealp{mazeh10, barnes11, mazeh12, masuda15, howarth17, reinhold17, shporer17, nielsen19}), because of the high-precision photometry down to the $\lesssim$10~ppm level \citep{christiansen12}. A similar photometric precision is expected for some of the ongoing \emph{Transiting Exoplanet Survey Satellite} (\emph{TESS}) observations \citep{ricker14}, future observations with the \emph{CHaracterising ExOPlanet Satellite} (\emph{CHEOPS}; \citealp{isaak19}) and \emph{PLAnetary Transits and Oscillations} (\emph{PLATO}; \citealp{rauer14}), and in spectroscopic observations with the upcoming \emph{James Webb Space Telescope} (\emph{JWST}; \citealp{beichman14}) and \emph{Atmospheric Remote-sensing Infrared Exoplanet Large-survey} (\emph{ARIEL}; \citealp{pascale18}) space missions. Stellar limb-darkening is the wavelength-dependent radial decrease in specific intensity. Consequently, the transit light curve deviates from the flat-bottomed shape that would be observed in the case of a uniform stellar disk; the difference signal can be as large as $\sim$10$^4$ ppm for the transit of a hot Jupiter observed at UV or visible wavelengths. Typically, the radial intensity distribution computed from specific stellar atmosphere models is parameterized by a set of limb-darkening coefficients, which are fixed in the analyses of transit light curves. Many researchers have produced multiple grids of stellar atmosphere models with different codes, then used to compile precalculated tables of limb-darkening coefficients (e.g., \citealp{claret00, claret03, claret04, claret08, claret17, claret18, sing10, howarth11b, claret11, claret12, claret13, claret14, neilson13, neilson13b, magic15, reeve16}). The lack of empirical validation for stellar limb-darkening prevents the final choice of the most reliable model(s). The presence of unocculted stellar spots during an exoplanetary transit may alter the effective limb-darkening coefficients, which will be slightly different from those calculated for the case of unspotted stellar surface \citep{csizmadia13}. In some cases, significantly different parametric intensity profiles have been obtained from the same model atmosphere, depending on the sampling of the model intensity profile, the functional form (so-called limb-darkening law), and/or the fitting algorithm adopted \citep{claret00, heyrovsky07, howarth11, espinoza15}. The system parameters obtained from the light curve fits with the alternative sets of limb-darkening coefficients can vary by more than the respective 1$\sigma$ error bars, typically, if the relative photometric precision of the observations is of the order of (or better than) 100~ppm per minute interval. In this paper, we probe an optimized fitting algorithm for the limb-darkening coefficients that minimizes the difference between (numerically integrated) reference light curves and the corresponding approximated transit models with limb-darkening coefficients. Therefore we eliminate the degeneracy from the choice between several fitting algorithms that were leading to significantly different parametric profiles for the same stellar atmosphere model (e.g., \citealp{espinoza15}). The high-fidelity match between the stellar intensity profiles and the transit light curve models facilitates comparative studies of the model atmospheres, especially with the increasing number of observations with a spectrophotometric precision down to $\sim$10~ppm (e.g., \emph{CoRoT}, \emph{Kepler}, \emph{TESS}, and \emph{Hubble Space Telescope} (\emph{HST})/WFC3 data). \subsection{Structure of the paper} Section~\ref{sec:exotethys} provides a technical description of the \texttt{ExoTETHyS} package and the algorithms adopted. Section~\ref{sec:performances} discusses the precision of the limb-darkening calculator for the analysis of exoplanetary transits. In particular, Section~\ref{ssec:fitting_methods} compares various algorithms that are adopted in the other publicly available codes and their variants, Section~\ref{ssec:fitting_ld_laws} compares the performances of the alternative limb-darkening laws, and Section~\ref{ssec:GOF_ppm} provides a formula to estimate the potential error in the transit model based on the goodness of fit for the limb-darkening coefficients that should be compared with the noise level in the observations. Section~\ref{sec:usage} discusses the main functionality of the \texttt{ExoTETHyS} package, its current and future usage. Finally, Section~\ref{sec:conclusions} summarizes the key points discussed in this paper. \section{Description of the \texttt{ExoTETHyS} package} \label{sec:exotethys} The first release of \texttt{ExoTETHyS} includes the following subpackages: \begin{enumerate} \item Stellar Atmosphere Intensity Limb (SAIL), which can calculate the limb-darkening coefficients for specific stellar targets or over predetermined parameter grids; \item Transit Ring-Integrated Profile (TRIP), which can compute an exact transit light curve by direct integration of the occulted stellar flux, without using an analytical function (limb-darkening law) to approximate the stellar intensity profile. \end{enumerate} The TRIP subpackage was conceived to model exoplanetary transits. Following requests by users, we are adding a function to model eclipsing binaries. \subsection{The SAIL subpackage} The SAIL subpackage is a generic stellar limb-darkening calculator that is not specific to a predetermined list of instruments or standard passbands. It is conceptually similar to the calculator provided by \cite{espinoza15}, but with different features. A technical difference is the use of a novel fitting algorithm for obtaining the limb-darkening coefficients, specifically optimized for modeling the exoplanetary transits, instead of multiple algorithm options with unclear performances (see Sections~\ref{ssec:fit_ints} and \ref{ssec:fitting_methods}). \subsubsection{Input and output} \label{ssec:sail_IO} The SAIL subpackage requires a configuration file to specify the desired calculation. The user can choose either ``individual'' or ``grid'' calculation type. The first option enables calculation of the limb-darkening coefficients for a star or for a list of stars with the parameters specified by the user, while the latter will provide the limb-darkening coefficients for a grid of precalculated stellar model atmospheres. In both cases, the user must select one of the available stellar model grids, which were computed with different codes and settings (see Table~\ref{tab:stellar_grids} and references therein). For each grid, the stellar models are identified by a set of three parameters, i.e., the effective temperature ($T_{\mbox{\footnotesize{eff}}}$), the surface gravity ($\log{g}$), and the metallicity ($[M/H]$). As the limb-darkening coefficients are mostly dependent on the effective temperature, the user must provide the effective temperatures of all the individual stars. The other parameters have default values of $\log{g}=$4.5 and $[M/H]=$0.0, corresponding to a main-sequence star with solar abundances, if they are not given by the user. For the grid calculation type, the default option is to calculate the limb-darkening coefficients for all the stellar models in the selected database. Alternatively, the user can select a subgrid by specifying the minimum and/or maximum values for each stellar parameter. Another key input is the passband, i.e., the total spectral response of the observing instrument. For most instruments, the spectral response is available as a table of photon-to-electron conversion factors at given wavelengths. The limb-darkening coefficients do not depend on the absolute values of the spectral response, so that a scaled/normalized version of the spectral response will give identical results. The spectral responses of the most common instruments for transiting exoplanets are built into the package. The code can accept any user-defined passband with the same file format. It is also possible to calculate the limb-darkening coefficients for multiple wavelength bins within a given passband by specifying the two wavelengths delimiting each bin. This option is particularly useful for exoplanet spectroscopic observations, such as those currently performed with \textit{HST}/WFC3. The last mandatory input in the configuration file is the list of limb-darkening laws to adopt (at least one). The code includes several built-in limb-darkening laws, including all of the most commonly used (see Section~\ref{ssec:ld_laws}), but it can also accept user-defined laws. The ``basic'' outputs are python dictionaries containing the best-fit limb-darkening coefficients obtained for the required passbands, wavelength bins, and limb-darkening laws. The output dictionaries also provide the corresponding weighted rms of the fitting residuals to allow for a quick quality check (see Section~\ref{ssec:GOF_ppm}). For the case of individual calculation type, the results obtained for each target are stored in separate pickle files. Optionally, the user can request a ``complete'' output, whic includes intermediate products such as the numeric intensity profiles at various stages of the calculation (see Sections~\ref{ssec:integ_ints}-\ref{ssec:ldc_interp}). The additional information of the complete output is offered, mainly, as a way to identify bugs in the code and/or issues with certain stellar model atmospheres and wavelengths. Usually, the exoplanetary scientists will be interested to the basic output only. \subsubsection{From the stellar model atmospheres to the passband-integrated intensities} \label{ssec:integ_ints} The stellar model atmosphere grids consist of one file for each triple of stellar parameters ($T_{\mbox{\footnotesize{eff}}}$, $\log{g}$, $[M/H]$), providing the specific intensities ($I_{\lambda}(\mu)$) in units of erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$ sr$^{-1}$ at several positions on the sky-projected stellar disk over a given spectral range. For historical reasons, the independent variable is $\mu=\cos{\theta}$, where $\theta$ is the angle between the line of sight and the corresponding surface normal. The radial coordinate in the sky-projected disk is $r=\sqrt{1-\mu^2}$, where $r=1$ ($\mu=0$) corresponds to the spherical surface radius. Table~\ref{tab:stellar_grids} reports the information about the databases available with the first release of ExoTETHyS. We refer to the relevant papers and references therein for comparisons between the models. The passband-integrated intensities are calculated as \begin{equation} \label{eqn:integrated_intensities} I_{\mbox{\footnotesize pass}} (\mu) \propto \int_{\lambda_1}^{\lambda_2} I_{\lambda}(\mu) R_{\mbox{\footnotesize pass}}(\lambda) \lambda d \lambda , \end{equation} where $R_{\mbox{\footnotesize pass}}(\lambda)$ is the spectral response of the instrument in electrons photon$^{-1}$, and $\lambda_1$ and $\lambda_2$ are the passband or wavelength bin limits. The passband-integrated intensities are obtained in units proportional to electrons cm$^{-2}$ s$^{-1}$ sr$^{-1}$. As the limb-darkening coefficients are not affected by the (omitted) proportionality factor in Equation~\ref{eqn:integrated_intensities}, the final intensities are normalized such that $I_{\mbox{\footnotesize pass}} (\mu=0) = 1$. The intensity profiles, $I_{\lambda}(\mu)$, have distinctive behaviors depending on the plane-parallel or spherical geometry adopted by the selected grid of model atmospheres. In particular, the spherical intensity profiles show a steep drop-off close to the stellar limb, which is not observed in the plane-parallel models. The explanation for the different behaviors is exhaustive in the literature \citep{wittkowski04, espinoza15, morello17}. The almost null intensities at small $\mu$ are integrated over lines of sight that intersect only the outermost atmospheric shells, which have the smallest emissivity. Here $\mu=$0 ($r=$1) corresponds to the outermost shell of the model atmosphere, which is typically outside the stellar radius that would be observed in transit. Our algorithm calculates the photometric radius at the inflection point of the spherical intensity profile, i.e., where the gradient $|dI(r)/dr|$ is the maximum \citep{wittkowski04, espinoza15}. The radial coordinates are then rescaled such that $r=1$ ($\mu=$0) at the photometric radius, and those intensities with rescaled $r>$1 are rejected. No rescaling is performed for the plane-parallel models. \subsubsection{Limb-darkening laws} \label{ssec:ld_laws} A long list of analytical forms, so-called limb-darkening laws, has been proposed in the literature to approximate the stellar intensity profiles. The following options are built in the package: \begin{enumerate} \item the linear law \citep{schwarzschild06}, \begin{equation} \label{eqn:ld_law_linear} I_{\lambda}(\mu) = 1 - a(1-\mu) ; \end{equation} \item the quadratic law \citep{kopal50}, \begin{equation} \label{eqn:ld_law_quadratic} I_{\lambda}(\mu) = 1 - u_1(1-\mu) - u_2(1-\mu)^2 ; \end{equation} \item the square-root law \citep{diaz-cordoves92}, \begin{equation} \label{eqn:ld_law_sqrt} I_{\lambda}(\mu) = 1 - v_1(1-\sqrt{\mu}) - v_2(1-\mu) ; \end{equation} \item the power-2 law \citep{hestroffer97}, \begin{equation} \label{eqn:ld_law_power2} I_{\lambda}(\mu) = 1 - c(1-\mu^{\alpha}) ; \end{equation} \item the four-coefficient law \citep{claret00}, hereinafter referred to as claret-4, \begin{equation} \label{eqn:ld_law_claret4} I_{\lambda}(\mu) = 1 - \sum_{k=1}^{4} a_n(1-\mu^{k/2}) ; \end{equation} \item a generalized $n^{\mbox{\footnotesize{th}}}$-degree polynomial law, \begin{equation} \label{eqn:ld_law_gen_poly} I_{\lambda}(\mu) = 1 - \sum_{k=1}^{n} b_k(1-\mu^{k}) ; \end{equation} \item a generalized claret-$n$ law, \begin{equation} \label{eqn:ld_law_gen_claret} I_{\lambda}(\mu) = 1 - \sum_{k=1}^{n} c_k(1-\mu^{k/2}) ; \end{equation} \end{enumerate} Additionally, user-defined limb-darkening laws can be easily implemented. We recommend using the claret-4 law to achieve a model precision of $\lesssim$10~ppm in the analysis of exoplanetary transits (see Section~\ref{ssec:fitting_ld_laws}). The next release of \texttt{ExoTETHyS} will include a grid of white dwarf models, for which we have also found the claret-4 law to be significantly more accurate than the two-coefficient laws \citep{claret20}. \subsubsection{From the passband-integrated intensities to the limb-darkening coefficients} \label{ssec:fit_ints} The limb-darkening coefficients are obtained through a weighted least-squares fit of the passband-integrated intensity profile with weights proportional to the sampling interval in $r$, hereinafter referred to as \emph{weighted}-$r$ fit. The corresponding cost function is the weighted rms of residuals, \begin{equation} \label{eqn:w-rRMS} \mbox{\emph{weighted}-}r \, \mbox{rms} = \left ( \frac{\sum_{i=1}^{n} w_i ( I_{\mbox{\footnotesize pass}} (\mu_i) - I_{\mbox{\footnotesize pass}}^{\mbox{\footnotesize law}} (\mu_i) )^2}{ \sum_{i=1}^{n} w_i } \right )^{\frac{1}{2}}, \end{equation} with weights \begin{equation} \label{eqn:w-r_weights} w_i = \begin{cases} (1-r_1) + 0.5 \, (r_1-r_2), & \mbox{if} \ i=1 \\ 0.5 \, (r_{i-1}-r_{i+1}), & \mbox{if} \ 1<i<n\\ 0.5 \, r_{n-1}, & \mbox{if} \ i=n \end{cases}, \end{equation} where the $r_i$ are arranged in descending order, and $r_n =0$. The choice of cost function is optimized for the study of exoplanet transits, as detailed in Section~\ref{ssec:fitting_methods}. The performances of the spherical model fits are further enhanced by discarding those points with $r>0.99623$ (after rescaling as explained in Section~\ref{ssec:integ_ints}). This cut is a generalization of that implemented in the quasi-spherical (QS) fits by \cite{claret12}. For this reason, we rename the total fitting procedure explained here for the spherical intensity profiles as the \emph{weighted}-$r$ QS fit. Further details about the alternative fitting procedures are discussed in Section~\ref{ssec:fitting_methods}. \subsubsection{Interpolation from the grid of stellar models} \label{ssec:ldc_interp} The process described in Sections~\ref{ssec:integ_ints}-\ref{ssec:fit_ints} enables the calculation of limb-darkening coefficients for the stellar-atmosphere models contained in the grid, starting from their precalculated specific intensities. The limb-darkening coefficients for an individual target with a generic set of stellar parameters are obtained by sequential linear interpolation through the following steps: \begin{enumerate} \item identification of the neighbors in the model-grid, i.e., the vertices of the cube in parameter space that contains the requested model (maximum 8 models); \item calculation of the limb-darkening coefficients for each of the neighbors; \item interpolation in $[M/H]$ between models with the same $T_{\mbox{\footnotesize{eff}}}$ and $\log{g}$, leading to a maximum of 4 sets of limb-darkening coefficients with the requested $[M/H]$; \item interpolation in $\log{g}$ between the above calculated sets of coefficients with the same $T_{\mbox{\footnotesize{eff}}}$, leading to a maximum of 2 sets of limb-darkening coefficients with the requested $\log{g}$ and $[M/H]$; \item interpolation in $T_{\mbox{\footnotesize{eff}}}$ between the above calculated sets of coefficients. \end{enumerate} We note that this sequential interpolation is possible because of the regularity of the model grids. \begin{figure*}[t] \plotone{figures/f1.eps} \caption{Example with a model intensity distribution for a star similar to HD209458 ($T_{\mbox{\footnotesize eff}} = 6100 \, \mbox{K}$, $\log{g} = 4.5$), integrated over the 7.59--7.61~$\mu$m wavelength range, by using the \texttt{PHOENIX}\_2012\_13 database (see Table~\ref{tab:stellar_grids}). Top, left panel: normalized specific intensities vs. $\mu$ from the stellar atmosphere model (black circles), \emph{unweighted} (gray), \emph{weighted}-$r$ (orange), and \emph{weighted}-$r$ QS (red) model fits with claret-4 coefficients. The vertical dashed line denotes the cutoff value for the quasi-spherical fit (see Section~\ref{ssec:fitting_methods}). Top, right panel: analogous plot vs. $r$. Bottom panels: residuals between the fitted and model intensity values. The corresponding unweighted and weighted rms amplitudes of residuals are also reported. Note that, in this case, the unweighted least-squares fit leads to a non-monotonic radial intensity profile, which is physically unexpected. \label{fig:unphysical_ldfit}} \end{figure*} \begin{figure*} \floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=0.5\textwidth}}]{figure}[\FBwidth] {\caption{Top panel: simulated transit light curve (black) of HD209458~b as it would be observed by \emph{TESS}, and best-fit model with claret-4 limb-darkening coefficients obtained with the \emph{weighted}-$r$ QS method (red). Bottom panels: residuals between the reference light curve and the best-fit models with claret-4 limb-darkening coefficients obtained with different limb-darkening laws and fitting methods (see Section~\ref{ssec:fitting_methods}). The peak-to-peak and rms amplitudes of the residuals are reported.}\label{fig:TESS_transits}} {\hspace{-1.7cm}\includegraphics[width=0.37\textwidth]{figures/f2a.eps}} \includegraphics[width=\textwidth]{figures/f2b.eps} \end{figure*} \begin{figure*}[t] \plotone{figures/f3.eps} \caption{Peak-to-peak of residuals between the reference spectral light curves for the transit of HD209458~b and the best-fit models with claret-4 limb-darkening coefficients obtained with different fitting methods (see Section~\ref{ssec:fitting_methods}). Left panel: results obtained with the spherical methods, i.e., taking into account the whole spherical intensity profiles. Right panel: results obtained with the quasi-spherical methods, i.e., with a cutoff of $r \le 0.99623$, and the \emph{weighted}-$r$ method (dotted, orange line). The \emph{unweighted} QS (gray line) and the \emph{weighted}-$\mu$ QS (green line) overlap in the plot. Note the scale difference between the two panels. \label{fig:spectrum_residuals}} \end{figure*} \subsection{The TRIP subpackage} The TRIP subpackage is used to generate exact transit light curves by direct integration of the occulted stellar flux at given instants. It assumes a dark spherical planet transiting in front of a spherically-symmetric (unspotted) star. In this simple case, the normalized flux (i.e., relative to the stellar flux) is a function of two geometric variables, as reported by \cite{mandel02}, and of the stellar intensity profile: \begin{equation} \label{eqn:F_p_z} F(p,z, I(\mu)) = 1 - \Lambda (p,z,I(\mu)) , \end{equation} where $p$ is the planet-to-star radii ratio ($p=R_p/R_*$), $z$ is the sky-projected distance between the centers of the two spheres in units of the stellar radius, and $I(\mu)$ is the stellar intensity profile. TRIP does not use an analytical approximation of the limb-darkening profile, unlike most transit light curve calculators such as those provided by \cite{mandel02}, \cite{gimenez06}, \cite{agol19}, \texttt{JKTEBOP} \citep{southworth04}, \texttt{TAP} \citep{gazak12}, \texttt{EXOFAST} \citep{eastman13}, \texttt{PyTransit} \citep{parviainen15}, \texttt{BATMAN} \citep{kreidberg15}, and \texttt{PYLIGHTCURVE}\footnote[8]{\url{https://github.com/ucl-exoplanets/pylightcurve}} \citep{tsiaras16}. \subsubsection{Input and output} The TRIP subpackage requires a configuration file, where the user has to specify the name of the text files containing the limb-darkening profile, the phase, time, or $z$-series for which to calculate the normalized flux, and a list of parameter values that includes $p$ and those parameters eventually needed to compute the $z$-series (see Section~\ref{ssec:z_dist}). The limb-darkening file consists of two columns with the $\mu$ or $r$ values (first column) and the corresponding specific intensities (second column). A list of optional parameters can be used to set the calculation details, i.e., the number of annuli, the interpolation variable, and the polynomial order for the spline interpolation (see Section~\ref{ssec:norm_flux}). It is also possible to define simple operations on the original limb-darkening profile, i.e., a possible cutoff in $\mu$ or $r$ with or without rescaling the $\mu$ or $r$ values to the cutoff radius. The output is a text or pickle file containing the normalized flux series for the requested phase, time, or $z$-series. \subsubsection{Computing the $z$-series} \label{ssec:z_dist} In general, $z$ is a function of the orbital phase ($\Phi$), i.e., the fraction of orbital period ($P$) from the closest transit event: \begin{equation} \label{eqn:phi_definition} \Phi = \frac{t - \mbox{E.T.}}{P} - n, \end{equation} where $t$ denotes time, E.T. is the Epoch of Transit (i.e., a reference mid-transit time), and $n$ is the number of orbits from the E.T. rounded to the nearest integer. Conventionally, $\Phi$ values are in the range of $[-0.5, \ 0.5]$ or $[0, 1]$ and $\Phi=0$ at mid-transit time. The projected star--planet separation is given by \begin{equation} \label{eqn:z_dist} z = \begin{cases} a_R \sqrt{1 - \cos^2{( 2 \pi \Phi )} \sin^2{i} } & \mbox{circular orbit} \\ a_R \frac{1-e^2}{1+e \cos{f}} \sqrt{1-\sin^2{(f+\omega)} \sin^2{i}} & \mbox{eccentric orbit} \end{cases} , \end{equation} where $a_R$ is the orbital semimajor axis in units of the stellar radius, $i$ is the inclination, $e$ is the eccentricity, $\omega$ is the argument of periastron, and $f$ is the true anomaly. In the eccentric case, the true anomaly is calculated from the orbital phase by solving Kepler's equation, \begin{equation} \label{eqn:kepler_ecc} \frac{\pi}{2} - \omega + 2 \pi \Phi = E - e \sin{E} , \end{equation} then \begin{equation} \label{eqn:true_anomaly} f = 2 \arctan{ \left ( \sqrt{\frac{1+e}{1-e}} \tan{\frac{E}{2}} \right )} . \end{equation} \subsubsection{Calculating the normalized flux} \label{ssec:norm_flux} The total and occulted stellar flux are given, respectively, by the integrals \begin{equation} \label{eqn:Fstar_integral} F_{*} = \int_{0}^{1} I(r) \, 2 \pi r \, dr , \end{equation} and \begin{equation} \label{eqn:Focc_integral} F_{*,\mbox{\footnotesize occ}} = \int_{0}^{1} I(r) \, 2 \pi r \, f_{p,z}(r) \, dr , \end{equation} with \begin{multline} \label{eqn:fpzr_fraction} f_{p,z}(r) =\\ \left. \begin{cases} \frac{1}{ \pi} \arccos{ \frac{r^2 + z^2 - p^2}{2zr}} & |z-p| < r < z+p \\ 0 & r \le z-p \ \mbox{or} \ r \ge z+p \\ 1 & r \le p-z \end{cases} \right |_{0 \le r \le 1} . \end{multline} $I(r)$ is the specific intensity at the normalized radial coordinate $r=\sqrt{1-\mu^2}$, and $f_{p,z}(r)$ is the fraction of circumference with radius $r$ covered by the planet. Equations~\ref{eqn:Fstar_integral} and \ref{eqn:Focc_integral} rely on the assumed spherical symmetry for the star; Equation~\ref{eqn:fpzr_fraction} also makes use of the planet sphericity. Finally, the normalized flux is given by Equation~\ref{eqn:F_p_z} with \begin{equation} \label{eqn:lambda_p_z} \Lambda (p,z,I(\mu)) = \frac{F_{*,\mbox{\footnotesize occ}}}{F_{*}} . \end{equation} The integrals in Equations~\ref{eqn:Fstar_integral} and \ref{eqn:Focc_integral} are calculated numerically by using the mid-point rule with a uniform partition in $r$. The specific intensities are evaluated at the partition radii by interpolating in $\mu$ or $r$ from the input limb-darkening profiles. The TRIP algorithm with default settings is identical to the ``tlc'' described by \cite{morello17}. \section{Performance of \texttt{ExoTETHyS}} \label{sec:performances} \subsection{Comparison between fitting algorithms for the stellar intensity profiles} \label{ssec:fitting_methods} \begin{figure*}[t] \plotone{figures/f4.eps} \caption{Best-fit transit parameters to the reference spectral light curves for the transit of HD209458~b assuming claret-4 limb-darkening coefficients obtained with different fitting methods (see Section~\ref{ssec:fitting_methods}). The true parameter values are reported in black. Left panels: results obtained with the spherical methods, i.e., taking into account the whole spherical intensity profiles. Right panels: Results obtained with the quasi-spherical methods, i.e., with a cutoff of $r \le 0.99623$, and the \emph{weighted}-$r$ method (dotted, orange line). Note the scale difference between the two panels. \label{fig:spectrum_params}} \end{figure*} A long list of methods has been adopted in the literature for fitting the limb-darkening laws to the model intensity profiles leading to significantly different limb-darkening coefficients. The coefficients obtained with a simple least-squares fit depend on the spatial distribution of the precalculated intensities. The effect of sampling is particularly evident for the \texttt{PHOENIX} profiles because of a much finer sampling near the drop-off region. For example, Figure~\ref{fig:unphysical_ldfit} shows the case of a star similar to HD209458 in the mid-infrared, for which the simple least-squares solution presents a non-monotonic (unphysical) profile with unexpected undulations. In this paper, we compare the following fitting procedures: \begin{enumerate} \item \textit{unweighted}, i.e., simple least-squares fit; \item \textit{weighted}-$r$, i.e., weighted least-squares fit with weights proportional to the sampling interval in $r$, as detailed in Equations~\ref{eqn:w-rRMS} and \ref{eqn:w-r_weights}; \item \textit{weighted-$\mu$}, i.e., weighted least-squares fit with weights proportional to the sampling interval in $\mu$; \item \textit{interp-$\mu$~100}, i.e., least-squares fit on the intensities interpolated over 100 $\mu$ values with a uniform separation in $\mu$, as suggested by \cite{claret11}; \item \textit{interp-$\mu$~1000}, i.e., least-squares fit on the intensities interpolated over 1000 $\mu$ values with a uniform separation in $\mu$; \item \textit{interp-$r$~100}, i.e., least-squares fit on the intensities interpolated over 100 $r$ values with a uniform separation in $r$, as suggested by \cite{parviainen15b} (with an unspecified number of interpolated values); \item \textit{interp-$r$~1000}, i.e., least-squares fit on the intensities interpolated over 1000 $r$ values with a uniform separation in $r$; \item \textit{unweighted} QS, i.e., least-squares fit with a cutoff $r \le 0.99623$; \item \textit{weighted}-$r$ QS, i.e., analogous to \textit{weighted}-$r$ with a cutoff $r \le 0.99623$; \item \textit{weighted}-$\mu$ QS, i.e., analogous to \textit{weighted}-$\mu$ with a cutoff $r \le 0.99623$. \end{enumerate} The cutoff is used to remove the steep drop-off characteristic of the spherical models, hence the term QS. The QS approach was first proposed by \cite{claret12}, who applied a cutoff $\mu \ge 0.1$ to their library of \texttt{PHOENIX} models with the original $\mu$ values. In this work, we redefine the cutoff using the rescaled $r$, such that it corresponds to the same fraction of the photometric stellar radius for all the models (see Section~\ref{ssec:integ_ints}). Our new definition with $r \le 0.99623$ is equivalent to the previous one for the majority of models, particularly for those models that may correspond to main-sequence stars. However, the libraries of \texttt{PHOENIX} models incorporated in the \texttt{ExoTETHyS} package also include models of stellar atmospheres with lower gravities than those analyzed by \cite{claret12}, corresponding to subgiant, giant, and supergiant stars. For some of these models, the intensity drop-off occurs at $\mu>0.1$, so that the cutoff of $\mu \ge 0.1$ (not rescaled) would be ineffective. In order to evaluate the merits of the alternative fitting procedures to the stellar intensity profile, we generated exact synthetic transit light curves using the TRIP subpackage and compared these light curves with their best-fit solutions obtained with the various sets of claret-4 limb-darkening coefficients. Figure~\ref{fig:TESS_transits} shows the residuals obtained for a noiseless simulation of the transit of HD209458~b in the \emph{TESS} passband when adopting the different sets of limb-darkening coefficients. The \emph{weighted}-$r$ QS method implemented in \texttt{ExoTETHyS}.SAIL gives the smallest residuals, with a peak-to-peak of 2~ppm and rms amplitude below 1~ppm. The other QS methods, \textit{weighted}-$\mu$ QS and \textit{unweighted} QS, lead to almost identical residuals, with a peak-to-peak of 3~ppm. Among the spherical methods, the \textit{weighted}-$r$ gives the smallest residuals with a peak-to-peak of 9~ppm and rms amplitude of 2~ppm, followed by the \textit{interp-$r$~100} and \textit{interp-$r$~1000} with about 1.5 and 2 times larger residual amplitudes, respectively. All the other methods lead to significantly larger residuals of tens to a few hundred ppm, which are comparable with the predicted noise floor of 60~ppm for the \emph{TESS} observations \citep{ricker14}. Figure~\ref{fig:spectrum_residuals} shows the peak-to-peak of the residuals for the same transit as a function of wavelength, based on simulated light curves with 20~nm passband widths. This spectral analysis confirms the relative ranking of the fitting methods derived from the \emph{TESS} simulation. In particular, the \textit{weighted}-$r$ QS method leads to a peak-to-peak of residuals below 2~ppm at wavelengths longer than 1~$\mu$m, and overall below 8~ppm. The other quasi-spherical methods are marginally worse than \textit{weighted}-$r$ QS at wavelengths shorter than 2~$\mu$m, but the worst case peak-to-peak of residuals is less than 13~ppm. The \textit{weighted}-$r$ method leads to peak-to-peak of residuals in the range of 5-15~ppm, with a sawtooth-like modulation in wavelength. We noted that the small, but abrupt, jumps that occur at certain wavelengths correspond to changes of the inflection point in the stellar intensity profile as defined in Section~\ref{ssec:integ_ints}. The same phenomenon occurs for all the other spherical models with larger sawtooth-like modulations. It may appear surprising that the peak-to-peak of residuals obtained with the spherical methods tends to be larger at the longer wavelengths, for which the limb-darkening effect is expected to be smaller. The cause of the poor performances of most spherical methods in the infrared is the intensity drop-off, which is typically steeper than the drop-off in the UV and visible. Such drop-off has a negligible effect in the numerically integrated transit light curves, hence the better performances of the QS fits. Figure~\ref{fig:spectrum_params} shows the best-fit transit parameters corresponding to the same spectral light curves, and compared with the respective input parameters corrected for the rescaled $r$ (see Section~\ref{ssec:integ_ints}). We retrieved the correct transit depth within 5~ppm, the impact parameter within 6$\times$10$^{-4}$, and the transit duration within 1~s at all wavelengths, when using the \textit{weighted}-$r$ or QS limb-darkening coefficients. However, slightly larger spectral trends appear in these parameters because of the wavelength-dependent stellar radius. The peak-to-peak variation in transit depth over the spectral range of 0.25--10~$\mu$m is 10~ppm. The other sets of limb-darkening coefficients introduce orders-of-magnitude larger biases in the retrieved transit parameters, also larger spectral sawtooth-like modulations in the infrared (few tens of ppm in transit depth across 1-10~$\mu$m), and severe discrepancies between the parameter values obtained in the UV/visible and those obtained in the infrared. \begin{figure*}[t] \plotone{figures/f5.eps} \caption{Top, left panel: peak-to-peak of residuals between the reference spectral light curves for the transit of HD209458~b and the best-fit models using the limb-darkening coefficients calculated for the different laws (see Section~\ref{ssec:ld_laws}). Top, right panel: \emph{weighted}-$r$ QS rms of residuals to the model intensity profiles. Bottom panels: zoom-in of the panels above. \label{fig:spectrum_residuals_ldlaws}} \end{figure*} \begin{figure*}[t] \plotone{figures/f6.eps} \caption{Best-fit transit parameters to the reference spectral light curves for the transit of HD209458~b using the limb-darkening coefficients calculated for the different laws (see Section~\ref{ssec:ld_laws}). The true parameter values are reported in black. \label{fig:spectrum_params_ldlaws}} \end{figure*} \begin{table*}[t] \centering \caption{Spectral analysis of the error in transit depth when adopting different limb-darkening laws.} \label{tab:p2_bias} \begin{tabular}{cccccc} \tablewidth{0pt} \hline \hline & Wavelength range & Claret-4 & Power-2 & Quadratic & Square-root \\ \hline Maximum bias & 0.25--10.0~$\mu$m & 5 & 165 & 235 & 174 \\ (ppm) & $<$1~$\mu$m & 4 & 165 & 235 & 174 \\ & $>$1~$\mu$m & 5 & 19 & 27 & 18 \\ & $>$5~$\mu$m & 3 & 4 & 10 & 5 \\ \hline Rms bias & 0.25--10.0~$\mu$m & 1 & 20 & 20 & 23 \\ (ppm) & $<$1~$\mu$m & 2 & 71 & 62 & 81 \\ & $>$1~$\mu$m & 1 & 5 & 11 & 4 \\ & $>$5~$\mu$m & 1 & 2 & 6 & 2 \\ \hline Spectrum & 0.25--10.0~$\mu$m & 10 & 177 & 258 & 341 \\ peak-to-peak & $<$1~$\mu$m & 7 & 177 & 254 & 341 \\ (ppm) & $>$1~$\mu$m & 10 & 27 & 17 & 25 \\ & $>$5~$\mu$m & 2 & 3 & 4 & 3 \\ \hline Spectrum std & 0.25--10.0~$\mu$m & 2 & 20 & 18 & 23 \\ (ppm) & $<$1~$\mu$m & 1 & 45 & 58 & 64 \\ & $>$1~$\mu$m & 2 & 7 & 4 & 5 \\ & $>$5~$\mu$m & $<$1 & $<$1 & $<$1 & $<$1 \\ \hline \end{tabular} \end{table*} \subsection{Performance of the limb-darkening laws} \label{ssec:fitting_ld_laws} Figure~\ref{fig:spectrum_residuals_ldlaws} compares the peak-to-peak of the spectral light curve residuals when adopting the limb-darkening coefficients calculated by \texttt{ExoTETHyS}.SAIL for different limb-darkening laws, as well as the corresponding \emph{weighted}-$r$ QS rms of the residuals to the stellar intensity profiles. The correlation between the two goodness-of-fit measures is explored in Section~\ref{ssec:GOF_ppm}. At wavelengths $\gtrsim$3~$\mu$m, the precision of the power-2 and square-root limb-darkening coefficients is comparable to that of the claret-4 coefficients, resulting in light curve residuals below 5~ppm. While the claret-4 law performs similarly well even at shorter wavelengths, the two-coefficient laws lead to larger light curve residuals up to $\sim$100~ppm in the UV and visible. The quadratic law is less precise, leading to light curve residuals above 25~ppm even at 10~$\mu$m. Figure~\ref{fig:spectrum_params_ldlaws} shows the fitted transit parameters and their expected values. Typically, the bias in transit depth is of the same order of magnitude of the light curve residuals, but it can be both larger or smaller than their peak-to-peak amplitudes owing to parameter degeneracies. Table~\ref{tab:p2_bias} reports the statistics of the errors in transit depth obtained with the different limb-darkening laws across given spectral ranges. The maximum bias in transit depth at 5--10~$\mu$m is within 10~ppm for any limb-darkening parameterization, which is just below the minimum photon noise floor for \emph{JWST}/Mid-InfraRed Instrument (MIRI) observations \citep{beichman14}. At $\sim$1~$\mu$m, the two-coefficient laws may introduce a spectral slope of a few tens of ppm, which may have an impact in the analysis of the \emph{HST}/WFC3 spectra \citep{tsiaras18}. At wavelengths shorter than 1~$\mu$m the two-coefficient laws are unreliable for exoplanet spectroscopy, so that the claret-4 law must be preferred. These conclusions are in agreement with previous studies based on both simulated and real data \citep{espinoza16, morello17, morello18, maxted18}. \begin{figure}[t] \plotone{figures/f7.eps} \caption{\emph{Weighted}-$r$ QS rms of residuals to the model intensity profiles vs. peak-to-peak of the transit light curve residuals for the spectral templates of HD20458~b adopting different limb-darkening laws. The black line is the global linear fit. \label{fig:resints_vs_reslc}} \end{figure} \subsection{Predicted precision in light curves} \label{ssec:GOF_ppm} Figure~\ref{fig:resints_vs_reslc} shows that, for a fixed transit geometry, the peak-to-peak of light curve residuals is roughly proportional to the \emph{weighted}-$r$ QS rms of stellar intensity residuals. We found an approximately linear correlation between the two goodness-of-fit measures for the simulated spectral light curves and stellar intensity profiles, therefore obtaining a wavelength-independent factor. We repeated this test for analogous sets of spectral light curves with different transit parameters, then obtaining different proportionality factors. Our preliminary study suggests that \begin{multline} \label{eqn:GOF_prop} (\mbox{peak-to-peak})_{\mbox{\footnotesize ppm}} = (k \times 10^6) \times p^2 \\ \times (\mbox{\emph{weighted}-}r \, \mbox{QS} \, \mbox{rms} ) , \end{multline} where $k$ is a factor of order unity ($k \gtrsim 1$). Equation~\ref{eqn:GOF_prop} provides a useful tool for estimating the systematic noise in the light curve models solely due to the limb-darkening parameterization. The systematic noise in the light curve models should be smaller than the photon noise limit of the observation in order to avoid significant parameter biases. Note that Equation~\ref{eqn:GOF_prop} does not account for uncertainties in the stellar parameters, discrepancies between real and model intensity profiles, and other contaminating signals that may increase the total systematic noise. \section{Usage of \texttt{ExoTETHyS}} \label{sec:usage} Currently, the main use of the \texttt{ExoTETHyS} package is to compute stellar limb-darkening coefficients through the SAIL subpackage. These coefficients can be adopted to simulate the exoplanetary transit light curves, which are largely used by the scientific consortia of the future exoplanet missions for multiple studies. In particular, \texttt{ExoTETHyS} will be linked with \emph{ARIEL}-Sim \citep{sarkar16} and \texttt{ExoNoodle} (a generator of timeseries spectra of exoplanetary systems originally designed for \emph{JWST} observations; M. Martin-Lagarde et al., in prep.), and it has already been adopted by several members of the two mission consortia. It is also common practice to fix the limb-darkening coefficients obtained from stellar models, such as those calculated with \texttt{ExoTETHyS}.SAIL, in the analysis of exoplanetary transit light curves. This approach relies on the perfect match between the model and the real stellar intensity distributions, otherwise introducing a potential bias in the derived exoplanet and orbital parameters. Some authors recommended setting free limb-darkening coefficients in the light curve fits to minimize the potential bias, but the strong parameter degeneracies may lead to larger error bars or prevent the convergence of the fit \citep{southworth08, csizmadia13}. The parameter degeneracies can be mitigated by using multiwavelength transit observations to better constrain the orbital parameters \citep{morello17, morello18}. Here we suggest an approach to take advantage of the knowledge on the stellar parameters in the form of bayesian priors. The stellar parameters will then be optimized in the light curve fits instead of using fixed or fully unconstrained limb-darkening coefficients. The limb-darkening coefficients for a given set of stellar parameters, and a given passband or spectroscopic bin, can be interpolated from a precalculated grid. The grid calculation type (see Section~\ref{ssec:sail_IO}) was specifically designed for this purpose. \section{Conclusions} \label{sec:conclusions} We introduced \texttt{ExoTETHyS}, an open-source python package that offers accessory tools for modeling transiting exoplanets and eclipsing binaries. It includes a versatile stellar limb-darkening calculator with multiple choices of model atmosphere grids, parameterizations, passbands (also accepting user input), and specific user-friendly calculation settings. We demonstrated an optimal fitting algorithm for the limb-darkening coefficients, thus eliminating the degree of freedom associated with the choice of fitting algorithm. The claret-4 coefficients obtained through this algorithm ensure a precision level $\lesssim$10~ppm in the relevant transit light curves at all wavelengths. The precision achieved exceeds by one order of magnitude that obtained with most of the algorithms proposed in the previous literature for stellar models with spherical geometry. We also proposed a simple formula for estimating the light curve model precision, based on the goodness-of-fit for the limb-darkening coefficients. Finally, we discussed the current and future usage of \texttt{ExoTETHyS} with emphasis on exoplanet atmospheric spectroscopy in the era of \emph{JWST} and ARIEL. \acknowledgments The authors would like to thank Ren{\'e} Gastaud and Daniel Dicken for useful discussions. G. M., M. M.-L. and P.-O. L. were supported by the LabEx P2IO, the French ANR contract 05-BLANNT09-573739 and the European Unions Horizon 2020 Research and Innovation Programme, under Grant Agreement N$^\circ$776403. G.M. also acknowledges the contribution from INAF through the ``Progetto Premiale: A Way to Other Worlds'' of the Italian Ministry of Education, University, and Research. A.C. acknowledges financial support from the Spanish MEC (AYA2015-71718-R and ESP2017-87676-C5-2-R), the State Agency for Research of the Spanish MCIU through the ``Center of Excellence Severo Ochoa'' award for the Instituto de Astrof\'isica de Andaluc\'ia (SEV-2017-0709). \bibliographystyle{aasjournal}
\section{Introduction}\label{sec:intro} \begin{abstract} \emph{Over-the-air federated edge learning} (Air-FEEL) is a communication-efficient solution for privacy-preserving distributed learning over wireless networks. Air-FEEL allows ``one-shot" over-the-air aggregation of gradient/model-updates by exploiting the waveform superposition property of wireless channels, and thus promises an extremely low aggregation latency that is independent of the network size. However, such communication efficiency may come at a cost of learning performance degradation due to the aggregation error caused by the non-uniform channel fading over devices and noise perturbation. Prior work adopted channel inversion power control (or its variants) to reduce the aggregation error by aligning the channel gains, which, however, could be highly suboptimal in deep fading scenarios due to the noise amplification. To overcome this issue, we investigate the power control optimization for enhancing the learning performance of Air-FEEL. Towards this end, we first analyze the convergence behavior of the Air-FEEL by deriving the optimality gap of the loss-function under any given power control policy. Then we optimize the power control to minimize the optimality gap for accelerating convergence, subject to a set of average and maximum power constraints at edge devices. The problem is generally non-convex and challenging to solve due to the coupling of power control variables over different devices and iterations. To tackle this challenge, we develop an efficient algorithm by jointly exploiting the \emph{successive convex approximation} (SCA) and trust region methods. Numerical results show that the optimized power control policy achieves significantly faster convergence than the benchmark policies such as channel inversion and uniform power transmission. \end{abstract} \section{Introduction}\label{sec:intro} In the pursuit of ubiquitous intelligence envisioned in the future 6G networks, recent years have witnessed the spreading of {\it artificial intelligence} (AI) algorithms from the cloud to the network edge, resulting in an active area called {\it edge intelligence} \cite{Survey_FEEl,Debbah19}. The core research issue therein is to allow low-latency and privacy-aware access to rich mobile data for intelligence distillation. To this end, a popular framework called {\it federated edge learning} (FEEL) is proposed recently, which distributes the task of model training over edge devices so as to reduce the communication overhead and keep the data-use locally \cite{Konecny2016aa_FL,Chen20FL}. Essentially, the FEEL framework is a distributed implementation of {\it stochastic gradient decent} (SGD) over wireless networks. A typical training process involves iterations between 1) broadcasting of the global model under training from edge server to devices for local SGD execution using local data, and 2) local models/gradients uploading from devices to edge server for aggregation and global model updating. Although the uploading of high-volume raw data is avoided, the updates aggregation process in FEEL may still suffer from a communication bottleneck due to the high-dimensionality of each updates and the multiple access by many devices over wireless links. To tackle this issue, one promising solution called {\it over-the-air} FEEL (Air-FEEL) has been proposed, which exploits the {\it over-the-air computation} (AirComp) for ``one-shot" aggregation via concurrent update transmission, such that communication and computation are integrated in a joint design by exploiting the superposition property of a {\it multiple access channel} (MAC) \cite{Survey_FEEl,nomo_function_Nazer,Gastpar08}. The idea of AirComp was first proposed in \cite{nomo_function_Nazer} in the context of data aggregation in sensor networks, where it is surprisingly found that ``interference" can be harnessed by structured codes to help functional computation over a MAC. Inspired by the finding, it was shown in the subsequent work \cite{Gastpar08} that for Gaussian {\it independent and identically distributed} (i.i.d.) data sources, the uncoded transmission is optimal in terms of distortion minimization. Besides the information-theoretic studies, various practical issues faced by AirComp implementation were also considered in \cite{Abari15,Cao_PowerTWC,Cao_2020aa}. In particular, the synchronization issue in AirComp was addressed in \cite{Abari15} via an innovative idea of shared clock broadcasting from edge server to devices. The optimal power control policies for AirComp over fading channels were derived in \cite{Cao_PowerTWC} to minimize the average computation distortion, and the cooperative interference management framework for coordinating coexisting AirComp tasks over multi-cell networks was developed in \cite{Cao_2020aa}. More recently, AirComp found its merits in the new context of FEEL, known as Air-FEEL, for communication-efficient update aggregation as demonstrated in a rich set of prior works \cite{GZhu2020TWC,Amiri2020TSP,KYang2020TWC,NZhang2020Ar,GZhu2020Ar,DLiu2020Ar}. Specifically, a broadband Air-FEEL solution was proposed in \cite{GZhu2020TWC}, where several communication-learning tradeoffs were derived to guide the design of device scheduling. Around the same time, a source-coding algorithm exploiting gradient sparsification was proposed in \cite{Amiri2020TSP} to implement Air-FEEL with compressed updates for higher communication efficiency. In parallel, a joint design of device scheduling and beamforming in a multi-antenna system was presented in \cite{KYang2020TWC} to accelerate Air-FEEL. Subsequently, the gradient statistics aware power control was investigated in \cite{NZhang2020Ar} to further enhance the performance of Air-FEEL. Furthermore, to allow Air-FEEL compatible with digital chips embedded in modern edge devices, Air-FEEL based on digital modulation was proposed in \cite{GZhu2020Ar} featuring one-bit quantization and modulation at the edge devices and majority-vote based decoding at the edge server. Besides the benefit of low latency, Air-FEEL was also found to be beneficial in data privacy enhancement as individual updates are not accessible by edge server, eliminating the risk of model inversion attack \cite{DLiu2020Ar}. Despite the promise in high communication efficiency, Air-FEEL may suffer from severe learning performance degradation due to the aggregation error caused by the non-uniform channel fading over devices and noise perturbation. Prior work in this field mostly assumed channel inversion power control (or its variants) \cite{GZhu2020TWC,Amiri2020TSP,KYang2020TWC,DLiu2020Ar} in an effort to reducing the aggregation error by aligning the channel gains, which could be highly suboptimal in deep fading scenarios due to the noise amplification. Although there exists one relevant study on power control for Air-FEEL system in \cite{NZhang2020Ar}, it focused on the minimization of the intermediate aggregation distortion (e.g., mean squared error) instead of the ultimate learning performance (e.g., the general loss function). Therefore, there still leaves a research gap in learning performance optimization of Air-FEEL by judicious power control, motivating the current work. To close the gap, we first analyze the convergence behavior of the Air-FEEL by deriving the optimality gap of the loss-function under arbitrary power control policy. Then the power control problem is formulated to minimize the optimality gap for convergence acceleration, subject to a set of average and maximum power constraints at edge devices. The problem is generally non-convex and challenging to solve due to the coupling of power control variables over different devices and iterations. The challenge is tackled by the joint use of {\it successive convex approximation} (SCA) and trust region methods in the optimized power control algorithm derivation. Numerical results show that the optimized power control policy achieves significantly faster convergence than the benchmark policies such as channel inversion and uniform power transmission, thus opening up a new degree-of-freedom for regulating the performance of Air-FEEL by power control. \section{System Model}\label{sec:system} \begin{figure} \centering \setlength{\abovecaptionskip}{-4mm} \setlength{\belowcaptionskip}{-4mm} \includegraphics[width=3.5in]{Sys_model.eps} \caption{Illustration of over-the-air federated edge learning. } \label{fig:model} \end{figure} We consider an Air-FEEL system consisting of an edge server and $K\ge 0$ edge devices, as shown in Fig.~\ref{fig:model}. With the coordination of the edge server, the edge devices cooperatively train a shared machine learning model via over-the-air update aggregation as elaborated in the sequel. \subsection{Learning Model} We assume that the learning model is represented by the parameter vector ${\bf w}\in\mathbb{R}^q$ with $q$ denoting the model size. Let ${\mathcal D}_k$ denote the local dataset at edge device $k$, in which the $i$-th sample and its ground-true label are denoted by ${\bf x}_i$ and $y_i$, respectively. Then the local loss function of the model vector $\bf w$ on ${\mathcal D}_k$ is \begin{align}\label{LocalLossFunction} F_k({\bf w})=\frac{1}{|{\mathcal D}_k|} \sum \limits_{({\bf x}_i,y_i)\in{\mathcal D}_k} f({\bf w},{\bf x}_i,y_i)+\rho R({\bf w}), \end{align} where $f({\bf w},{\bf x}_i,y_i)$ denotes the sample-wise loss function quantifying the prediction error of the model $\bf w$ on the sample ${\bf x}_i$ {\it with respect to} (w.r.t.) its ground-true label $y_i$, and $R({\bf w})$ denotes the strongly convex regularization function scaled by a hyperparameter $\rho\geq 0$. For notational convenience, we simplify $f({\bf w},{\bf x}_i,y_i)$ as $f_i({\bf w})$. Then, the global loss function on all the distributed datasets is given by \begin{align}\label{GlobalLossFunction} F({\bf w})=\frac{1}{K}\sum\limits_{k\in\mathcal K} D_k F_k({\bf w}), \end{align} where ${\mathcal D}=\cup_{k\in\mathcal K} {\mathcal D}_k$ with $D_{\rm tot}=|{\mathcal D} |$, and the sizes of datasets in all edge devices are assumed to be uniform for notation simplicity, i.e., $|{\mathcal D}_k|=\bar D,\forall k\in\mathcal K$. The objective of the training process is to minimize the global loss function $F({\bf w})$: \begin{align}\label{OptimalParameter} {\bf w}^{\star}=\arg \min_ {\bf w} F({\bf w}). \end{align} Instead of directly uploading all the local data to the edge server for centralized training, the learning process in \eqref{OptimalParameter} can be implemented iteratively in a distributed manner based on gradient-averaging approach as illustrated in Fig.~\ref{fig:model}. At each communication round $n$, the machine learning model is denoted by ${\bf w} ^{(n)}$. Then each edge device can compute the local gradient denoted by ${\bf g}_{k}^{(n)}$ using the local dataset ${\mathcal D}_k$: \begin{align}\label{sys_LocalGradient} {\bf g}_{k}^{(n)}= \frac{1}{|{\mathcal D}_k|} \sum \limits_{({\bf x}_i,y_i)\in{\mathcal D}_k} \nabla f_i({\bf w}^{(n)})+\rho \nabla R({\bf w}), \end{align} where $\nabla$ is the gradient operator and we assume that the whole local dataset is used to estimate the local gradients. Next, the edge devices upload all local gradients to the edge server, which are further averaged to obtain the global gradient: \begin{align}\label{sys_GlobalGradient} \bar{\bf g}^{(n)}= \frac{1}{K}\sum\limits_{k\in\mathcal K} {\bf g}_{k}^{(n)}. \end{align} Then, the global gradient estimate is broadcast from edge server to edge devices, based on which edge device can update its own model under training via \begin{align}\label{sys_ModelUpdate} {\bf w}^{(n+1)}={\bf w}^{(n)}-\eta\cdot \bar{\bf g}^{(n)}, \end{align} where $\eta$ is the learning rate. Notice that the above procedure continues until convergence criteria is met or the maximum number of iterations is achieved. \subsection{Basic Assumptions on Learning Model} To facilitate the convergence analysis next, we make several standard assumptions on the loss function and gradient estimates. \begin{assumption}[Smoothness]\label{Assump_Smooth}\emph{ Let ${\bf g}=\nabla F({\bf w})$ denote the gradient of the loss function evaluated at point ${\bf w}$. Then there exists a non-negative constant vector ${\bf L}\in\mathbb{R}^q$, such that \begin{align*} F({\bf w})\!-\!\left[ F({\bf w}^{\prime})\! +\! {\bf g}^T (\!{\bf w}\!\!-\! {\bf w}^{\prime})\right] \le \frac{1}{2}\sum_{i=1}^{q}\! L_i({{ w}_i\!-\!{w}^{\prime}_i}\!)^2, \forall {\bf w}, {\bf w}^{\prime}, \end{align*} where the superscript $T$ denotes the transpose operation.} \end{assumption} \begin{assumption}[Polyak-Lojasiewicz Inequality]\label{Assump_PL}\emph{ Let $F^{\star}$ denote the optimal loss function value to problem \eqref{OptimalParameter}. There exists a constant $\mu\ge 0$ such that the global loss function $F({\bf w})$ satisfies the following Polyak-Lojasiewicz (PL) condition: \begin{align*} \| {\bf g}\|_2^2 \ge 2\mu(F({\bf w})-F^{\star}). \end{align*} } \end{assumption} Notice that the above assumption is more general than the standard assumption of strong convexity \cite{Karimi2016}. Typical loss functions that satisfy the above two assumptions include logistic regression, linear regression and least squares. \begin{assumption}[Variance Bound]\emph{ The local gradient estimates $\{{\bf g}_k\}$, defined in \eqref{sys_LocalGradient}, where the index $(n)$ is omitted for simplicity, are assumed to be independent and unbiased estimates of the batch gradient ${\bf g}$ with coordinate bounded variance, i.e., \begin{align} &\mathbb{E}[{\bf g}_k]={\bf g}, \forall k\in\mathcal K,\\ &\mathbb{E}[ ({g}_{k,i}-g_i)^2]\le \sigma_i^2, \forall k\in\mathcal K, \forall i, \end{align} where ${g}_{k,i}$ and $g_i$ are defined as the $i$-th element of $\{{\bf g}_k\}$ and ${\bf g}$, respectively, and ${\bm\sigma}=[\sigma_1,\cdots,\sigma_q]$ is a vector of non-negative constants. } \end{assumption} \vspace{-0.45cm} \subsection{Communication Model} \vspace{-0.2cm} The distributed training latency is dominated by the update aggregation process, especially when the number of devices becomes large. Therefore, we focus on the aggregation process over a MAC. Instead of treating different devices' update as interference, we consider AirComp for fast update aggregation by exploiting the superposition property of MAC. We assume that the channel coefficients remain unchanged within a communication round, and may change over different communication rounds. Besides, the channel state information (CSI) is assumed to be available at all edge devices, so that they can perfectly compensate for the phases introduced by the wireless channels. Let $\hat h_k^{(n)}$ denote the complex channel coefficient from device $k$ to the edge server at communication round $n$, and $h_k^{(n)}$ denote its magnitude with $h_k^{(n)}=|\hat h_k^{(n)}|$. During the gradient-uploading phase, all the devices transmit simultaneously over the same time-frequency block, and thus the received aggregate signal is given by \begin{align}\label{sys_ReceivedSignal} {\bf y}^{(n)}=\sum\limits_{k\in\mathcal K}h_k^{(n)}\sqrt{p_k^{(n)}}{\bf g}_{k}^{(n)}+{\bf z}^{(n)}, \end{align} in which $p_k^{(n)}$ denotes the transmit power at device $k$, and ${\bf z}^{(n)}\in\mathbb{R}^q$ denotes the additive white Gaussian noise with ${\bf z}^{(n)}\sim{\mathcal CN}(0,N_0\bf I)$, where $N_0$ is the noise power density and $\bf I$ is an identity matrix. Therefore, the global gradient estimate at the edge server is given by \begin{align}\label{sys_GlobalGradient} \hat{\bf g}^{(n)}=\frac{{\bf y}^{(n)}}{K}. \end{align} The devices can adaptively adjust their transmit powers for enhancing the learning performance. In practice, the transmit power of each edge device $k\in\mathcal K$ at each communication round is constrained by a maximum power budget $\bar P_{k}$: \begin{align} p_{k}^{(n)} \leq \bar{P}_{k},~\forall k\in{\mathcal K}, ~\forall n.\label{sys_bar_P_max} \end{align} In addition, each device $k\in\mathcal K$ is also constrained by an average power budget denoted by $\tilde{P}_{k}$ over the whole training period as expressed below:\begin{align} \frac{1}{N}\sum \limits_{n\in\mathcal{N}}p_{k}^{(n)} \leq \tilde{P}_{k},~\forall k\in{\mathcal K}.\label{sys_bar_P_ave} \end{align} Here, we generally have $\tilde{P}_{k} \le \bar{P}_{k},~\forall k\in{\mathcal K}$. \section{Convergence Analysis for Air-FEEL with Adaptive Power Control} In this section, we formally characterize the learning performance of Air-FEEL system, which is derived to be a function of transmit powers of all devices. Let $N$ denote the number of needed communication rounds and $L\triangleq \|\bf L\|_{\infty}$. For notational convenience, we use $F^{(n+1)}$ to represent $F({\bf w}^{(n+1)})$. The optimality gap after $N$ communication rounds defined by $F^{(N+1)}-F^{\star}$ is derived in the following theorem, from which we can understand the convergence behavior of Air-FEEL. \begin{theorem}[Optimality Gap]\label{ConvergenceRate}\emph{ The optimality gap for Air-FEEL, with arbitrary transmit power control policy $\{p_k^{(n)}\}$, is given as \begin{align} &\mathbb{E}\left[F^{(N+1)} \right]-F^{\star}\leq {\Phi}(\{p_k^{(n)}\},\eta)\notag\\ &\!\!\triangleq\!\prod_{n=1}^{N}\!A^{(n)}\!\!\left(F^{(1)}\!-\!F^{\star}\!\right) \!+\!\sum_{n=1}^{N-1}\!\left(\!\prod_{i=n+1}^{N}\!A^{(i)}\!\right)\!B^{(n)}\!+\!B^{(N)},\label{Conv_F_Gap}\! \end{align} with $A^{(n)}=1-\frac{2\mu\eta}{K}\sum\limits_{k\in\mathcal K}\left(h_k^{(n)}\sqrt{p_k^{(n)}}-\frac{ \eta L}{2K}(h_k^{(n)})^2p_k^{(n)}\right) $ and $B^{(n)}=\frac{ \eta^2L\|{\bm \sigma}\|_2^2}{2K^2}\left(\sum\limits_{k\in\mathcal K}(h_k^{(n)})^2p_k^{(n)}\right)+\frac{ \eta^2LN_0q}{2K^2}$. } \end{theorem} \begin{IEEEproof} The proof follows the widely-adopted strategy of relating the norm of the gradient to the expected improvement made in a single algorithmic step, and comparing this with the total possible improvement. \begin{footnotesize} \begin{align*} &F^{(n+1)}- F^{(n)}\\ &\overset{(a)}{\leq} ({\bf g}^{(n)})^T ({\bf w}^{(n+1)}- {\bf w}^{(n)}) + \frac{1}{2}\sum_{i=1}^{q} L_i({{ w}_i^{(n+1)}-{w}^{(n)}_i})^2,\\ &\overset{(b)}{\leq} ({\bf g}^{(n)})^T ({\bf w}^{(n)}-\eta\cdot \hat{\bf g}^{(n)}- {\bf w}^{(n)}) + \frac{L}{2}\|{\bf w}^{(n)}-\eta\cdot \hat{\bf g}^{(n)}- {\bf w}^{(n)}\|_2^2\\ &=-\eta ({\bf g}^{(n)})^T \hat{\bf g}^{(n)}+ \eta^2\frac{L}{2}\|\hat{\bf g}^{(n)}\|_2^2\\ &\!\!=\!\!-\!\frac{\eta}{K} ({\bf g}^{(\!n\!)}\!)^T\!\!\left(\!\sum\limits_{k\in\mathcal K}\!h_k^{(\!n\!)}\sqrt{p_k^{(\!n\!)}}\!{\bf g}_{k}^{(\!n\!)}\!\!+\!\!{\bf z}^{(\!n\!)}\!\right)\!\!\!+\!\frac{ \eta^2L}{2K^2}\!\!\left\|\!\sum\limits_{k\in\mathcal K}\!\!h_k^{(\!n\!)}\sqrt{p_k^{(\!n\!)}}{\bf g}_{k}^{(\!n\!)}\!+\!{\bf z}^{(\!n\!)}\!\right\|_2^2\!,\! \end{align*} \end{footnotesize} where the inequalities (a) and (b) follows the Assumption~\ref{Assump_Smooth} and $L\triangleq \|\bf L\|_{\infty}$. By subtracting $F^{\star}$ and taking expectation at both sides, the convergence rate of each communication round is given by \eqref{App_Con_expectation}. Next, \eqref{App_Con_gap} is obtained by applying the PL condition in the Assumption~\ref{Assump_PL}. Then, by applying above inequality repeatedly through $N$ iterations, after some simple algebraic manipulation we have \eqref{Conv_F_Gap}, which completes the proof. \begin{figure*}[t] \begin{align} &\mathbb{E}\left[F^{(n+1)} \right]-F^{\star}\notag\\ &\le F^{(n)}-F^{\star}-\frac{\eta}{K}\left(\sum\limits_{k\in\mathcal K}h_k^{(n)}\sqrt{p_k^{(n)}}\right)\|{\bf g}^{(n)}\|_2^2 +\frac{ \eta^2L}{2K^2}\left(\sum\limits_{k\in\mathcal K}(h_k^{(n)})^2p_k^{(n)}\right)\left(\left\|{\bf g}^{(n)}\right\|_2^2+\left\|{\bm \sigma}\right\|_2^2\right)+\frac{ \eta^2LN_0q}{2K^2}\notag\\ &= F^{(n)}-F^{\star}-\left[\sum\limits_{k\in\mathcal K}\left(\frac{\eta}{K}h_k^{(n)}\sqrt{p_k^{(n)}}-\frac{ \eta^2L}{2K^2}(h_k^{(n)})^2p_k^{(n)}\right)\right]\|{\bf g}^{(n)}\|_2^2 +\frac{ \eta^2L}{2K^2}\left(\sum\limits_{k\in\mathcal K}(h_k^{(n)})^2p_k^{(n)}\right)\left\|{\bm \sigma}\right\|_2^2+\frac{ \eta^2LN_0q}{2K^2}.\label{App_Con_expectation} \end{align} \vspace*{-0.2\baselineskip} \end{figure*} \begin{figure*} \begin{align} \!\!\mathbb{E}\!\left[\!F^{(n+\!1)} \!\right]\!\!-\!F^{\star}\!\!\le\! \! \underbrace{ \left[\!1\!-\!2\mu \! \! \left(\! \sum\limits_{k\in\mathcal K}\!\!\left(\!\frac{\eta}{K}h_k^{(n)}\sqrt{p_k^{(n)}}\!-\!\frac{ \eta^2L}{2K^2}(h_k^{(n)})^2p_k^{(n)}\right)\! \! \right)\! \right]\! }_{A^{(n)}}\left(\!F^{(n)}-F^{\star}\!\right) \! +\underbrace{\frac{ \eta^2L\left\|{\bm \sigma}\right\|_2^2}{2K^2}\sum\limits_{k\in\mathcal K}(h_k^{(n)})^2p_k^{(n)}\!+\!\frac{ \eta^2LN_0q}{2K^2}}_{B^{(n)}}.\label{App_Con_gap} \end{align}\! \vspace*{-1\baselineskip} \end{figure*} \end{IEEEproof} Further applying the mean inequality $(a_1a_2\cdots a_m)\leq(\frac{a_1+a_2+\cdots +a_m}{m})^m$, we can derive a more elegant upper bound for the expression in \eqref{Conv_F_Gap} to attain more insights as follows \begin{align} & {\Phi}(\{p_k^{(n)}\},\eta)\leq\!\!\alpha^N\left(\!F^{(\!1\!)}\!-\!F^{\star}\!\right) \!\!+\!\!\sum_{n=1}^{N}\!\!B^{(n)}\beta_{(n)}^{N-n},\label{Conv_F_Gap_bound} \end{align} where $\alpha=\frac{\sum_{i=1}^{N}\!\!A^{(i)}}{N}$ and $\beta_{(n)}=\frac{\sum_{i=n+1}^{N}A^{(i)}}{N-n}$ for $n=1,\cdots,N-1$ while $\beta_{(N)}=1$. \begin{remark}\emph{ The first term on the right hand side of \eqref{Conv_F_Gap_bound} suggests that the effect of initial optimality gap vanishes as the number of communication round $N$ increases. The second term reflects the impact of the power control and additive noise power on the convergence process, that is, transmission with more power in the initial learning iterations is more beneficial in decreasing the optimality gap. This is because that the contribution of power control at iteration $n$ is discounted by a factor $\beta_{(n)}^{N-n}$.} \end{remark} \section{Power Control Optimization} In this section, we focus on speeding up the convergence rate by minimizing the optimality gap in Theorem~\ref{ConvergenceRate}, under the power constraints stated in \eqref{sys_bar_P_max} and \eqref{sys_bar_P_ave}. The optimization problem is thus formulated as \begin{align} \mathbf{P1:} \min_{\{p_k^{(n)}\ge 0\},\eta\ge 0} ~~&{\Phi}(\{p_k^{(n)},\eta\})\notag\\ {\rm s.t.}~~~~~&\eqref{sys_bar_P_max}~\text{and}~\eqref{sys_bar_P_ave}.\notag \end{align} Due to the coupling between the power control $\{p_k^{(n)}\}$ and learning rate $\eta$, problem (P1) is non-convex and hard to solve. We resort to the alternating optimization technique for efficiently solving this problem. In particular, we first solve problem (P1) under any given $\eta$, and then apply a one-dimension search to find the optimal $\eta$ that achieves the minimum objective value. Let $\tilde{\Phi}(\{p_k^{(n)}\})={\Phi}(\{p_k^{(n)},\eta\})$ under any given $\eta$. Note that the transmit powers at different devices and different communication rounds are coupled with each other in the objective function in \eqref{Conv_F_Gap} under given learning rate $\eta$, leading to a highly non-convex problem: \begin{align} \mathbf{P2:} \min_{\{p_k^{(n)}\ge 0\}} ~~& \tilde{\Phi}(\{p_k^{(n)}\})\notag\\ {\rm s.t.}~~~~~&\eqref{sys_bar_P_max}~\text{and}~\eqref{sys_bar_P_ave}.\notag \end{align} To tackle this problem, we propose an iterative algorithm to obtain an efficient solution using the SCA technique. The key idea is that under any given local point at each iteration, we can approximate the non-convex objective as a constructed convex one. Therefore, after solving a series of approximate convex problems iteratively, we can obtain a high-quality suboptimal solution to problem (P2). Let $\{p_k^{(n)}[i]\}$ denote the local point at the $i$-th iteration with $i\ge 0$, and ${\mathcal N}\triangleq \{1,\cdots,N\}$ the set of communication rounds. Notice that by checking the first-order Taylor expansion of $\tilde{\Phi}(\{p_k^{(n)}\})$ w.r.t. $\{p_k^{(n)}\}$ at the local point $\{p_k^{(n)}[i]\}$, it follows that \begin{align*} &\tilde{\Phi}(\{p_k^{(n)}\})\approx \bar{\Phi}(\{p_k^{(n)}\})\notag\\ &~~ \triangleq \!\tilde{\Phi}(\{p_k^{(n)}[i]\})\!+\!\sum\limits_{n\in{\mathcal N}} \!\sum\limits_{k\in{\mathcal K}}\!\left(p_k^{(n)}-p_k^{(n)}[i]\right)\nabla \tilde{\Phi}(\{p_k^{(n)}[i]\}), \end{align*} where $\nabla \tilde{\Phi}(\{p_k^{(n)}[i]\})$ represents the first-order derivative w.r.t. $p_k^{(n)}[i]$, given in \eqref{Derivation_1} and \eqref{Derivation_2}. \begin{figure*}[t] \begin{align} \nabla \tilde{\Phi}(p_k^{(n)}[n])&=-\frac{\mu\eta h_k^{(n)}\left(F^{(1)}-F^{\star}\right)}{K}\left(\frac{1}{\sqrt{p_k^{(n)}}}-\frac{\eta L h_k^{(n)}}{K}\right)\prod_{i\in{\mathcal N}\setminus \{n\}}A^{(i)}+\frac{\eta^2L \left\|{\bm \sigma}\right\|_2^2 (h_k^{(n)})^2\prod_{j=n}^{N}A^{(j)}}{2K^2A^{(n)}}\notag\\ &~~-\frac{\mu\eta h_k^{(n)}}{K}\left(\frac{1}{\sqrt{p_k^{(n)}}}-\frac{\eta L h_k^{(n)}}{K}\right)\sum_{\ell=1}^{n-1} B_{(\ell)}\frac{\prod_{j=\ell}^{N}A^{(j)}}{A^{(n)}A^{(\ell)}},\forall n\in\mathcal{N}\setminus \{1\}\label{Derivation_1}\\ \nabla \tilde{\Phi}(p_k^{(1)}[i])&=-\frac{\mu\eta h_k^{(1)}\left(F^{(1)}-F^{\star}\right)}{K}\left(\frac{1}{\sqrt{p_k^{(1)}}}-\frac{\eta L h_k^{(1)}}{K}\right)\prod_{i\in{\mathcal N}\setminus \{1\}}A^{(i)}+\frac{\eta^2L \left\|{\bm \sigma}\right\|_2^2 (h_k^{(1)})^2}{2K^2}\prod_{i\in{\mathcal N}\setminus \{1\}}A^{(i)}.\label{Derivation_2} \end{align} \vspace*{-1\baselineskip} \end{figure*}\! In this case, $\bar{\Phi}(\{p_k^{(n)}\})$ is linear w.r.t. $\{p_k^{(n)}\}$. To ensure the approximation accuracy, a series of trust region constraints are imposed as \cite{YLiu_Trust} \begin{align} |p_{k}^{(n)}[i]-p_{k}^{(n)}[i-1]|\le \Gamma[i], ~\forall k\in\mathcal{K}, \forall n\in\mathcal{N},\label{TrustRegion} \end{align} where $\Gamma[i]$ denotes the radius of the trust region. By replacing $\bar{\Phi}(\{p_k^{(n)}\})$ as the approximation of $\tilde{\Phi}(\{p_k^{(n)}\})$ and introducing an auxiliary variable $\gamma$, the approximated problem at the $i$-th iteration is derived as a convex problem: \begin{align} \mathbf{P2.1:} \min_{\{p_k^{(n)}[i]\},\gamma\ge 0} ~~&\gamma \notag\\ {\rm s.t.}~~~~~&\bar{\Phi}(\{p_k^{(n)}[i]\})\le \gamma\\ &\eqref{sys_bar_P_max},~\eqref{sys_bar_P_ave},~\text{and}~\eqref{TrustRegion},\notag \end{align} which can be directly solved by CVX \cite{cvx}. Let $\{p_k^{(n)*}[i]\}$ denote the optimal power control policy to problem (P2.1) at local point $\{p_k^{(n)}[i]\}$. Then, we can obtain an efficient iterative algorithm to solve problem (P2) as follows. In each iteration $i \ge 1$, the power control is updated as $\{p_k^{(n)*}[i]\}$ by solving problem (P2.1) at local point $\{p_k^{(n)}[i]\}$, i.e. $p_k^{(n)}[i+1]=p_k^{(n)*}[i],\forall n\in\mathcal N, \forall k\in\cal K$, where $\{p_k^{(n)}[0]\}$ denotes the initial power control. At the $i$-th iteration, we compute the objective value in problem (P2) by replacing $\{\hat{p}_k^{(n)*}[i]\}$ as $\{p_k^{(n)*}\}$. If the objective value decreases, we then replace the current point by the obtained solution and go to the next iteration; otherwise, we update $\Gamma[i]=\Gamma[i]/2$ and continue to solve problem (P2.1). This algorithm would stop until that $\Gamma[i]$ is lower than a given threshold denoted by $\epsilon$. In summary, the proposed algorithm is presented in Algorithm 1. \begin{table}[htp] \begin{center}\vspace{-0.1cm} \hrule \vspace{0.2cm} \textbf{Algorithm 1 for Solving Problem (P2)}\vspace{0.2cm} \hrule \vspace{0.1cm} \begin{itemize} \item[1] Initialization: Given the initial power control $\{p_k^{(n)}[0]\}$; let $i=0$. \item[2] {\bf Repeat:} \begin{itemize} \item[a)] Solve problem (P1.1) under given $\{p_k^{(n)}[i]\}$ to obtain the optimal solution as $\{p_k^{(n)*}[i]\}$; \item[b)] If the objective value of problem (P2) $\tilde{\Phi}(\{p_k^{(n)}\})$ decreases, then update $p_k^{(n)}[i+1]=p_k^{(n)*}[i],\forall n\in\mathcal N$ with $i=i+1$; otherwise $\Gamma[i]=\Gamma[i]/2$; \end{itemize} \item[3] {\bf Until} $\Gamma[i]\le \epsilon$. \end{itemize} \hrule \vspace{0cm} \end{center}\vspace{-0.5cm} \end{table} With the obtained power control in Algorithm 1, we can find the optimal $\eta$ accordingly via a one-dimensional search. \vspace{-0.1cm} \section{Simulation Results}\label{sec_simu} In this section, we provide simulation results to validate the performance of the proposed power control policy for Air-FEEL. In the simulation, the wireless channels from each device to the edge server over fading states follow i.i.d. Rayleigh fading, such that $h_k$'s are modeled as i.i.d. {\it circularly symmetric complex Gaussian} (CSCG) random variables with zero mean and unit variance. The dataset with size $D_{\rm tot}=600$ at all device are randomly generated, where part of the data, namely $100$ pairs (${\bf x}$, $y$), are left for prediction, and the remaining ones are used for model training. The generated data sample vector ${\bf x}$ follow i.i.d. Gaussian distribution as $\mathcal{N}(0,{\bf I})$ and the label $y$ is obtained as $y=x(2)+3x(5)+0.2z$, where $x(t)$ represents the $t$-entry in vector ${\bf x}$ and $z$ is the observation noise with i.i.d. Gaussian distribution, i.e., $z\sim\mathcal{N}(0,1)$. Unless stated otherwise, the data samples are evenly distributed among the $K=20$ devices, and thus it follows $D_k=25$. Moreover, we apply ridge regression with the sample-wise loss function $f({\bf w},{\bf x},y)=\frac{1}{2}\| {\bf x}^T{\bf w}-y\|^2$ and the regularization function $R({\bf w})=\|{\bf w}\|^2$ with $\rho=5\times 10^{-5}$ in this paper. Furthermore, recall that $D_{\rm tot}=\sum_{k\in \mathcal{K}}D_k$ and then we can obtain the smoothness parameter $L$ and PL parameter $\mu$ as the largest and smallest eigenvalues of the data Gramian matrix ${\bf X}^T{\bf X}/D_{\rm tot}+10^{-4}{\bf I}$, in which ${\bf X}=[{\bf x}_1,\cdots,{\bf x}_{D_{\rm tot}}]^T$ is the data matrix. The optimal loss function $F^{\star}$ is computed according to the optimal parameter vector $\bf w^{\star}$ to the learning problem \eqref{OptimalParameter}, where ${\bf w}^{\star}=({\bf X}^T{\bf X}+\rho{\bf I})^{-1}{\bf X}^T{\bf y}$ with ${\bf y}=[y_1,\cdots,y_{D_{\rm tot}}]^T$. We set the initial parameter vector as an all-zero vector and the noise variance $N_0=0.1$. We consider two benchmark schemes for performance comparison, namely the {\it uniform power transmission} that transmits with uniform power over different communication round under the constraint of average power budget, and the {\it channel inversion} adopted in \cite{DLiu2020Ar}. As for the performance metric for comparison, we consider the optimality gap and prediction error to evaluate the learning performance. \begin{figure}[htbp] \vspace{-0.05cm} \centering \subfigure[Optimality gap versus varying number of devices.] {\label{fig:FL_v_K1}\includegraphics[width=8cm]{K1028_OG.eps}} \subfigure[Prediction error versus varying number of devices.] {\label{fig:FL_v_K2} \includegraphics[width=8cm]{K1028_PE.eps}} \caption{Effect of number of devices on the learning performance of Air-FEEL.} \label{Fig:FL_v_K} \end{figure} The effect of device population on learning performance is illustrated in Fig.~\ref{Fig:FL_v_K} with $N=30$, where the power budgets at all devices are identically set to be $\tilde P=1$ W and $\bar P=5$ W. Notice that the increasing of device population may introduce both the positive and negative effects on the learning performance. The positive effect is that the training process can exploit more data, while the negative effect is the increased aggregation error raised by AirComp over more devices. As observed in Fig.~\ref{Fig:FL_v_K}, the positive effect can be cancelled or even overweighed by the negative effect when applying the channel inversion or uniform power control. The blessing of including more devices in Air-FEEL can dominate the curse it brings only when the power control is judiciously optimized, showing the crucial role of power control in determining the the learning performance of Air-FEEL. \begin{figure}[htbp] \vspace{-0.05cm} \centering \subfigure[Tendency of loss function.] {\label{fig:MSE_v_Con1}\includegraphics[width=8cm]{GO1026_N80.eps}} \subfigure[Tendency of prediction error.] {\label{fig:MSE_v_Con2} \includegraphics[width=8cm]{PE1026_N80.eps}} \caption{Learning performance of Air-FEEL over iterations, where $\eta^*$ denotes the optimized learning rate after one-dimensional search.} \label{Fig:MSE_v_Con} \end{figure} Fig.~\ref{Fig:MSE_v_Con} shows the learning performance during the learning process under the optimized learning rate, where we set $K=20$, $\tilde P=1$ W, $\bar P=5$ W, and $N=80$. It is observed that the proposed power control scheme can achieve faster convergence than both the channel-inversion and uniform-power-control schemes. This is attributed to the power control optimization directly targeting convergence acceleration. \begin{figure} \vspace{-0.1cm} \centering \setlength{\abovecaptionskip}{-1mm} \setlength{\belowcaptionskip}{-1mm} \includegraphics[width=3.5in]{PA1029.eps} \caption{The optimized power allocation over iterations under static channels.} \label{fig:MSE_V_PA} \end{figure} Fig.~\ref{fig:MSE_V_PA} shows the power allocation over a static channel with uniform channel gain during the learning process, where we set $K=20$, $\tilde P=1$ W, $\bar P=5$ W, and $N=30$. It is observed that the power allocation over a static channel follows a stair-wise monotonously decreasing function. The behavior of power control coincides the analysis on Remark 1. \section{Conclusion}\vspace{-0.2cm} In this paper, we exploit power control as a new degree of freedom to optimize the learning performance of Air-FEEL, a promising communication-efficient solution towards edge intelligence. To this end, we first analyzed the convergence rate of the Air-FEEL by deriving the optimality gap of the loss-function under arbitrary power control policy. Then the formulated power control problem aimed to minimize the optimality gap for accelerating convergence, subject to a set of average and maximum power constraints at edge devices. Due to the coupling of power control variables over different devices and iterations, the challenge of the formulated power control problem was tackled by the joint use of SCA and trust region methods. Numerical results demonstrated that the optimized power control policy can achieve significantly faster convergence than the benchmark policies such as channel inversion and uniform power transmission. \vspace{-0.1cm}
\section{Introduction} \label{sec:intro} \label{sec1} \subsection{Two non-Hermitian systems: open quantum systems and $\mathcal{PT}$-symmetric systems} In the conventional formulation of quantum mechanics, the Hamiltonian operator $H$ describing a given physical system is generally required to satisfy the Hermitian symmetry $H = H^\dagger$, a sufficient (but \textit{not} necessary) condition to obtain a real-valued energy spectrum. Since the theory was originally developed, however, a number of researchers have found it useful to introduce non-Hermitian elements to the Hamiltonian, either as an extension of the original theory to accommodate certain physical situations~\cite{HNPRL96,HNPRB97,HNPRB98,Feinberg97,Goldsheid98,FukuiKawakami98,Mudry98,Ahmed02,Bagchi02,Heiss04,Swanson04,Graefe08,Longhi10NH} or as a useful reformulation in others~\cite{Feshbach58,Feshbach62,Moiseyev1980,Petrosky96,Petrosky97,Albeverio96,Rotter91,Sadreev03,Okolowicz03,Rotter_review,Fyodorov97,Dittes00,Pichugin01,Kunz06,Kunz08,Sasada08,SHO11,Klaiman11,NH_H_eff,Hatano14,Moiseyev_NHQM}. In the latter case, various non-Hermitian Hamiltonians have been introduced to describe open quantum systems. Open quantum systems generally consist of a finite system coupled with an infinite environment, and thereby give rise to an energy spectrum with both discrete and continuous eigenvalues; the continuum is associated with the environmental degrees of freedom, while the discrete eigenvalues are a consequence of scattering due to the finite system. Some of the discrete eigenvalues can be complex, a signature of resonance phenomena in open systems. Resonances are associated with transient phenomena such as transport and exponential decay~\cite{Feshbach58,Feshbach62,Rotter91,Sadreev03,Okolowicz03,Dittes00,Kunz06,SHO11,Klaiman11,Hatano14,PPTasaki91,TGP06,HSNP08} and may be viewed as generalized solutions of the Schr\"odinger equation with complex eigenvalues~\cite{PPTasaki91,HSNP08,Hatano14} or as complex poles of the analytically continued S-matrix, among other perspectives~\cite{Moiseyev_NHQM,GP_resonance}. The reason why open quantum systems may accommodate complex eigenvalues can be summarized as follows. Eigenfunctions that are normalizable in open quantum systems, namely bound states and norm-preserving scattering states, lie within the Hilbert space and can only have real eigenvalues. This corresponds to the fact that the Hamiltonian operator is Hermitian in the Hilbert space. However, even the standard Hamiltonian operator may be non-Hermitian in a space wider than the Hilbert space~\cite{HSNP08}. Open quantum systems indeed can harbor unnormalizable eigenfunctions, which lie outside the Hilbert space and can have complex eigenvalues depending on the boundary conditions. (Note, however, that we can still give a probabilistic interpretation for such eigenfunctions~\cite{HSNP08,HKF09}.) While usually hidden in the boundary conditions, this non-Hermitian aspect of open quantum systems manifests itself when we trace out the continuous degrees of freedom associated with the environment; the resulting effective Hamiltonian is then explicitly non-Hermitian. This effective Hamiltonian has only finite degrees of freedom remaining, corresponding to the discrete portion of the open quantum system, which is usually of primary interest. The first and most celebrated example in the literature may be the optical potential in nuclear physics. It was perhaps first introduced as a phenomenological potential but various researchers, Feshbach in particular, formulated it more rigorously~\cite{Feshbach58,Feshbach62,Foldy69}. In the case of the well-known tight-binding model this formulation leads to an energy-dependent effective potential; see Appendix C of Ref.~\cite{SHO11}. With the boundary condition of incoming energy we then have an effective Hamiltonian with a positive imaginary complex potential at the point where the discrete system couples to the environment, which represents an effective gain (or a source). On the other hand, a negative imaginary complex potential appears where the discrete system couples to the environment with the boundary condition of outgoing energy, which represents an effective loss (or a sink). As a recent development in the study of non-Hermitian physics, systems with both gain and loss have attracted a great deal of attention over the past two decades. Bender and Boettcher in 1998 demonstrated that one may relax Hermiticity in favor of $\mathcal{PT}$-symmetry (parity-time) in quantum mechanics and still obtain a real-valued energy spectrum in certain regions of parameter space~\cite{BB98,BBM99}. This has led some researchers to consider whether quantum mechanics could be reformulated in terms of $\mathcal{PT}$ symmetry; see, for example, Refs.~\cite{BQZ01,DDT01,BBJ02,Weigert03,AM_brach} and particularly the references appearing in Refs.~\cite{Bender_review,Bender_review2}. This theoretical question in turn inspired the idea of constructing physical systems that exhibit $\mathcal{PT}$-symmetry in the form of balanced gain and loss components arranged in a spatially-symmetric manner. A number of investigations have been carried out along these lines, both theoretically and experimentally, particularly in the realm of optics~\cite{MGCM,KGM08,PTOptExpt1,PTOptExpt2,Kottos_Nature,ZinPRL11,LonghiJPA11,PTOptExpt3,Uni_Nature,PTOptExpt4,PT_WGM,Longhi_PT_bound}, but also with examples in condensed matter physics~\cite{BFKS09}, simple electronic circuits~\cite{PTCircuitExpt}, coupled mechanical oscillators~\cite{BGOPY13,BBPS13,BGK14}, and mesoscopic superconducting wires \cite{CGBV12}. A number of intriguing phenomena have been studied in the optical context, including power oscillations~\cite{MGCM,KGM08,PTOptExpt2}, double refraction~\cite{MGCM} unidirectional invisibility~\cite{ZinPRL11,LonghiJPA11,Uni_Nature,AliPRA13} and localized states with novel transient behavior~\cite{PTOptExpt4}. One central issue in the investigation of $\mathcal{PT}$-symmetric systems is $\mathcal{PT}$ symmetry-breaking. In many $\mathcal{PT}$-symmetric systems, one finds a transition between a phase in which all states are $\mathcal{PT}$-symmetric and a phase in which at least some states are not; the former is often referred to as the unbroken $\mathcal{PT}$-symmetric phase and the latter as the broken phase. At the $\mathcal{PT}$-symmetry breaking point, two real eigenvalues on the unbroken side coalesce and reappear on the other side of the transition as a complex-conjugate pair; their associated eigenfunctions are no longer $\mathcal{PT}$-symmetric individually, but only so as a pair (i.e. they appear as a state $|\psi\rangle$ and its partner $\mathcal{PT}|\psi\rangle$). We emphasize that at the transition point the eigenstates are not merely degenerate, but coalesce into a single state with a fixed universal phase between them~\cite{HeissChirality,HeissEP3}, as verified by experiment~\cite{EPexpt1a,EPexpt1b,EPexpt1c}. The $\mathcal{PT}$-symmetry breaking point, where the eigenstates coalesce, is an example of an {\it exceptional point}~\cite{Kato}. Similar transitions occur even in open quantum systems described by a Hamiltonian that is Hermitian within the Hilbert space. In this case the exceptional point is typically associated with the appearance of a resonance state (along with its anti-resonance partner)~\cite{Klaiman10,Hatano11,GRHS12} after two real eigenvalues collide. While the large majority of studies on exceptional points appearing in the literature focus on the case of two coalescing eigenvalues (EP2s), the standard nomenclature is to refer to an exceptional point at which $N$ eigenvalues coalescence as an EP$N$~\cite{HeissEP3,GraefeEP3}. In this paper we divide the EP2s into two further subcategories: we refer to an exceptional point at which two real-valued solutions meet to form complex conjugate partners as an EP2A; meanwhile we refer to an exceptional point at which two complex solutions with negative (positive) imaginary part coalesce to form two new solutions with negative (positive) imaginary part as an EP2B. \subsection{$\mathcal{PT}$-symmetric open quantum system} In this paper, we combine these two non-Hermitian systems in order to analyze a $\mathcal{PT}$-symmetric open quantum system. Specifically, we incorporate a centralized $\mathcal{PT}$-symmetric scattering potential $\pm i\Gamma$ into an infinite tight-binding chain with otherwise real-valued site potentials. In the perspective given above, we can interpret this model as an otherwise standard open quantum system except that two sites are equipped with a direct environmental influence, one with $+i\Gamma$ that injects energy into the chain, the other with $-i\Gamma$ that represents an energy drain. This may be realized as an optical lattice array in which one waveguide attenuates photon propagation (the `lossy' component) and a second has a compensating amplifying character (the `gain' component). We observe how the $\mathcal{PT}$-symmetric gain and loss modify the usual open quantum system properties under two different boundary conditions: outgoing waves and scattering waves. For both of these we first consider the general case, including solutions that are $\mathcal{PT}$-asymmetric, and then further investigate the solutions for which the boundary conditions themselves satisfy $\mathcal{PT}$-symmetry. First we consider the boundary condition consisting of purely outgoing waves (often called the Siegert boundary condition)~\cite{Gamow28,Siegert39,Peierls59,Landau77,Ostrovsky05,Kunz06,Kunz08,Sasada08,HSNP08,NH_H_eff}, which yields the discrete spectrum for the system, including all bound states and other solutions. We also observe the location of all exceptional points and other spectral features of interest. Here we demonstrate that for moderately small values of the $\mathcal{PT}$-parameter $\Gamma$, the spectral characteristics remain typical of traditional Hermitian open quantum systems. However, as we increase $\Gamma$ explicitly non-Hermitian spectral properties emerge. We find a resonance state with vanishing decay width for certain specific values of $\Gamma$. In the context of a Hermitian open quantum system we would refer to this as a bound state in continuum (BIC) (see, e.g. Refs.~\cite{BIC_1929,BIC_1975,ONK06,LonghiEPJ07,TGOP07,BIC_opt_expt1} and references therein). While BICs typically appear owing to geometric effects and their wave functions discontinuously vanish outside a finite support, the present phenomenon results in a \emph{delocalized} wave function with an eigenvalue that appears directly in the scattering continuum. For this reason, we refer to this state as a {\it resonance in continuum} (RIC). We further demonstrate the presence of localized states with complex eigenvalues that have recently been observed in an experiment~\cite{PTOptExpt4} and have since been considered in the theoretical works Refs.~\cite{Longhi_PT_bound,BGK14}. We note that, unlike the RIC, these complex bound states appear over a wide range of parameter values, and, as observed in Ref.~\cite{PTOptExpt4}, the real part of the eigenvalue for these states may appear in the scattering continuum. Here we clarify that these localized states have complex conjugate values that sit in the first Riemann sheet in the complex energy plane, something that is not allowed in Hermitian open quantum systems. We also emphasize that while these states are indeed localized, they are not stationary states of the Hamiltonian. Instead, in an experiment they demonstrate either an amplifying or an absorbing characteristic~\cite{PTOptExpt4}. However, given that the real part of these eigenvalues may reside within the continuum, in Ref.~\cite{Longhi_PT_bound} the author classifies these states as a type of generalized BIC. By contrast, in this paper we emphasize that since these solutions are localized but non-stationary they would generally behave in a manner that is quite distinct from the usual concept of a BIC. That having been acknowledged, we further point out that there are some parameter ranges for which the imaginary part of the eigenvalues for these states will be very small, and hence they should take on a quasi-bound state behavior for these parameter values, similar to the quasi-bound state in continuum appearing in Ref.~\cite{NHGP07,GNHP09}. Specifically, these states should behave as bound states on time-scales $t < \Gamma^2 / 4$, where the gain-loss defect parameter $\Gamma$ exceeds the energy scale of the embedding optical bandwidth; we propose that these states might be detectable, for example, in a $\mathcal{PT}$-symmetric optical fiber loop array with a defect region \cite{PTOptExpt4} that is modified to imitate our potential introduced in Sec. \ref{sec:PT.model} below (see Fig. \ref{fig1}). We then focus our attention on the ordinary bound state solutions appearing in our system and demonstrate that the wave function for these states satisfies $\mathcal{PT}$-symmetric boundary conditions. Further, we clarify that the wave function for virtual bound states (with real eigenvalue) is also $\mathcal{PT}$-symmetric, despite the fact that these states do not appear in the usual diagonalization scheme. We then consider the case of scattering wave boundary conditions. In the general case ($\mathcal{PT}$-asymmetric scattering waves) we observe that the parameter choices associated with the RIC result in a divergence in the reflection and transmission coefficients. This phenomenon has previously appeared in the literature in which it is referred to as a spectral singularity~\cite{Bender&Wu,Shanley,AliMPRL09,AliMPRA09,AliMPRA11,Longhi09SS,Longhi10SS} and physically can be associated with both lasing and coherent perfect absorption~\cite{Chong10,Wan11,LonghiCPA11}. We then demonstrate that a subset of the scattering wave solutions yield perfect transmission through the scattering region. In the special case in which the scattering potential is pure imaginary, we show that one can obtain perfect transmission for any continuum scattering states by appropriately choosing the value of $\Gamma$; this property approximately holds when small real-valued defects are introduced. We further demonstrate in this case that invisibility (perfect transmission with no scattering phase shift) can be obtained at discrete values within the continuum. In Sec.~\ref{sec:PT.model} below we present our prototype model for an open quantum system with a $\mathcal{PT}$-symmetric defect potential. Then in Sec.~\ref{sec:PT.outgoing} we study the model under the boundary condition of outgoing waves, which yields the discrete spectrum associated with the defect potential. For the simplest case of a purely complex defect potential, we locate all exceptional points in the spectrum and characterize the properties of the spectrum in their vicinity; we further locate the RIC eigenvalues and write the associated wave function as an outgoing plane wave from the defect region. We also identify the parameter ranges that give rise to the localized states with complex eigenvalues and point out the situation in which some of these solutions might behave as quasi-bound states. In Sec.~\ref{sec:PT.outgoing.spec.ep1} we generalize this picture by considering a potential with both real and imaginary defects. Here we demonstrate that as one deforms the system parameters, the RIC may exit the continuum by splitting into a bound state and a virtual bound state at the band edge; we believe that this point has not previously appeared in the literature. We note that traditional real-valued bound states also may appear for this more general potential. We study in closer detail the formal properties of the bound states in Sec.~\ref{sec:PT.bound}, demonstrating that they satisfy $\mathcal{PT}$-symmetric boundary conditions as expected. We also consider the $\mathcal{CPT}$ norm for these states, which we believe has only previously been investigated in closed $\mathcal{PT}$ systems. We then turn to the scattering boundary conditions in Sec.~\ref{sec:PT.scattering}, which we use to characterize the RIC in greater detail. We also show that a subset of the scattering wave solutions give rise to perfect transmission through the scattering region, and in the case of a purely imaginary defect potential, there are two scattering solutions that support invisible signal propagation. We further demonstrate a connection between the localization transition in the discrete spectrum and the perfect transmission states that might be useful from the perspective of designing systems with predictable transport properties. We also point out a possible application in the form of a `switch' that is sensitive to invisible transmission originating from the left (right), but ignores such transmission from the right (left). In Sec.~\ref{sec:PT.scattering.2} we demonstrate that a scattering wave solution can be obtained that itself satisfies $\mathcal{PT}$-symmetric boundary conditions. We also introduce the $\mathcal{PT}$-current, which is conserved for the (general) scattering wave solutions in our system, and which experiences a divergence associated with the perfect transmission states. We summarize our work and make concluding remarks in Sec.~\ref{sec:conclusion}. We also present some details of the calculations from the main text in two appendices. \section{$\mathcal{PT}$-symmetric optical lattice model} \label{sec:PT.model} \label{sec2} In the present paper, we study a tight-binding model with a $\mathcal{PT}$-symmetric scattering defect potential, which can be realized as an optical lattice array or could be approximated by a modified version of the $\mathcal{PT}$-symmetric optical fiber loop array with a defect studied in Ref.~\cite{PTOptExpt4} or other systems appearing in the literature~\cite{PT_WGM,RDM05}. Our tight-binding model takes the form \begin{align}\label{eq-model} H=-\sum_{x=-\infty}^\infty \left(|x+1\rangle \langle x|+|x\rangle \langle x+1|\right) +\sum_x V(x)|x \rangle \langle x|, \end{align} in which the defect potential is specified as \begin{align}\label{eq-potential} V(x)= \begin{cases} \varepsilon_1+i\Gamma & \quad\mbox{for $x=-1$}, \\ \varepsilon_0 & \quad\mbox{for $x=0$}, \\ \varepsilon_1-i\Gamma & \quad\mbox{for $x=-1$}, \end{cases} \end{align} where $\varepsilon_0$, $\varepsilon_1$, and $\Gamma$ are all real, with $V(x) = 0$ otherwise, such that our scattering potential is confined to the central sites $|x| \le 1$. The positive imaginary part of the complex potential contributes a factor $\exp[-i(i|\Gamma|)t]=\exp(|\Gamma|t)$ to the time evolution, and hence is interpreted as being influenced by a particle bath that constantly injects energy as $+i|\Gamma|$. That with a negative imaginary part is similarly interpreted as a particle bath that constantly drains energy as $-i|\Gamma|$. The off-diagonal part of the Hamiltonian~\eqref{eq-model} is Hermitian (real symmetric) while the diagonal potential is not. It nonetheless satisfies the condition $V(x)^\ast = V(-x)$, which guarantees the system is $\mathcal{PT}$-symmetric~\cite{Ganainy07,Makris08,KGM08}. Stated explicitly, the parity transformation $\mathcal{P}$ swaps the potentials at $x=-1$ and $x=+1$ while the time reversal operator $\mathcal{T}$ (which is complex conjugation) flips them back to the original configuration. We note that several studies on $\mathcal{PT}$-symmetric tight-binding models may be found in the literature, some of which are related to our model above~\cite{Longhi_PT_bound,JSBS10,JB11,DVL13,VHIC14}. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{fig1} \end{center} \vspace*{-\baselineskip} \caption{Geometry for $\mathcal{PT}$-symmetric optical lattice with scattering potential given in Eq.~(\ref{eq-potential}) } \label{fig:PT.open quantum system.geo} \label{fig1} \end{figure} The Schr\"{o}dinger equation $H|\psi\rangle=E|\psi\rangle$ for the Hamiltonian~\eqref{eq-model} can be written explicitly in the following way. First, let us consider the projection $\langle x|H|\psi\rangle=E\langle x|\psi\rangle$ for the system component outside of the scattering potential $|x|\geq 2$, in which case $V(x)\equiv0$. We thus obtain \begin{align}\label{eq-Sch.TB.gen} -\psi(x-1)-\psi(x+1)=E\psi(x), \end{align} where $\psi(x)=\langle x|\psi\rangle$. The solution is given by $\psi(x)=e^{\pm ikx}$ with the eigenvalue \begin{align}\label{eq-dispersion-in-k} E(k)=-2\cos k, \end{align} which defines the scattering continuum for our system in the range $|E(k)| \le 2$ with $k \in [ -\pi, \pi ]$. To solve the eigenvalue problem in the scattering region, we hold the continuum dispersion $E(k)$ and evaluate the Schr\"{o}dinger equation for $x=0$ and $\pm 1$, by which we obtain \begin{align} \label{eq-Sch1} -\psi(-2)-\psi(0)+(\varepsilon_1+i\Gamma)\psi(-1)&=E(k)\psi(-1), \\ \label{eq-Sch2} -\psi(-1)-\psi(1)+\varepsilon_0\psi(0)&=E(k)\psi(0), \\ \label{eq-Sch3} -\psi(2)-\psi(0)+(\varepsilon_1-i\Gamma)\psi(1)&=E(k)\psi(1). \end{align} A given solution $\psi(x)$ must satisfy these equations, subject to a specific choice for the boundary conditions. In Sec.~\ref{sec:PT.outgoing} below we consider the boundary condition for resonant states that consist of purely outgoing waves, while in Sec.~\ref{sec:PT.scattering} we consider the boundary conditions for scattering states. For later reference, let us present in Fig.~\ref{fig2} a typical distribution of the eigenvalues of the Hermitian tight-binding model, that is, for $\Gamma=0$. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig2} \caption{A typical distribution of the eigenvalues of the Hermitian tight-binding model on (a) a complex energy plane and (b) a complex $k$ plane.} \label{fig2} \end{figure} We will heavily make use of the terms in the figure. The complex $E$ plane in Fig.~\ref{fig2}(a) consists of two Riemann sheets; they are connected by a branch cut that extends over the range $-2\leq E\leq +2$. The first sheet corresponds to the upper half of the complex $k$ plane in Fig.~\ref{fig2}(b), while the second sheet to the lower half. More specifically, the upper half ($\text{Im} E > 0$) of the first Riemann sheet in the complex $E$ plane corresponds to the first quadrant of the complex $k$ plane, the lower half ($\text{Im} E < 0$) to the second quadrant, the upper half of the second sheet to the third quadrant, and the lower half to the fourth quadrant. Notice that we can go, for example, from the upper half of the first sheet over to the lower half of the second sheet continuously through the branch cut, which corresponds to moving from the first quadrant to the fourth quadrant in the complex $k$ plane. The scattering states continuously surround the branch cut on the real axis of the complex $E$ plane. We hereafter refer to the scattering continuum as the energy band and to the end points of the continuum as the band edges, following the custom of condensed-matter physics. In the complex $k$ plane, the scattering continuum is on the real axis, which is restricted to the first Brillouin zone $-\pi<k\leq +\pi$; note that the line $\text{Re} k=-\pi$ is identified with the line $\text{Re} k=+\pi$ as a result of the lattice periodicity. Bound states can exist on the first Riemann sheet below and above the energy band, that is, to the left ($E < -2$) and to the right ($E > 2$) of the scattering continuum. Those below the band lie on the positive imaginary axis of the complex $k$ plane, while those above the band lie on the positive part of the line $\text{Re} k=+\pi$. Notice that the bound states here have purely real eigenvalues; in other words, they never exist on the first and second quadrants of the complex $k$ plane except on the lines $\text{Re} k=0$ and $\text{Re} k=+\pi$. We will show below that once we introduce the non-Hermiticity, complex eigenvalues can appear in the first Riemann sheet (and therefore on the upper half of the complex $k$ plane); this is one critical difference between Hermitian and non-Hermitian open systems. The resonant states appear in the lower half of the second Riemann sheet, which is the fourth quadrant of the complex $k$ plane, while their anti-resonant partners reside on the upper half of the second sheet, which is the third quadrant of the complex $k$ plane. These are related to one another through time-reversal symmetry~\cite{Hatano14}. Virtual (or anti-bound) states can also appear to the left and right of the branch cut on the second Riemann sheet, which respectively correspond to the negative ($\text{Im} k < 0$) parts of the lines $\text{Re} k=0$ and $\text{Re} k=+\pi$. \section{Outgoing waves boundary condition and discrete spectrum} \label{sec:PT.outgoing} \label{sec3} In the present section our first consideration is the resonant states; there are several ways of computing these states~\cite{NH_H_eff}. We here use the Siegert boundary condition~\cite{Gamow28,Siegert39,Peierls59,Landau77,Ostrovsky05,Kunz06,Kunz08,Sasada08,HSNP08,NH_H_eff}, which dictates that the system has outgoing waves only; this is equivalent to looking for all poles of the $S$ matrix. The solutions of the resulting polynomial equation give the discrete eigenvalues associated with the scattering region, as shown below. Our purely outgoing wave function takes the form \begin{equation} \psi (x) = \begin{cases} B e^{-i k x} & \mbox{for $x \le -1$,} \\ \psi(0) & \mbox{for $x = 0$,} \\ C e^{ i k x} & \mbox{for $x \ge 1$.} \end{cases} \label{outgoing.wave.fcn} \end{equation} This boundary condition gives $\psi(\pm2)=e^{ik}\psi(\pm1)$, which brings Eqs.~\eqref{eq-Sch1}--\eqref{eq-Sch3} into a closed form~\cite{Sasada08}. We thereby obtain \begin{equation} \begin{pmatrix} - \lambda + \varepsilon_1 + i \Gamma & -1 & 0 \\ -1 & \varepsilon_0& -1 \\ 0 & -1 & - \lambda + \varepsilon_1 - i \Gamma \\ \end{pmatrix} \begin{pmatrix} \psi(-1) \\ \psi(0)\\ \psi(1) \end{pmatrix} = E(\lambda) \begin{pmatrix} \psi(-1) \\ \psi(0)\\ \psi(1) \end{pmatrix}, \label{outgoing.matrix0} \end{equation} in which we have introduced $\lambda \equiv e^{ik}$ for convenience. In this notation the continuum dispersion Eq.~\eqref{eq-dispersion-in-k} takes the form \begin{align}\label{eq-dispersion-in-lambda} E(\lambda) = - (\lambda + \lambda^{-1}). \end{align} We obtain non-trivial solutions for the discrete eigenvalues $\lambda_j$ when the determinant of the matrix in Eq.~(\ref{outgoing.matrix0}) vanishes. This is equivalent to the solutions of the quartic equation $P(\lambda_j) = 0$ with \begin{equation} P(\lambda) \equiv \left( \epsilon_1^2 + \Gamma^2 \right) \lambda^4 + \epsilon_0 \left( \epsilon_1^2 + \Gamma^2 \right) \lambda^3 - \left( 1 - \epsilon_1^2 - 2 \epsilon_0 \epsilon_1 - \Gamma^2 \right) \lambda^2 + \left( \epsilon_0 + 2 \epsilon_1 \right) \lambda + 1 . \label{P.lambda} \end{equation} For a given solution $\lambda_j$, the physical energy eigenvalue is determined from $E(\lambda_j)$ and the associated wave number is given as $k_j = - i \log \lambda_j$. We emphasize that the number of solutions (four), is greater than the matrix dimension (three); this is because the matrix itself depends on the energy eigenvalue through the variable $\lambda$. In the remainder of this Section we investigate the four discrete eigenvalue solutions of the quartic equation $P(\lambda_j) = 0$ in detail; first, we study the case of a purely imaginary defect potential with $\varepsilon_0 = \varepsilon_1 = 0$ in Sec.~\ref{sec:PT.outgoing.spec}. Here we locate all EPs and characterize the behavior of the spectrum in the vicinity of these points. We further identify the RIC and write the wave function of this state as a plane wave originating from the impurity sites. We also discuss the complex-valued localized states and their asymptotic localization properties as well as drawing attention to the parameter ranges in which some of these states will behave as quasi-bound states. In Sec.~\ref{sec:PT.outgoing.spec.ep1} we generalize this picture to consider the case $\varepsilon_1 \neq 0$ to illustrate two points (we keep $\varepsilon_0 = 0$ for now). First we note that the EP2As still appear in the spectrum for the $\varepsilon_1 \neq 0$ case, while the EP2Bs vanish. This suggests that the EP2As may be more robust against parameter perturbations than the EP2Bs, on which we comment in relation to experimental results. Second, we demonstrate that as we increase the value of $\varepsilon_1$, one of the RICs approaches the band edge and eventually exits the continuum by splitting into a bound state and a virtual bound state. \subsection{Discrete spectrum for $\varepsilon_0 = \varepsilon_1 = 0$: exceptional points (EPs), resonant states in continuum (RICs) and quasi-bound states in continuum (QBICs)}\label{sec:PT.outgoing.spec} \begin{figure} \hspace*{0.05\textwidth} \includegraphics[width=0.4\textwidth]{fig3a} \hfill \includegraphics[width=0.4\textwidth]{fig3b} \hspace*{0.05\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth}(a)\hspace*{0.440\textwidth}(b)\hspace*{0.4\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth} \includegraphics[width=0.4\textwidth]{fig3c} \hfill \includegraphics[width=0.4\textwidth]{fig3d} \hspace*{0.05\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth}(c)\hspace*{0.440\textwidth}(d)\hspace*{0.4\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth} \includegraphics[width=0.4\textwidth]{fig3e} \hfill \includegraphics[width=0.4\textwidth]{fig3f} \hspace*{0.05\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth}(e)\hspace*{0.440\textwidth}(f)\hspace*{0.4\textwidth} \\ \vspace*{\baselineskip} \caption{Discrete eigenvalue spectrum for the simplest case $\varepsilon_0 = \varepsilon_1 = 0$: (a) $\text{Re} E_j$ and (b) $\text{Im} E_j$ against the $\mathcal{PT}$-parameter $\Gamma$; (c) $\text{Re} k_j$ and (b) $\text{Im} k_j$ against $\Gamma$; parametric plots of (e) $(\text{Re} E_j(\Gamma),\text{Im} E_j(\Gamma))$ and (f) $(\text{Re} k_j(\Gamma),\text{Im} k_j(\Gamma))$ in the complex plane. In (e) and (f), the solid circles indicate some of the eigenvalues at $\Gamma=0$, while the open circles indicate those in the limit $\Gamma\to\infty$; the arrows indicate how the eigenvalues evolve as $\Gamma$ is increased from $0$ to $\infty$. } \label{fig:PT.0.spec} \label{fig3} \end{figure} We first consider the discrete eigenvalue spectrum for the simplest case of our Hamiltonian $\varepsilon_0 = \varepsilon_1 = 0$, for which the only non-homogeneous element remaining in the system is the gain/loss pair governed by the $\mathcal{PT}$ parameter $\Gamma$. In this case, the quartic polynomial $P(\Gamma)$ given in Eq.~(\ref{P.lambda}) simplifies to a quadratic in $\lambda^2$, yielding the four solutions \begin{eqnarray} \lambda_{1,4} & = & \pm \frac{1}{\sqrt{2} \Gamma} \sqrt{ 1 - \Gamma^2 + \sqrt{1 - 6 \Gamma^2 + \Gamma^4} } , \nonumber \\ \lambda_{2,3} & = & \pm \frac{1}{\sqrt{2} \Gamma} \sqrt{ 1 - \Gamma^2 - \sqrt{1 - 6 \Gamma^2 + \Gamma^4} } . \label{P.lambda.0.solns} \end{eqnarray} We plot the real and imaginary parts of the resulting energy eigenvalues $E_j = - \lambda_j - \lambda_j^{-1}$ as a function of $\Gamma$ in Fig.~\ref{fig:PT.0.spec} (a) and (b), as well as the real and imaginary parts of the associated wave number $k_j = - i \log \lambda_j$ in Fig.~\ref{fig:PT.0.spec} (c) and (d). Figure~\ref{fig:PT.0.spec} (e) and (f) are parametric plots in the complex $E$ plane and the complex $k$ plane in the range of $\Gamma\geq 0$. In Fig.~\ref{fig:PT.0.spec} (a), (b) and (e), solutions plotted with full curves appear in the first Riemann sheet of the complex $E$ plane while those plotted with a dotted curve appear in the second sheet; the former are the solutions with positive imaginary parts of $k_j$ and the latter are those with negative parts in Fig.~\ref{fig:PT.0.spec} (d) and (f). The first and second Riemann sheets of the complex $E$ plane respectively corresponds to the upper- and lower halves of the complex $k$ plane; a branch cut running from $E=-2$ to $E=2$ connects the two Riemann sheets. We realize from the Siegert boundary condition~\eqref{outgoing.wave.fcn} that every solution on the first Riemann sheet has a positive imaginary part of the wave number and hence its wave function is bounded in $x$ space, while every solution on the second Riemann sheet has a wave function that diverges along the leads of the optical array. We immediately observe one critical difference between Hermitian and $\mathcal{PT}$-symmetric open quantum systems: in the $\mathcal{PT}$-symmetric case, solutions with complex eigenvalues are allowed to appear in the first Riemann sheet, with localized wave functions. This is in stark contrast to the Hermitian case, in which complex-valued solutions are allowed to appear only in the second sheet, where they give rise to delocalized resonance and anti-resonance states. Let us summarize the evolution of the discrete eigenvalues from $\Gamma=0$ to $+\infty$ along the lines of Fig.~\ref{fig:PT.0.spec} (e); the change for negative $\Gamma$ is symmetric as this just amounts to swapping the gain and loss elements. At $\Gamma=0$, one eigenvalue is at the lower edge of the continuum $E=-2$ and another at the upper edge $E=+2$ (solid circles in Fig.~\ref{fig:PT.0.spec} (e) and (f)). There are also two eigenvalues at $E=-\infty$ and at $E=+\infty$ both on the real axis of the second Riemann sheet. As we increase $\Gamma$ from $0$, the eigenvalues at $E=\pm2$ separate off from the band edges and move outward, while the eigenvalues at $E=\pm\infty$ move inwards, all four along the real axis of the second Riemann sheet of the complex energy plane. These eigenstates are referred to as virtual bound states or anti-bound states in the sense that they are real-valued solutions that are spatially delocalized~\cite{Hatano14,HSNP08,GPSS13}. The positive pair of solutions and the negative pair each coalesce at a point on the real axis of the second Riemann sheet at $\Gamma=\bar{\Gamma}_\textrm{A} = \sqrt{2} - 1$, which is a second-order exceptional point. We label this point $\Gamma=\bar{\Gamma}_\textrm{A}$ as an EP2A and the region up until this point $0<\Gamma<\bar{\Gamma}_\textrm{A}$ as Region I. After passing the EP2A, all four eigenvalues become complex on the second sheet, forming two resonance/anti-resonance pairs symmetrically on the positive and negative sides. In the vicinity of the EP2A, the eigenvalues can be expanded in the characteristic form~\cite{Kato,GRHS12} \begin{equation} E_\textrm{A}^- (\Gamma) = - \sqrt{2 \left(1 + \sqrt{2}\right)} \pm \frac{1}{2^{1/4} \sqrt{-1 + \sqrt{2}}} \sqrt{\bar{\Gamma}_\textrm{A}^2 - \Gamma^2} \label{z.A.m.exp} \end{equation} for the resonance/anti-resonance pair with negative real part and \begin{equation} E_\textrm{A}^+ (\Gamma) = \sqrt{2 \left(1 + \sqrt{2} \right)} \pm \frac{1}{2^{1/4} \sqrt{-1 + \sqrt{2}}} \sqrt{\bar{\Gamma}_\textrm{A}^2 - \Gamma^2} \label{z.A.p.exp} \end{equation} for the resonance/anti-resonance pair with positive real part. The derivation of these expressions is detailed in App.~\ref{app:EP.calcs}. We can regard Region I as the $\mathcal{PT}$-unbroken phase and the EP2A at $\Gamma=\bar{\Gamma}_\textrm{A}$ as the $\mathcal{PT}$-symmetry breaking point. As we see in Fig.~\ref{fig:PT.0.spec}, Region I is the only continuous parameter region in which all discrete energy eigenvalues are real. As we continue to increase $\Gamma$, the complex eigenvalues eventually turn around and then return to the real energy axis at $\Gamma=\Gamma_\textrm{RIC}^0 =1$. Although each pair of the energy eigenvalues are degenerate when they reach the real axis, their wave numbers are all distinct as can be seen in Fig.~\ref{fig:PT.0.spec} (c) and (f), and therefore this point represents a degeneracy in the standard sense, \textit{not} a coalescence in the sense of the exceptional point. We refer to these states as resonances in continuum (RICs) for reasons described below in Sec.~\ref{sec3B} (also see Sec. \ref{sec:PT.scattering.RIC}), and we refer to the region $\bar{\Gamma}_\textrm{A}<\Gamma<\Gamma_\textrm{RIC}^0$ as Region II. As we further increase $\Gamma$ such that $\Gamma > \Gamma_\textrm{RIC}^0$, the four solutions pass through the branch cut running from $E=-2$ to $E=2$ and emerge on the first Riemann sheet of the $E$ plane. This is equivalent to the observation that these solutions now have an effective wave number $k_j$ with positive imaginary part as shown in Fig.~\ref{fig:PT.0.spec}(d) and~(f). This implies that the wave function for these states $\psi_j (x) \sim e^{i k_j |x|}$ is localized, although the real part of the eigenvalues lie within the range $-2<\text{Re} E<2$ (see Fig.~\ref{fig:PT.0.spec}(a)). These type of states were recently observed in an experiment based on light transmission through an effective $\mathcal{PT}$-symmetric array of optical fiber loops~\cite{PTOptExpt4} in which they gave rise to a pair of exponentially growing and decaying localized states within the continuum. As $\Gamma$ reaches the value $\Gamma=\bar{\Gamma}_\textrm{B}=1 + \sqrt{2}$, these states coalesce on the imaginary axis of the complex energy plane at $\Gamma=\bar{\Gamma}_\textrm{B}=1 + \sqrt{2}$, two on the positive side and the other two on the negative side, which is another second-order exceptional point (this time occurring in the first Riemann sheet). We refer to this point as an EP2B, because it involves a pair of complex eigenvalues coalescing before becoming another pair of complex eigenvalues; we also refer to the region $\Gamma_\textrm{RIC}^0<\Gamma<\bar{\Gamma}_\textrm{B}$ as Region III. In the vicinity of the EP2B, the eigenvalues can be expanded as \begin{equation} E_\textrm{B}^- (\Gamma) = - i \sqrt{2 \left(-1 + \sqrt{2}\right)} \pm \frac{i}{2^{1/4} \sqrt{1 + \sqrt{2}}} \sqrt{\Gamma^2 - \bar{\Gamma}_\textrm{B}^2} \label{z.B.m.exp} \end{equation} for the two eigenvalues with negative imaginary part, and \begin{equation} E_\textrm{B}^+ (\Gamma) = i \sqrt{2 \left(-1 + \sqrt{2}\right)} \pm \frac{i}{2^{1/4} \sqrt{1 + \sqrt{2}}} \sqrt{\Gamma^2 - \bar{\Gamma}_\textrm{B}^2} \label{z.B.p.exp} \end{equation} for the two with positive imaginary part, similar to the expressions near the EP2A above (see App.~\ref{app:EP.calcs}). After surpassing the EP2B, two eigenvalues move to the origin while the other two go off to $\pm i\infty$, all on the imaginary axis of the first Riemann sheet of the complex energy plane. We refer to this region $\Gamma>\bar{\Gamma}_\textrm{B}$ as Region IV. Since $\Gamma \gg 1$ generally holds here, we may expand the solutions in Eqs.~(\ref{P.lambda.0.solns}) in powers of $1 / \Gamma$ to show that two of these solutions behave as $E_{1,4} \approx \pm i (\Gamma - 2 / \Gamma)$; note that in the limit $\Gamma \rightarrow \infty$, these two solutions asymptotically approach the simple value of the gain or loss component of the $\mathcal{PT}$ parameter $\Gamma$, as is indicated in Fig.~\ref{fig3}(b), where these two solutions (blue curves) approach the two diagonal (red) lines. Indeed, as shown in Appendix~\ref{app:iv.calcs}, the solution $E_1 \sim + i \Gamma$ is localized at site $x = -1$, while the solution $E_4 \sim - i \Gamma$ is localized at the $x = 1$; hence, these two solutions gradually begin to mimic the original uncoupled gain/loss pair for large $\Gamma$. We comment further on the asymptotic localization properties of these states and show that the solutions $E_{2,3}$ behave as quasi-bound states in the continuum in Sec. \ref{sec:PT.outgoing.QBIC}. We emphasize that the physics in Regions I and II could arise in Hermitian open quantum systems as well; explicitly non-Hermitian properties appear in Regions III and IV with the appearance of the RIC and then the complex eigenvalues on the first Riemann sheet. \subsection{Resonant state in continuum (RIC)} \label{sec3B} Here we describe the resonant states in continuum (RICs) at the point $\Gamma = \Gamma_\textrm{RIC}^0$ in greater detail. As summarized above, the eigenvalues here appear on the real axis, embedded in the energy continuum that spans $-2\leq E \leq 2$. At a glance, these states appear similar to bound states in continuum (BICs), which in Hermitian systems appear as resonances with vanishing decay width~\cite{BIC_1929,BIC_1975,ONK06,LonghiEPJ07,TGOP07,BIC_opt_expt1}. However, closer inspection reveals that these states are fundamentally different from BICs. For example, in the (Hermitian) double impurity open quantum system model studied in Ref.~\cite{TGOP07} it is shown that BICs appear as localized states between the two impurities; due to interference, the wave function for the BIC states vanishes identically outside of the impurity region. More generally, BICs often appear for geometrical reasons and hence are strictly confined in some spatial area. The present RICs, however, take the form \begin{equation} \psi_\textrm{RIC} (x) = \left\{ \begin{array}{ll} \displaystyle \mp \frac{1}{\sqrt{6}} e^{\pm i \pi (x+1)/4} & \mbox{for $x \le -1$} \\ \displaystyle \frac{1}{\sqrt{3}} & \mbox{for $x = 0$} \\ \displaystyle \mp \frac{1}{\sqrt{6}} e^{\mp i \pi (x-1)/4} & \mbox{for $x \ge 1$} \end{array} \right. \label{psi.RIC} \end{equation} at $\Gamma = \Gamma_\textrm{RIC}^0$, for the respective eigenvalues $E_\textrm{RIC} = \sqrt{2}$ (with $k_\textrm{RIC} = \pm 3\pi/4$) and $E_\textrm{RIC} = -\sqrt{2}$ (with $k_\textrm{RIC} = \pm \pi/4$). We refer to these points as resonant states in continuum (RICs) in part because the wave function for these states is delocalized as demonstrated in Eq.~(\ref{psi.RIC}), and because these states satisfy the Siegert boundary condition for outgoing waves. We will comment further on this naming convention in Sec.~\ref{sec:PT.scattering.RIC} from the perspective of the scattering wave boundary conditions. We note that these are also equivalent to the spectral singularities that have previously appeared in the literature~\cite{Bender&Wu,Shanley,AliMPRL09,AliMPRA09,AliMPRA11,Longhi09SS,Longhi10SS}. In Sec.~\ref{sec:PT.outgoing.spec.ep1} we will also show that for the case $\epsilon_1 \neq 0$, an RIC may approach the continuum edge and split into a bound state and a virtual bound state. However, we add one further brief comment here to emphasize that the RIC is not an exceptional point, as the eigenstates do not coalesce, having different wave numbers, and hence no fractional power expansion such as Eqs.~(\ref{z.A.m.exp},~\ref{z.A.p.exp}) is possible in this case. \subsection{Quasi-bound states in continuum (QBICs)} \label{sec:PT.outgoing.QBIC} \label{QBIC} The solutions $E_{1,4}$ from Region IV (or either pair of solutions from Region III) correspond to the localized states with complex eigenvalues that were recently experimentally observed in Ref.~\cite{PTOptExpt4}, in which the authors investigated light transmission through an effective $\mathcal{PT}$-symmetric optical lattice realized by periodically switching gain and loss in two optical fiber loops~\cite{PTOptExpt3,PTOptExpt4}. As reported in Ref.~\cite{PTOptExpt4}, when a localized defect is introduced into the effective array (both a shift in $\mathcal{PT}$ pairing strength as well as a phase defect), a pair of localized complex conjugate modes appear within the continuum exhibiting exponential growth and decay in the power spectrum. Indeed, our solutions $E_{1,4} \approx \pm i (\Gamma - 2 / \Gamma)$ in Region IV appear directly in the center of the energy continuum (with $\text{Re} \; E_{1,4} = 0$) and would also give rise to an exponential power output (growth or loss) as $\int_{- \infty}^{\infty} |\psi_{1,4} (x,t)|^2 dx \sim e^{\pm 2 t / \Gamma}$. While the author of Ref.~\cite{Longhi_PT_bound} interprets these type of localized states with complex eigenvalue as examples of an effective BIC based on the fact that the real part of each solution may reside within the continuum, we note that since these states decay or grow exponentially, they would generally behave in a manner that is quite distinct from the usual concept of the BIC. However, the other pair of solutions $E_{2,3}$ also have the real part of the eigenvalue residing within the continuum, yet behave quite differently in Region IV. Indeed we can show that the eigenvalues for these two solutions behave as $E_{2,3} \approx \pm i 2 / \Gamma^2$ such that the complex part of the eigenvalue for these states becomes arbitrarily small for increasing values of $\Gamma$. Hence, these states should behave as effective BICs on time scales satisfying $t < \Gamma^2 / 4$, similar in concept to the quasi-bound state in continuum (QBIC) introduced in Refs.~\cite{NHGP07,GNHP09}, which are resonance states in the continuum with extremely long lifetime (also see Refs.~\cite{QBIC_other1,QBIC_other2}). As shown in App.~\ref{app:iv.calcs}, the respective wave functions for the solutions $E_{2,3}$ are exponentially localized around the site $x = 0$, while those for the solutions $E_{1,4}$ are localized around the PT impurities at $x= \pm 1$; we also show that the localization for the solutions $E_{1,4}$ is very narrow as it scales for $\Gamma \gg 1$ as $1/ \log \Gamma$, while that for the quasi-bound states $E_{2,3}$ is very broad, scaling as $\Gamma^2 / 2$. We believe that these quasi-bound states should be observable, for example, in an experiment similar either to Ref.~\cite{PTOptExpt4} or Ref.~\cite{PT_WGM} in which the $\mathcal{PT}$-symmetric defect potential is modified to mimic our potential appearing in Fig. \ref{fig1}. \subsection{Discrete spectrum for $\varepsilon_1 \neq 0$: EP stability and RIC splitting at localization threshold}\label{sec:PT.outgoing.spec.ep1} \begin{figure} \hspace*{0.05\textwidth} \includegraphics[width=0.4\textwidth]{fig4a} \hfill \includegraphics[width=0.4\textwidth]{fig4b} \hspace*{0.05\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth}(a)\hspace*{0.440\textwidth}(b)\hspace*{0.4\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth} \includegraphics[width=0.4\textwidth]{fig4c} \hfill \includegraphics[width=0.4\textwidth]{fig4d} \hspace*{0.05\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth}(c)\hspace*{0.440\textwidth}(d)\hspace*{0.4\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth} \includegraphics[width=0.4\textwidth]{fig4e} \hfill \includegraphics[width=0.4\textwidth]{fig4f} \hspace*{0.05\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth}(e)\hspace*{0.440\textwidth}(f)\hspace*{0.4\textwidth} \\ \vspace*{\baselineskip} \caption{Discrete eigenvalue spectrum for the case $\varepsilon_0 = 0$. (a) $\text{Im} E_j$ and (b) $\text{Im} k_j$ against the $\mathcal{PT}$-parameter $\Gamma$ for $\epsilon_1 = 0.2$; parametric plots of (c) $(\text{Re} E_j(\Gamma),\text{Im} E_j(\Gamma))$ and (d) $(\text{Re} k_j(\Gamma),\text{Im} k_j(\Gamma))$ in the complex plane for $\epsilon_1=0.2$; (e) $\text{Im} E_j$ and (f) $\text{Im} k_j$ vs. the $\mathcal{PT}$-parameter $\Gamma$ for $\epsilon_1 = 0.6$. In (c) and (d), the solid circles indicate some of the eigenvalues at $\Gamma=0$, while the open circles indicate those in the limit $\Gamma\to\infty$; the arrows indicate how the eigenvalues evolve as $\Gamma$ is increased from $0$ to $\infty$. } \label{fig:PT.02.06.spec} \label{fig4} \end{figure} As we relax the restriction $\varepsilon_1 = 0$, most of the basic features that we observed in the simplest case in Sec.~\ref{sec:PT.outgoing.spec}--C remain, although these become somewhat distorted as shown for $\varepsilon_1 = 0.2$ in Fig.~\ref{fig:PT.02.06.spec}(a)--(d). Here we observe that the EP2As split into two pairs, one pair of which moves outwards and away from the origin on the $\Gamma$-axis while the other pair moves inwards towards the origin (for larger values of $\varepsilon_1$, the latter pair will eventually collide at the origin, before becoming complex-valued). While we obtained compact analytic expressions for the eigenvalue expansions in the vicinity of the EP2As for the case $\varepsilon_1 = 0$, here those expressions become significantly more cumbersome. Nevertheless, following an intuitive generalization of the methods presented App.~\ref{app:EP.calcs} one may still easily obtain numerical versions of Eqs.~(\ref{z.A.p.exp}) and~(\ref{z.A.m.exp}) in the vicinity of the EP2As in the more general case. On the other hand, the EP2Bs that we studied in Sec.~\ref{sec:PT.outgoing.spec} immediately vanish from the spectrum for $\varepsilon_1 \neq 0$, as can be seen in Fig.~\ref{fig:PT.02.06.spec}(b) and the inset of Fig.~\ref{fig:PT.02.06.spec}(a); we can also see these coalescences vanish by comparing Fig.~\ref{fig:PT.0.spec}(e) and (f) (for $\varepsilon_1 = 0$) with Fig.~\ref{fig:PT.02.06.spec}(c) and (d). Indeed, it can be shown that the EP2As also survive the generalization for $\varepsilon_0 \neq 0$, while the EP2Bs do not re-emerge. This seems to indicate that type EP2A exceptional points are more stable against parameter perturbations than those of type EP2B. We note that several experimental studies have been conducted in which an EP2A has been observed by simply passing directly through the exceptional point while varying a single parameter~\cite{PTOptExpt2,PT_WGM,PTCircuitExpt}, but experimental observation of EP2Bs have tended to rely on encircling the exceptional point~\cite{EPexpt1a,EPexpt1c} or mapping out the complex eigenvalue structure around the exceptional point in a two-dimensional parameter space~\cite{EP_Korea} (although Ref.~\cite{EPexpt1b} provides an exception where the EP2B is observed more directly). Theoretically, we believe that the underlying reason for this is that the EP2As seem to vanish from the real parameter space only when they collide with another EP2A (see Ref.~\cite{GRHS12} for another simple example where this occurs), while the EP2Bs do not need to collide with another EP in order to exit into the complex parameter space. The RICs meanwhile are also split apart in the parameter space, appearing at $\pm \Gamma_\textrm{RIC}^+$ and $\pm \Gamma_\textrm{RIC}^-$, given by \begin{equation} \Gamma_\textrm{RIC}^\pm (\varepsilon_1) = \sqrt{1 \pm \left| \varepsilon_1 \right| \sqrt{2 + \varepsilon_1^2}} , \label{Gam.RIC.pm} \end{equation} which we explicitly indicate by red crosses in Fig.~\ref{fig:PT.02.06.spec}(a) and (b). As we increase $\varepsilon_1$ from 0, the RIC wave numbers for $\Gamma_\mathrm{RIC}^\pm$, which we refer to as $k_\mathrm{RIC}^\pm$ in Fig.~\ref{fig:PT.02.06.spec}(d), move away from the $\varepsilon_1 = 0$ values of $\pi / 4$ and $3\pi/4$ (previously shown in Fig.~\ref{fig:PT.0.spec}(f)) and approach $\pi/2$ and the upper band edge $\pi$, respectively. In the latter case, the RIC at $\Gamma_\textrm{RIC}^-$ eventually reaches the upper band edge; we can find the precise value of $\varepsilon_1$ where this occurs from the condition \begin{equation} P \left( \lambda = -1; \varepsilon_1, \Gamma_\textrm{RIC}^- (\varepsilon_1) \right) = 0 , \label{P.RIC.m.trans} \end{equation} which yields $\varepsilon_1 = 1/2$. At this precise point, one of the EP2As also touches the band edge and overlaps with the RIC. Then for $\varepsilon_1 > 1/2$ the RIC exits the continuum and we find that it splits into a bound state and a virtual bound state as shown in Fig.~\ref{fig:PT.02.06.spec}(e) and (f) for the case $\varepsilon_1 = 0.6$. We also show the evolution of the wave numbers $k_\textrm{RIC}^\pm$ in the complex $k$ plane in Fig.~\ref{fig:PT.kRIC.evol} as the system evolves from $\varepsilon_1 = 0$ to $\varepsilon_1 = 1.5$. Here both $k_\textrm{RIC}^\pm$ move rightward on the real axis, excepting that $k_\textrm{RIC}^-$ splits into a bound state/virtual bound state pair beyond $\epsilon_1 = 1/2$. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{fig5} \end{center} \vspace*{-\baselineskip} \caption{Movement of the wave numbers of RICs, $k_\textrm{RIC}^\pm$, in the complex $k$ plane as $\varepsilon_1$ increases from 0 to 1.5. } \label{fig:PT.kRIC.evol} \label{fig5} \end{figure} Another key difference in the $\varepsilon_1 \neq 0$ case (as well as the $\varepsilon_0 \neq 0$ case) is the appearance of one or (at most) two bound states. The bound state properties are discussed in greater detail in Sec. \ref{sec:PT.bound} below. \section{Formal properties of the bound states}\label{sec:PT.bound} We now turn to a closer investigation of the bound states that appear for various parameter ranges of our $\mathcal{PT}$-symmetric prototype model. Our focus here is on the traditional bound states with real energy eigenvalues; however, we will briefly comment on the virtual bound states and the localized states with complex eigenvalue at points for which they are also relevant to our discussion. We first briefly discuss in Sec.~\ref{sec:PT.bound.param} the parameter ranges for which bound states exist in our prototype model for the general case $\varepsilon_0 \neq 0$, $\varepsilon_1 \neq 0$ and comment on the easiest method for finding bound states for a given set of parameter values. Then in Sec.~\ref{sec:PT.bound.PT} we explore the symmetry properties of the wave function for the bound state solutions and verify that they satisfy $\mathcal{PT}$-symmetric boundary conditions; indeed, the virtual bound states (anti-bound states) are also $\mathcal{PT}$-symmetric. Finally, in Sec.~\ref{sec:PT.bound.norm} we investigate the $\mathcal{CPT}$ norm~\cite{BBJ02} for the bound states. \subsection{Existence of bound states for the general case $\varepsilon_0 \neq 0$ and $\varepsilon_1 \neq 0$} \label{sec:PT.bound.param} As we illustrated in Fig.~\ref{fig2}, the bound states of Hermitian tight-binding models can only exist on the real axis of the first Riemann sheet, below and above the energy band (this is also true for our present model). For our non-Hermitian system, such bound states do not appear in the particular case of $\epsilon_0=\epsilon_1=0$ as seen in Fig.~\ref{fig3}(e) and (f); however, these do appear for $\varepsilon_0 \neq 0$ or $\varepsilon_1 \neq 0$. For example, one bound state appears above the upper band edge in the case $\varepsilon_1 > 0$ (within a specific range of $\Gamma$ values); this state is evidenced in Fig.~\ref{fig4}(d) by the portion of the trajectory that lies on the positive side ($\text{Im} k > 0$) of the line $\text{Re} k=+\pi$. In general, we can write the wave number for the bound states as $k_j = i \kappa_j + \delta_j \pi$, with $\kappa_j > 0$ and in which $\delta_j = 0$ for bound states below the lower band edge and $\delta_j = 1$ for bound states above the upper band edge (the same formula also holds true for virtual bound states, except that $\kappa_j < 0$ in that case). Then, for a given set of parameter values, we can test for the presence of bound states within the spectrum by plugging $\lambda_j = e^{i k_j} = \pm e^{- \kappa_j}$ into $P(\lambda_j) = 0$ from Eq.~(\ref{P.lambda}); any real solution that yields $0 < \lambda_j < 1$ represents a bound state below the lower band edge, and any solution with $-1 < \lambda_j < 0$ represents a bound state above the upper band edge. Meanwhile, real solutions satisfying $\lambda_j > 1$ ($\lambda_j < -1$) represent virtual bound states below (above) the lower (upper) band edge. In Fig. \ref{unbroken1}(a, b) we plot numerical solutions of (\ref{P.lambda}) for the representative case $\varepsilon_0 = 0.01$ and $\varepsilon_1 = -1.1$. Here we find that there exist two bound states below the lower band edge in the parameter domain $\Gamma < 0.45$. There are are also a resonance and an anti-resonance in this domain with real part of the energy eigenvalue above the upper band edge. We define the \emph{unbroken} $\mathcal{PT}$-\emph{symmetry region} as any portion of the parameter space for which all of the solutions are real-valued (any combination of bound states and virtual bound states). For example, given $\varepsilon_1 < 0$ we show in Fig. \ref{unbroken1}(c) the range of parameter values that yield real values for all four solutions of the dispersion equation. In the following Sec. \ref{sec:PT.bound.PT}, we explicitly demonstrate that both bound states and virtual bound states satisfy $\mathcal{PT}$-symmetric boundary conditions. \begin{figure} \hspace*{0.05\textwidth} \includegraphics[width=0.4\textwidth]{fig6a} \hfill \includegraphics[width=0.4\textwidth]{fig6b} \hspace*{0.05\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth}(a)\hspace*{0.440\textwidth}(b)\hspace*{0.4\textwidth} \\ \vspace*{\baselineskip} \begin{center} \includegraphics[width=0.4\textwidth]{fig6c} \\ (c) \end{center} \caption{(a) Real part and (b) imaginary part of the roots $\lambda$ of the polynomial $P(\lambda)$ in Eq.(\ref{P.lambda}) for $\epsilon_1=-1.1$ and $\epsilon_0=0.05$. The two real roots $0<\lambda<1$ become complex at $\Gamma=0.45$. (c) a region of unbroken $PT$-symmetry (with all four solutions real-valued) in the parameter space $(\varepsilon_0, \varepsilon_1, \Gamma)$. } \label{unbroken1} \end{figure} \subsection{Verification that real-valued bound states satisfy $\mathcal{PT}$-symmetric boundary conditions}\label{sec:PT.bound.PT} Here we verify that the real-valued bound states discussed in Sec.~\ref{sec:PT.bound.param} automatically satisfy $\mathcal{PT}$-symmetric boundary conditions. To accomplish this, we again write the wave number of an arbitrary bound state in the form $k_j = i \kappa_j + \delta_j \pi$, where $\delta_j = 0$ for a bound state below the lower band edge and $\delta_j = 1$ for a bound state above the upper band edge. With this formalism, the wave equation~(\ref{outgoing.wave.fcn}) for the bound states takes the form \begin{equation} \psi_j (x) = \left\{ \begin{array}{ll} B e^{\kappa x - i \delta_j \pi x} & \mbox{for $x \le -1$,} \\ \psi_0 & \mbox{for $x = 0$,} \\ C e^{- \kappa x + i \delta_j \pi x} & \mbox{for $x \ge 1$.} \end{array} \right. \label{outgoing.wave.bound} \end{equation} In order for $\psi_j$ to be a $\mathcal{PT}$-symmetric eigenstate of our Hamiltonian $H$, it must satisfy the condition $\mathcal{PT} \psi_j = e^{i \theta} \psi_j$. Note that at any point we could introduce the state $\tilde{\psi}_j (x) = e^{i \theta / 2} \psi_j (x)$, which is then an eigenstate of $\mathcal{PT}$ with eigenvalue $1$. Applying the $\mathcal{PT}$ operator to the bound-state wave function we obtain \begin{equation} \mathcal{PT} \psi_j (x) = \left\{ \begin{array}{ll} C^* e^{\kappa x + i \delta_j \pi x} = \left( \frac{C^*}{B} \right) B e^{\kappa x - i \delta_j \pi x} & \mbox{for $x \le -1$,} \\ \psi_0^* & \mbox{for $x = 0$,} \\ B^* e^{- \kappa x - i \delta_j \pi x} = \left( \frac{B^*}{C} \right) C e^{\kappa x + i \delta_j \pi x} & \mbox{for $x \ge 1$,} \end{array} \right. \label{outgoing.wave.bound.PT} \end{equation} where in the last step we have taken advantage of the fact that $-\pi$ is physically equivalent to $\pi$ in the Brilliuon zone structure of our model. If we assume $\psi_0^* = F \psi_0$, then we can write the quantity $C^*/B = B^*/C = F$ as a phase factor $F = e^{i \theta}$. Now, for a $\mathcal{PT}$-symmetric solution of our Hamiltonian, we see that we must augment the outgoing boundary condition in Eq.~(\ref{outgoing.wave.fcn}) with an additional condition $B = e^{- i \theta} C^*$, which gives $ \psi (-1) = e^{-i \theta} \psi (1)^\ast$. We apply this condition to re-write the matrix form of the Schr\"odinger equation in Eq.~(\ref{outgoing.matrix0}) as \begin{equation} \begin{pmatrix} - \lambda + \varepsilon_1 + i \Gamma & -e^{i \theta} & 0 \\ -e^{-i \theta} & \varepsilon_0 & -1 \\ 0 & -1 & - \lambda + \varepsilon_1 - i \Gamma \\ \end{pmatrix} \begin{pmatrix} \psi(1)^\ast \\ \psi (0) \\ \psi (1) \\ \end{pmatrix} = E(\lambda) \begin{pmatrix} \psi (1)^\ast \\ \psi (0) \\ \psi (1) \\ \end{pmatrix} . \label{outgoing.matrix.bound.PT} \end{equation} Taking the determinant of this modified equation yields the exact same condition for discrete eigenvalues $P(\lambda_j) = 0$ as we previously encountered at the beginning of Sec.~\ref{sec:PT.outgoing}. Hence, any real-valued bound state of the Hamiltonian $H$ in Eq.~(\ref{eq-model}) is automatically an eigenstate of the $\mathcal{PT}$-symmetry operator with eigenvalue $e^{i \theta}$. We may obtain the explicit form for the coefficient $B = e^{i \theta} C^*$ from the first and third lines of Eq.~(\ref{outgoing.matrix.bound.PT}). For simplicity here, let us choose $\theta = 0$, such that $B = C^*$. We then find the real and imaginary parts of $B = B_R + i B_I$ as \begin{equation} B_R = \frac{\lambda \left( 1+\lambda \varepsilon_1 \right)} {1+\Gamma^2\lambda^2 + 2 \varepsilon_1 \lambda + \varepsilon_1^2\lambda^2} \psi_0; \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ B_I = \frac{ \Gamma \lambda} {1+\Gamma^2\lambda^2 + 2 \varepsilon_1 \lambda + \varepsilon_1^2 \lambda^2} \psi_0 , \label{bound.PT.coeffs} \end{equation} with $\lambda = e^{ik_j} = e^{-\kappa + i \delta_j \pi}$. As a final comment, we note that according to the argument we have presented here the wave function for the virtual bound states (residing in the second Riemann sheet in the complex energy plane) must also satisfy $\mathcal{PT}$ symmetry. This can immediately be seen by simply replacing the form of the wave number for the bound state $k_j = i \kappa_j + \delta_j \pi$ by that for the virtual bound states $k_j = -i \kappa_j + \delta_j \pi$ (with $\kappa_j > 0$ in either case) and proceeding with the argument as presented above. However, we note that the bound states with complex energies are \emph{not} $\mathcal{PT}$-symmetric, which can be seen by writing the wave number for these states in the form $k_j = \kappa_j + i \Pi_j$ with $\Pi_j < | \pi |$ and noting that we can no longer make the sign replacement in the final line of Eq.~(\ref{outgoing.wave.bound.PT}) as these states reside within the Brillioun zone, rather than at the edges as do the bound states and virtual bound states. \subsection{$\mathcal{CPT}$ norm of the bound states}\label{sec:PT.bound.norm} We now set out to write the appropriate normalization condition for the bound-state wave function that we obtained in Sec.~\ref{sec:PT.bound.PT}. For a non-symmetric Hamiltonian $H$, the completeness relation among its eigenstates $\psi_n(x)$ assumes the form \begin{equation} \sum_{x=-\infty}^\infty \psi_n^L(x)\psi_m^R(x)=\delta_{n,m} \label{e22} \end{equation} where $\psi_n^R(x)$ are right eigenstates and $\psi_n^L(x)$ are left eigenstates. If $H$ is $\mathcal{PT}$-symmetric, we identify $$\psi_n^L(x)= \mathcal{CPT} \psi_n^R(x),$$ where the operator $\mathcal{C}$ \cite{BBJ02} satisfies, in the unbroken region, the three algebraic equations \begin{equation} \left[ \mathcal{C}, \mathcal{PT} \right] = 0, \ \ \ \ \ \ \ \ \ \left[ \mathcal{C}, H \right] = 0, \ \ \ \ \ \ \ \ \ \mathcal{C}^2 = 1 . \label{C.relns} \end{equation} The completeness relation for a $\mathcal{PT}$-symmetric Hamiltonians then reads \begin{equation} \sum_{x=-\infty}^\infty[\mathcal{CPT}\psi_n(x)]\psi_m(x)=\delta_{n,m}. \label{e21} \end{equation} Since the $\mathcal C$ operator commutes with the Hamiltonian $H$, the bound states $\psi_j (x)$ of $H$ in the unbroken region must also be eigenstates of $\mathcal C$ and, because $\mathcal C^2=1$, the resulting eigenvalues must be $C_j = \pm 1$ \cite{BBJ02,Weigert03}. (How the $\mathcal C$ operator might act on the complex-valued solutions in the broken region is a question presently under investigation). In order to assign the correct eigenvalue $C_j$ to each eigenstate $\psi_j (x)$, we first evaluate the so-called $\mathcal{PT}$ norm as \begin{equation} \sum_{x = - \infty}^{\infty} [\mathcal{PT} \psi_j (x)] \psi_k (x) = \sum_{x = - \infty}^{\infty} \psi_j (-x)^\ast \psi_k (x) = (-1)^j \delta_{j,k}. \label{PT.norm} \end{equation} We see that the $\mathcal{PT}$-norm is not positive definite \cite{BBMW02}, with alternating signs $\pm 1$ among the bound states $\psi_j (x)$; hence, we assign the eigenvalues $C_n$ to be $\pm 1$ according to the sign of (\ref{PT.norm}) in order to obtain the positive norm introduced in Eq. (\ref{e21}). In either case, we may write the $\mathcal{PT}$-norm for our bound states given in Eq.~(\ref{outgoing.wave.bound}) as \begin{equation} N_j^{PT} \equiv \sum_{x = - \infty}^{\infty} \psi_j (-x)^\ast \psi_j (x) = \left( B^{*2} + B^2 \right) \sum_{x=1}^\infty e^{- 2 \kappa_j x} + \psi_0^2, \label{PT.norm.N} \end{equation} where $\kappa_j$ is the imaginary component of the wave number from $k_j = i \kappa_j + \delta_j \pi$. For convenience, we introduce $\tilde{B}=B/ \psi_0$; we then obtain \begin{equation} N_j^{PT} =\psi_0^2 \left( 2\frac{\tilde{B}_r^2-\tilde{B}_i^2}{e^{2\kappa_j}-1}+1\right) . \label{XX1} \end{equation} The explicit form of the coefficient $B$ is given by Eq. (\ref{bound.PT.coeffs}), where $\lambda=e^{-\kappa}$ for the bound states below the lower band edge and $\lambda=-e^{-\kappa}$ for the bound states above the upper band edge. In Fig.~\ref{norm} we show the $\mathcal{PT}$-norm for the two bound states previously shown in Figs. \ref{unbroken1}(a,b) that appear below the lower band edge; we see that the $\mathcal{PT}$ norm for one of these states is positive, giving the eigenvalue of the $\mathcal C$ operator as $1$, while for the other the norm is negative, giving the eigenvalue of the $\mathcal C$ operator as $-1$. \begin{figure} \includegraphics[width=0.4\textwidth]{fig7} \caption{Real part of the roots $\lambda$ of the polynomial Eq.(\ref{P.lambda}) and normalization constant $N$ corresponding to two bound states below the lower band edge in the domain $0 < \Gamma < 0.45$ for $\varepsilon_1 = -1.1$ and $\varepsilon_0 = 0.05$. One bound state has a positive norm and the other has a negative norm; the eigenvalues of the $\mathcal C$ operator are therefore $+1$ and $-1$, respectively. A similar picture holds for bound states appearing above the upper band edge.} \label{norm} \end{figure} \section{Scattering states, the resonance in continuum (RIC), and perfect transmission} \label{sec:PT.scattering} \label{sec5} In this section we consider typical ($\mathcal{PT}$-asymmetric) scattering boundary conditions for our $\mathcal{PT}$-symmetric open quantum system; we will consider $\mathcal{PT}$-symmetric scattering solutions later in Sec.~\ref{sec:PT.scattering.2}. In Sec.~\ref{sec:PT.scattering.gen} below we write the generic wave function for the states and then obtain a matrix equation for the relevant scattering coefficients. We then explicitly write the transmission and reflection for the simplest case with a pure imaginary potential, namely $\varepsilon_0 = \varepsilon_1 = 0$. We perform these calculations for both left-to-right and right-to-left scattering and we verify that the transmission and reflectance satisfy established relations that generally hold in $\mathcal{PT}$ systems. In Sec.~\ref{sec:PT.scattering.RIC} we discuss the RIC in detail from the perspective of the scattering wave solutions; here we note that the RIC automatically satisfies the Siegert boundary condition for outgoing waves and argue that these states represent a resonance between the background continuum and the $\mathcal{PT}$-symmetric defect potential. While the RIC is a discrete state embedded in the scattering continuum, we discuss in Sec.~\ref{sec:PT.scattering.2.perfect} outstanding scattering states, namely perfectly transmissive states, meaning that the transmission is unity while the reflectance vanishes. For the simplest case in which the defect potential is pure imaginary with $\varepsilon_0 = \varepsilon_1 = 0$ we demonstrate that by appropriately choosing $\Gamma$ we can obtain perfect transmission at any given value of $k$ in the spectrum; this property approximately holds in the case in which $\epsilon_0$ and $\epsilon_1$ are nonzero but take on small values. We also demonstrate that the appearance of the perfect transmission state at the band edges coincides with a delocalization transition, an observation which may be useful from an engineering perspective as one aims to construct devices with specific transmission properties. We also examine the case in which not only the transmission is unity, but also there is no phase shift as the scattering wave passes through the defect region. In this case the signal is transmitted not only perfectly, but invisibly. \subsection{$\mathcal{PT}$-asymmetric scattering states} \label{sec:PT.scattering.gen} \label{sec5A} Here we find scattering states of our $\mathcal{PT}$-symmetric model~\eqref{eq-model} under the potential~\eqref{eq-potential}. We will limit most of the detailed calculations to the simplest case $\varepsilon_0=\varepsilon_1=0$ but the generalization to the case $\varepsilon_0\neq 0$ and $\varepsilon_1\neq 0$ is straightforward. We first solve the Schr\"{o}dinger equations~\eqref{eq-Sch1}--\eqref{eq-Sch3} for left-to-right scattering by assuming a wave function of the form \begin{align} \label{eq-PTasym} \psi(x)= \begin{cases} A e^{ikx} + B e^{-ikx} & \quad\mbox{for $x\leq-1$}, \\ \psi(0) & \quad\mbox{for $x=0$}, \\ C e^{ikx} &\quad\mbox{for $x\geq 1$}, \end{cases} \end{align} where $k$ resides within the scattering continuum $0\leq k \leq \pi$. The term with the coefficient $A$ gives the incoming wave, while the $B$ term is the reflected wave and the $C$ term is the transmitted wave. Note that its eigenvalue is real: $E=-2\cos k$. We have four parameters $A$, $B$, $C$ and $\psi(0)$ to fix under the three conditions given by the Schr\"odinger equations~\eqref{eq-Sch1}--\eqref{eq-Sch3}. Substituting the ansatz~\eqref{eq-PTasym} into them yields \begin{equation} \begin{pmatrix} \epsilon_1+ i \Gamma + \lambda & -\lambda & 0 \\ 1 & - \left( 1+ \epsilon_0 \lambda + \lambda^2 \right) & \lambda^2 \\ 0 & -1 & 1 + \left( \epsilon_1- i \Gamma \right) \lambda \\ \end{pmatrix} \begin{pmatrix} A \\ \psi(0) \\ C \\ \end{pmatrix} = - \lambda B \begin{pmatrix} 1 + \left( \epsilon_1 + i \Gamma \right) \lambda \\ \lambda \\ 0 \\ \end{pmatrix} , \label{outgoing.matrix} \end{equation} Let us limit ourselves from this point to the simplest case $\varepsilon_0=\varepsilon_1=0$. Although the overall phase of the wave function~\eqref{eq-PTasym} does not affect physical quantities, it turns out that it is easiest to assume $B\in\mathbb{R}$. We can represent the coefficients as \begin{align}\label{eq300} A&=B\frac{i\sin k-\Gamma^2e^{2ik}\cos k}% {(\Gamma+2\sin k) \Gamma\cos k}, \\\label{eq310} C&=B\frac{i\sin k}% {(\Gamma+2\sin k) \Gamma\cos k}, \\\label{eq320} \psi(0)&=B\frac{i(1-i\Gamma e^{ik})\sin k}% {(\Gamma+2\sin k) \Gamma\cos k}, \end{align} and thereby obtain the transmission and reflection amplitudes as \begin{eqnarray}\label{trans.int.l1} t_l & = & \frac{C}{A} = \frac{i \sin k}{i \sin k - \Gamma^2 e^{2ik} \cos k} ,\\ r_l & = & \frac{B}{A} = \frac{\left( \Gamma + 2 \sin k \right) \Gamma \cos k}{i \sin k - \Gamma^2 e^{2ik} \cos k} , \label{trans.int.l2} \end{eqnarray} which are followed by the transmission and reflection probabilities as \begin{align}\label{eq-T} T_{L\to R}&:=\left|t_l\right|^2 =\frac{\sin^2k}{\sin^2k+(\Gamma-2\sin k)(\Gamma+2\sin k)\Gamma^2\cos^2k}, \\\label{eq-R} R_{L\to R}&:=\left|r_l\right|^2 =\frac{(\Gamma+2\sin k)^2\Gamma^2\cos^2 k}{\sin^2k+(\Gamma-2\sin k)(\Gamma+2\sin k)\Gamma^2\cos^2k}. \end{align} Note that $T_{L\to R}+R_{L\to R}$ is, in general, not unity because we have a source and a sink and therefore the particle number is not conserved. Instead, the usual probability conservation relation is replaced by a generalized rule for $\mathcal{PT}$-symmetric systems that relates the left-to-right and right-to-left transmission properties~\cite{Mosta14}, as shown below. Hence, we next consider the right-to-left scattering solution given by the ansatz \begin{align} \label{eq-PTasym1} \psi(x)= \begin{cases} B e^{-ikx} & \quad\mbox{for $x\leq-1$}, \\ \psi(0) & \quad\mbox{for $x=0$}, \\ C e^{ikx} +D e^{-ikx} &\quad\mbox{for $x\geq 1$}, \end{cases} \end{align} in which we again have $0\leq k\leq \pi$, the $D$ term is the incoming wave, the $C$ term is the reflected wave and the $B$ term is the transmission wave. Note again that its eigenvalue is real: $E=-2\cos k$. Again substituting this ansatz into the Schr\"odinger equations~\eqref{eq-Sch1}--\eqref{eq-Sch3}, we obtain \begin{equation} \begin{pmatrix} 1 + \left( \varepsilon_1 + i \Gamma\right) \lambda & -1 & 0 \\ \lambda^2 & - \left( 1 + \varepsilon_0 \lambda + \lambda^2 \right) & 1 \\ 0 & - \lambda & \varepsilon_1 - i \Gamma + \lambda \\ \end{pmatrix} \begin{pmatrix} B \\ \psi(0) \\ D \\ \end{pmatrix} = - \lambda C \begin{pmatrix} 0 \\ \lambda \\ 1 + \left( \varepsilon_1 - i \Gamma \right) \lambda \\ \end{pmatrix} . \label{outgoing.matrix1} \end{equation} After assuming $C\in\mathbb{R}$ this time, we obtain for the case $\varepsilon_0 = \varepsilon_1 = 0$ the coefficients as \begin{align} \label{eq390} D&=C\frac{i\sin k-\Gamma^2e^{2ik}\cos k}% {(\Gamma-2\sin k) \Gamma\cos k}, \\\label{eq400} B&=C\frac{i\sin k}% {(\Gamma-2\sin k) \Gamma\cos k}, \\\label{eq410} \psi(0)&=C\frac{i(1-i\Gamma e^{ik})\sin k}% {(\Gamma-2\sin k) \Gamma\cos k} \end{align} and the amplitudes \begin{eqnarray}\label{trans.int.r1} t_r & = & \frac{B}{D} = \frac{i \sin k}{i \sin k - \Gamma^2 e^{2ik} \cos k} , \\ r_r & = & \frac{C}{D} = \frac{\left( \Gamma - 2 \sin k \right) \Gamma \cos k}{i \sin k - \Gamma^2 e^{2ik} \cos k} , \label{trans.int.r2} \end{eqnarray} which in turn lead to the right-to-left transmission and reflection probabilities \begin{align}\label{eq-T1} T_{R\to L}&:=\left|t_r\right|^2 =\frac{\sin^2k}{\sin^2k+(\Gamma-2\sin k)(\Gamma+2\sin k)\Gamma^2\cos^2k} \equiv T_{L\to R}, \\\label{eq-R1} R_{R\to L}&:=\left|r_r\right|^2 =\frac{(\Gamma-2\sin k)^2\Gamma^2\cos^2 k}{\sin^2k+(\Gamma-2\sin k)(\Gamma+2\sin k)\Gamma^2\cos^2k} \leq R_{L\to R}. \end{align} The left-right asymmetry in \eqref{eq-R1} comes from the fact that the $\mathcal{P}$ and $\mathcal{T}$ symmetries are individually broken in our system. We nonetheless note that we have $t_l = t_r \equiv t$, such that the transmission is equal for the left-to-right and right-to-left scattering; this is a general property of $\mathcal{PT}$ systems. Further we note that the relations \begin{equation} t (-k) = t(k) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ r_l (-k) = r_r(k) \label{PT.t.r} \end{equation} are satisfied, which also hold for $\mathcal{PT}$ systems in general~\cite{Mosta14,Ahmed12}. Finally, the usual probability conservation property for Hermitian systems ($T + R = 1$) is here replaced by \cite{Mosta14,Ahmed12,GCS12} \begin{equation} \left| t (k) \right|^2 \pm \left| r_l (k) r_r (k) \right| = 1 , \label{PT.t.r.cons} \end{equation} which we can easily verify by using Eqs.~\eqref{trans.int.l1},~\eqref{trans.int.l2},~\eqref{trans.int.r1}, and~\eqref{trans.int.r2}. This is a result of the fact that Eqs.~\eqref{eq390}--\eqref{eq410} are obtained from the corresponding Eqs.~\eqref{eq300}--\eqref{eq320} by taking the complex conjugate and flipping the sign of $k$, which is just the $\mathcal{PT}$ operation in the wave-number space. Note that the sign choice appearing in Eq.~\eqref{PT.t.r.cons} is fixed by the sign of the quantity $1 - \left| t (k) \right|^2 = 1 - T$; it can easily be shown that for the present case with $\varepsilon_0 = \varepsilon_1 = 0$ the sign changes in this quantity generally occur at $\arcsin (\pm \Gamma / 2)$. \subsection{Scattering wave perspective on the resonance in continuum (RIC)} \label{sec:PT.scattering.RIC} We explain the resonance in continuum (RIC), which we introduced in Sec.~\ref{sec:PT.outgoing} as a discrete resonant eigenstate, here from the perspective of the scattering states presented in Sec.~\ref{sec5A}. More specifically, we show that the RICs appear as singularities in the transmission and reflection probabilities~\eqref{eq-T}, \eqref{eq-R}, \eqref{eq-T1}, and \eqref{eq-R1} on the real axis. In this sense, it is a discrete state embedded in the scattering continuum. Let us recall that in Sec.~\ref{sec:PT.outgoing} the RICs appear as the points where two solutions meet on the real energy axis; in Fig.~\ref{fig3}(e) for the simplest case $\varepsilon_0 = \varepsilon_1 = 0$, this happens at $E=\pm\sqrt{2}$ for $\Gamma=1$. This is not an exceptional point, however, because each of them has a different (real) value of $k$; namely, $k=\pm\pi/4$, $\pm 3\pi/4$ in the simplest case as is shown in Fig.~\ref{fig3}(c) or (f). We here show that these points indeed have the properties of the resonant states in the sense that they have divergent transmission and reflection probabilities. We plot in Fig.~\ref{140313hatanofig1} the transmission and reflection probabilities both for Eqs.~\eqref{eq-PTasym} and~\eqref{eq-PTasym1} in the case $\varepsilon_0=\varepsilon_1=0$ for positive $0 \le \Gamma \le 5$; they are symmetric with respect to $k=0$. \begin{figure} \begin{minipage}[t]{0.3\textwidth} \vspace{0mm} \begin{center} \includegraphics[width=\textwidth]{fig8a} (a) Transmission probability \end{center} \end{minipage} \hfill \begin{minipage}[t]{0.3\textwidth} \vspace{0mm} \begin{center} \includegraphics[width=\textwidth]{fig8b} (b) Reflection probability to the left \end{center} \end{minipage} \hfill \begin{minipage}[t]{0.3\textwidth} \vspace{0mm} \begin{center} \includegraphics[width=\textwidth]{fig8c} (c) Reflection probability to the right \end{center} \end{minipage} \caption{(a) The left-to-right transmission probability~\eqref{eq-T}, which is equal to the right-to-left transmission~\eqref{eq-T1}, (b) the left-to-left reflection probability~\eqref{eq-R}, and (c) the right-to-right reflection probability~\eqref{eq-R1}, all in the simplest case of $\varepsilon_0=\varepsilon_1=0$. Note that the scale of the vertical axis varies from panel to panel.} \label{140313hatanofig1} \end{figure} All probabilities have poles at $k=\pm\pi/4$ and $k=\pm3\pi/4$ with $\Gamma=1$, namely for the RICs. These are the only instances at which any of the probabilities diverges for real $k$. From this perspective, the poles are indeed discrete states embedded in the scattering continuum. Let us explain why they are \textit{resonant states in continuum}. These poles are associated with the zeros of the coefficient $A$ in the wave function~\eqref{eq-PTasym} and the coefficient $D$ in the wave function~\eqref{eq-PTasym1}, as we can see in Eqs.~\eqref{trans.int.l1},~\eqref{trans.int.l2},~\eqref{trans.int.r1}, and~\eqref{trans.int.r2}. Exactly at these zeros $A=0$ and $D=0$, the wave functions~\eqref{eq-PTasym} and~\eqref{eq-PTasym1} have only outgoing waves, which indeed matches the Siegert boundary condition~\eqref{outgoing.wave.fcn} for resonant states~\cite{Gamow28,Siegert39,Peierls59,Landau77,Ostrovsky05,Kunz06,Kunz08,Sasada08,HSNP08,NH_H_eff}. Therefore, the poles of the transmission and reflectance shown in Fig.~\ref{140313hatanofig1} are resonance poles. In a resonance state with $\mathop{\mathrm{Re}} k>0$, particles are ejected from the central area and flow away towards $x=\pm \infty$; in a state with $\mathop{\mathrm{Re}} k<0$, which is historically called an anti-resonance state, particles flow into the central area and vanish. We indeed saw this from another perspective in Eq.~(\ref{psi.RIC}), in which the RIC took the form of an outgoing plane wave outside of the central scattering region. In the Hermitian scattering problem, the particle number is conserved and hence the Siegert boundary condition can only be satisfied at discrete complex values of $k$ and $E$, which give the location of the resonance poles; it can never be satisfied for real values of $k$ and $E$ in the Hermitian case. Under the Siegert boundary condition, the particles flow away from the scattering region and hence the particle number in this area decays in time, which can be described only by a complex energy eigenvalue~\cite{HSNP08}. Hence, a resonance pole in the Hermitian case is strongly tied to an eigenstate with complex $E$ and $k$. In the $\mathcal{PT}$-symmetric case, however, the particle number is not conserved because we have a source and a sink. Hence it is possible for particles to emerge out of the scattering region (or vanish into it) as a stationary state without changing the particle number in this region. This is indeed what happens with the resonant states in continuum: because $\mathop{\mathrm{Im}}E=0$ and $\mathop{\mathrm{Im}}k=0$ for these poles, the wave function stretches over all space as a stationary state. In this sense, it is a remarkable characteristic of $\mathcal{PT}$-symmetric systems that we have resonances resting within the real energy continuum. This is why we specifically refer to these as resonance states in continuum; these states represent a resonance between the open environment associated with the leads and the $\mathcal{PT}$-symmetric potential. We further note that in the special case $\varepsilon_1=0$ and $\Gamma=1$, one or two RICs appear for any values of $\varepsilon_0$, although again one or the other RIC exits the continuum at $\varepsilon_0 = \pm 1$ and splits into a bound state and a virtual bound state, as previously discussed in Sec. \ref{sec:PT.outgoing.spec.ep1}. \subsection{Perfect transmission, invisibility and applications} \label{sec:PT.scattering.2.perfect} In the present subsection, we turn our attention from the RIC poles to the scattering continuum itself. More specifically, we now study the system parameters that give rise to perfect transmission such that $T=1$ while the reflectance vanishes $R = 0$. We also examine the phase associated with the perfect transmission in order to observe the case in which invisibility is obtained, and we comment on several points below that may be useful from an engineering perspective. The condition to obtain perfect transmission in the left-to-right (right-to-left) case is immediately apparent in Eq.~(\ref{eq-PTasym}) (Eq.~(\ref{eq-PTasym1})), namely $B = 0$ ($C = 0$). This condition is realized whenever the determinant of the $3\times 3$ matrix on the left-hand side of Eq.~(\ref{outgoing.matrix}) (Eq.~(\ref{outgoing.matrix1})) vanishes. Hence, we obtain left-to-right (right-to-left) perfect transmission at a given value $k = \tilde{k}_{\textrm{X},j}$ within the continuum whenever the condition $M_\textrm{R} (\tilde{\lambda}_{\textrm{R},j}) = 0$ ($M_\textrm{L} (\tilde{\lambda}_{\textrm{L},j}) = 0$) is satisfied, where $\tilde{\lambda}_{\textrm{X},j} = e^{i \tilde{k}_{\textrm{X},j}}$ and \begin{equation} M_\textrm{L,R} (\lambda) = \left( \varepsilon_1 \mp i \Gamma \right) \lambda^ + \left( \varepsilon_1^2 + \Gamma^2 + \varepsilon_0 \left( \epsilon_1 \mp i \Gamma \right) \right) \lambda^ + \varepsilon_0 \left( 1 + \varepsilon_1^2 + \Gamma^2 \right) \lambda^ + \left( \varepsilon_1^2 + \Gamma^2 + \varepsilon_0 \left( \epsilon_1 \pm i \Gamma \right) \right) \lambd + \varepsilon_1 \pm i \Gamma = 0 . \label{perf.trans.cond} \end{equation} Since $M_\textrm{X} (\lambda)$ is a quartic polynomial for either case $\textrm{X}=\textrm{L,R}$, for a given set of parameter values we generally obtain four values $\tilde{k}_{\textrm{L},j}$ for left-to-right and four values $\tilde{k}_{\textrm{R},j}$ for right-to-left perfect transmission. However, some of these solutions might turn out to be complex-valued and hence must be discarded. In the simplest case $\varepsilon_0 = \varepsilon_1 = 0$, Eq.~(\ref{perf.trans.cond}) for the left-to-right transmission gives the factorized form $M_\textrm{L} (\lambda) = i \Gamma (1 + i \lambda)(1 - i \lambda) (1 - i \Gamma \lambda - \lambda^2)$. The two linear factors give two $\Gamma$-independent solutions as $\tilde{k}_{\textrm{L},1} = \pi / 2$ and $\tilde{k}_{\textrm{L},2} = - \pi / 2$ with energy $E = 0$ appearing directly at the center of the transmission band. Meanwhile the quadratic factor gives the solutions \begin{equation} \tilde{k}_{\textrm{L},\{3,4\} } = \left\{ \begin{array}{ll} \cos^{-1} \left( \pm \sqrt{4 - \Gamma^2} / 2 \right) & \ \ \ \ \mbox{for $-2 < \Gamma < 0$,} \\ \cos^{-1} \left( \pm \sqrt{4 - \Gamma^2} / 2 \right) - \pi & \ \ \ \ \mbox{for $0 < \Gamma < 2$.} \end{array} \right. \label{perf.trans.gamma.L} \end{equation} These four solutions are plotted as the full curves in Fig.~\ref{fig:PT.perf.trans}(a); \begin{figure} \hspace*{0.05\textwidth} \includegraphics[width=0.4\textwidth]{fig9a} \hfill \includegraphics[width=0.4\textwidth]{fig9b} \hspace*{0.05\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth}(a)\hspace*{0.440\textwidth}(b)\hspace*{0.4\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth} \includegraphics[width=0.4\textwidth]{fig9c} \hfill \includegraphics[width=0.4\textwidth]{fig9d} \hspace*{0.05\textwidth} \\ \vspace*{\baselineskip} \hspace*{0.05\textwidth}(c)\hspace*{0.440\textwidth}(d)\hspace*{0.4\textwidth} \\ \vspace*{\baselineskip} \caption{The wave numbers for perfect transmission (full curves) as a function of $\Gamma$ for fixed values of $\varepsilon_0$ and $\varepsilon_1$; the wave numbers $\text{Re} \ k_j$ for discrete eigenvalues are also shown in the background with dotted curves, except for bound states which appear as full lines at either $k = 0$ (lower band edge) or $\pm \pi$ (upper band edge). We used the following parameter values: (a) left-to-right transmission for $\varepsilon_0 = \varepsilon_1 = 0$, (b) right-to-left transmission for $\varepsilon_0 = \varepsilon_1 = 0$, (c) left-to-right transmission for $\varepsilon_0 = 0, \varepsilon_1 = 0.08$, and (d) left-to-right transmission for $\varepsilon_0 = -0.1, \varepsilon_1 = 0$. } \label{fig:PT.perf.trans} \label{fig7} \end{figure} note that the real parts of the wave numbers for the eigenvalues $\text{Re} k_j$ are also plotted as the dotted curves, for reasons that will be described below. From an applications perspective, we note for this case we can obtain perfect transmission at any given value of $k \in [-\pi, \pi]$ by choosing the appropriate value of $\Gamma$. So far we have only considered the intensity of the transmitted signal, but not the associated phase information. While perfect transmission does maintain the signal intensity, if there is a phase shift between two leads an observer may still be able to detect the presence of impurities in the scattering region by performing a time-of-flight measurement~\cite{LonghiPRA10}. We can investigate the phase shift for the perfect transmission states by calculating the eigenvector $( A, \psi(0), C )^\textrm{T}$ on the left-hand side of the singular matrix~(\ref{outgoing.matrix}) for our four perfect transmission solutions. For the case of the two perfect transmission values $\tilde{k}_{\textrm{L},1} = \pi / 2$ ($\tilde{\lambda}_{\textrm{L},1} = i$) and $\tilde{k}_{\textrm{L},2} = - \pi / 2$ ($\tilde{\lambda}_{\textrm{L},2} = - i$) we have \begin{equation} \begin{pmatrix} i \left( \Gamma \pm 1 \right) & \mp i & 0 \\ 1 & 0 & -1 \\ 0 & -1 & 1 \pm \Gamma \\ \end{pmatrix} \begin{pmatrix} A \\ \psi(0) \\ C \\ \end{pmatrix} = 0 , \label{perf.trans.vec} \end{equation} which yields $A = C$ without any phase shift between the leads and \begin{equation} \psi(0) = ( 1 \pm \Gamma ) A . \label{perf.trans.psi.0} \end{equation} Left-to-right invisibility holds for these two cases; indeed, as we discuss below right-to-left invisibility will also hold for $k = \pm \pi/2$. As a further point, notice from Eq.~(\ref{perf.trans.psi.0}) that we can arbitrarily adjust $\psi(0)$ by tuning the parameter $\Gamma$ and still maintain perfect transmission; for the special case $\Gamma = \mp 1$ we can even eliminate it entirely. Performing a similar calculation for the $\Gamma$-dependent perfect transmission states $\tilde{k}_{\textrm{L},3}$ and $\tilde{k}_{\textrm{L},4}$ reported in Eq.~(\ref{perf.trans.gamma.L}) reveals that a phase shift is always present for these states, apart from the special case $k = \pm \pi/2$ for a specific value of $\Gamma$. To summarize, invisibility can only be achieved at $k = \pm \pi/2$ for arbitrary $\Gamma$. For right-to-left transmission in the simplest case $\varepsilon_0 = \varepsilon_1 = 0$, we obtain from the lower sign in Eq.~(\ref{perf.trans.cond}) the factorized equation $M_\textrm{R} (\lambda) = - i \Gamma (1 + i \lambda)(1 - i \lambda) (1 + i \Gamma \lambda - \lambda^2)$. Hence for the right-to-left transmission we again obtain perfect transmission at $\tilde{k}_{\textrm{R},1} = \pi/2$ and $\tilde{k}_{\textrm{R},2} = - \pi/2$, but for the $\Gamma$-dependent perfect transmission states we have instead \begin{equation} \tilde{k}_{\textrm{R},\{3,4\} } = \left\{ \begin{array}{ll} \cos^{-1} \left( \pm \sqrt{4 - \Gamma^2} / 2 \right) - \pi & \ \ \ \ \mbox{for $-2 < \Gamma < 0$,} \\ \cos^{-1} \left( \pm \sqrt{4 - \Gamma^2} / 2 \right) & \ \ \ \ \mbox{for $0 < \Gamma < 2$,} \end{array} \right. \label{perf.trans.gamma.R} \end{equation} as plotted in Fig.~\ref{fig:PT.perf.trans}(b). Notice that the expressions~\eqref{perf.trans.gamma.R} are just reversed from the left-to-right case reported in Eq.~(\ref{perf.trans.gamma.L}); this is a natural result for a $\mathcal{PT}$-symmetric system since switching our scattering orientation amounts to swapping the position of the gain and loss impurities. Again in this case we can evaluate the scattering wave coefficients to find that $D = B$ and hence we have no phase shift for the perfect transmission states $\tilde{k}_{\textrm{R}, \{1,2\} } = \pm \pi/2$; this shows that these states yield bi-directional invisibility. Here the response at the site $0$ is given by \begin{equation} \psi(0) = (1 \mp \Gamma) D . \label{perf.trans.psi.0.R} \end{equation} If we choose, say $\Gamma = -1$, the invisible right-to-left wave has a finite value of $\psi(0)$ at $k = + \pi/2$, but from Eq. (\ref{perf.trans.psi.0}) the left-to-right transmission gives no response at the same frequency. Hence the site $0$ in this scenario can act as a kind of switch that responds to the otherwise invisible signal transmission in one direction but ignores signals from the other direction. In Fig.~\ref{fig:PT.perf.trans}(c) and (d) we plot the left-to-right perfect transmission for more general parameter values with $\varepsilon_0 = 0, \varepsilon_1 = 0.08$ and $\varepsilon_0 = -0.1, \varepsilon_1 = 0$, respectively. We note that the range of the continuum that is capable of supporting perfect transmission has been reduced slightly in comparison to the `cleaner' case in Figs.~\ref{fig:PT.perf.trans}(a) and (b) for these relatively small values of the impurity energies. For larger impurity values the range of coverage for perfect transmission is further reduced. We also note the following connection between the perfect transmission scattering states and the bound states of the discrete spectrum. Notice that in the background of Fig.~\ref{fig:PT.perf.trans}(a)--(d) we have plotted the real part of the wave numbers $\text{Re} k_j$ for the discrete eigenvalues as the dotted curves. However, the wave numbers for the bound states are marked with full lines appearing at either $k = 0$ (lower continuum edge) or $k = \pm \pi$ (upper continuum edge). Focusing on Fig.~\ref{fig:PT.perf.trans}(c) and (d), we note that the appearance of the perfect transmission scattering state exactly coincides with the appearance (or disappearance) of a bound state at the edges of the Brillouin zone $k=0$ or $k=\pm \pi$; see the footnote~\footnote{Note that we can explicitly show that the appearance of the perfect transmission state and the bound state delocalization do indeed occur at the same point in parameter space in the following manner. On one hand, we can exactly locate the delocalization transition at the band edges $\lambda = \pm 1$ by plugging into the dispersion polynomial Eq. (\ref{P.lambda}) as $P(\lambda = \pm 1) = 0$; on the other hand we can find the appearance of the perfect transmission states at the band edges by plugging into Eq. (\ref{perf.trans.cond}) as $M_\textrm{L} (\pm 1) = M_\textrm{R} (\pm 1) = 0$, which can be analytically solved in either case. Doing so yields the exact same points in parameter space.}. This is a rather intuitive result if we consider the behavior of the bound-state wave function at the delocalization transition, where it brushes against one of the band edges before becoming a virtual bound state in the second Riemann sheet. On one side of this transition we have a bound state with a wave function that is localized in the defect region, while on the other side we have a virtual bound state with a wave function that diverges into the leads (one can even view this state as being localized in the leads \cite{Hatano14}). At precisely the transition between these two states, we have a wave function that spreads out evenly throughout the chain, which here supports perfect transmission from one lead to the other. This explains the connection between the delocalization transition and the appearance of the perfect transmission state at either edge of the scattering continuum, which may provide an intuitive approach to designing systems with desired transport properties. We notice a somewhat similar transition appears in Ref.~\cite{VHIC14}. \section{$\mathcal{PT}$-symmetric scattering wave solutions} \label{sec:PT.scattering.2} \label{sec6} This section is devoted to more mathematical interest. We here investigate the $\mathcal{PT}$-symmetric properties of the scattering wave solutions. Previously we demonstrated in Sec.~\ref{sec3} and Sec.~\ref{sec:PT.bound} that the discrete states satisfy $\mathcal{PT}$-symmetric boundary conditions in certain regions of parameter space; however, we note that despite the fact that the eigenvalue $E=-2\cos k$ associated with the scattering states is always real, in Sec.~\ref{sec5} these states generically satisfied $\mathcal{PT}$-asymmetric boundary conditions. This motivates us to investigate whether or not the scattering states themselves can obey $\mathcal{PT}$-symmetric boundary conditions. We will show in Secs.~\ref{sec:PT.scattering.2.sym} and~\ref{app:Jost} that we indeed always have $\mathcal{PT}$-symmetric scattering states. In the former, we will $\mathcal{PT}$-symmetrize the scattering wave function previously obtained in Sec.~\ref{sec:PT.scattering}. In the latter, we will present a more direct and systematic way of finding a $\mathcal{PT}$-symmetric scattering wave with the use of the Jost solutions. We will then introduce the concept of the $\mathcal{PT}$ current in Sec.~\ref{sec:PT-current}. \subsection{$\mathcal{PT}$-symmetrization of the scattering wave solutions}\label{sec:PT.scattering.2.sym} We can construct a $\mathcal{PT}$-symmetric solution out of an asymmetric solution $\psi(x)$ by writing \begin{align}\label{wave.PT.gen} \psi_\mathcal{PT}(x)&=\psi(x)+\mathcal{PT}\psi(x). \end{align} Let us apply this strategy to the scattering solution~\eqref{eq-PTasym}. The $\mathcal{PT}$ transformation results in the changes $i\to-i$ and $x\to-x$ as well as the complex conjugation of coefficients $A$ and $C$; we recall that $B$ is real by assumption in the relations~\eqref{eq300}--\eqref{eq320} for the simplest case $\varepsilon_0=\varepsilon_1=0$. Note that the real-valued wave number $k$ (the momentum) is invariant under the action of $\mathcal{PT}$, since both $\mathcal{P}$ and $\mathcal{T}$ result in a sign flip separately for this quantity. Substituting our result into Eq.~(\ref{wave.PT.gen}) we obtain a $\mathcal{PT}$-symmetric solution for $0\leq k\leq \pi$ given by \begin{align} \psi_\mathcal{PT}^{(\mathrm{L})}(x)= \begin{cases} (A+ C^\ast)e^{ikx}+B e^{-ikx} & \quad\mbox{for $x\leq-1$}, \\ \psi(0) + {\psi(0)}^\ast & \quad\mbox{for $x=0$}, \\ (A^\ast + C) e^{ikx} + B e^{-ikx} &\quad\mbox{for $x\geq 1$} \end{cases} \end{align} with the relations~\eqref{eq300}--\eqref{eq320} producing \begin{align} A+C^\ast&=-B\frac{e^{2ik}\Gamma}{2\sin k+\Gamma}, \\ \psi(0)+{\psi(0)}^\ast&=B\frac{2\sin k}{2\sin k+\Gamma} \end{align} for the simplest case $\varepsilon_0=\varepsilon_1=0$. Note that the component $\psi_\mathcal{PT}^{(\mathrm{L})}(0)$ is real. If we choose the normalization as $\phi_\mathcal{PT}(0)^{(\mathrm{L})}=1$ we obtain \begin{align}\label{eq-PTsol} \phi_\mathcal{PT}^{(\mathrm{L})}(x)= \begin{cases} \displaystyle -\frac{\Gamma}{2\sin k}e^{ik(x+2)}+\left(1 + \frac{\Gamma}{2\sin k}\right) e^{-ikx} & \quad\mbox{for $x\leq-1$}, \\ 1 & \quad\mbox{for $x=0$}, \\ \displaystyle -\frac{\Gamma}{2\sin k}e^{ik(x-2)} +\left(1 + \frac{\Gamma}{2\sin k}\right) e^{-ikx} &\quad\mbox{for $x\geq 1$} \end{cases} \end{align} for $0\leq k\leq \pi$, as our first $\mathcal{PT}$-symmetric solution. We can instead start from the right-to-left scattering wave Eq.~\eqref{eq-PTasym1}. Following a similar procedure as above, we obtain \begin{align} \psi_\mathcal{PT}^{(\mathrm{R})}(x)= \begin{cases} Ce^{ikx}+ (B+D^\ast) e^{-ikx} & \quad\mbox{for $x\leq-1$}, \\ \psi(0) + {\psi(0)}^\ast & \quad\mbox{for $x=0$}, \\ C e^{ikx} + (B^\ast +D) e^{-ikx} &\quad\mbox{for $x\geq 1$} \end{cases} \end{align} for $0\leq k\leq \pi$, where we used the assumption $C\in\mathbb{R}$ in this case. In the simplest case $\varepsilon_0=\varepsilon_1=0$ we have \begin{align} B+D^\ast&=C\frac{e^{-2ik}\Gamma}{2\sin k-\Gamma}, \\ \psi(0)+{\psi(0)}^\ast&=C\frac{2\sin k}{2\sin k-\Gamma}. \end{align} After again choosing our normalization such that $\phi_\mathcal{PT}^{(\mathrm{R})}(0)=1$ we obtain \begin{align}\label{eq-PTsol2} \phi_\mathcal{PT}^{(\mathrm{R})}(x)= \begin{cases} \displaystyle \left(1 - \frac{\Gamma}{2\sin k}\right)e^{ikx}+\frac{\Gamma}{2\sin k} e^{-ik(x+2)} & \quad\mbox{for $x\leq-1$}, \\ 1 & \quad\mbox{for $x=0$}, \\ \displaystyle \left(1 - \frac{\Gamma}{2\sin k}\right)e^{ikx}+\frac{\Gamma}{2\sin k} e^{-ik(x-2)} &\quad\mbox{for $x\geq 1$} \end{cases} \end{align} for $0\leq k\leq \pi$, as our second $\mathcal{PT}$-symmetric solution, which is indeed obtained simply by flipping the sign of $k$ in the first solution~\eqref{eq-PTsol}: We therefore conclude that the solution~\eqref{eq-PTsol} holds for the entire first Brillouin zone $-\pi<k\leq\pi$. \subsection{Jost solutions}\label{app:Jost} The solutions in the previous subsection~\ref{sec:PT.scattering.2.sym} seem somewhat strange because of the asymmetry with respect to the inversion of $k$. In this subsection we obtain an alternative $\mathcal{PT}$-symmetric solution by directly finding the Jost solutions of the original Schr\"{o}dinger equation~\eqref{eq-Sch1}--\eqref{eq-Sch3}. We will find a solution of a more symmetric form, which is indeed a superposition of the solutions~\eqref{eq-PTsol} and~\eqref{eq-PTsol2}. We again restrict ourselves to the simplest case $\varepsilon_0=\varepsilon_1=0$. Let us briefly overview the way of constructing a scattering wave solution out of the Jost solutions. When the potential vanishes far away from the origin, we can assume a solution of the form of a plane wave there. The solutions thus defined under the boundary conditions~\cite{Newton60,Newton82} \begin{align}\label{eq-Jost} f_\pm(x)=\alpha e^{\pm ikx}\quad\mbox{as $x\to\infty$} \end{align} with an appropriate constant $\alpha$ are called the Jost solutions. They, however, do not generally satisfy boundary conditions at the origin. We therefore take a superposition of the two Jost solutions so that it may satisfy the boundary conditions at the origin, which yields a scattering wave solution. Since the potential vanishes for $x\geq2$ in the present case, we can use Eq.~\eqref{eq-Jost} in the region $x\geq1$. Let us now construct a $\mathcal{PT}$-symmetric Jost solutions. Since $\mathcal{PT} e^{ikx}=e^{ikx}$, we set \begin{align}\label{eq-Jost1} f_\pm(x)=\alpha^\ast e^{\pm ikx}\quad\mbox{as $x\to-\infty$}, \end{align} which we can use in the region $x\leq -1$. These Jost solutions, however, do not satisfy the boundary conditions at $x=0$. Indeed, the Schr\"{o}dinger equation for $x=1$, namely Eq.~\eqref{eq-Sch3}, gives \begin{align}\label{eq640} f_\pm(0)&=-f_\pm(2)+(-E(k) -i\Gamma) f_\pm(1) \nonumber\\ &=\alpha[-e^{\pm 2ik}+(e^{ik}+e^{-ik}-i\Gamma)e^{\pm ik}] \nonumber\\ &=\alpha(1-i\Gamma e^{\pm ik}) \end{align} for $\varepsilon_1=0$, while the Schr\"{o}dinger equation for $x=-1$, Eq.~\eqref{eq-Sch1}, gives \begin{align}\label{eq650} f_\pm(0)&=-f_\pm(-2)+(-E(k) +i\Gamma) f_\pm(-1) \nonumber\\ &=\alpha^\ast[-e^{\mp 2ik}+(e^{ik}+e^{-ik}+i\Gamma)e^{\mp ik}] \nonumber\\ &=\alpha^\ast(1+i\Gamma e^{\mp ik}). \end{align} We can make Eqs.~\eqref{eq640} and~\eqref{eq650} continuous at the origin by choosing \begin{align} \alpha=1+i\Gamma e^{\mp ik}, \end{align} which makes $f_\pm(0)=1\pm 2\Gamma\sin k+\Gamma^2$, but the resulting solution $f_\pm(x)$ does not satisfy the Schr\"{o}dinger equation for $x=0$, Eq.~\eqref{eq-Sch2} (even after setting $\varepsilon_0=0$ for the present case). The physical solution that satisfies the Schr\"{o}dinger equation~\eqref{eq-Sch2} must be given by a linear combination of the two Jost solutions: \begin{align}\label{eq-physsol} \phi_\mathcal{PT}(x)=A_+f_+(x)+A_-f_-(x) \end{align} with two superposing coefficients $A_\pm$, which we set so that $\phi_\mathcal{PT}(x)$ may satisfy Eq.~\eqref{eq-Sch2}. Let us define the Jost \emph{functions} (not to be confused with the Jost solutions) by \begin{align} F_\pm(k)&=-f_\pm(-1)-f_\pm(1)-E(k)f_\pm(0) \nonumber\\ &=-(1-i\Gamma e^{\pm ik})e^{\mp ik}-(1+i\Gamma e^{\mp ik})e^{\pm ik} +(e^{ik}+e^{-ik})(1-i\Gamma e^{\pm ik})(1+i\Gamma e^{\mp ik}) \nonumber\\ &=2\Gamma(\Gamma\pm 2\sin k)\cos k. \end{align} The Schr\"{o}dinger equation~\eqref{eq-Sch2} is then reduced to \begin{align} A_+F_+(k)+A_-F_-(k)=0, \end{align} which fixes the ratio between $A_+$ and $A_-$. By normalizing the function by $\phi_\mathcal{PT}(0)=1$, we can express the final result as follows: \begin{align} \phi_\mathcal{PT}(x)=\begin{cases} \displaystyle \left(1-\frac{\Gamma}{2\sin k}\right)\frac{1-i\Gamma e^{ik}}{2}e^{ikx} +\left(1+\frac{\Gamma}{2\sin k}\right)\frac{1-i\Gamma e^{-ik}}{2}e^{-ikx} &\quad\mbox{for $x\leq-1$}, \\ 1 & \quad\mbox{for $x=0$}, \\ \displaystyle \left(1-\frac{\Gamma}{2\sin k}\right)\frac{1+i\Gamma e^{-ik}}{2}e^{ikx} +\left(1+\frac{\Gamma}{2\sin k}\right)\frac{1+i\Gamma e^{ik}}{2}e^{-ikx} &\quad\mbox{for $x\geq1$}. \end{cases} \end{align} This is indeed a linear combination of Eqs.~\eqref{eq-PTsol} and~\eqref{eq-PTsol2} in the form \begin{align} \phi_\mathcal{PT}(x)=\frac{1}{2}\left(1+\frac{\Gamma}{2\sin k}\right)\psi_\mathcal{PT}^{(R)}(x) +\frac{1}{2}\left(1-\frac{\Gamma}{2\sin k}\right)\psi_\mathcal{PT}^{(L)}(x) ; \end{align} however, the domain extends over the entire first Brillouin zone $-\pi<k\leq\pi$. \subsection{$\mathcal{PT}$-current} \label{sec:PT-current} Because we have a source and a sink, the discrete states generally do not conserve the particle number and the scattering states do not conserve the current. We here, however, introduce a current that is conserved for a $\mathcal{PT}$-symmetric scattering state, which we refer to as the $\mathcal{PT}$-current. The standard current is defined in a one-dimensional continuous space as \begin{align} j&=\text{Re}\left(\psi(x)^\ast \hat{p}\psi(x)\right) =\frac{1}{2i}\left(\psi(x)^\ast \frac{d}{dx}\psi(x)-\psi(x)\frac{d}{dx}\psi(x)^\ast\right), \end{align} which would normally be independent of $x$, but this does not generally hold true in a $\mathcal{PT}$-symmetric non-Hermitian system. We here instead introduce the $\mathcal{PT}$-current \begin{align}\label{eq:PT-current} j_\mathcal{PT}=\frac{1}{2}\left(\psi(x)^\ast\frac{d}{dx}\psi(-x)-\psi(-x)\frac{d}{dx}\psi(x)^\ast\right). \end{align} We can prove that the $\mathcal{PT}$-current is independent of $x$ for an eigenfunction $\psi(x)$ with real eigenvalue $E$ of the Hamiltonian \begin{align} H_\mathcal{PT}=-\frac{d}{dx^2}+V_\mathcal{PT}(x) \end{align} with $\mathcal{PT} V_\mathcal{PT}(x)=V_\mathcal{PT}(-x)^\ast=V_\mathcal{PT}(x)$. Computing the $x$ derivative of the $\mathcal{PT}$-current~\eqref{eq:PT-current}, we indeed have \begin{align} \frac{d}{dx}j_\mathcal{PT}(x)&=\frac{1}{2i}\left(\psi(x)^\ast\frac{d^2}{dx^2}\psi(-x)-\psi(-x)\frac{d^2}{dx^2}\psi(x)^\ast\right) \nonumber\\ &=\frac{1}{2i}\left[\psi(x)^\ast\left(V_\mathcal{PT}(-x)-E\right)\psi(-x)-\psi(-x)\left(V_\mathcal{PT}(x)^\ast-E\right)\psi(x)^\ast\right]=0. \end{align} Notice that it vanishes identically for a $\mathcal{PT}$-symmetric eigenfunction, because we then have $\psi(x)^\ast=\psi(-x)$ in Eq. (\ref{eq:PT-current}). In the discretized space of the one-dimensional tight-binding model, the standard current is given by \begin{align}\label{eq:discretized-current} j&=\frac{1}{2i}\left[\psi(x)^\ast\left(\psi(x+1)-\psi(x)\right)-\psi(x)\left(\psi(x+1)^\ast-\psi(x)^\ast\right)\right] \nonumber\\ &=\frac{1}{2i}\left(\psi(x)^\ast\psi(x+1)-\psi(x)\psi(x+1)^\ast\right), \end{align} while the $\mathcal{PT}$-current is given by \begin{align}\label{eq:discretized-PT-current} j_\mathcal{PT}&=\frac{1}{2}\left(\psi(x)^\ast\psi(-x-1)-\psi(-x)\psi(x+1)^\ast\right). \end{align} For a $\mathcal{PT}$-asymmetric left-to-right scattering state of the form~\eqref{eq-PTasym}, the (traditional) current~\eqref{eq:discretized-current} is \begin{align} j&=\sin k\times \begin{cases} |A|^2+|B|^2 &\quad\mbox{for $x\leq -2$,}\\ |C|^2 & \quad\mbox{for $x\geq 1$,} \end{cases} \end{align} which are generally not equal along the two leads as we showed in Sec.~\ref{sec5}. The $\mathcal{PT}$-current~\eqref{eq:discretized-PT-current}, on the other hand, is \begin{align}\label{eq830} j_\mathcal{PT}&=\sin k\times \begin{cases} -iB^\ast C &\quad\mbox{for $x\leq -2$,}\\ iBC^\ast &\quad\mbox{for $x\geq 1$,} \end{cases} \\ &=\frac{|B|^2\sin^2k}{(\Gamma+2\sin k)\Gamma\cos k}, \end{align} which is conserved on both sides of the scattering region. We plot the $\mathcal{PT}$-current in Fig.~\ref{fig8}; \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig10} \caption{The $\mathcal{PT}$-current~\eqref{eq830} without the factor $|B|^2$ in the simplest case $\varepsilon_0=\varepsilon_1=0$.} \label{fig8} \end{figure} the singularities appearing here correspond to the left-to-right perfect transmission states previously shown in Fig.~\ref{fig7}(a). \section{Conclusion} \label{sec:conclusion} In this paper we have combined two types of non-Hermitian systems, open quantum systems and $\mathcal{PT}$-symmetric systems, in order to study a simple example of a $\mathcal{PT}$-symmetric open quantum system. This system took the form of a tight-binding model with a $\mathcal{PT}$-symmetric defect potential as shown in Fig. \ref{fig1}, which might be physically realized as an optical lattice array or approximated in a variety of $\mathcal{PT}$ systems with a defect scattering center \cite{PTOptExpt4,PT_WGM,RDM05}. We used this model to illustrate a number of quite general features of $\mathcal{PT}$-symmetric open quantum systems, including properties of the discrete spectrum as well as the scattering states. In Sec.~\ref{sec:PT.outgoing} we studied the resonance state in continuum (RIC) as a feature of the discrete spectrum, illustrating that it represents a resonance state appearing directly within the conduction band (scattering continuum) as it crosses from the second Riemann sheet of the complex energy plane into the first sheet. In Sec.~\ref{sec3B} we showed this state takes the form of an outgoing plane wave from the impurity region into the leads. As a result, this feature also appears in the scattering continuum and hence we returned to it in Sec.~\ref{sec:PT.scattering} in which we studied the scattering properties of the system; in Sec.~\ref{sec:PT.scattering.RIC} we described that the RIC represents a resonance between the open environment associated with the leads of the system and the $\mathcal{PT}$-symmetric defect potential. In this sense, the RIC can be viewed as a quite particular feature of $\mathcal{PT}$-symmetric open quantum systems. We also showed in Sec.~\ref{sec:PT.outgoing.spec.ep1} that the RIC may exit the conduction band as we modify the system parameters by splitting into a bound state and a virtual bound state at the edge of the continuum; we believe that this effect should be experimentally observable, perhaps in a modified version of the experiments presented in Ref.~\cite{PTOptExpt4} or Ref. \cite{PT_WGM}. We also point out that it has been previously illustrated that a bound state (or virtual bound state) appearing near the edge of the continuum should generally result in an enhancement of the long-time non-exponential decay that is known to appear in open quantum systems~\cite{GPSS13}. Since both a bound state and a virtual bound state appear near the band edge in the case when the RIC exits the continuum, this may result in an even greater enhancement of the non-exponential effect that could also offer an interesting basis for experimental study. We illustrated another key difference from Hermitian open quantum systems in that complex-valued solutions are allowed to appear in the first sheet of the complex energy plane in the $\mathcal{PT}$-symmetric case. These appear as pairs of localized states, one with an amplifying characteristic and the other with an absorbing characteristic, as observed experimentally in Ref.~\cite{PTOptExpt4}. We also pointed out that some of these states may behave as quasi-bound for large values of the $\mathcal{PT}$ defect parameter $\Gamma$; these again might be observable in a system that imitates our defect potential. We evaluated general scattering properties of $\mathcal{PT}$-symmetric open quantum systems in Sec.~\ref{sec:PT.scattering}, in which we calculated the transmission and reflectance for a $\mathcal{PT}$-asymmetric scattering wave solution in Sec.~\ref{sec:PT.scattering.gen} and verified that these satisfy known symmetry relations~\cite{Mosta14,Ahmed12,GCS12}. We also studied perfect transmission states in Sec.~\ref{sec:PT.scattering.2.perfect}, with invisible solutions as a subset of these, and illustrated a connection between perfect transmission at the band edges and a delocalization transition of the bound state in the discrete spectrum. After noting that the eigenvalue $E = - 2 \cos k$ associated with the scattering states is always real, in Sec.~\ref{sec6} we used our model as a mathematical prototype to illustrate the construction of scattering wave solutions that themselves satisfy $\mathcal{PT}$-symmetric boundary conditions (just as the bound state in the discrete spectrum is well known to satisfy such boundary conditions, as we have illustrated in Sec.~\ref{sec:PT.bound}). In Sec.~\ref{sec:PT-current} we wrote the $\mathcal{PT}$-current for these solutions and pointed out that the previously studied perfect transmission states appeared as a special case. \section*{Acknowledgements} We thank Carl M. Bender, Tomio Petrosky, Dvira Segal and Qinghai Wang for helpful comments on topics presented in this work. S. G. acknowledges support from the Japan Society for the Promotion of Science (Fellowship grant no.~PE12057) as well as a Young Researcher's grant from Osaka Prefecture University. The research of M. G. was supported in part by her own JSPS fellowship (Fellowship grant no.~PE14011) as well as that of S. G. \begin{appendix} \section{EP eigenvalue expansion in the case $\varepsilon_0 = \varepsilon_1 = 0$}\label{app:EP.calcs} Here we briefly detail the eigenvalue expansions obtained in the vicinity of EP2As (Eqs.~(\ref{z.A.p.exp}) and~(\ref{z.A.m.exp})) and EP2Bs (Eqs.~(\ref{z.B.m.exp}) and~(\ref{z.B.p.exp})) for the case $\varepsilon_0 = \varepsilon_1 = 0$, following a variation on the method developed in Ref.~\cite{GRHS12}. First we find it useful to rewrite the polynomial equation $P(\lambda) = 0$ from Eq.~(\ref{P.lambda}) directly in terms of the energy eigenvalue $E$. We accomplish through the substitution $\lambda = -(E + \sqrt{E^2 - 4})$, which yields the equivalent equation $p(E_{\tilde{j}}) = 0$, with \begin{equation} p(E) = \Gamma^2 E^4 + \left( \Gamma^4 - 4 \Gamma^2 - 1 \right) E^2 + 4 . \label{p.E.0} \end{equation} (Note that we have chosen different labeling $\tilde{j}$ for the solutions of this alternative form of the dispersion equation in order to emphasize that there is no consistent labeling that will hold between the sets of solutions as we cross the EP~\cite{EP_Korea}). The basic idea for our calculation is that we will take advantage of the fact that the derivative of the eigenvalues blow up at the EPs to study the system properties nearby. First we take a full derivative of the polynomial equation $\textrm{d}p/\textrm{d}\Gamma = 0$ and re-arrange to obtain \begin{equation} \frac{2 \Gamma E^4 + 4 \Gamma \left( \Gamma^2 - 2 \right) E^2}{\partial E / \partial \Gamma} + 2 E \left[ 2 \Gamma^2 E^2 + \Gamma^4 - 4 \Gamma^2 - 1 \right] = 0 . \label{p.E.0.EP.cond} \end{equation} Since $\partial E / \partial \Gamma$ diverges, we obtain the a useful relationship between $E = \bar{E}$ and $\Gamma = \bar{\Gamma}$ at the EP by setting the second term on the RHS above to zero, which yields \begin{equation} \bar{E} (\Gamma = \bar{\Gamma}) = \pm \frac{\sqrt{1 + 4 \bar{\Gamma}^2 - \bar{\Gamma}^4}}{\sqrt{2} \bar{\Gamma}} . \label{EP.Ebar.gen} \end{equation} We can then plug this formula back into the original polynomial dispersion given in Eq.~(\ref{p.E.0}) to find the locations of the EPs in parameter space as $\Gamma = \pm \bar{\Gamma}_\textrm{A}$ and $\Gamma = \pm \bar{\Gamma}_\textrm{B}$, where \begin{equation} \bar{\Gamma}_\textrm{A} = \sqrt{2} - 1 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \bar{\Gamma}_\textrm{B} = 1 + \sqrt{2} . \label{EP.Gbar.A.B} \end{equation} We then find the locations of the eigenvalue coalescence points by plugging these values back into Eq.~(\ref{EP.Ebar.gen}) to find $E(\bar{\Gamma}_\textrm{A}) = \pm \left| \bar{E}_\textrm{A} \right|$, $E(- \bar{\Gamma}_\textrm{A}) = \pm \left| \bar{E}_\textrm{A} \right|$, $E(\bar{\Gamma}_\textrm{B}) = \pm i \left| \bar{E}_\textrm{B} \right|$, and $E(- \bar{\Gamma}_\textrm{B}) = \pm i \left| \bar{E}_\textrm{B} \right|$, with \begin{equation} \left| \bar{E}_\textrm{A} \right| = \sqrt{ 2 \left( 1 + \sqrt{2} \right)} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left| \bar{E}_\textrm{B} \right| = \sqrt{ 2 \left( \sqrt{2} - 1 \right)} . \label{EP.zbar.A.B} \end{equation} Following Ref.~\cite{GRHS12}, we can now write a generic expansion in the vicinity of each the EPs as \begin{eqnarray} E_{\textrm{A}} (\Gamma) & = & \left| \bar{E}_\textrm{A} \right| + \alpha_{1,2} \sqrt{\Gamma^2 - \bar{\Gamma}_\textrm{A}^2}, \nonumber \\ E_{\textrm{A}} (\Gamma) & = & - \left| \bar{E}_\textrm{A} \right| + \alpha_{3,4} \sqrt{\Gamma^2 - \bar{\Gamma}_\textrm{A}^2} ; \label{z.A.gen.exp} \end{eqnarray} and \begin{eqnarray} E_{\textrm{B}} (\Gamma) & = & i \left| \bar{E}_\textrm{B} \right| + \beta_{1,2} \sqrt{\Gamma^2 - \bar{\Gamma}_\textrm{B}^2}, \nonumber \\ E_{\textrm{B}} (\Gamma) & = & - i \left| \bar{E}_\textrm{B} \right| + \beta_{3,4} \sqrt{\Gamma^2 - \bar{\Gamma}_\textrm{B}^2} . \label{z.B.gen.exp} \end{eqnarray} To find the expansion coefficients $\alpha_{1,2}$, for example, we define $\Delta^2 \equiv \Gamma^2 - \bar{\Gamma}_\textrm{A}^2$, plug this into Eq.~(\ref{p.E.0.EP.cond}), and expand in powers of $\Delta$. Carrying this out for both cases we obtain \begin{equation} \alpha_{1,2} = \alpha_{3,4} = \pm i \frac{1}{2^{1/4} \sqrt{-1 + \sqrt{2}}}, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \beta_{1,2} = \beta_{3,4} = \pm i \frac{1}{2^{1/4} \sqrt{1 + \sqrt{2}}} . \label{EP.exp.coeffs} \end{equation} Putting Eqs.~(\ref{EP.zbar.A.B}) and~(\ref{EP.exp.coeffs}) into Eq.~(\ref{z.A.gen.exp}) and Eq.~(\ref{z.B.gen.exp}), we obtain the expansion associated with the EPs as reported in Sec.~\ref{sec:PT.outgoing.spec}. \section{Properties of complex localized states in Region IV for the case $\varepsilon_0 = \varepsilon_1 = 0$}\label{app:iv.calcs} In this appendix we detail the properties of the complex localized states in Region IV (see Fig.~\ref{fig:PT.0.spec}(b)) for the case $\varepsilon_0 = \varepsilon_1 = 0$, as discussed near the end of Sec.~\ref{sec:PT.outgoing.spec} and Sec.~\ref{sec:PT.outgoing.QBIC} of the main text. We can generally expect the condition $\lambda \gg 1$ to hold throughout this region of the parameter space. Hence we begin by expanding the solutions of $\lambda_j$, which are reported in Eq.~(\ref{P.lambda.0.solns}), in powers of $1 / \Gamma$ to obtain \begin{equation} \lambda_{1,4} \approx \pm \frac{i}{\Gamma} \left( 1 + \frac{1}{\Gamma^2} \right), \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \lambda_{2,3} \approx \pm i \left( 1 - \frac{1}{\Gamma^2} \right) . \label{lambda.j.IV.exps} \end{equation} We use $E_j = - (\lambda_j + \lambda_j^{-1})$ to obtain expansions for the energy eigenvalues immediately as \begin{equation} E_{1,4} \approx \pm i \left( \Gamma - \frac{2}{\Gamma} \right), \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ E_{2,3} \approx \pm i \frac{2}{\Gamma^2} , \label{E.j.IV.exps} \end{equation} as reported in the main text. To elucidate the asymptotic properties of the wave function for these eigenvalues, we make use of the first and third rows from Eq.~(\ref{outgoing.matrix0}) to write \begin{equation} \frac{ \psi(\mp1) }{\psi(0)} =\frac{1}{-\lambda\pm i\Gamma-E(\lambda)} =\frac{\lambda}{1\pm i\lambda\Gamma} . \label{psi.imp.amps} \end{equation} First, let us evaluate the localization properties for $\psi_1$ associated with the eigenvalue $E_1 \sim i \Gamma$, which appears to be the uncoupled gain site in the limit $\Gamma \rightarrow \infty$. The calculation for the wave function $\psi_4$ proceeds along similar lines. Applying $\lambda_1 \Gamma=i(1+1/\Gamma^2)$ we find \begin{equation} \frac{\psi_1(-1)}{\psi_1(0)} \approx - i \Gamma, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \frac{\psi_1(+1)}{ \psi_1(0) } \approx i \frac{1}{2\Gamma} . \label{psi.imp.amps.E1} \end{equation} Choosing our normalization such that $\psi_1(-1) = 1$, we have $\psi_1(0) \approx i/\Gamma$ and $\psi_1(1) \approx -1 / (2\Gamma^2)$. We then use $\lambda_1 = e^{i k_1} \approx i / \Gamma$ to write the wave function~\eqref{outgoing.wave.fcn} as \begin{equation} \psi_1 (x) \approx \left\{ \begin{array}{ll} -i\Gamma \left( \frac{i}{\Gamma} \right)^{|x|} & \mbox{for $x \le -1$} \\ \frac{i}{\Gamma} & \mbox{for $x = 0$} \\ \frac{i}{2\Gamma} \left( \frac{i}{\Gamma} \right)^{x} & \mbox{for $x \ge 1$} \end{array} \right. , \label{outgoing.wave.fcn.E1} \end{equation} or \begin{align} \left|\psi_1(x)\right|^2\sim e^{-|x+1|\log\Gamma}, \end{align} which shows that the state is localized around the gain site $x = -1$ with the localization length $1/ \log\Gamma$. A similar calculation for $E_4 \approx - i \Gamma$ shows that the wave function for this eigenvalue is localized around the loss site $x = 1$. They become sharper and sharper as we increase $\Gamma$. For the eigenvalues $E_{2,3}$ in Region IV, we can use $\lambda_{2,3} \approx \pm i$ to find \begin{equation} \frac{ \psi_m (\mp1)}{ \psi_m(0)} \approx \mp i \frac{1}{\Gamma} \label{psi.imp.amps.E2.E3} \end{equation} for both eigenvalues with $m = 2,3$. This indicates that the site $0$ is the localization center for the wave functions $\psi_{2,3}$. Let us therefore normalize the wave functions according to $\psi_m(0)=1$. Because the wave numbers are expanded as \begin{eqnarray} k_{2,3} = - i \log \lambda_{2,3} & \approx & - i \log \left( \pm i \left( 1 - \frac{1}{\Gamma^2} \right) \right) \nonumber \\ & \approx & \pm \frac{\pi}{2} + i \frac{1}{\Gamma^2} , \label{k2.k3.iv} \end{eqnarray} we obtain the wave functions as \begin{equation} \psi_{2,3} (x) \approx \left\{ \begin{array}{ll} \mp \frac{1}{\Gamma} e^{\pm i \pi |x| / 2} e^{-|x| / \Gamma^2} & \mbox{for $x \le -1$} \\ 1 & \mbox{for $x = 0$} \\ \pm \frac{1}{\Gamma} e^{\pm i \pi x / 2} e^{-x / \Gamma^2} & \mbox{for $x \ge 1$} \end{array} \right. , \label{outgoing.wave.fcn.E2} \end{equation} which shows that the localization length is $\Gamma^2/2$. In other words, they become broader and broader as we increase $\Gamma$. \end{appendix}
\section{Introduction} \subsection{Convex Clustering} The previous two decades have witnessed a flurry of activity in statistical machine learning and statistical signal processing, as classical methods have been reinterpreted as convex penalized estimation problems and powerful new theory has extended this methodology to the high-dimensional (under-sampled) setting. While the most notable advances have been in supervised methods such as regression and signal reconstruction, the convexity revolution has also yielded powerful new techniques for unsupervised learning. Notably, a convex formulation of clustering \citep{Lindsten:2011,Hocking:2011} has received significant attention as it allows for theoretical and computational guarantees not easily obtained for classical clustering approaches \citep{Zhu:2014,Tan:2015,Radchenko:2017}. Traditional convex clustering combines a squared Frobenius norm data fidelity term, which keeps the estimated cluster centroids near the original data, with a convex fusion penalty, which shrinks the estimated centroids together, yielding the following estimator: \[\hat{\bm{U}} = \argmin_{\bm{U} \in \mathbb{R}^{n \times p}} \frac{1}{2}\|\bm{X} - \bm{U}\|_F^2 + \lambda \sum_{\substack{i, j = 1 \\ i < j}}^n w_{ij} \|\bm{U}_{i\cdot} - \bm{U}_{j\cdot}\|_q\] where $\bm{X} \in \mathbb{R}^{n \times p}$ is the data matrix, $\hat{\bm{U}} \in \mathbb{R}^{n \times p}$ is the matrix of estimated centroids, $\lambda > 0$ is a regularization parameter controlling the overall degree of fusion, and $\{w_{ij}\}$ are optional problem-specific non-negative fusion weights. Observations $i$ and $j$ are said to belong to the same cluster if $\hat{\bm{U}}_{i\cdot} = \hat{\bm{U}}_{j\cdot}$. In this paper, we only consider the convex $\ell_q$ fusion penalty ($q = 1, 2, \infty$), though some research suggests non-convex penalties also perform well \citep{Marchetti:2014}. Focusing our attention on the data fidelity term, we note that any distance function can be used in lieu of the Frobenius norm without sacrificing the advantages of the convex formulation. \citet{Liu:2019} replace the Frobenius loss with a Huber loss to make the clustering solution robust to outliers, while \citet{Wang:2019-GECCO} propose a framework for incorporating arbitrary Bregman divergences and exponential family-type distances in the convex clustering framework. \citet{Sui:2018} use a Mahalanobis-type distance and embed convex clustering in a metric learning scheme. Extending convex clustering to non-vector-valued data, \citet{Park:2018} use an earth-mover's distance to cluster histogram-valued observations arising in genomic sequencing. Building on this line of research, we propose a novel automatic spectral-registration dissimilarity suitable for use with temporal data and investigate its use in the clustering context. \begin{figure*} \includegraphics[width=\textwidth]{centroids.pdf} \caption{Simulation Scenarios Used in Section \ref{sec:sims}. The first scenario (top row) consists of three signals which are clearly distinct in the time domain: a sinusoid, a smoothed triangle wave, and a square wave. The second scenario (middle row) consists sinusoids at three different frequencies. Third scenario (bottom row) consists of combinations of pairs of sinusoids with different relative phase.From left to right, the panels depict: the true cluster centroid (true signal); a sample observation, here with $\text{SNR} = 3$ and von Mises($\kappa = 10$) phase noise; centroids from DTW + hierarchical clustering (HC), na\"ive time-domain HC, frequency-domain HC, and TROUT.} \vspace{-0.15in} \label{fig:sim_design} \end{figure*} \subsection{Time Series Clustering} Before introducing our proposal, we review some of the particular challenges associated with clustering time series data. Specifically, we focus on what \citet{Aghabozorgi:2015} term ``whole time-series clustering,'' wherein multiple time series are grouped into a small number of distinct clusters; they distinguish this from ``subsequence clustering,'' identifying subsequences within a single long time series, and ``time point clustering,'' the task of clustering individual points in a way which respects their temporal ordering. Most commonly, whole series clustering is performed by identifying a suitable distance measure between series and using that distance measure within a classical clustering method such as $k$-means or hierarchical clustering. Perhaps the most frequently used time series distance measure is Dynamic Time Warping (DTW), originally proposed by \citet{Sakoe:1971, Sakoe:1978}. DTW uses a dynamic programming approach to optimally insert or remove additional time points to minimize the Euclidean distance between the time-domain representation of two signals. DTW is particularly well-suited for natural signals, such as those arising from social, economic, or consumer-behavior processes, where the ``shape'' of the signal is more meaningful than the exact temporal structure. The flexibility of DTW is, for some applications, a double-edged sword so a maximum distortion constraint of 10\%, following \citeauthor{Sakoe:1978}, is imposed. \citet{Ratanamahatana:2005} argue that this flexibility is often quite detrimental for clustering performance and suggest that a much lower deformation constraint be used in practice. Alternatives to DTW include na\"ive (time-domain) metrics, where the temporal structure is ignored, as well as clustering using the Fourier or wavelet coefficients of the time series as input data. Spectral representations have the advantage of limiting the temporal dependence and integrating well with established signal processing techniques. See the review paper by \citet{Serra:2014} for a thorough evaluation of time series metrics in the classification context. \section{Automatic Registration of Time Series} While the flexibility of DTW and related methods may be suitable for some problems, we instead consider an alignment method tailored for \textit{stationary} signals. Specifically, we consider signals where the temporal information \emph{within} a signal is meaningful, but the different signals are not aligned to each other. Considering these signals in the spectral (Fourier) domain, only the relative phase of the different components is meaningful, while the absolute phase is not. We model this situation using ``\emph{phase noise},'' where a random shift is applied uniformly across the signal, corresponding to our (random) observational window. To address phase noise, we propose a new distance measure between which only depends on the relative phase of various components, and not the absolute phase. As a generalization of Euclidean distance, our method also handles observational (white) noise in the usual manner. Given two univariate time series, we can consider three types of distances based on their spectral representations: i) \emph{phase-sensitive}, corresponding to the standard Euclidean distance between their Fourier coefficients; ii) \emph{phase-oblivious}, corresponding to the Euclidean distance between the magnitudes of their Fourier coefficients (retaining power, but discarding phase); and iii) \emph{phase-adaptive}, corresponding to the Euclidean distance between their \emph{optimally-aligned} Fourier coefficients. Formally, the phase-adaptive spectral distance can be defined as \begin{equation} d_{\text{TROUT}}(\bm{u}, \bm{x}) = \min_{\theta \in [0, 2\pi)} \|\bm{u} - e^{\mathfrak{i} \theta} \bm{x}\|_2. \label{eqn:trout} \end{equation} Compared to DTW, $d_{\text{TROUT}}$ preserves the temporal structure in the signal, while identifying the optimal alignment between pairs of signals. Given two matrices $\bm{X}, \bm{U} \in \mathbb{C}^{n \times p}$, each row of which represents the DFT of a separate series, the TROUT distance is defined as the sum of row-wise distances: \[d_{\text{TROUT}}(\bm{U}, \bm{X})^2 = \sum_{i=1}^n d_{\text{TROUT}}(\bm{U}_{i\cdot}, \bm{X}_{i\cdot})^2.\] Combining this matrix distance with a convex fusion penalty yields our proposed method for simultaneous time series registration and clustering: \begin{equation} \argmin_{\bm{U} \in \mathbb{C}^{n \times p}} \frac{1}{2}d_{\text{TROUT}}(\bm{U}, \bm{X})^2 + \lambda \sum_{\substack{i, j = 1 \\ i < j}}^n w_{ij} \|\bm{U}_{i\cdot} - \bm{U}_{j\cdot}\|_q. \label{eqn:trout_clust} \end{equation} An important aspect of our formulation is that $d_{\text{TROUT}}$ implicitly re-aligns the series at each step. Compared with standard pre-alignment techniques, this can significantly improve performance and interpretability. Specifically, $d_{\text{TROUT}}$ works by aligning each estimated centroid (row of $\bm{U}$) with the observed data. This is preferable to pre-alignment as it avoids the difficult question of selecting a registration target, and aligns a centroid only to that data in its cluster. Similar advantages of dynamic alignment have been observed in the shape analysis context by \citet{Srivastava:2011}, though our approach is simpler to implement than theirs. We note that $d_{\text{TROUT}}$ fails to satisfy the triangle inequality and hence is not a proper distance measure. As such, Problem \eqref{eqn:trout_clust} is not generally convex without further assumptions on $\bm{X}$. As a generalization of Euclidean convex clustering, TROUT clustering inherits many of the attractive properties of convex clustering including: a continuous solution path, indexed by $\lambda$, which smoothly interpolates from $n$ to $1$ clusters \citep{Hocking:2011}; the ability to quickly identify local optima \citep{Chi:2015}; the ability to construct dendrogram representations of the solution path \citep{Weylandt:2020}; provable statistical consistency \citep{Zhu:2014,Tan:2015,Radchenko:2017}; and a high-degree of robustness to noise \citep{Wang:2019-GECCO,Liu:2019}. While our focus is on the use of $d_{\text{TROUT}}$ in the fusion clustering context, we also note that the TROUT distance, and associated notion of TROUT Fr\'echet mean, can also be used within $K$-means or hierarchical clustering. \begin{figure*} \includegraphics[width=\textwidth]{perf.pdf} \caption{Comparitive performance of TROUT clustering against dynamic time warping (DTW), time-domain, and frequency-domain based hierarchical clustering (HC). The same scenario designs as shown in Figure \ref{fig:sim_design} are used, with the degree of phase noise--the concentration parameter $\kappa$ in a von Mises distribution--and observational noise (SNR) varied. As can be clearly seen, TROUT outperforms all methods in the high-phase noise (low $\kappa$) setting: this outperformance is most pronounced in Scenario 3 (bottom row) where the signals differ only in the relative phase of components. Time-domain clustering with oracle alignment performs as well as frequency-domain clustering, as expected.} \vspace{-0.12in} \label{fig:sim_perf} \end{figure*} \section{Computational Considerations} \label{sec:computation} The TROUT distance \eqref{eqn:trout} is defined by a one-dimensional optimization problem. This problem is non-convex in $\theta$, due to the sinusoidal behavior of the complex exponential, but simple enough that bisection or grid-search methods can be used to identify the minimum. Still, the non-convex problem may pose difficulties for composite optimization schemes \citep{Duchi:2018}. Instead, we recast $d_{\text{TROUT}}$ as a complex-valued optimization problem \begin{equation} d_{\text{TROUT}}(\bm{u}, \bm{x}) = \min_{z \in \mathbb{B}_{\mathbb{C}}(r=1)} \|\bm{u} - z\, \bm{x}\|_2, \label{eqn:trout_cmplx} \end{equation} where $\mathbb{B}_{\mathbb{C}}(r=1)$ is the complex unit sphere. This problem is essentially \textit{complex least squares} with a norm constraint and has closed-form minimizer $z = \angle[\bm{x}^H\bm{u}] = \bm{x}^H\bm{u} / \|\bm{x}\|\|\bm{u}\|$. In multivariate settings, $z$ is optimized over unitary transforms, justifying the TROUT acronym. Now that we can calculate $d_{\text{TROUT}}$ efficiently, we can develop algorithms for the TROUT clustering problem \eqref{eqn:trout_clust}. Following \citet{Chi:2015} and \citet{Weylandt:2020}, we use an alternating direction method of multipliers approach, whose updates are given by \begin{align*} \bm{U}^{(k+1)} &= \argmin_{\bm{U} \in \mathbb{C}^{n \times p}} \frac{1}{2} d_{\text{TROUT}}(\bm{U}, \bm{X})^2 + \\ &\phantom{\argmin}\quad \frac{\rho}{2}\left\|\bm{D}\bm{U} - \bm{V}^{(k)} + \bm{Z}^{(k)}\right\|_F^2 \\ \bm{V}^{(k+1)} &= \prox_{\lambda / \rho P(\cdot; \bm{w}, q)}(\bm{D}\bm{U}^{(k+1)} + \bm{Z}^{(k)}) \\ \bm{Z}^{(k+1)} &= \bm{Z}^{(k)} + \bm{D}\bm{U}^{(k+1)} - \bm{V}^{(k+1)} \end{align*} where $\bm{D}$ is a suitable difference matrix and $P(\cdot; \bm{w}, q)$ is the weighted convex fusion penalty function. For $q = 1, 2$, the proximal operator yields a simple closed-form update for $\bm{V}$, while for $q = \infty$, the efficient algorithm of \citet{Condat:2016} can be used. Unlike standard convex clustering, the primal ($\bm{U}$) update does not have a closed-form update and instead requires an iterative solver. Note that, for two vectors $\bm{u}$ and $\bm{x}$, $d_{\text{TROUT}}(\bm{u}, \bm{x})$ can be equivalently expressed using the projection of $\bm{u}$ onto the set of all ``phase-adjusted'' $\bm{x}$ vectors: that is, \[d_{\text{TROUT}}(\bm{u}, \bm{x}) = \|\bm{u} - P_{\tilde{\bm{x}}}(\bm{u})\|_2 \text{ where } \tilde{\bm{x}} = \{e^{\mathfrak{i} \theta}\bm{x}: \theta \in [0, 2\pi)\}\] and where $P_{\tilde{\bm{x}}}(\cdot)$ denotes the projection onto the set $\tilde{\bm{x}}$. This sort of ``distance-to-set'' function was popularized in machine learning by \citet{Xu:2017-NIPS} and is well-behaved (single-valued and differentiable) for almost all $\bm{u}, \bm{x}$ \citep[Sections 18.6 and 24.3]{Bauschke:2017}. With this formulation, we can use a majorization-minimization (MM) approach \citep{Lange:2016} for the $\bm{U}$ subproblem. In particular, note that \[\frac{1}{2}d_{\text{TROUT}}(\bm{U}, \bm{X})^2 \leq \frac{1}{2}\|\bm{U} - P_{\mathcal{U}^{\{t\}}}(\bm{X})\|_F^2\] where $\mathcal{U}^{\{t\}}$ is the ``phase-adjusted'' equivalence class of the previous iterate $\bm{U}^{\{t\}}$ and $P_{\mathcal{U}^{\{t\}}}(\cdot)$ denotes projections thereon. This majorizes the TROUT distance because $P_{\mathcal{U}^{\{t\}}}(\bm{X}) \in \mathcal{U}^{\{t\}}$ by construction and the distance to a set is always less than or equal to the distance to a fixed point within that set. This yields the inner iterates: \begin{align*} \tilde{\bm{U}}^{\{t+1\}} &= \argmin_{\bm{U} \in \mathbb{C}^{n \times p}} \frac{1}{2}\left\|\bm{U} - P_{\mathcal{U}^{\{t\}}}(\bm{X})\right\|_F^2 + \\&\phantom{\argmin}\quad \frac{\rho}{2}\left\|\bm{D}\bm{U} - \bm{V}^{(k)} + \bm{Z}^{(k)}\right\|_F^2 \end{align*} which have a closed-form solution \[\tilde{\bm{U}}^{\{t+1\}} = (\bm{I} + \rho\bm{D}^T\bm{D})^{-1}(P_{\mathcal{U}^{\{t\}}}(\bm{X}) + \rho \bm{D}^T(\bm{V}^{(k)} - \bm{Z}^{(k)})).\] Though non-convex, the $d_{\text{TROUT}}(\cdot, \bm{X})$ function is well-behaved and global convergence to a stationary point follows from Scenario 1(a) in Theorem 1 of \citet{Wang:2019}. Additional performance improvements can be obtained by taking only a single MM step within each ADMM iteration \citep{Chen:2019-InexactADMM}, though this is typically unnecessary as the inner iterates tend to converge quite quickly. If the $\mathcal{O}(n^3)$ Cholesky factorization of $\bm{I} + \rho \bm{D}^T\bm{D}$ is cached, the per iterate cost of this approach is $\mathcal{O}(n^2p)$, dominated by two rounds of triangular back-substitution, which scales efficiently with high sampling rates. For the simulations considered in the next section ($\bm{X} \in \mathbb{C}^{60 \times 64}$), this algorithm solves Problem \eqref{eqn:trout_clust} to numerical precision in an average of 0.0825 seconds per value of $\lambda$ in a warm start scheme on a low-end desktop with 16 GB of RAM and an Intel i7-9700 processor running at 3.0 GHz. When combined with the ``one-step'' algorithmic regularization approach of \citet{Weylandt:2020}, performance is increased by an order of magnitude (0.000665 seconds per $\lambda$) without any noticeable drop in clustering accuracy \section{Simulation Study} \label{sec:sims} In this section, we compare TROUT clustering with various time- and frequency-domain clustering methods. We make repeated use of three simulation designs, shown in the left column of Figure \ref{fig:sim_design}. These three families highlight different challenges in time series clustering. The first scenario (top row) includes both a square wave and a (smoothed) triangle wave, which have slowly decaying spectra and exhibit Gibbs phenomena (rapid variation overshooting the target function) at their discontinuities; because of the slow spectrum decay, these signals have more relevant features than observations in each cluster, a situation which is well-known to challenge many clustering methods, while the discontinuities in the time-domain challenge warping or smoothing approaches. The second scenario (middle row) consists of sinusoids of different frequencies: these signals are easily separated in the frequency domain, but time-domain approaches struggle to identify structure. The third scenario (bottom row) is the most challenging: all three signals consist of pairs of sinusoids, differing only in relative phase. Power-only approaches are unable to distinguish these signals, and phase-noise poses a particular challenge for both time- and frequency-domain methods. From left-to-right, Figure \ref{fig:sim_design} shows the true cluster centroids; typical observations generated with a signal-to-noise ratio of 3 and with phase noise drawn from a mean zero von Mises random variable with concentration $\kappa = 10$; centroids estimated by dynamic time warping followed by hierarchical clustering (HC); centroids estimated by HC in the time-domain; centroids estimated by HC in the frequency-domain; and centroids estimated by TROUT. In each setting, signals are observed at 128 Hz, with 20 samples generated from each cluster for a total of 60 observations, yielding a spectral representation $\bm{X}_C \in \mathbb{C}^{60 \times 64}$ or a time-domain representation $\bm{X}_R \in \mathbb{R}^{60 \times 128}$. For legibility, a $5$-nearest neighbor smoother is super-imposed. While both TROUT and frequency-domain HC accurately separate clusters, only TROUT is able to correctly recover the shape of the true centroids (up to phase shift). Figure \ref{fig:sim_perf} uses the same scenarios, but varies the signal to noise ratio and the degree of phase noise. The sampling rate and number of observations is fixed as before. The adjusted Rand index used to measure clustering performance, is averaged over 50 replicates, with the oracle number of clusters made available to all methods. As can be seen, TROUT and frequency-domain HC are consistently among the best performing methods in all regimes, with the advantages of TROUT increasing as the phase noise increases ($\kappa$ decreases). For $\kappa = \infty$, corresponding to the case of no phase noise, the additional flexibility of TROUT registration slightly decreases performance. Focusing on the third scenario (bottom row) wherein signals vary only by their phase, we note that frequency domain HC is unable to distinguish signals in the high-phase noise setting, even with a high SNR. Time-domain methods including na\"ive HC and DTW are unable to correctly discover true clusters in this scenario at any SNR. \section{Discussion and Extensions} We have proposed a new approach to time series clustering which combines a convex fusion penalty with automatic registration. We have derived a closed-form solution for the optimal spectral-domain alignment and have used it to develop an efficient MM algorithm which attains a local optimum provably and efficiently. Simulation studies confirm the efficiency and stability of our approach across several regimes. Our method is particularly well-suited for situations of high ``phase noise,'' as would be expected when clustering naturally occurring stationary signals, though it also performs competitively in more standard domains. In the simulations shown in Section \ref{sec:sims}, we only considered basic spectral analysis (whole-sample DFT without tapering or smoothing), though our method is readily applied within more robust analytical pipelines. The heart of our proposed method is the automatic and optimal alignment induced by the TROUT distance. While we have focused on clustering stationary univariate signals, the TROUT distance can be used anywhere a standard $\ell_2$ or Frobenius distance is used in machine learning. The basic framework of TROUT-based clustering can also be extended in several useful directions, including automatic amplitude adjustment, frequency isolation / band identification, and multivariate time series clustering. We conclude with a brief discussion of each of these extensions. As defined above, the TROUT distance \eqref{eqn:trout} only allows for phase adjustment. For signals which may be observed on different scales, where only intra-signal relative magnitude is meaningful, the complex form of the TROUT distance \eqref{eqn:trout_cmplx} can be relaxed to allow arbitrary complex $z$. As with standard TROUT, the relaxed problem has closed-form minimizer $z = \bm{x}^H\bm{u} / \|\bm{x}\|^2$ and the associated clustering problem can be analyzed with a MM algorithm. Notably, this relaxed form has better convexity properties than the method discussed above. In certain problems, it may be useful to isolate certain frequencies which are most important to the clustering solution; this can be achieved by adding an $\ell_1$ or mixed $\ell_2/\ell_1$ penalty to the columns of $\bm{U}$, as explored by \citet{Wang:2019-GECCO}. If, instead, it is useful to identify bands of frequencies which behave similarly, a TROUT-based form of convex bi-clustering \citep{Chi:2017} could be used to simultaneously cluster both observations and frequencies into clusters and meaningful bands respectively. Finally, if we instead seek to cluster $p$-variate time series, our data $\bm{X}$ would now be an $n \times p \times p$ complex higher-order array (tensor) which we would seek to cluster along the first mode: in this case, the TROUT problem now consists of alignment by an arbitrary unitary transform. This problem can be solved via a unitary Procrustes problem, which has a closed-form solution in terms of the SVD of $\bm{X}^H\bm{U}$ and can be easily embedded in efficient tensor clustering algorithms \citep{Weylandt:2019b}. We plan to explore several of these extensions in future work. \section{References} {\ninept \printbibliography[heading=none]} \end{refsection} \begin{refsection} \onecolumn
\section{Introduction} As a powerful learning control paradigm, reinforcement learning is extremely suitable for finding the optimal policy in tasks where the dynamics are either unknown or affected by severe uncertainty \citep{bucsoniu2018reinforcement}. Its combination with the deep neural network has boosted applications in autonomous driving \citep{sallab2017deep}, complicated robot locomotion \cite{hwangbo2019learning}, and skilful games like Atari \citep{mnih2015human} and Go \citep{silver2017mastering}. However, overparameterized policies are prone to become overfitted to the specific training environment, limiting its generalization to the various scenarios \cite{pinto2017robust}. Additionally, RL agents trained in simulation, though cheap to obtain, but likely suffer from the \textit{reality gap} problem \cite{koos2010crossing} when transferred from virtual to the real world. To overcome these drawbacks, various efforts are made to enhance the robustness of the policy \cite{jakobi1995noise, tobin2017domain,mordatch2015ensemble}, since a robust policy has a greater chance of successful generalization and transfer. \textbf{Contribution} In this paper, we propose a unified framework of designing policies with both stability and robust performance guarantee against the various uncertainties in the environment. Without any specific domain knowledge, our method is able to find policy that is robust to large exogenous disturbances and generalizes well to different test environment. First, a novel model-free method for analyzing the Lyapunov stability and $H_\infty$ performance of the closed-loop system is developed. Based on the theoretical results, we propose the Robust Lyapunov-based Actor-Critic (RLAC) algorithm to simultaneously find the Lyapunov function and policy that can guarantee the robust stability of the closed-loop system. We evaluate RLAC on a simulated cartpole in the OpenAI gym \citep{brockman2016openai} environment and show that our approach is robust to: \textbf{i) Large impulsive disturbance:} The trained agent is able to recover when disturbed by adversary impulses 4-6 times of the maximum control input, while other baselines fail almost surely. \textbf{ii) Parametric Uncertainty:} The learned policy generalizes better than the baselines to different test environment settings (e.g. different mass and structural values). \section{Preliminaries} \label{sec:preliminaries} \subsection{Markov Decision Process and Reinforcement Learning} A Markov decision process (MDP) is a tuple, ($S,A,c,P,\rho$), where $S$ is the set of states, $A$ is the set of actions, $c (s,a)\in [0,\infty)$ is the cost function, $P (s'|s,a)$ is the transition probability function, and $\rho (s)$ is the starting state distribution. $\pi(a|s)$ is a stationary policy denoting the probability of selecting action $a$ in state $s$. In addition, the cost function under stationary policy is defined as $c_\pi(s) \doteq \mathbb{E}_{a\sim\pi }c(s,a)$. In this paper, we divide the state $s$ into two vectors, $s^1$ and $s^2$, where $s^1$ is composed of elements of $s$ that are aimed at tracking the reference signal $r$ while $s^2$ contains the rest. The cost function is defined as $\Vert s^1-r \Vert$, where $\Vert \cdot \Vert$ denotes the Euclidean norm. \subsection{Robust Control Against Environment Uncertainty} \begin{definition} \label{def:mss} The stochastic system is said to be stable in mean cost if $\lim_{t\rightarrow \infty }\mathbb{E}_{s_{t}} c_\pi(s_{t})=0$ holds for any initial condition $s_{0}\in \{s_{0}|c_\pi(s_{0})\leq b\}$. If $b$ is arbitrarily large then the stochastic system is globally stable in mean cost. \end{definition} To address the performance of the agent in the presence of uncertainty, the following definition is needed. \begin{definition} \label{def:l2} The system is said to be mean square stable (MSS) with an $l_2$ gain less or equal than $\eta$, if the system is MSS when $w=0$, and the following holds for all $w\in l_2[0,+\infty)$, \begin{equation} \sum_{t=0}^\infty\mathbb{E}_{s_{t}}c_\pi(s_t) \leq \sum_{t=0}^\infty\mathbb{E}_{s_{t}}\eta^2 \Vert w(s_t)\Vert_2 \label{def: robust performance} \end{equation} where $\eta \in \mathbb{R}_+$. $ z(s_t)$ is the error output of the system and $w(s_t)$ is the uncertainty, which is composed of both environmental disturbance and modelling error. \end{definition} The robust performance guarantee \eqref{def: robust performance} holds for all $w$, is equivalent to guaranteeing the inequality for the worst case induced by $w$, i.e., \begin{equation} \sup_{w} \sum_{t=0}^\infty\mathbb{E}_{s_{t}}c_\pi(s_t) -\eta^2 \Vert w(s_t)\Vert_2 \leq 0\label{eq: robust performance} \end{equation} \vspace{-0.3cm} \section{Main Results} \label{sec:main result} \vspace{-0.2cm} \subsection{Lyapunov-based $H_\infty$ Learning Control} In this section, we propose the main assumptions and a new theorem. \begin{assumption}\label{stationary assumption} The stationary distribution of state $q_\pi(s)\triangleq\lim_{t\rightarrow\infty}P(s|\rho,\pi,t)$ exists. \end{assumption} \begin{assumption}\label{initial state assumption} There exists a positive constant $b$ such that $\rho(s)> 0, \forall s\in\{s|c_\pi(s)\leq b\}$. \end{assumption} The core theoretical results on analyzing the stability and robust performance of the closed-loop system with the help of Lyapunov function and sampled data are presented. The Lyapunov function is a class of continuously differentiable semi-positive definite functions $L : \mathcal{S} \to \mathbb{R}_+$. The general idea of exploiting Lyapunov function is to ensure that the derivative of Lyapunov function along the state trajectory is semi-negative definite so that the state goes in the direction of decreasing the value of Lyapunov function and eventually converges to the set or point where the value is zero. \begin{theorem}\label{them:robust mss} If there exists a continuous differentiable function $L:\mathcal{S}\rightarrow \mathbb{R} _{+}$\ and positive constants $\alpha _{1}$, $\alpha _{2}$, $\alpha_{3},\eta, k_1, k_2$ such that \begin{gather} \alpha _{1}c_\pi\left( s\right) \leq L(s)\leq \alpha _{2}c_\pi\left( s\right) \label{robust stability-1}\\ \mathbb{E}_{\beta(s)}(\mathbb{E}_{s^{\prime }\sim P_{\pi }}L(s^{\prime })-L(s))< \mathbb{E}_{\beta(s) }[\eta \Vert w(s) \Vert -(\alpha_{3}+1)c_\pi\left( s\right)] \label{robust stability-2} \end{gather}% holds for all $\{\rho|\mathbb{E}_{s_0\sim\rho}c_\pi(s_0)\leq k_1\}$ and $\{w|\Vert w\Vert\leq k_2\}$. $\beta_\pi(s)\triangleq \lim_{N\rightarrow\infty}\frac{1}{N}\sum_{t=0}^N P(s_t=s|\rho,\pi,t)$ is the sampling distribution. Then the system is mean square stable and has $l_2$ gain no greater than $\eta/(\alpha_3 + 1)$. If the above holds for $\forall k_1, k_2\in \mathbb{R}_+$, then the system is globally mean square stable with finite $l_2$ gain. \end{theorem} Proof of Theorem~\ref{them:robust mss} is given in Appendix~\ref{proof of robust mss}. \subsection{Learning the Adversarial Disturber} In our setting, in addition to the control policy $\pi$, a disturber policy $\mu(w|s)$ is introduced to actively select the worst disturbance for a given state. More specifically, the adversarial disturber seeks to find the disturbance input $w$ over which the system has the greatest $l_2$ gain, i.e. maximizing the following cost function, {\small \begin{equation} \max_{\theta_\mu} J(\mu) = \mathbb{E}_{\beta(s), \mu(w|s)}(c_\pi(s)-\eta^2 \Vert w \Vert) \end{equation} where $\theta_\mu$ is the parameter of the disturber policy $\mu$. } \section{Algorithm}\label{sec:algorithm} In this section, based on the theoretical results in Section~\ref{sec:main result}, we propose an actor-critic style algorithm with robust stability guarantee (RLAC). In this algorithm, we include a critic Lyapunov function $L_c$ to provide the policy gradient, which satisfies $L(s) = \mathbb{E}_{a\sim \pi} L_c(s,a)$. Through Lagrangian method, the objective function for $\pi$ is obtained as follow, {\small \begin{equation} \begin{aligned} J(\pi) &= \mathbb{E}_{(s,a,w,c,s')\sim\mathcal{D}}\left[ \beta \log(\pi(f_{\theta_\pi}(\epsilon,s)|s))+ \lambda \Delta L(s,a,w,c,s')\right]\\ \Delta L(s,a,w,c,s')&=\left(L_c(s',f_{\theta_\pi}(\epsilon,s'))-L_c(s,a)+(\alpha_3+1)c - \eta^2\Vert w \Vert_2\right) \label{RLAC} \end{aligned} \end{equation} } where $\pi$ is parameterized by a neural network $f_{\theta_\pi}$ and $\epsilon$ is an input vector consisted of Gaussian noise. In the above objective, $\nu$ and $\lambda$ are the positive Lagrangian multipliers, of which the values are adjusted automatically. The gradient of \eqref{RLAC} with respect to the policy parameter $\theta_\pi$ is approximated by {\small \begin{equation} \nabla_{\theta} J(\pi) =\mathbb{E}_{\mathcal{D}}\left[ \nabla_\theta \nu \log(\pi_\theta(a|s)) + \nabla_a \nu \log(\pi_\theta(a|s))\nabla_\theta f_\theta(\epsilon,s) + \lambda\nabla_{a'}L_c(s',a')\nabla_\theta f_\theta(\epsilon,s')\right] \label{algorithm:RLAC_policy_gradient} \end{equation} } The Lyapunov function is updated through minimizing the following objective {\small \begin{gather} J(L_c) = \mathbb{E}_{(s,a)\sim \mathcal{D}}\left[\frac{1}{2}(L_c(s,a)-L_{\text{target}}(s,a))^2\right] \end{gather} } We use the sum of cost over a finite time horizon $N$ as the Lyapunov candidate, i.e. {\small \begin{gather} L_{\text{target}}(s,a) = \sum_{t=t_0}^N c(s_t,a_t) \end{gather} } which has long been exploited as the Lyapunov function in establishing the stability criteria for model predictive control (MPC) \citep{mayne2000constrained}. The pseudo-code of RLAC is presented in Algorithm~\ref{algo:RLAC}. \section{Experimental Results} \label{sec:experiment result} In this section, we evaluate the robustness of RLAC against i) large impulsive disturbances; ii) parametric uncertainty. Setup of the experiment is referred to Appendix~\ref{Experiment setup}. \subsection{Robustness to Impulsive Disturbances} \vspace{-0.3cm} \begin{figure}[htb] \centering \subfigure[Visualization of the disturbance]{ \includegraphics[scale=0.32]{figures/impulse/visualization.pdf} } \subfigure[Death rate of the cartpole]{ \includegraphics[scale = 0.27]{figures/impulse/impulse-death_rate.pdf} } \caption{(a) Direction of the disturbance applied on the cartpole, which is dependent on the relative position of cart concerning the origin. (b) The death rate of agents trained by RLAC, RARL, SAC, MPC and LQR in the presence of impulsive force $F$ with different magnitudes. The trained policies are initialized by 10 random seeds. The policies with different initializations are evaluated equally for 500 episodes. The line indicates the average death rate of these policies and the shadowed region shows the 1-SD confidence interval.} \label{fig:impulsive disturbance} \end{figure} We evaluate the robustness of the agents trained by RLAC and baselines against unseen exogenous disturbance. We measure the robust performance via the death rate, i.e., the probability of pole falling after impulsive disturbance. As observed in the figure, RLAC gives the most robust policy against the impulsive force. It maintains the lowest death rate throughout the experiment, far more superior than SAC and RARL. Moreover, RLAC performs even better than MPC and LQR, which possess the full information of the model and are available. \subsection{Robustness to Parametric Uncertainty} In this experiment, we evaluate the trained policies in environments with different parameter settings. In the training environment, the parameter \textit{length of pole} $l=0.5$ and \textit{mass of cart} $m_c=1$, while during evaluation $l$ and $m_c$ are selected in a 2-D grid with $l\in[0.2,2.0]$ and $m_c\in[0.4,2.0]$. \begin{figure}[h] \centering \includegraphics[scale = 0.47]{figures/param_variation/figure2.pdf} \caption{Death rate and total costs of agents trained by RLAC, RARL, SAC and LQR in the presence of different parametric uncertainty which are \emph{unseen} during training and different from dynamic randomization. $l$ (X-axis) and $m_c$ (Y-axis) vary with the step size of $0.1$ and $0.2$ respectively. At each point of the parameter grid, the results are averaged between the agents with different initializations over 100 episodes. } \label{fig:parametric uncertainty} \end{figure} As shown in the heat maps in \autoref{fig:parametric uncertainty}, RLAC achieves the lowest death rate (zero for the majority of the parameter settings) and obtains reasonable total cost (lower than 100). The total cost of RLAC is slightly higher than SAC and RARL since the agents hardly die and sustain longer episodes. Compared to SAC, RARL achieves lower death rate and comparable total cost performance. LQR performs well in the region where parameters are close to the nominal model but deteriorates soon as parameters vary. All of the model-free methods outperform LQR in terms of robustness to parametric uncertainty, except for the case of low $l$ and $m_c$ (left bottom of the grid). This is potentially due to the overparameterized policy does not generalize well to the model where dynamic is more sensitive to input than the one used for training. \vspace{-0.2cm} \bibliographystyle{unsrtnat} \section{Introduction} As a powerful learning control paradigm, reinforcement learning is extremely suitable for finding the optimal policy in tasks where the dynamics are either unknown or affected by severe uncertainty \citep{bucsoniu2018reinforcement}. Its combination with the deep neural network has boosted applications in autonomous driving \citep{sallab2017deep}, complicated robot locomotion \cite{hwangbo2019learning}, and skilful games like Atari \citep{mnih2015human} and Go \citep{silver2017mastering}. However, overparameterized policies are prone to become overfitted to the specific training environment, limiting its generalization to the various scenarios \cite{pinto2017robust}. Additionally, RL agents trained in simulation, though cheap to obtain, but likely suffer from the \textit{reality gap} problem \cite{koos2010crossing} when transferred from virtual to the real world. To overcome these drawbacks, various efforts are made to enhance the robustness of the policy \cite{jakobi1995noise, tobin2017domain,mordatch2015ensemble}, since a robust policy has a greater chance of successful generalization and transfer. \textbf{Contribution} In this paper, we propose a unified framework of designing policies with both stability and robust performance guarantee against the various uncertainties in the environment. Without any specific domain knowledge, our method is able to find policy that is robust to large exogenous disturbances and generalizes well to different test environment. First, a novel model-free method for analyzing the Lyapunov stability and $H_\infty$ performance of the closed-loop system is developed. Based on the theoretical results, we propose the Robust Lyapunov-based Actor-Critic (RLAC) algorithm to simultaneously find the Lyapunov function and policy that can guarantee the robust stability of the closed-loop system. We evaluate RLAC on a simulated cartpole in the OpenAI gym \citep{brockman2016openai} environment and show that our approach is robust to: \textbf{i) Large impulsive disturbance:} The trained agent is able to recover when disturbed by adversary impulses 4-6 times of the maximum control input, while other baselines fail almost surely. \textbf{ii) Parametric Uncertainty:} The learned policy generalizes better than the baselines to different test environment settings (e.g. different mass and structural values). \section{Preliminaries} \label{sec:preliminaries} \subsection{Markov Decision Process and Reinforcement Learning} A Markov decision process (MDP) is a tuple, ($S,A,c,P,\rho$), where $S$ is the set of states, $A$ is the set of actions, $c (s,a)\in [0,\infty)$ is the cost function, $P (s'|s,a)$ is the transition probability function, and $\rho (s)$ is the starting state distribution. $\pi(a|s)$ is a stationary policy denoting the probability of selecting action $a$ in state $s$. In addition, the cost function under stationary policy is defined as $c_\pi(s) \doteq \mathbb{E}_{a\sim\pi }c(s,a)$. In this paper, we divide the state $s$ into two vectors, $s^1$ and $s^2$, where $s^1$ is composed of elements of $s$ that are aimed at tracking the reference signal $r$ while $s^2$ contains the rest. The cost function is defined as $\Vert s^1-r \Vert$, where $\Vert \cdot \Vert$ denotes the Euclidean norm. \subsection{Robust Control Against Environment Uncertainty} \begin{definition} \label{def:mss} The stochastic system is said to be stable in mean cost if $\lim_{t\rightarrow \infty }\mathbb{E}_{s_{t}} c_\pi(s_{t})=0$ holds for any initial condition $s_{0}\in \{s_{0}|c_\pi(s_{0})\leq b\}$. If $b$ is arbitrarily large then the stochastic system is globally stable in mean cost. \end{definition} To address the performance of the agent in the presence of uncertainty, the following definition is needed. \begin{definition} \label{def:l2} The system is said to be mean square stable (MSS) with an $l_2$ gain less or equal than $\eta$, if the system is MSS when $w=0$, and the following holds for all $w\in l_2[0,+\infty)$, \begin{equation} \sum_{t=0}^\infty\mathbb{E}_{s_{t}}c_\pi(s_t) \leq \sum_{t=0}^\infty\mathbb{E}_{s_{t}}\eta^2 \Vert w(s_t)\Vert_2 \label{def: robust performance} \end{equation} where $\eta \in \mathbb{R}_+$. $ z(s_t)$ is the error output of the system and $w(s_t)$ is the uncertainty, which is composed of both environmental disturbance and modelling error. \end{definition} The robust performance guarantee \eqref{def: robust performance} holds for all $w$, is equivalent to guaranteeing the inequality for the worst case induced by $w$, i.e., \begin{equation} \sup_{w} \sum_{t=0}^\infty\mathbb{E}_{s_{t}}c_\pi(s_t) -\eta^2 \Vert w(s_t)\Vert_2 \leq 0\label{eq: robust performance} \end{equation} \vspace{-0.3cm} \section{Main Results} \label{sec:main result} \vspace{-0.2cm} \subsection{Lyapunov-based $H_\infty$ Learning Control} In this section, we propose the main assumptions and a new theorem. \begin{assumption}\label{stationary assumption} The stationary distribution of state $q_\pi(s)\triangleq\lim_{t\rightarrow\infty}P(s|\rho,\pi,t)$ exists. \end{assumption} \begin{assumption}\label{initial state assumption} There exists a positive constant $b$ such that $\rho(s)> 0, \forall s\in\{s|c_\pi(s)\leq b\}$. \end{assumption} The core theoretical results on analyzing the stability and robust performance of the closed-loop system with the help of Lyapunov function and sampled data are presented. The Lyapunov function is a class of continuously differentiable semi-positive definite functions $L : \mathcal{S} \to \mathbb{R}_+$. The general idea of exploiting Lyapunov function is to ensure that the derivative of Lyapunov function along the state trajectory is semi-negative definite so that the state goes in the direction of decreasing the value of Lyapunov function and eventually converges to the set or point where the value is zero. \begin{theorem}\label{them:robust mss} If there exists a continuous differentiable function $L:\mathcal{S}\rightarrow \mathbb{R} _{+}$\ and positive constants $\alpha _{1}$, $\alpha _{2}$, $\alpha_{3},\eta, k_1, k_2$ such that \begin{gather} \alpha _{1}c_\pi\left( s\right) \leq L(s)\leq \alpha _{2}c_\pi\left( s\right) \label{robust stability-1}\\ \mathbb{E}_{\beta(s)}(\mathbb{E}_{s^{\prime }\sim P_{\pi }}L(s^{\prime })-L(s))< \mathbb{E}_{\beta(s) }[\eta \Vert w(s) \Vert -(\alpha_{3}+1)c_\pi\left( s\right)] \label{robust stability-2} \end{gather}% holds for all $\{\rho|\mathbb{E}_{s_0\sim\rho}c_\pi(s_0)\leq k_1\}$ and $\{w|\Vert w\Vert\leq k_2\}$. $\beta_\pi(s)\triangleq \lim_{N\rightarrow\infty}\frac{1}{N}\sum_{t=0}^N P(s_t=s|\rho,\pi,t)$ is the sampling distribution. Then the system is mean square stable and has $l_2$ gain no greater than $\eta/(\alpha_3 + 1)$. If the above holds for $\forall k_1, k_2\in \mathbb{R}_+$, then the system is globally mean square stable with finite $l_2$ gain. \end{theorem} Proof of Theorem~\ref{them:robust mss} is given in Appendix~\ref{proof of robust mss}. \subsection{Learning the Adversarial Disturber} In our setting, in addition to the control policy $\pi$, a disturber policy $\mu(w|s)$ is introduced to actively select the worst disturbance for a given state. More specifically, the adversarial disturber seeks to find the disturbance input $w$ over which the system has the greatest $l_2$ gain, i.e. maximizing the following cost function, {\small \begin{equation} \max_{\theta_\mu} J(\mu) = \mathbb{E}_{\beta(s), \mu(w|s)}(c_\pi(s)-\eta^2 \Vert w \Vert) \end{equation} where $\theta_\mu$ is the parameter of the disturber policy $\mu$. } \section{Algorithm}\label{sec:algorithm} In this section, based on the theoretical results in Section~\ref{sec:main result}, we propose an actor-critic style algorithm with robust stability guarantee (RLAC). In this algorithm, we include a critic Lyapunov function $L_c$ to provide the policy gradient, which satisfies $L(s) = \mathbb{E}_{a\sim \pi} L_c(s,a)$. Through Lagrangian method, the objective function for $\pi$ is obtained as follow, {\small \begin{equation} \begin{aligned} J(\pi) &= \mathbb{E}_{(s,a,w,c,s')\sim\mathcal{D}}\left[ \beta \log(\pi(f_{\theta_\pi}(\epsilon,s)|s))+ \lambda \Delta L(s,a,w,c,s')\right]\\ \Delta L(s,a,w,c,s')&=\left(L_c(s',f_{\theta_\pi}(\epsilon,s'))-L_c(s,a)+(\alpha_3+1)c - \eta^2\Vert w \Vert_2\right) \label{RLAC} \end{aligned} \end{equation} } where $\pi$ is parameterized by a neural network $f_{\theta_\pi}$ and $\epsilon$ is an input vector consisted of Gaussian noise. In the above objective, $\nu$ and $\lambda$ are the positive Lagrangian multipliers, of which the values are adjusted automatically. The gradient of \eqref{RLAC} with respect to the policy parameter $\theta_\pi$ is approximated by {\small \begin{equation} \nabla_{\theta} J(\pi) =\mathbb{E}_{\mathcal{D}}\left[ \nabla_\theta \nu \log(\pi_\theta(a|s)) + \nabla_a \nu \log(\pi_\theta(a|s))\nabla_\theta f_\theta(\epsilon,s) + \lambda\nabla_{a'}L_c(s',a')\nabla_\theta f_\theta(\epsilon,s')\right] \label{algorithm:RLAC_policy_gradient} \end{equation} } The Lyapunov function is updated through minimizing the following objective {\small \begin{gather} J(L_c) = \mathbb{E}_{(s,a)\sim \mathcal{D}}\left[\frac{1}{2}(L_c(s,a)-L_{\text{target}}(s,a))^2\right] \end{gather} } We use the sum of cost over a finite time horizon $N$ as the Lyapunov candidate, i.e. {\small \begin{gather} L_{\text{target}}(s,a) = \sum_{t=t_0}^N c(s_t,a_t) \end{gather} } which has long been exploited as the Lyapunov function in establishing the stability criteria for model predictive control (MPC) \citep{mayne2000constrained}. The pseudo-code of RLAC is presented in Algorithm~\ref{algo:RLAC}. \section{Experimental Results} \label{sec:experiment result} In this section, we evaluate the robustness of RLAC against i) large impulsive disturbances; ii) parametric uncertainty. Setup of the experiment is referred to Appendix~\ref{Experiment setup}. \subsection{Robustness to Impulsive Disturbances} \vspace{-0.3cm} \begin{figure}[htb] \centering \subfigure[Visualization of the disturbance]{ \includegraphics[scale=0.32]{figures/impulse/visualization.pdf} } \subfigure[Death rate of the cartpole]{ \includegraphics[scale = 0.27]{figures/impulse/impulse-death_rate.pdf} } \caption{(a) Direction of the disturbance applied on the cartpole, which is dependent on the relative position of cart concerning the origin. (b) The death rate of agents trained by RLAC, RARL, SAC, MPC and LQR in the presence of impulsive force $F$ with different magnitudes. The trained policies are initialized by 10 random seeds. The policies with different initializations are evaluated equally for 500 episodes. The line indicates the average death rate of these policies and the shadowed region shows the 1-SD confidence interval.} \label{fig:impulsive disturbance} \end{figure} We evaluate the robustness of the agents trained by RLAC and baselines against unseen exogenous disturbance. We measure the robust performance via the death rate, i.e., the probability of pole falling after impulsive disturbance. As observed in the figure, RLAC gives the most robust policy against the impulsive force. It maintains the lowest death rate throughout the experiment, far more superior than SAC and RARL. Moreover, RLAC performs even better than MPC and LQR, which possess the full information of the model and are available. \subsection{Robustness to Parametric Uncertainty} In this experiment, we evaluate the trained policies in environments with different parameter settings. In the training environment, the parameter \textit{length of pole} $l=0.5$ and \textit{mass of cart} $m_c=1$, while during evaluation $l$ and $m_c$ are selected in a 2-D grid with $l\in[0.2,2.0]$ and $m_c\in[0.4,2.0]$. \begin{figure}[h] \centering \includegraphics[scale = 0.47]{figures/param_variation/figure2.pdf} \caption{Death rate and total costs of agents trained by RLAC, RARL, SAC and LQR in the presence of different parametric uncertainty which are \emph{unseen} during training and different from dynamic randomization. $l$ (X-axis) and $m_c$ (Y-axis) vary with the step size of $0.1$ and $0.2$ respectively. At each point of the parameter grid, the results are averaged between the agents with different initializations over 100 episodes. } \label{fig:parametric uncertainty} \end{figure} As shown in the heat maps in \autoref{fig:parametric uncertainty}, RLAC achieves the lowest death rate (zero for the majority of the parameter settings) and obtains reasonable total cost (lower than 100). The total cost of RLAC is slightly higher than SAC and RARL since the agents hardly die and sustain longer episodes. Compared to SAC, RARL achieves lower death rate and comparable total cost performance. LQR performs well in the region where parameters are close to the nominal model but deteriorates soon as parameters vary. All of the model-free methods outperform LQR in terms of robustness to parametric uncertainty, except for the case of low $l$ and $m_c$ (left bottom of the grid). This is potentially due to the overparameterized policy does not generalize well to the model where dynamic is more sensitive to input than the one used for training. \vspace{-0.2cm} \bibliographystyle{unsrtnat}
\section{Introduction} Study of temporal dynamics has greatly advanced our understanding of correlated electron systems for such dynamics may provide information about essential energy scales associated with the elementary excitations and collective modes in these systems.\cite{rev} The newly developed time-resolved angle-resolved photo-emission spectroscopy (trARPES) is a powerful technique to directly measure momentum resolved electronic dynamics,\cite{exp0} and has been applied to study ultrafast dynamics in charge density wave materials,\cite{cdw1,cdw2} high-temperature superconductors(HTSCs),\cite{sc1,sc2,sc3} and topological insulators.\cite{ti} Smallwood \emph{et al.} ~\cite{sc3} have used the technique to study gap and quasiparticle (QP) population dynamics in optimally doped d-wave HTSC Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$. Their results clearly show that QP's relaxation rate is dependent on the momentum and fluence. These intriguing observations have challenged the understanding of non-equilibrium dynamics in the superconducting (SC) state, and called for theoretical explanations for further explorations on HTSCs with trARPES technique. In this paper, we apply a two-temperature model~\cite{exp0} to theoretically study the transient energy distribution and QP relaxation dynamics of d-wave superconductor measured in trARPES. Our theory explains the momentum-resolved dynamics of the photoexcited QPs, and is in good agreement with experiments. \section{Theoretical Model} \subsection{Two-Temperature Scenario} In a typical trARPES experiment, a pump laser pulse is first applied to excite the investigated sample into a non-equilibrium state, and ARPES technique is then used to measure the temporal electronic dynamics of the sample. Presently, trARPES technique has a time-resolution $\tau_{ARPES} \approx 100\sim 300$ fs.\cite{sc1,sc2,sc3} In the relaxation process of the laser-excited d-wave superconductors, there are two essential time scales determined by the intrinsic interactions, namely, $\tau_{ee}\sim\mathcal{O}(\text{fs})$ due to the electron-electron (e-e) scattering and $\tau_{ep}\sim\mathcal{O}(\text{ps})$ due to the electron-phonon (e-p) scattering. We consider a typical case where $\tau_{ee}\ll \tau_{ARPES} \ll \tau_{ep}$, which is suitable for the present experiment resolution of $\tau_{ARPES}$. In this case, the e-e scattering thermalizes the excited electronic subsystem into a quasi-equilibrium state in the time scale of $\tau_{ee}$. Then the detailed non-equilibrium processes of the electronic subsystem will be washed out during the time $\tau_{ARPES}$, and the quasi-equilibrium state can be characterized by the effective temperature $T_{e}$. In the longer time scale $\tau_{ep}$, the extra energy in the electronic subsystem will be dissipated to the lattice subsystem through electron-phonon interaction, and $T_{e}$ will decrease to the lattice temperature $T_{l}$. Thus the observed gap and QP dynamics of the superconductors will be associated with the time-dependent electronic temperature $T_{e}$. This simple two-temperature scenario has been applied to describe the relaxation dynamics in HTSC, \cite{exp0} and will be applied to study the trARPES here. The dependence of $T_{e}$ on the time $t$ in the two-temperature model is not known \emph{a priori}, but may be determined from the experimental measurements or derived from the microscopic theory. For simplicity, we assume that the dominant process for the electronic system to dissipate energy is the pairwise recombination of QPs into Cooper pairs, and the Rothwarf-Taylor equation\cite{RT} for the total QP number still holds, which is supported by the previous THz conductivity measurement.\cite{exp1} Further exploiting the fact that the total QP density is approximately proportional to the square of electronic temperature $T_{e}$, then the equation of the electronic temperature $T_{e}$ is estimated as\cite{TeEq} \begin{eqnarray} \frac{dT_{e}^{2}}{dt}=-r(T_{e}^{4}-T_{l}^{4}).\label{decay} \end{eqnarray} Here, the pump laser is assumed to be applied at time $t=0$, which promotes the electronic temperature from $T_{l}$ to $T_{e}(0)$. For given $T_{l}$, higher laser fluence will drive the sample into higher excited state and lead to larger $T_{e}(0)$. Then Eq.~(\ref{decay}) describes the decay of the electronic temperature from $T_{e}(0)$ to $T_{l}$, and the parameter $r$ is proportional to the recombination rate of QPs\cite{TeEq} and will be determined by fitting to the experimental results later on. \begin{figure}[htbp] \includegraphics[scale=0.16,clip]{Fig1.eps} \caption{(Color online) (a) Fermi surface in a tight-binding model\cite{tb} studied in the present paper for optimally-doped Bi-2212. $\phi$ denotes the angle to describe momentum cutline in ARPES measurements. (b) Temperature dependence of SC gap, determined from Eq.~(\ref{gap}). (c) and (d): Calculated change in the line-momentum-integrated ARPES intensity, $\delta I(\omega)$, defined in Eq.~(\ref{deltaI}), between $T_{e}$=90K and $T_{l}$=20K, along the momentum cutlines $\phi=45^{\circ}$ and $\phi=31^{\circ}$, respectively. (e) and (f): Experimental result of the intensity change $\delta I(\omega)$ along the momentum cutlines $\phi=45^{\circ}$ and $\phi=31^{\circ}$, respectively, taken from Ref.~\onlinecite{sc3} for pump fluence 5~$\mu$J/cm$^{2}$. Blue color represents intensity gain and red color intensity loss. }\label{bandgap} \end{figure} \subsection{Band Structure, Gap Function, Spectral Function, ARPES Intensity} Now we consider the quasi-equilibrium electronic state characterized by temperature $T_{e}$ and the resulting consequences in trARPES measurements. Since the trARPES has been reported on the optimally-doped Bi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$(Bi-2212) samples,\cite{sc1,sc2,sc3} our calculations below will be specified to this compound although the main conclusions are expected to apply to general d-wave superconductors. We use a simple tight-binding model suggested in Ref.~\onlinecite{tb} to describe the normal state energy dispersion, with \begin{eqnarray} \epsilon_{\mathbf{k}}&=&c_{0}+\frac{c_{1}}{2}(\cos k_{x}+\cos k_{y})+c_{2}\cos k_{x}\cos k_{y}\nonumber\\ &-&\frac{c_{3}}{2}(\cos 2k_{x}+\cos 2k_{y})+c_{5}\cos 2k_{x}\cos 2k_{y}\nonumber\\ &+&\frac{c_{4}}{2}(\cos 2k_{x}\cos 2k_{y}+\cos k_{x}\cos k_{y}),\label{band} \end{eqnarray} where the coefficients are $c_{0}=0.1305,c_{1}=-0.5951,c_{2}=0.1636,c_{3}=-0.0519,c_{4}=-0.1117,c_{5}=0.0510$. The Fermi surface (FS) determined by $\epsilon_{\mathbf{k}}=0$ is shown in Fig.~\ref{bandgap}(a). A momentum cutline denoted by an angle $\phi$ for ARPES measurements is also shown. We consider a simple d-wave SC gap function for Bi-2212, $\Delta_{\mathbf{k}}=\frac{\Delta(T_{e})}{2}(\cos k_{x}-\cos k_{y})$, and further assume a simple BCS form for the temperature dependence of $\Delta(T_{e})$, which is determined by the self-consistent equation\cite{Tinkham} \begin{eqnarray} \frac{1}{N(0)V}=\int_{0}^{\hbar\omega_{c}}\frac{\text{tanh}[\frac{1}{2k_{b}T_{e}}(\xi^{2}+\Delta^{2})^{1/2}]}{(\xi^{2}+\Delta^{2})^{1/2}}d\xi.\label{gap} \end{eqnarray} Here, $k_{b}$ is the Boltzmann constant, $N(0)$ denotes the density of states at the Fermi level of one spin orientation, $V$ characterizes the strength of pair potential for the electrons within the cutoff energy $\hbar\omega_{c}$.\cite{Tinkham} With the weak-coupling condition $\hbar\omega_{c}\gg k_{b}T_{c}$ and the relation $\frac{\Delta(0)}{k_{b}T_{c}}=1.764$, where $T_{c}$ is the SC critical temperature, Eq.~(\ref{gap}) gives the universal function of $\Delta(T_{e})/\Delta(0)$ on $T_{e}/T_{c}$,\cite{Tinkham} as shown in Fig.~\ref{bandgap}(b). While this form is for conventional s-wave superconductor, the deviation from d-wave superconductors is not significant to change the qualitative results below. With the bare band dispersion relation $\epsilon_{\mathbf{k}}$ and the gap function $\Delta_{\mathbf{k}}$, the QP excitation energy $E_{\mathbf{k}}$ is determined as $E_{\mathbf{k}}=\sqrt{\epsilon_{\mathbf{k}}^{2}+\Delta_{\mathbf{k}}^{2}}$. The key quality measured in the trARPES is the spectral function $A(\omega,\mathbf{k})$ in terms of the energy distribution curves, which takes the form below with a Lorentzian lineshape\cite{spec} \begin{eqnarray} A(\omega,\mathbf{k})=\frac{1}{\pi}[\frac{\mu_{\mathbf{k}}^{2}\Gamma}{(\omega-E_{\mathbf{k}})^{2}+\Gamma^{2}}+\frac{\nu_{\mathbf{k}}^{2}\Gamma}{(\omega+E_{\mathbf{k}})^2+\Gamma^{2}}].\label{specfuc} \end{eqnarray} Here, $\Gamma$ characterizes the broadening of the spectral line and is assumed to be a constant. $\nu_{\mathbf{k}}^{2}=1-\mu_{\mathbf{k}}^{2}=\frac{1}{2}(1-\epsilon_{\mathbf{k}}/E_{\mathbf{k}})$ are the coherence factors. In our calculations, we set $T_{c}=90$~K, $\Delta(0)=0.03$~eV, and $\Gamma=0.01$~eV. When the electronic temperature $T_{e}$ changes, the spectrum function $A(\omega,\mathbf{k})$ also changes due to the temperature-dependent SC gap. Several physical qualities based on the spectral functions are usually given in the trARPES experiments, and will be calculated below in order to compare with the experimental results. One is the line-momentum-integrated ARPES intensity $I(\omega)$ (ARPES intensity hereafter) along a momentum cutline $L$ (the choice in our calculations is shown in Fig.~\ref{bandgap}(a)), \begin{eqnarray} I(\omega) =f(\omega) \int_{L}d\mathbf{k}A(\omega,\mathbf{k}),\label{inten} \end{eqnarray} where $f(\omega)$ is the Fermi-Dirac (FD) distribution function, which depends on the electronic temperature $T_{e}$. Note that $A(\omega,\mathbf{k})$ is also a function of $T_{e}$ via the SC gap function. In the two-temperature scenario, as $T_{e}$ decreases from $T_{e}(0)$ to the equilibrium temperature $T_{l}$, the SC gap increases and the thermal distribution of QPs is suppressed, then the number of excited QPs is reduced. This process is reflected in the time-evolution of the ARPES intensity $I(\omega)$.\cite{sc1,sc2,sc3} A more relevant quantity is the change of the ARPES intensity between the two temperatures $T_{e}$ and $T_{l}$, \begin{eqnarray} \delta I(\omega)&\equiv&I(\omega;T_{e})-I(\omega;T_{l}),\label{deltaI} \end{eqnarray} It is convenient to introduce the line-momentum-integrated density of states(DOS hereafter) along the momentum cutline $L$, $D_{L}(\omega)=\int_{L}d\mathbf{k}A(\omega,\mathbf{k})$, then $\delta I(\omega)$ is decomposed into two terms, \begin{eqnarray} \delta I(\omega)=D_{L}(\omega;T_{l})\delta f(\omega)+\delta D_{L}(\omega)f(\omega;T_{e}),\label{diffint} \end{eqnarray} where $\delta f(\omega)\equiv f(\omega;T_{e})-f(\omega;T_{l})$ is the change of the distribution function, and $\delta D_{L}(\omega)\equiv D_{L}(\omega;T_{e})-D_{L}(\omega;T_{l})$ is the change of the DOS along $L$. This decomposition is vital for us to understand the results below. In Fig.~\ref{bandgap}(c) and (d), we show the results of $\delta I(\omega)$ for the system for two sets of FS angles, i.e. a diagonal cutline $\phi=45^{\circ}$ (nodal cutline) and an off-nodal cutline $\phi=31^{\circ}$, respectively. The two temperatures are $T_{e}=90$K, and $T_{l}=20$K. $\delta I$ is strongly dependent on the cutline. For the nodal cutline, $\delta I$ is approximately antisymmetric with respect to $\omega$, similar to the change of the Fermi distribution function $\delta f(\omega)$. In particular, $\delta I =0$ at $\omega=0$. For the non-nodal cutline, the shape is far from the antisymmetric, and $\delta I(\omega=0)$ is finite and positive, and $\delta I=0$ occurs at $\omega < 0$. Our calculations are in good agreement with the experimental data,\cite{sc3} which are reproduced in Fig.~\ref{bandgap}(e) and (f) for comparison. The strong FS angle dependence of the ARPES intensity is associated with the symmetry of the SC gap function. For the nodal cutline of ($\phi=45^{\circ}$), the second term in Eq.~(\ref{diffint}) vanishes. Furthermore, $\delta f(-\omega)=-\delta f(\omega)$ and $\delta f(\omega)$ is of substantial value only within a window of $|\omega| < 3k_{b}T_{e}$. Since $D_{L}(\omega;T_{l})$ doesn't vary sharply near $\omega=0$, we have $\delta I(\omega)$ approximately antisymmetric as numerically shown in Fig.~\ref{bandgap}(c). For the off-nodal case, the second term in Eq.~(\ref{diffint}) plays an important role, and $\delta I(\omega)$ is no longer antisymmetric. At $\omega=0$, we have $\delta D_{L}(0)>0$ according to Eq.~(\ref{specfuc}) and $f(0)=\frac{1}{2}$, thus $\delta I(0)=\frac{1}{2}\delta D_{L}(0)>0$. Another important quality given in trARPES experiments is the angle-resolved QP density. Following the experiments,\cite{sc1,sc3} we define $I$ as the integral of the ARPES intensity $I(\omega)$ over $\omega$ above the Fermi energy along the momentum cutline $L$ at temperature $T_{l}$, and $\Delta I$ is the change of $I$ when the electronic temperature increased from $T_{l}$ to $T_{e}$, i.e. \begin{eqnarray} I=\int_{0}^{\infty}d\omega I(\omega),\quad\Delta I=\int_{0}^{\infty}d\omega \delta I(\omega).\label{IdI} \end{eqnarray} Below we analysis the dynamics of $\Delta I$ to further explain the trARPES experiments. \section{Calculation Results} \subsection{Angle Dependence of Photoexcited Quasiparticle Density} It has been observed that the photoexcited QP density is strongly dependent on the FS angle $\phi$ similar to the d-wave SC gap function.\cite{sc1} According to our discussions above, the electronic temperature has been increased from $T_{l}$ to $T_{e}(0)$ after the laser pumping, because the detailed electronic dynamics to reach such a quasi-equilibrium state has been washed out due to the large time resolution of the ARPES measurements. Then the SC gap is reduced and even closed if $T_{e}(0)\geq T_{c}$ and more QPs are also excited. Thus we can calculate the photoexcited QP density according to Eq.~(\ref{IdI}), and the results of $\Delta I/I$ for the temperatures $T_{e}(0)$=90K, 80K, 70K, 60K, 50K and $T_{l}$=20K are given in Fig.~\ref{IniInt}(a). The calculation results reproduce the angle-dependence relation observed in experiments,\cite{sc1} as shown in Fig.~\ref{IniInt}(c). Notice that higher pump fluence corresponds to higher electronic temperature $T_{e}(0)$. To understand the results, we recall that $\Delta I$ has both the contributions from thermal broadening and gap reduction according to Eq.~(\ref{diffint}). We denote $\Delta I\equiv\Delta I_{t}+\Delta I_{g}$, where $\Delta I_{t}=\int_{0}^{\infty}d\omega D_{L}(\omega;T_{l})\delta f(\omega),\quad \Delta I_{g}=\int_{0}^{\infty}d\omega \delta D_{L}(\omega)f(\omega;T_{e}).$ The angle dependence of $\Delta I_{t}/I$ is expected to be weak as long as $D_{L}(\omega,T_{l})$ doesn't vary sharply near the Fermi energy. Specially, if $D_{L}(\omega;T_{l})$ is approximated by its value $D_{L}(0;T_{l})$ at Fermi energy, $\Delta I_{t}/I$ will be independent on the FS angle $\phi$. While for $\Delta I_{g}/I$, its angle dependence comes from the change of DOS $\delta D_{L}(\omega)$ which is strongly dependent on the d-wave gap function. For the nodal case($\phi=45^{\circ}$), $\delta D_{L}(\omega)=0$ gives $\Delta I_{g}/I=0$; while in the off-nodal region, the gap reduction will increase the DOS near the Fermi energy and thus $\Delta I_{g}/I>0$. As an example, Fig.~\ref{IniInt}(b) shows the two contributions $\Delta I_{t}/I$ and $\Delta I_{g}/I$ to $\Delta I/I$ for $T_{e}(0)=90$K and $T_{l}=20$K. It is found that $\Delta I_{t}/I$ is only weakly dependent on the FS angle $\phi$, while $\Delta I_{g}/I$ strongly depends on $\phi$ in the similar way as the d-wave gap function, which supports our analysis. When the pump fluence is lower, the temperature $T_{e}(0)$ is closer to $T_{l}$, then the gap reduction becomes smaller, and the angle dependence of $\Delta I/I$ becomes weak. This tendency is shown in Fig.~\ref{IniInt}(a), and is also found in the trARPES experiments in Fig.~\ref{IniInt}(c). \begin{figure}[htbp] \includegraphics[scale=0.16,clip]{Fig2.eps} \caption{(Color online)(a) The dependence of relative ARPES intensity change $\Delta I/I$ on the Fermi surface angle $\phi$ for several temperatures $T_{e}$ with $T_{l}=20$K. (b) Thermal broadening contribution $\Delta I_{t}/I$ and gap reduction contribution $\Delta I_{g}/I$ to the relative ARPES intensity change $\Delta I/I$ for temperature $T_{e}=90$K and $T_{l}=20$K. (c) Experimental result for the photoexcited QP density, taken from Ref.~\onlinecite{sc1}. }.\label{IniInt} \end{figure} \subsection{Angle Dependence of Quasiparticle Decay Rate} We further analyze the angle-dependent decay rate of QPs. It has been observed that the QPs in the off-nodal region decay faster than the ones in the nodal region.\cite{sc3} In the two-temperature scenario, the ARPES measures the quasi-equilibrium electronic state characterized by the temperature $T_{e}(t)$. We first calculate the normalized $\Delta I$ for different temperatures at given FS angle $\phi$, and the results are shown in Fig.~\ref{TempScale}(a). Here, we have fixed $T_{l}=20$K, and varied the temperature $T_{e}$ from $90$K to $20$K, and then normalize $\Delta I$ by its value at $90$K. It is found that for given temperature $T_{e}$, the normalized $\Delta I$ decreases when the angle $\phi$ moves from the nodal region to the off-nodal region. Then with the help of Eq.~(\ref{decay}), the temperature dependence of the normalized $\Delta I$ in Fig.~\ref{TempScale}(a) is mapped to the corresponding time-dependence relation, as shown in Fig.~\ref{TempScale}(b). Here we have set $r=1.5\times10^{-4}~\text{K}^{-2}\text{ps}^{-1}$ to fit the experimental results. Our results here reproduce the observations that the decay of normalized $\Delta I/I$ depends on the FS angle\cite{sc3} shown in Fig.~\ref{TempScale}(c). \begin{figure}[htbp] \includegraphics[scale=0.16,clip]{Fig3.eps} \caption{(Color online)(a) The temperature dependence of normalized $\Delta I$ for several FS angles with initial electronic temperature $T_{e}(0)$=90K and lattice temperature $T_{l}$=20K. (b) Decay curves of normalized $\Delta I$ for several FS angles by mapping the curves in (a) to the time scale in the help of Eq.~(\ref{decay}). The fitting parameter $r=1.5\times10^{-4}\text{K}^{-2}\text{ps}^{-1}$. (c) Experimental decay curves of normalized $\Delta I$ at $\phi=45^{\circ}$ and $\phi=31^{\circ}$ respectively, taken from Ref.~\onlinecite{sc3}. }\label{TempScale} \end{figure} The results in Fig.~\ref{TempScale} can also be understood from the two contributions $\Delta I_{t}$ and $\Delta I_{g}$ in $\Delta I$. For the nodal case ($\phi=45^{\circ}$), we have $\Delta I_{g}=0$, while $\Delta I_{t}\sim D_{L}(0;T_{l})k_{b}(T_{e}-T_{l})$ if the DOS $D_{L}(\omega;T_{l})$ is approximated by its value at Fermi energy. This explains the nearly linear dependence of the normalized $\Delta I$ on the temperature for $\phi=45^{\circ}$ in Fig.~\ref{TempScale}(a). In the off-nodal case, $\Delta I_{g}$ due to the gap variation contributes to $\Delta I$ in addition to $\Delta I_{t}$. When the temperature $T_{e}$ decreases, the gap becomes larger and $\delta D_{L}(\omega)$ decreases. Thus in the off-nodal region the normalized $\Delta I$ deviates from the linear temperature-dependence relation and decays faster, as shown in Fig.~\ref{TempScale}(a). After mapping to the time scale, the normalized $\Delta I$ then shows the angle-dependent decay rate of QPs. \begin{figure}[htbp] \includegraphics[scale=0.16,clip]{Fig4.eps} \caption{(Color online)(a) and (b): Decay curves of normalized $\Delta I$ with different initial electronic temperature $T_{e}(0)$ and lattice temperature $T_{l}=20$K for $\phi=45^{\circ}$ and $\phi=31^{\circ}$ respectively. The fitting parameter $r=1.5\times10^{-4}\text{K}^{-2}\text{ps}^{-1}$ in Eq.~(\ref{decay}). (c) and (d) : Experimental decay curves of normalized $\Delta I$ with different pumping fluences for $\phi=45^{\circ}$ and $\phi=31^{\circ}$ respectively, taken from Ref.~\onlinecite{sc3}. Higher pumping fluence corresponds to higher initial electronic temperature $T_{e}(0)$.} \label{FluScale} \end{figure} \subsection{Fluence Dependence of Quasiparticle Decay Rate} It is also observed that the QPs at fixed angle relax faster if the sample is pumped by higher fluence laser.\cite{sc3} The reason however is not the same as the angle-dependence case discussed above, considering that the superconducting gap is not involved in the nodal direction ($\phi=45^{\circ}$). The nearly linear dependence of $\Delta I$ on the temperature $T_{e}$ shown in Fig.~\ref{TempScale}(a) implies that the decay of the normalized $\Delta I$ along the cutline $\phi=45^{\circ}$ directly reflects the decay of $T_{e}$. This fluence-dependent decay behavior can be explained by the recombination process of QPs into Cooper pairs,\cite{exp1} and thus justifies the assumed decay equation (\ref{decay}) for $T_{e}$. With Eq.~(\ref{decay}) and the parameter $r=1.5\times10^{-4}~\text{K}^{-2}\text{ps}^{-1}$, we got the decay of the normalized $\Delta I$ with different initial temperatures $T_{e}(0)$ for $\phi=45^{\circ}$ and $\phi=31^{\circ}$, as shown in Fig.~\ref{FluScale}(a) and (b). The calculation results reproduces the experimental observations that higher fluence induces faster QP decay rate, as shown in Fig.~\ref{FluScale}(c) and (d). Thus, unlike the angle-dependent decay rate of QPs which is associated with the d-wave gap dynamics, the fluence-dependent decay rate of QPs is due to the pairwise recombination of QPs into Cooper pairs. \section{Conclusion} In conclusion, we show that the main features of the trARPES observations for d-wave superconductors so far can be explained within a simple two-temperature scenario. In this picture, the effective electronic temperature affects both the thermal distribution of the quasiparticles and the superconducting gap. The angle dependence of the photoexcited quasiparticle density and the quasiparticle decay rate are associated with the d-wave gap dynamics, while the fluence-dependent quasiparticle decay rate is attributed to the pairwise recombination of QPs into Cooper pairs. Different from the original explanations of the experimental results,\cite{sc1,sc3} the two-temperature scenario here doesn't refer to the details of the microscopic scattering processes, which have been washed out due to the thermalization of e-e scattering in the time scale $\tau_{ee}$. Our results suggest that the phenomenological two-temperature model could be a good starting point to analyze the trARPES experiments before extracting other interesting microscopic dynamics processes. Furthermore, better time-resolution in experiment will be crucial for the development of more powerful trARPES technique. \begin{acknowledgements} We thank Jianqiao Meng for indicating us to the trARPES technique, and thank Weiqiang Chen, Zijian Yao, and Hongmin Jiang for helpful discussions. This work is supported by the Hong Kong grants of University Grant Council AoE/P-04/08 and GRC HKU707010, and by NSFC 11274269. \end{acknowledgements}
\section{Introduction} In recent years, we have witnessed the trend of using larger and larger neural network (NN) models to deliver improved accuracy and generalization in various machine learning tasks~\cite{devlin2018bert, fedus2021switch}. However, training these models requires a considerable amount of on-device GPU memory. Unfortunately, the increase of GPU memory capacity has been relatively slow, leading to a fundamental barrier to the development of large NN models. Activation Compressed Training (ACT) is a promising approach to reduce the memory footprint of models during training. As all layers' activations need to be kept in the memory for computing the gradients during training, ACT reduces memory consumption by compressing these saved activations. Prior work~\cite{chakrabarti2019backprop, fu2020don, chen2021actnn, evans2021ac} has shown the effectiveness of ACT by reducing activation footprint by up to $12\times$ with 2-bit activations. Although ACT has already demonstrated impressive compression capabilities, previous work on ACT is restricted to specific NN architectures. For example, ActNN~\cite{chen2021actnn} is a quantization framework for convolutional NNs only; Mesa~\cite{pan2021mesa} proposes a per head/layer quantization method for vision transformers; and AC-GC~\cite{evans2021ac} derives convergence error bound for different types of operators separately. Developing a generic ACT framework is challenging. Theoretically, convergence guarantees must be made without assumptions on the network architecture. Algorithmically, the framework should find effective compression strategies for all kinds of networks automatically. From the system perspective, the framework should support arbitrary NN operations, including user-defined ones. In this work, we propose GACT\xspace, a general framework for ACT that is agnostic to the NN architecture. Neither specialized mathematical derivations nor customized implementation is needed to support different operators. To enable this, we develop a general convergence theory by analyzing the stochastic gradient (SG) introduced by ACT. We show that the SG can be well approximated by a linearized version, which is unbiased to stochastic compressors. The variance of the linearized gradient has a particularly simple structure that allows a numerical algorithm to predict the variance given a compression strategy. Then, we generate the strategy by approximately solving an integer program. We implement our method as a library based on PyTorch that can be quickly integrated into real-world machine learning systems. The library also provides several optimization levels to explore the trade-off between memory and speed. We demonstrate the flexibility and efficiency of GACT\xspace on various tasks, including image classification, object detection, text, and graph node classification. Our evaluation shows that GACT\xspace can reduce activation memory by up to 8.1$\times$, enabling training with a 24.7$\times$ larger batch size on the same GPU. In sum, our main contributions are as follows: \begin{itemize} \itemsep0em \item We propose a general convergence theory for ACT. \item We develop an algorithm that automatically estimates the sensitivity of each compressed tensor and selects the optimal compression strategy. \item We build efficient implementation of GACT\xspace in PyTorch with an easy-to-use API that can also be combined with other memory-saving techniques seamlessly. \end{itemize} \section{Related Work}\label{sec:related} \textbf{Activation Compressed Training.} ACT has been applied to convolutional NNs using different compressors, such as quantizers~\cite{chakrabarti2019backprop, fu2020don,chen2021actnn}, JPEG~\cite{evans2020jpeg}, or scientific data compression algorithms~\cite{jin2021novel,evans2021ac}. ACT is also applied to transformers~\cite{pan2021mesa} and graph NNs~\cite{anonymous2022exact}. However, the existing theory for ACT~\cite{chakrabarti2019backprop,fu2020don,chen2021actnn,evans2021ac} relies on the case-by-case analysis of specific network operators, such as convolution, ReLU, and batch normalization. It also requires dedicated implementations for each operator. On the contrary, GACT\xspace focuses on the generality of activation compressed training, not a specific quantizer design, which is the main topic of previous work. Instead of assuming that the network is a stack of layers, GACT formulates the problem as a computational graph of operators. This is general enough to cover transformers~\cite{vaswani2017attention}, graph NNs~\cite{kipf2016semi}, second-order derivatives, and unknown future~architectures. \noindent{\textbf{Reduced Precision Training.}} Apart from ACT, reduced precision training~\cite{micikevicius2018mixed, wu2018training,wang2018training,banner2018scalable,chen2020statistical,sun2020ultra} performs calculations directly on low precision data, reducing the computation cost and memory footprint simultaneously. To achieve this, specialized kernels are used to calculate on low precision data. In contrast, ACT only considers storage, and it can thus use more flexible compression strategies and achieve a much better compression ratio with the same accuracy loss. \noindent{\textbf{Memory-Efficient Training.}} Gradient checkpointing~\cite{chen2016training, jain2019checkmate} trades computation for memory by dropping some of the activations in the forward pass from memory and recomputing them in the backward pass. Swapping~\cite{kirisame2020dynamic, huang2020swapadvisor, wang2018superneurons, peng2020capuchin} offloads activation or model parameters to an external memory (e.g., CPU memory). Recent work~\cite{beaumont2021efficient} explores the possibility of combining the gradient checkpointing and swapping. All these methods save memory by storing fewer tensors on the GPU. In contrast, GACT\xspace compresses the saved tensors and is complementary to these approaches. Moreover, the generality of GACT\xspace enables easy combination with these methods, which we explore in this paper. \section{Formulation} We first present the mathematical formulation of our activation compressed training (ACT) framework. As we would like to develop a general ACT algorithm, applicable to a wide range of NN architectures, we make minimal assumptions on our formulation. Throughout the paper, we define the variance of a vector $x$ as $\Var{x}=\E{\norm{x}^2}-\norm{\E{x}}^2$. \subsection{Activation Compressed Training} In this work, we abstract the forward propagation as two functions $\ell(x; \theta)$ and $h(x; \theta)$. Both take a datum $x$ and the model parameter $\theta$ as the input. The loss function $\ell(x; \theta)$ outputs the loss of the network $\theta$ on datum $x$. The context function $h(x; \theta)$ outputs tensors to be stored in the memory for computing the gradients, which are referred as the \emph{context}. Assume that the context consists of $L$ tensors, where each tensor $h^{(l)}(x; \theta)$ is represented by a flattened $D_l$-dimensional vector. Denote $h(x; \theta)=(h^{(l)}(x; \theta))_{l=1}^L$. Our notations are somewhat unconventional in the sense that we do not explicitly define each layer's activation. We do not even assume that there is a NN. It could be any computational graph that saves context tensors. Given a dataset $\mathcal X=\{x_n\}_{n=1}^N$, define the batch loss $\mathcal L(\theta):=\frac{1}{N}\sum_{n=1}^N \ell(x; \theta)$. The dataset can be equivalently represented as an empirical data distribution $p_{\mathcal X}(x):=\frac{1}{N}\sum_{n=1}^N \delta(x-x_n)$, where $\delta$ is the Dirac delta function. The batch loss can be written as $\mathcal L(\theta)=\mathbb E_{\mathcal X}[\ell(x; \theta)]$, where $\mathbb E_{\mathcal X}$ denotes for taking expectation over $p_{\mathcal X}$. The network is trained with stochastic gradient descent (SGD)~\cite{bottou2010large}. Starting from an initial model $\theta_0$, at the $t$-th iteration, SGD updates the model with: \begin{align} \theta_{t+1}\leftarrow \theta_t - \eta \nabla_{\theta} \ell(x; \theta_t),\label{eqn:sgd} \end{align} where $\eta$ is a learning rate, and the SG $\nabla_{\theta} \ell(x; \theta)$ is computed on a random datum $x\sim p_{\mathcal X}$. Notice that $\Es{\mathcal X}{\nabla_\theta \ell(x; \theta)}=\nabla_{\theta} \mathcal L(\theta)$, i.e., the SG is an unbiased estimator of the batch gradient $\nabla_{\theta} \mathcal L(\theta)$. Crucially, the SG can be written in the form $\nabla_{\theta} \ell(x; \theta_t)=g(h(x; \theta_t); \theta_t)$. In other words, the back propagation only depends on the forward propagation through the context $h(x; \theta_t)$. The entire context must be kept in memory for computing the gradients. The context dominates the memory consumption in many applications. ACT reduces the training memory footprint by compressing the context. Let $Q(h)$ be a compressor, which converts $h$ to compact formats while keeping $Q(h)\approx h$. Then, ACT computes the gradient with compressed context: \begin{align}\label{eqn:ac} \theta_{t+1}\leftarrow \theta_t - \eta g(Q(h(x; \theta_t)); \theta_t). \end{align} We refer to $g(Q(h(x; \theta_t); \theta_t)$ as the activation compressed (AC) gradient. ACT is significantly more memory efficient then the plain SGD, Eq.~(\ref{eqn:sgd}), since it only needs to store a compressed version of the context. Suppose the original context $h(x; \theta_t)$ consists of 32-bit floating point tensors, and $Q(\cdot)$ is a compressor which quantizes tensors to 2-bit integers, ACT will reduce the context memory by $16\times$. Fig.~\ref{fig:architecture} illustrates the computational graph of ACT with these notations. In the following presentation, we might denote $h(x, \theta)$ simply by $h$ when there is no confusion. \begin{figure}[t] \centering \includegraphics[width=\mywidth]{figures/architecture.pdf} \ifisarxiv \else \vspace{-2em} \fi \caption{\small The architecture of GACT\xspace.} \label{fig:architecture} \end{figure} \subsection{Convergence of ACT}\label{sec:convergence} ACT is a lossy approximation of SGD, as it uses an approximate gradient $g(Q(h); \theta)$. Therefore, some kind of theoretical guarantee is required for ACT to be useful. Fortunately, analyzing ACT is made significantly simpler by introducing an \emph{unbiased stochastic} compressor $Q(\cdot)$, such that $\Es{Q}{Q(x)}=x$ for any $x$. $\Es{Q}{\cdot}$ means taking expectation over the compressor. In this way, $g(Q(h); \theta)$ can be viewed as a stochastic estimator of the batch gradient $\nabla \mathcal L(\theta)$, but the randomness comes not only from the datum $x$ but also the compressor $Q(\cdot)$. Therefore, ACT is still an SGD algorithm. Standard analytical tools for SGD~\cite{bottou2018optimization} are applicable for studying ACT. SGD algorithms have particular good properties when the SG is unbiased. In our case, this means $\Es{Q}{g(Q(h);\theta)}=g(h;\theta)$. However, the SG is biased general, even when the stochastic compressor itself is unbiased.\footnote{Consider the example $g(h)=\mathbb I(h\ge 0.5)$, where $h\in [0, 1]$ and its AC gradient $g(Q(h))=\mathbb I(Q(h)\ge 0.5)$ with the compressor $Q(h)\sim \mathrm{Bernoulli}(h)$. Then, $\E{g(Q(h))}=P(Q(h)=1)=h\ne g(h).$ } The key technique in this work is to construct an unbiased approximation of the AC gradient by linearizing the gradient function $g(\cdot; \theta)$. Consider the first-order Taylor expansion of $g(\cdot; \theta)$ at $h$: \begin{align}\label{eqn:first-order} \hat g(Q(h); h, \theta):= g(h; \theta) + J(h, \theta) \Delta h, \end{align} where $J(h, \theta):=\frac{\partial g(h;\theta)}{\partial h}$ is a Jacobian matrix, $\Delta h:=Q(h)-h$ is the compression error. We further denote $\hat g_{x\theta}(Q(h); h):=\hat g(Q(h); h, \theta)\vert_{h=h(x; \theta)}$ and $J_{x\theta}(h):=J(h, \theta)\vert_{h=h(x; \theta)}$ for short. Since $\E{\Delta h(x; \theta)}=0$, $\hat g_{x\theta}(Q(h); h)$ is an unbiased SG, Furthermore, the approximation error is small: \begin{proposition}\label{prop:bias-order} Assuming that $g(h; \theta)$ is twice differentiable w.r.t. $h$, and the second order derivative is bounded, then \begin{align*}&\E{\norm{g(Q(h); \theta) - \hat g_{x\theta}(Q(h); h)}_2} = O(\Vars{Q}{\Delta h}).\end{align*} \end{proposition} Since $\Delta h$ itself is unbiased, $\Vars{Q}{\Delta h}=\Es{Q}{\norm{\Delta h}^2}$ is simply the expected compression error. Prop.~\ref{prop:bias-order} implies that the linearization error is bounded by the compression error. The linearized gradient $\hat g$ is accurate if the compression is accurate. Using $\hat g$ as a bridge, we arrive in the following convergence theorem: \begin{theorem}\label{thm:convergence} Assume that:\\ \noindent\textbf{A1.} $\mathcal L(\theta)$ is a continuous differentiable, $\nabla\mathcal L(\theta)$ is $\beta$-Lipschitz continuous.\\ \noindent\textbf{A2.} $\mathcal L(\theta)$ is bounded below by $\mathcal L_*$.\\ \noindent\textbf{A3.} $g(h; \theta)$ is differentiable w.r.t. $h$ and $\exists b>0$, s.t. $\forall \theta, \mathbb E\norm{g(Q(h(x; \theta)); \theta)-\hat g_{x\theta}(Q(h); h)}\le b$.\\ \noindent\textbf{A4.} $\exists \sigma^2>0$, s.t., $\forall\theta$, $\Var{\hat g_{x\theta}(Q(h); h)}\le \sigma^2$. \\ Then, for all $\eta < \frac{1}{2\beta}$, if we run ACT defined as Eq.~(\ref{eqn:ac}) for $T$ iterations, then we have \begin{align*} \min_{t=0, \dots, T-1}\E{\norm{\nabla \mathcal L(\theta_t)}^2} \le \frac{4(\mathcal L(\theta_0)-\mathcal L_*)}{\eta T}+3b^2+ \eta\beta\sigma^2 \end{align*} \end{theorem} \paragraph{Remark: } The analytical technique used in Thm.~1 is rather standard, see Thm.~4.8 in~\citet{bottou2018optimization}. However, we consider the variance term $\sigma^2$ of the \emph{linearized gradient}, rather than the SG itself. This formulation brings better analytical properties and an adaptive algorithm for determining the compression scheme, as we shall see soon in Sec.~4. The convergence of ACT is affected by both the linearization error (A3) and the variance of the unbiased gradient $\hat g(\cdot; \theta)$ (A4). The latter is characterized as: \begin{proposition}\label{prop:var-order} $ \Var{\hat g_{x\theta}(Q(h); h)} =\Vars{\mathcal X}{g(h; \theta)} + \mathbb E_{\mathcal X}\left[\Vars{Q} { \hat g_{x\theta}(Q(h); h) }\right], $ where the second term on the RHS equals to $ \mathbb E_{\mathcal X}\left[\Vars{Q} {J_{x\theta}(h)\Delta h}\right]=O\left(\Vars{Q}{\Delta h}\right).$ \end{proposition} Prop.~\ref{prop:var-order} separates the variance from different noise sources. $\Vars{\mathcal X}{g(h(x, \theta); \theta)}$ is the variance raised by random sampling of data (``sampling variance''). $\mathbb E_{\mathcal X}\left[\Vars{Q} {J_{x\theta}(h)\Delta h(x, \theta)}\right]$ is the variance raised by compression. Now, the convergence in Thm.~\ref{thm:convergence} is depicted by $3b^2+\eta\beta\sigma^2$. By Prop.~\ref{prop:bias-order}, $b^2=O(\Vars{Q}{\Delta h}^2)$. By Prop.~\ref{prop:var-order}, $\sigma^2 =O(1)+O(\Vars{Q}{\Delta h})$, since the sampling variance is not affected by compression. Therefore, when the compression is accurate ($\Delta h\rightarrow 0$), the impact of the linearization error is negligible, and the variance of the unbiased gradient dominates. ACT behaves as if the AC gradient is unbiased. \section{Adapting the Compression Rate} In a network, some context tensors (such as those stored for computing the cross entropy loss) are extremely sensitive, a small amount of compression would result in diverged training, while other tensors are quite robust to compression. Therefore, we must apply different amounts of compression for each context tensor. As a general framework, we have no prior knowledge of the users' model architecture, so we designed an algorithm to infer the sensitivity for each context tensor and determine their compression rate automatically. There is a tradeoff between the compression error and the storage requirement. We represent the storage requirement of the compressed context in \emph{bits per dimension}. We assume that $b_l$ bits/dim. are used for compression $h^{(l)}$, and $Q_{b_l}(h^{(l)})$ be the compression result. Let $b=(b_l)_{l=1}^L$ be a \emph{compression scheme}, $Q_b(h):=\{Q_{b_l}(h^{(l)})\}_{l=1}^L$, and $\Delta_b h = Q_b(h) - h$. \subsection{Structure of Variance} As discussed in Sec.~\ref{sec:convergence}, when the compression is relatively accurate, the variance plays the main role in determining the convergence. Therefore, we would like to investigate how the compression scheme would impact the variance. Formally, we are interested in: \begin{align*} V(b; h, \theta) := \Vars{Q} { \hat g(Q_b(h); h, \theta) }. \end{align*} Once $V(b, h; \theta)$ is known, we can find the minimum variance compression scheme under a given total bits budget $B$, by solving the integer programming problem: \begin{align} \min_b V(b; h(x; \theta), \theta),~~~\mbox{s.t. }\sum_{i=1}^L b_l D_l \le B,\label{eqn:ilp} \end{align} where $D_l$ is the dimensionality of $h^{(l)}$. To proceed, we need the following assumptions on the compressor $Q_b(\cdot)$:\\ \noindent\textbf{Assumption B1: } The compressed result is element-wise uncorrelated. That is, for any $i\ne j$, $\Cov{Q_b(h)_i}{Q_b(h)_j}=0$.\\ \noindent\textbf{Assumption B2: } For compressing $h^{(l)}(x; \theta)$ to $b_l$ bits/dim., the compression error can be written in the form $\Var{\Delta_{b_l} h^{(l)}(x; \theta)_j}\le R_{lj}(x; \theta)S(b_l)$, where $S(b_l)$ is a known function. This isolates the effect of $b_l$ through the unary factor $S(b_l)$.\\ Both assumptions can be achieved by a stochastic rounding quantizer~\cite{courbariaux2015binaryconnect}, where $R_{lj}(x; \theta)=\frac{1}{4}\left(\max_k h_k^{(l)}-\min_k h_k^{(l)}\right)^2$ and $S(b) = (2^{b_l}-1)^{-2}$. See Appendix~\ref{sec:appendix-var-structure} for the derivations. The following theorem reveals the structure of the variance: \begin{theorem}\label{thm:variance-structure} Under assumptions B1, B2, there exists a family of functions $\{c_l(h, \theta)\}_{l=1}^L$, such that the compression variance can be written in the form \begin{align}\label{eqn:var-decomposition} V(b; h, \theta)\le \sum_{l=1}^L c_l(h, \theta) S(b_l). \end{align} \end{theorem} \subsection{Computing Sensitivity} \label{sec:compute_sensitivity} Thm.~\ref{thm:variance-structure} reveals two good properties of the variance: (1) the impact of compressing different context tensors simply sums up, without affecting each other; and (2) the compression scheme only impacts the variance through $S(b_l)$. Both properties are brought about by linearization. Since $S(\cdot)$ is a known function, we only need to know $c_l(h, \theta)$ to solve problem Eq.~(\ref{eqn:ilp}). $c_l(h, \theta)$ can be understood as the sensitivity of the AC gradient to the compression of the $l$-th tensor. We can compute $c_l(h, \theta)$ numerically by leveraging the idempotence of compressing a tensor: \\ \noindent\textbf{Assumption B3: } If $h=Q(h^\prime)$ for some $h^\prime$ with non-zero probability, then $Q(h)=h$ and $\Vars{Q}{Q(h)}=0$. Let $Q^{\neg (l)}_b(h)=\{Q_{b_1}(h^{(1)}), \dots, h^{(l)}, \dots, Q_{b_L}(h^{(L)})\}$ be some tensors, where every tensor except $h^{(l)}$ is compressed. Plug $h=Q^{\neg (l)}_b(h)$ into Eq.~(\ref{eqn:var-decomposition}), and use B3, we have \begin{align*} V(b; Q^{\neg (l)}_b(h), \theta) \le c_l(Q^{\neg (l)}_b(h), \theta) S(b_l). \end{align*} The left hand side can be approximated by taking $\hat g(Q_b(h); h, \theta)\approx g(Q_b(h); \theta)$. Assume that $c_l(\cdot, \theta)$ is reasonably continuous, we have \begin{align*} c_l(h, \theta) \approx \Vars{Q} { g(Q_b(h); \theta) }\vert_{h=Q_b^{\neg (l)}(h)} / S(b_l). \end{align*} The variance can be replaced by empirical variance. \begin{algorithm}[t] \caption{Numerical algorithm for computing $c_l(h, \theta)$.}\label{alg:autoprec} \begin{algorithmic} \REQUIRE A gradient evaluation function $g(\cdot; \theta)$ \REQUIRE A series of $L+1$ random seeds $(r_l)_{l=1}^{L+1}$. \REQUIRE Any compression scheme $b=(b_l)_{l=1}^L$ \STATE $\forall l$, seed $Q^{(l)}$ with $r_l$ \STATE $g_0\leftarrow g(Q_b(h); \theta)$ \COMMENT{First iteration} \STATE $\forall l$, seed $Q^{(l)}$ with $r_l$ \STATE seed $Q^{(l)}$ with $r_{L+1}$ \STATE $g_1\leftarrow g(Q_b(h); \theta)$ \COMMENT{Second iteration, with another seed} \textbf{Return} $\frac{1}{2}\norm{g_0 - g_1}^2 / S(b_l)$ \end{algorithmic} \end{algorithm} Alg.~\ref{alg:autoprec} illustrates this idea. To compute $\Vars{Q} { g(Q_b(h); \theta) }$ at $h=Q_b^{\neg (l)}(h)$, we keep the random seeds fixed for all the compressors except the $l$-th one. We compute the empirical variance by two evaluations of $g(Q_b(h); \theta)$, which are two NN iterations (forward + backward propagation). Finally, we assume that $c(h, \theta)$ remains stable for different mini-batches $h$, and along the training trajectory $(\theta_t)$. Therefore, we maintain a $c_l$ for each tensor $l$, which is updated by periodically running Alg.~\ref{alg:autoprec}. Eq.~(\ref{eqn:ilp}) is approximately solved by the $O(L\log_2 L)$ greedy algorithm~\cite{chen2021actnn}. Another useful feature of this approach is predicting failure (in an \emph{a posteriori} manner). If the compression variance $V(b; h, \theta)$ is dominating the overall gradient variance $\Var{g(Q(h); \theta_t)} $, compression is adding too much noise to the gradient, and the convergence might be affected. The overall gradient variance can be computed by maintaining a running mean of the gradient. If $V(b; \theta)/\Var{\hat g(Q(h); \theta_t)}$ is too large, we can raise an alert to the user to increase the storage budget. \section{System Implementation} We implemented GACT\xspace as a lightweight library in PyTorch. Users can use GACT\xspace for any NN architecture with several lines of code change. GACT\xspace uses low-level PyTorch hooks to capture context tensors, so it supports arbitrary operators, including custom operators defined by users. We implemented efficient CUDA kernels to infer tensor sensitivity and to perform compression during run time. GACT\xspace uses the same per-group quantizer in ActNN~\cite{chen2021actnn} as the compressor. However, GACT\xspace differs from ActNN in several aspects. ActNN relies on manual analytical deduction to compute the sensitivity for different operators, while GACT\xspace infers tensor sensitivity automatically, as described in Sec.~\ref{sec:compute_sensitivity}. Moreover, ActNN performs layer-level quantization. It has to implement an activation compressed version for each operator and substitute operators during the training (e.g., replace {\code torch.nn.Conv2d} with {\code actnn.Conv2d}). In contrast, GACT\xspace runs at tensor level and uses a single hook interface to compress saved tensors for all operators. \subsection{General API} \ifisarxiv \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{figures/api.pdf} \caption{Usage example of GACT\xspace} \label{fig:act_layers} \end{figure} \else \begin{figure}[t] \centering \includegraphics[width=0.92\linewidth]{figures/api.pdf} \vspace{-1.4em} \caption{\small Usage example of GACT\xspace} \label{fig:act_layers} \end{figure} \fi As shown in Fig.~\ref{fig:act_layers}, the interface of GACT\xspace is straightforward and intuitive, requiring the user to (i) initialize the GACT\xspace controller and specify an optimization level (Line 5); (ii) install hooks (Line 6); and (iii) instruct GACT\xspace how to perform forward and backward propagation (Lines 13-17) and pass it as a function ({\code fwdbwdprop}) to the controller (Line 19). We require users to specify (iii) because GACT\xspace needs to numerically run the forward and backward pass to infer tensor sensitivity. Although {\code fwdbwdprop} is passed to the controller every iteration, it is only called internally every {\code adapt\_interval} iterations when tensor sensitivity changes. As shown in Sec.~\ref{sec:cpr_strategy}, tensor sensitivity stabilizes quickly after the first several epochs, {\code adapt\_interval} can thus be set to a large number, introducing negligible impact on training speed. \subsection{System Architecture} Fig.~\ref{fig:architecture} shows an overview of GACT\xspace. The GACT\xspace controller has three modules: Adaptivate Algorithm; Compressor; and Decompressor. In the forward pass, the controller uses PyTorch {\code pack\_hook} to capture all context tensors. Then Adaptive Algorithm infers tensor sensitivity based on gradients and assigns higher bits to more sensitive tensors, as described in Sec.~\ref{sec:compute_sensitivity}. The bits information is used to instruct Compressor to perform quantization. In the backward pass, Decompressor dequantizes context tensors and uses {\code unpack\_hook} to send the dequantized results back to the PyTorch's auto differentiation engine. The controller is also responsible for swapping quantized tensors to the CPU and prefetching them back during the backward propagation if swapping is enabled. \subsection{Identifying Tensors to Quantize} The {\code pack\_hook} and {\code unpack\_hook} process all types of context tensors, including activation, parameters trained by the optimizer, and training states such as running mean/variance used by batch normalization. To guarantee that only the activations are quantized, we filter out saved parameters by recording the data pointers of all the model parameters before training, and we skip quantization if the input tensor pointer exists in the parameter pointer set. Similarly, GACT\xspace does not quantize training states by checking if the input tensor requires gradients. However, using hooks blindly disables some memory-saving optimization. For example, in a transformer's self-attention layer, the keys, query, value tensors are all calculated from the same input tensor. The saved objects of the three operations thus all refer to the same tensor. In this case, PyTorch triggers the {\code pack\_hook} three times. If we perform quantization blindly, we waste computation resources and introduce extra memory consumption because the same underlying tensor is quantized and saved more than once. GACT\xspace avoids duplication by generating footprints for each input context tensor. We use the CUDA data pointer, sampled data points, and the tensor statistics (e.g., sum) as the footprint. GACT\xspace manages all quantized context tensors and uses the footprint to differentiate them. If a tensor is already quantized, GACT\xspace will skip quantization and return previous results directly. \subsection{Parallel Swap and Prefetch} To further reduce activation memory, we combine GACT\xspace with swapping. All compressed tensors are offloaded to the CPU during the forward pass and swapped back in the backward pass. Here, we replace the original tensor with quantized activation, as data movement is more expensive than computation. Swapping the original tensor saves the quantization overhead but adds more data movement cost between CPU and GPU. As shown in Sec.~\ref{sec:swap}, quantization overhead is much smaller than copying full-precision data to CPU in modern GPU architecture. Furthermore, we create two new streams (swap in/out) to parallelize the computation and swapping operation to reduce the swap overhead. The forward computation and swap-out process happen in parallel during the forward pass. During the backward pass, in each layer the swap-in stream is responsible for prefetching the compressed activation of the previous layer to avoid synchronization overhead. We leverage the CUDA event to ensure tasks in different streams are executed in the correct order. \subsection{Other Memory Optimizations} \textbf{Gradient checkpointing.} Gradient checkpointing~\cite{chen2016training} works by dividing the NN into segments. The algorithm only stores the inputs of each segment and recomputes the dropped activations segment by segment during backpropagation. The memory consumption is thus the cost of storing the inputs of all segments plus the maximum memory cost to backpropagate each segment. When combined with gradient checkpointing, GACT\xspace can reduce the memory consumption of both parts. GACT\xspace reduces the memory consumption of the first part by quantizing the segment inputs. Moreover, the activations saved during the recompute phase are also quantized, reducing the memory cost of the second part. Combining GACT\xspace with gradient checkpointing might introduce more training noise because the recompute starts from quantized segment inputs, making the forward pass of recompute phase not exact. However, in Sec.~\ref{sec:swap}, we show the noise introduced by forwarding from the quantized tensors is negligible. \ifisarxiv \noindent{\textbf{Memory efficient self-attention.}} \else \textbf{Memory efficient self-attention.} \fi When the batch size is very large, the single layer after dequantization occupies a large amount of memory and prevents the batch size from increasing further. We observe this problem in transformer-based models where self-attention has quadratic space complexity in terms of sequence length. To reduce the memory footprint of the self-attention layer, we implement the algorithm introduced in~\cite{rabe2021self} that achieves linear space complexity, and combines it with GACT\xspace. \subsection{Optimization level} \begin{table}[t] \centering \vspace{-1em} \caption{\small Optimization levels for GACT\xspace. \label{tab:opt_level}} \resizebox{0.9\mywidth}{!}{ \begin{tabular}{ccc} \toprule Level & Compression Strategy & Bits\\ \midrule L0 & Do not compress & 32\\ L1 & per-group quantization with auto-precision & 4\\ L2 & L1 + swapping/prefetching & 4 \\ CB1 & L1 + gradient checkpointing & 4\\ CB2 & CB1 + efficient self-attention & 4\\ \bottomrule \end{tabular}} \end{table} To exploit the trade-off between memory saving and training speed, GACT\xspace provides several optimization levels. Higher levels can save more memory but with more overhead. Tab.~\ref{tab:opt_level} lists these optimization levels. L1 uses per-group quantization with the adaptive algorithm. L2 combines per-group quantization with swapping and prefetching. For transformer-based models, CB1 combines GACT\xspace with gradient checkpointing. CB2 further reduces the peak memory by adding efficient self-attention to CB1. \section{Experiments} We first demonstrate the effectiveness of the GACT\xspace adaptive algorithm. We further apply GACT\xspace to a wide range of machine learning tasks, including image classification, object detection, text, and graph node classification. We compare the training accuracy and activation compression rate for full precision, adaptive 4/3/2 (using GACT\xspace to adaptively decide quantization bits with an average of 4/3/2 bit) and fix-4 bit (quantizating all tensors uniformly with 4 bits). Next, we study the trade-off between compression rate and training throughput and compare GACT\xspace with other state-of-the-art memory-saving methods. Lastly, we demonstrate the flexibility of GACT\xspace by exploring the possibility of combining it with other memory optimization methods (CB1, CB2 as listed in Table~\ref{tab:opt_level}). We use open-source model implementations for all tasks. \subsection{Compression Strategy} \label{sec:cpr_strategy} \ifisarxiv \begin{figure}[t] \centering \subfigure[Inferred per-tensor $c_l$ (line) and bits/dim. (bar) for VGG-11. Layers with * have a preceding ReLU layer with shared context. drop=dropout, loss=cross entropy~loss. ]{ \includegraphics[width=0.36\linewidth]{figures/bits.pdf} } \subfigure[Gradient variance.]{ \includegraphics[width=0.18\linewidth]{figures/var.pdf} } \subfigure[Evolution of the per-tensor sensitivity. Each line is $c_l$ for a tensor.]{ \includegraphics[width=0.18\linewidth]{figures/evo.pdf} } \caption{Effectiveness of the adaptive algorithm.} \label{fig:sensitivity} \end{figure} \else \begin{figure}[t] \centering \subfigure[Inferred per-tensor $c_l$ (line) and bits/dim. (bar) for VGG-11. Layers with * have a preceding ReLU layer with shared context. drop=dropout, loss=cross entropy~loss. ]{ \includegraphics[width=0.82\mywidth]{figures/bits.pdf} } \subfigure[Gradient variance.]{ \includegraphics[width=0.4\mywidth]{figures/var.pdf} } \subfigure[Evolution of the per-tensor sensitivity. Each line is $c_l$ for a tensor.]{ \includegraphics[width=0.4\mywidth]{figures/evo.pdf} } \vspace{-1em} \caption{Effectiveness of the adaptive algorithm.} \label{fig:sensitivity} \end{figure} \fi We first test the effectiveness of our adaptive compression rate algorithm for training VGG-11~\cite{simonyan2014very} on ImageNet. Fig.~\ref{fig:sensitivity}(a) plots the inferred per-tensor sensitivity $c_l$ and the corresponding optimal bits/dim. GACT\xspace assigns more bits to more sensitive layers. The context tensor saved by the cross-entropy loss operator is most sensitive. A small amount of compression leads to a huge gradient variance. This makes sense since the loss is the first operator to back-propagate through, where the error accumulates. Therefore, GACT\xspace assigns 32 bits/dim. for the tensors in the classification head. With the adaptive algorithm, GACT\xspace with an average of 4 bits/dim. achieves smaller gradient variance than uniformly assigning 8 bits/dim. for all the tensors, as shown in Fig.~\ref{fig:sensitivity}(b). Finally, Fig.~\ref{fig:sensitivity}(c) shows that the sensitivity $c_l(h; \theta_t)$ remains stable during training.Therefore, periodically updating $c_l$ at a large interval is reasonable, and this introduces negligible impact on training speed. \subsection{Optimization level} \begin{table}[t] \centering \caption{For classification, we train VGG11~\cite{simonyan2014very}, ResNet-50~\cite{he2016deep}, and Swin-Tiny~\cite{liu2021Swin} on ImageNet~\cite{imagenet_cvpr09}. For object detection, we train RetinaNet~\cite{lin2017focal}, Faster R-CNN~\cite{ren2015faster} on Coco~\cite{lin2014microsoft}. We report accuracy on validation sets (Div. indicates diverge) and the compression rate of context tensors (numbers in brackets) for both tasks. } \resizebox{\mywidth}{!}{ \begin{tabular}{c|c|c|c|c} \toprule Task & Model & FP32 & \shortstack{GACT\xspace \\Adapt 4bit (L1)} & \shortstack{GACT\xspace \\Adapt 2bit} \\ \midrule \multirow{2}{3em}{Cls.} & VGG11 & 68.75 & 68.77 (2.84$\times$) & 68.49 (3.34$\times$) \\ & ResNet-50 & 77.29 & 76.96 (6.69$\times$) & 76.13 (11.39$\times$)\\ & Swin-tiny & 81.18 & 80.92 (7.44$\times$) & 77.91 (13.73$\times$)\\ \hline \multirow{2}{3em}{Det.} & Faster RCNN & 37.4 & 37.0 (4.86$\times$) & 36.1 (6.81 $\times$)\\ & RetinaNet & 36.5 & 36.3 (3.11$\times$) & Div. \\ \bottomrule \end{tabular}} \label{tab:vision_acc} \end{table} We apply GACT\xspace on various computer vision tasks, including image classification and object detection, as shown in Fig.~\ref{tab:vision_acc}. We also vary the average bits used by the adaptive algorithm to explore the memory accuracy trade-off. On both tasks, GACT\xspace L1 achieves comparable ($<0.5\%$ accuracy drop) or even better results than the full precision training, while reducing activation memory by up to 7.44$\times$. Here, we list the accuracy of FP32 as the strongest accuracy baseline. For other lossy methods we consider in Sec.~\ref{sec:mem-speed}, the accuracy is no better than FP32, and we list their training accuracy in Appendix~\ref{sec:baseline-acc}. Notice that here GACT\xspace Adapt 2bit diverges on the detection task. This is because, as shown in Sec.\ref{sec:convergence}, although ACT has unbiased gradients, the compression error and learning rate affect the convergence. When using 2 bit, the compression error is large and the learning rate has to be reduced accordingly to guarantee convergence. However, we do not want to slow training by decreasing the learning rate. All experiments are run with the same learning rate as the full precision. Therefore when compression error is large, the training diverges. Furthermore, we observe that the memory reduction varies among networks because GACT\xspace does not quantize intermediate states, and the size of intermediate states differs between networks. For example, in VGG11, when the batch size is 128, GACT\xspace reduces the saved tensor size from 5889MB to 2080MB, among which 78\% (1494MB) is used to store the intermediate index for the max-pooling layer that is not quantized by GACT\xspace. Next, we demonstrate the flexibility of GACT\xspace by applying it to a wider variety of natural language processing (NLP) and graph machine learning (Graph) tasks. We run multiple seeds for each task, and we report the mean$\pm$std of accuracy/F1 across runs as shown in Tab.~\ref{tab:nlp-graph-acc-cpr}. We include the detailed experimental setup in Appendix \ref{sec:setup}. For both NLP and Graph tasks, GACT\xspace L1 achieves comparable training results with FP32, introducing less than 0.3\% accuracy/F1-score drop, while reducing activation memory by 4.18$\times$ to 7.93$\times$. Moreover, the results are stable across runs, introducing similar accuracy variance as FP32. We also show the training results of fix-4bit quantization, where all tensors are uniformly quantized with 4 bits. As shown in Tab.~\ref{tab:nlp-graph-acc-cpr}, fix-4 bit quantization causes significant accuracy/F1-score loss on various graph models. For Bert-large, fixed-4 bit quantization works fine because all the context tensors have similar sensitivity. On the other hand, GACT\xspace L1, using a similar amount of memory as always quantizing each layer to 4 bits, still performs on par with full precision training on all the models. This shows the necessity of using adaptive algorithms to assign bits based on tensor sensitivity for stabilized training. Moreover, for Bert-large and three graph models (GCN/GAT/GCNII), GACT\xspace converges and gives lossless results with 3 bits. Remarkably, across all the graph models, training with 2-bit GACT\xspace causes little accuracy loss ($<1\%$). This shows the robustness of our adaptive algorithm. \begin{table*}[t] \ifisarxiv \else \vspace{-1em} \fi \caption{Accuracy and activation compression rate for NLP and Graph tasks. Accuracy that drops $>$ 1\% is in italic font.} \label{tab:nlp-graph-acc-cpr} \centering \resizebox{0.96\linewidth}{!}{ \begin{tabular}{ c|c|c|c|c|c|c } \toprule Model & Dataset & FP32 & Fix 4bit & GACT\xspace Adapt 4bit (L1) & GACT\xspace Adapt 3bit & GACT\xspace Adapt 2bit \\ \midrule \multirow{4}{3em}{GCN} & Flickr & 51.17 $\pm$ 0.19 & 50.93 $\pm$ 0.16 (7.56$\times$) & 51.08 $\pm$ 0.18 (7.93$\times$) & 51.14 $\pm$ 0.18 (11.34$\times$) & 51.20 $\pm$ 0.18 (17.56$\times$)\\ & Reddit & 95.33 $\pm$ 0.07 & 94.42 $\pm$ 0.11 (7.55$\times$) & 95.32 $\pm$ 0.07 (7.90$\times$) & 95.31 $\pm$ 0.07 (9.70$\times$) & 95.34 $\pm$ 0.06 (13.68$\times$)\\ & Yelp & 39.86 $\pm$ 0.94 & 39.85 $\pm$ 1.22 (5.94$\times$) & 40.06 $\pm$ 0.74 (6.42$\times$) & 40.21 $\pm$ 0.82 (7.46$\times$) & 39.89 $\pm$ 1.45 (9.00$\times$)\\ & ogbn-arxiv & 71.51 $\pm$ 0.65 & \textit{68.61 $\pm$ 0.77 (7.54$\times$)} & 71.35 $\pm$ 0.36 (8.09$\times$) & 70.82 $\pm$ 0.95 (10.45$\times$) & 70.87 $\pm$ 0.66 (13.75$\times$)\\ \multirow{4}{3em}{GAT} & Flickr & 52.40 $\pm$ 0.28 & \textit{35.24 $\pm$ 11.90 (4.23$\times$)} & 52.26 $\pm$ 0.31 (4.34$\times$) & 51.68 $\pm$ 1.13 (5.04$\times$) & 51.62 $\pm$ 1.19 (5.46$\times$)\\ & Reddit & 95.95 $\pm$ 0.06 & \textit{59.37 $\pm$ 11.48 (4.12$\times$)} & 96.02 $\pm$ 0.09 (4.29$\times$) & 95.96 $\pm$ 0.06 (4.64$\times$) & 95.82 $\pm$ 0.06 (5.24$\times$)\\ & Yelp & 52.41 $\pm$ 0.69 & \textit{36.09 $\pm$ 13.70 (4.04$\times$)} & 52.18 $\pm$ 0.38 (4.18$\times$) & 51.63 $\pm$ 0.83 (4.53$\times$) & 51.15 $\pm$ 0.53 (5.24$\times$)\\ & ogbn-arxiv & 71.68 $\pm$ 0.54 & \textit{54.64 $\pm$ 5.62 (5.04$\times$)} & 71.80 $\pm$ 0.47 (5.09$\times$) & 71.47 $\pm$ 0.50 (6.14$\times$) & 71.21 $\pm$ 0.68 (6.98$\times$)\\ \multirow{4}{3em}{GCNII} & Flickr & 52.37 $\pm$ 0.16 & 52.28 $\pm$ 0.16 (4.84$\times$) & 52.31 $\pm$ 0.16 (4.91$\times$) & 52.36 $\pm$ 0.16 (5.54$\times$) & 52.23 $\pm$ 0.15 (6.44$\times$)\\ & Reddit & 96.32 $\pm$ 0.24 & \textit{86.50 $\pm$ 1.08 (4.51$\times$)} & 96.11 $\pm$ 0.22 (4.52$\times$) & 96.01 $\pm$ 0.33 (5.16$\times$) & 95.54 $\pm$ 0.29 (5.92$\times$)\\ & Yelp & 62.33 $\pm$ 0.20 & 62.21 $\pm$ 0.22 (5.26$\times$) & 62.28 $\pm$ 0.26 (5.34$\times$) & 62.53 $\pm$ 0.36 (6.29$\times$) & 62.33 $\pm$ 0.37 (7.28$\times$)\\ & ogbn-arxiv & 72.52 $\pm$ 0.12 & \textit{44.57 $\pm$ 5.01 (6.54$\times$)} & 72.28 $\pm$ 0.35 (6.74$\times$) & 72.22 $\pm$ 0.28 (7.98$\times$) & 71.74 $\pm$ 0.26 (10.24$\times$)\\ \hline \multirow{4}{3em}{Bert-large} & MNLI & 86.74 $\pm$ 0.24 & 85.98 $\pm$ 0.16 (7.55$\times$) & 86.61 $\pm$ 0.11 (7.38$\times$) & 86.68 $\pm$ 0.08 (9.13$\times$) & \textit{ 84.24 $\pm$ 0.74 (12.87$\times$)}\\ & SST-2 & 93.69 $\pm$ 0.30 & 93.46 $\pm$ 0.23 (7.55$\times$) & 93.54 $\pm$ 0.52 (7.30$\times$) & 93.20 $\pm$ 0.37 (9.05$\times$) & \textit{ 91.90 $\pm$ 1.04 (12.91$\times$)} \\ & MRPC & 88.20 $\pm$ 0.02 & 87.36 $\pm$ 0.19 (7.55$\times$) & 87.90 $\pm$ 0.10 (7.40$\times$) & 87.69 $\pm$ 0.07 (9.19$\times$) & \textit{ 82.54 $\pm$ 0.38 (12.91$\times$)}\\ & QNLI & 92.29 $\pm$ 0.14 & 92.34 $\pm$ 0.07 (7.55$\times$) & 92.44 $\pm$ 0.07 (7.42$\times$) & 92.43 $\pm$ 0.31 (9.19$\times$) & \textit{90.74 $\pm$ 0.13 (12.95$\times$)}\\ \bottomrule \end{tabular} } \ifisarxiv \else \vspace{-1em} \fi \end{table*} \subsection{Memory Saving and Computational Overhead} \label{sec:mem-speed} \iffalse \begin{figure}[t] \centering \subfigure{ \includegraphics[scale=0.74]{figures/mem_speed_resnet50.pdf} } \subfigure{ \includegraphics[scale=0.74]{figures/mem_speed_bert-large-cased.pdf} } \subfigure{ \includegraphics[scale=0.74]{figures/mem_speed_bert-base-cased.pdf} } \subfigure{ \includegraphics[scale=0.74]{figures/mem_speed_swin_tiny.pdf} } \caption{Training throughput vs batch size. Red cross mark means out-of-memory. The shaded yellow region denotes the possible batch sizes with full precision training given the memory budget. CKPT: Gradient checkpointing, L0/1/1: Different GACT\xspace optimization levels, CB1: Combine GACT\xspace with gradient checkpointing, CB2: Combine GACT\xspace with gradient checkpointing and efficient self-attention, ZeroOff: ZeRO-Offload.} \label{fig:mem_speed} \end{figure} \fi \begin{figure}[t] \centering \begingroup \setlength{\tabcolsep}{0pt} \renewcommand{\arraystretch}{1} \scriptsize \begin{tabular}{m{0.6cm}m{9cm}} \begin{minipage}{\linewidth}(a)\end{minipage} &\includegraphics[width=.85\linewidth]{figures/mem_speed_resnet50.pdf}\\ \begin{minipage}{\linewidth}(b)\end{minipage} &\includegraphics[width=.85\linewidth]{figures/mem_speed_bert-large-cased.pdf}\\ \begin{minipage}{\linewidth}(c)\end{minipage} &\includegraphics[width=.85\linewidth]{figures/mem_speed_swin_tiny.pdf}\\ \end{tabular} \endgroup \vspace{-1.5em} \caption{Training throughput vs batch size. Red cross mark means out-of-memory. The shaded yellow region denotes the batch sizes with full precision training given the memory budget. CKPT: Gradient checkpointing, ZeroOff: ZeRO-Offload. } \label{fig:throughput_batch_size} \end{figure} \textbf{Settings and baselines.} We implement the benchmark with PyTorch 1.10 and measure the memory saving and overhead of GACT\xspace on an AWS g4dn.4xlarge instance, which has a 16GB NVIDIA T4 GPU and 64GB CPU memory. On ResNet-50, we compare with ActNN~\cite{chen2021actnn}, a dedicated quantization framework for convolutional NNs, and DTR~\cite{kirisame2020dynamic}, a state-of-the-art rematerialization method for dynamic graphs. ``swap'' is a simple swapping strategy that swaps all activations to the CPU. For Bert-large, we also show the results on Mesa ~\cite{pan2021mesa}, a memory-saving resource-efficient training framework for transformers, and ZeRO-Offload~\cite{ren2021zero}, a highly optimized system for training large-scale language models. Gradient checkpointing uses the default checkpointing policy provided by the transformer library~\cite{wolf-etal-2020-transformers}, where only the input to each transformer block is saved before the backward pass. On Swin-tiny, we only include Mesa and swap because other baselines lack the support for this network. \textbf{Results.} We compare the training throughput of GACT\xspace against other memory saving systems in Fig.~\ref{fig:throughput_batch_size}. On ResNet-50, GACT\xspace achieves similar throughput as ActNN (ActNN optimization L5 is not listed because it optimizes PyTorch memory allocation, which is unrelated to quantization and can also be applied to GACT\xspace), but ActNN enables training with a larger batch size. This is expected because ActNN implements efficient, customized layers for different operators in convolutional NNs. For Bert-large, Zero-offload fails quickly because it only offloads optimizer states that occupy a small portion of total memory to CPU. GACT\xspace L1 outperforms Mesa because Mesa only compresses tensors to 8 bit. When the batch is bigger, the activation size of each segment becomes the memory bottleneck and prevents gradient checkpointing from increasing the batch size. Moreover, combining GACT\xspace with gradient checkpointing and efficient self-attention further reduces the peak memory, increasing the batch size by up to 24.7$\times$. Meanwhile, it introduces a small throughput overhead compared with the original gradient checkpointing. Across all the network architectures, GACT\xspace enables training with a 4.2$\times$ to 24.9$\times$ larger batch size under the same memory budget. \textbf{Network scaling.} With GACT\xspace, we can construct larger models or train with a higher image resolution. Tab.~\ref{tab:scale} compares the largest model we can train against full precision. With the same batch size and memory budget, GACT\xspace can scale a ResNet-152 to 7.0$\times$ deeper, 3.6$\times$ wider or 3.0$\times$ higher resolution. Similarly, Bert-large can be scaled to 2.0$\times$ deeper or 1.6$\times$ wider. In GCN, GACT\xspace enables training 10.0$\times$ deeper and 1.7$\times$ wider network. Overall, GACT\xspace maintains 75\% - 136\% original training throughput. \begin{table}[t] \caption{Largest models GACT\xspace can train with 16G GPU memory. In ResNet (batch size=64), D (depth): number of layers, W (width): base width of the bottleneck block, R (resolution): width and height of input images. In Bert-large (batch size=16) and GCN, D (depth): number of transformer/gcn blocks, W (width): hidden size.} \label{tab:scale} \centering \resizebox{\mywidth}{!}{% \begin{tabular}{cc|ccc|ccc} \toprule &\multirow{2}{3em}{Dim} & \multicolumn{3}{|c|}{Maximum Value} & \multicolumn{3}{|c}{ Throughput (TFLOPS)} \\ & & FP & L1 & L2 & FP & L1 & L2 \\ \midrule \multirow{3}{2em}{ResNet-152} & D & 160 & 460 & 1124 & 0.43 & 0.47 & 0.41 \\ & W & 88 & 304 & 320 & 0.44 & 0.89 & 0.6 \\ & R & 232 & 548 & 716 & 0.41 & 0.39 & 0.44 \\ \hline \multirow{2}{2em}{Bert-large} & D & 32 & 56 & 64 & 0.67 & 0.56 & 0.53 \\ & W & 1280 & 1488 & 2032 & 0.68 & 0.61 & 0.60 \\ \hline \multirow{2}{2em}{GCN} & D & 24 & 152 & 240 & 0.20 & 0.14 & 0.15 \\ & W & 2464 & 3948 & 4244 & 0.36 & 0.38 & 0.40 \\ \bottomrule \end{tabular}% } \end{table} \vspace{-1em} \subsection{Other Optimizations} \label{sec:swap} \begin{table}[t] \caption{Swap and prefetch speed/memory on Bert-large.} \label{tab:swap} \centering \resizebox{\mywidth}{!}{% \begin{tabular}{c|c|c|c} \toprule Algorithm & \shortstack{Speed\\(sequence/s)} & \shortstack{Peak Mem.\\ (MB)} & \shortstack{Total Mem.\\ (MB)} \\ \midrule FP32 & 16.41 & 9573 & 9527 \\ FP32 + swap & 6.02 & 5215 & 5093\\ GACT\xspace swap & 12.95 & 5426 & 5325 \\ GACT\xspace swap + prefetch & 14.02 & 5426 & 5324 \\ \bottomrule \end{tabular}% } \end{table} We evaluate the idea of combining GACT\xspace with swapping on Bert-large-cased. As shown in Tab.~\ref{tab:swap}, swapping compressed tensors is faster than swapping the original ones because communication between CPU and GPU is more time-consuming than computation. Combining GACT\xspace with swapping increases training speed by up to 2.3$\times$. Notice here that the peak memory use of ``GACT\xspace swap'' is slightly higher than ``FP32 + swap'' because GACT\xspace does not quantize and swap intermediate states such as running mean/var of BatchNorm layer. Moreover, prefetch increases the speed by about 7\% with negligible memory overhead. \begin{table} \caption{Accuracy of Bert-large-cased on SST-2 and QNLI datasets} \label{tab:acc-ckpt} \centering \resizebox{0.9\mywidth}{!}{% \begin{tabular}{c c c|c c c} \toprule Algorithm & SST-2 & QNLI & Algorithm & SST-2 & QNLI\\ \midrule FP32 & 93.58 & 92.42 & CB1 & 93.81 & 92.26\\ \bottomrule \end{tabular}% } \end{table} \begin{table} \caption{Memory use of different algorithms on Bert-large. AM1: Activation size before backward, AM2: Activation size after reforwading the first transformer block. When batch size = 288, L0 runs out of memory, and therefore it is not listed below. } \label{tab:mem-ckpt} \centering \resizebox{0.8\mywidth}{!}{% \begin{tabular}{c|c|c|c|c} \toprule Batch Size & Algorithm & \shortstack{AM1\\(MB)} & \shortstack{AM2\\(MB)} & \shortstack{Peak Mem.\\(MB)} \\ \midrule \multirow{4}{3em}{16} & L0 & 4434 & - & 9573 \\ & FP32 + CKPT & 210 & 394 & 5541 \\ & CB1 & 37 & 99 & 5286 \\ & CB2 & 31 & 79 & 5269 \\ \hline \multirow{4}{3em}{288} & FP32 + CKPT & 3783 & 7092 & 12885 \\ & CB1 & 515 & 1497 & 8251 \\ & CB2 & 486 & 1307 & 8102 \\ \bottomrule \end{tabular}% } \end{table} We next demonstrate combining GACT\xspace with gradient checkpointing (CB1). Gradient checkpointing is performed at the beginning of each transformer block, thus avoiding saving tensors generated within the block. We then apply GACT\xspace with gradient checkpointing, where the saved tensors are quantized with 4 bits. As shown in Tab.~\ref{tab:acc-ckpt}, the accuracy is unaffected. We also compare the activation memory and peak memory of CB1 and CB2 in Tab.~\ref{tab:mem-ckpt}. AM2 denotes the peak activation memory, which is the size of saved tensors after reforwarding the first transformer block. When batch size = 288, compared with gradient checkpointing on full precision (FP32), CB1 and CB2 reduce the peak activation size by 4.7$\times$ and 5.4$\times$ respectively. \section{Proof of Theorems} \subsection{Theorem 1: Convergence of ACT} Assume that:\\ \noindent\textbf{A1.} $\mathcal L(\theta)$ is a continuous differentiable, $\nabla\mathcal L(\theta)$ is $\beta$-Lipschitz continuous. .\\ \noindent\textbf{A2.} $\mathcal L(\theta)$ is bounded below by $\mathcal L_*$.\\ \noindent\textbf{A3.} $g(h; \theta)$ is differentiable w.r.t. $h$ and $\exists b>0$, s.t. $\forall \theta, \mathbb E\norm{g(Q(h(x, \theta)); \theta)-\hat g(h(x, \theta); \theta)}\le b$.\\ \noindent\textbf{A4.} $\exists \sigma^2>0$, s.t., $\forall\theta$, $\Var{\hat g(h(x, \theta)}\le \sigma^2$. \\ Then, for all $\eta < \frac{1}{2\beta}$, if we run ACT defined as Eq.~(\ref{eqn:ac}) for $T$ iterations, then we have \begin{align*} \min_{t=0, \dots, T-1}\E{\norm{\nabla \mathcal L(\theta_t)}^2} \le \frac{4(\mathcal L(\theta_0)-\mathcal L_*)}{\eta T}+3b^2+ \eta\beta\sigma^2 \end{align*} \begin{proof} Denote $m:=\nabla_\theta \mathcal L(\theta_t)$, $\epsilon:=\hat g(h(x, \theta_t); \theta_t)-m$, $d:=g(Q(h(x;\theta_t));\theta_t)-\hat g(h(x, \theta_t); \theta_t)$. Then, by A3 and A4, we have \begin{align} \E{\epsilon}&= \E{g(h(x, \theta_t); \theta_t) - \nabla_\theta \mathcal L(\theta_t)}+ \E{\iprod{J(x, \theta_t), \Delta Q(h(x, \theta_t))} }\nonumber\\ &=\iprod{J(x, \theta_t), \E{\Delta Q(h(x, \theta_t))}}=0. \label{eqn:epsilon}\\ \E{\norm{\epsilon}^2}&=\norm{\E{\epsilon}}^2 +\Var{\epsilon}=\Var{\hat g(h(x, \theta_t); \theta_t)}\le \sigma^2.\label{eqn:epsilon-sqr}\\ \E{\norm{d}}&\le b\label{eqn:d}. \end{align} By the definitions, the ACT dynamics can be written as \begin{align*} \theta_{t+1}\leftarrow \theta_t - \eta(m+d+\epsilon). \end{align*} By A1, we have \begin{align} \mathcal L(\theta_{t+1})\le \mathcal L(\theta_t) - \eta \iprod{m, m+d+\epsilon} + \frac{\beta\eta^2}{2}\norm{m+d+\epsilon}^2.\label{eqn:smoothness} \end{align} By Eq.s~(\ref{eqn:epsilon},\ref{eqn:d}) \begin{align} \E{\iprod{m, m+d+\epsilon}}\ge\norm{m}^2 - \norm{m}\norm{d}+ \iprod{m, \E{\epsilon}}\ge\norm{m}^2-\norm{m}b .\label{eqn:first-order-ineq} \end{align} By Eq.s~(\ref{eqn:epsilon},\ref{eqn:epsilon-sqr},\ref{eqn:d}), and $\norm{x+y}^2\le 2\norm{x}^2+2\norm{y}^2$, \begin{align} \E{\norm{m+d+\epsilon}^2}=\E{\norm{m+d}^2}+\Var{\epsilon}\le 2\E{\norm{m}}^2+2\E{\norm{d}}^2+\Var{\epsilon}=2\E{\norm{m}}^2+2b^2+\sigma^2.\label{eqn:second-order} \end{align} Taking expectation on both sides of Eq.~(\ref{eqn:smoothness}), plug in Eq.s~(\ref{eqn:first-order-ineq}, \ref{eqn:second-order}), and use $\eta<\frac{1}{2\beta}$, we have \begin{align*} \E{\mathcal L(\theta_{t+1})}\le& \mathcal L(\theta_t) - \eta (\norm{m}^2-\norm{m}b) + \frac{\beta\eta^2}{2}(2\E{\norm{m}}^2+2b^2+\sigma^2).\\ =&\mathcal L(\theta_t) - (\eta-\beta\eta^2)\norm{m}^2 +\eta \norm{m}b+\frac{\beta\eta^2}{2}(2b^2+\sigma^2)\\ =&\mathcal L(\theta_t) - \frac{\eta}{2}\norm{m}^2 +\eta \norm{m}b+\frac{\beta\eta^2}{2}(2b^2+\sigma^2). \end{align*} Completing the squares, \begin{align*} \E{\mathcal L(\theta_{t+1})}\le \mathcal L(\theta_t) - \frac{\eta}{2}(\norm{m}-b)^2 +\frac{\beta\eta^2}{2}(2b^2+\sigma^2). \end{align*} Take expectation on both sides and sum up for $t={0, \dots, T-1}$, \begin{align*} \E{\mathcal L(\theta_T)} - \mathcal L(\theta_0) \le -\frac{\eta}{2}\sum_{t=0}^{T-1}\mathbb E\left(\norm{\nabla \mathcal L(\theta_t)}-b\right)^2 +\frac{\beta\eta^2T}{2}(2b^2+\sigma^2). \end{align*} Reorganize the terms, \begin{align*} \Es{t}{\mathbb E\left(\norm{\nabla \mathcal L(\theta_t)}-b\right)^2 } \le \frac{2(\mathcal L(\theta_0)-\mathcal L(\theta_T))}{\eta T}+\eta\beta(2b^2+\sigma^2). \end{align*} Let $t_* = \operatornamewithlimits{argmin}_t \E{\norm{\nabla \mathcal L(\theta_t)}}$, and use A1, we have \begin{align*} \mathbb E\left(\norm{\nabla\mathcal L(\theta_{t_*})}-b\right)^2\le \frac{2(\mathcal L(\theta_0)-\mathcal L_*)}{\eta T}+\eta\beta(2b^2+\sigma^2). \end{align*} Use $(a+b)^2\le 2a^2 + 2b^2$, we have \begin{align*} \E{\norm{\nabla\mathcal L(\theta_{t_*})}^2}\le \frac{4(\mathcal L(\theta_0)-\mathcal L_*)}{\eta T}+(2\beta\eta+2)b^2+ \eta\beta\sigma^2 \le \frac{4(\mathcal L(\theta_0)-\mathcal L_*)}{\eta T}+3b^2+ \eta\beta\sigma^2. \end{align*} \end{proof} \subsection{Proposition 1: The Linearization Error} \begin{proof} Consider the gradient function $g(Q(h(x; \theta); \theta))$, whose output is a $P$-dimensional vector. Since it is twice differentiable, we construct the Taylor's expansion at $h(x; \theta)$ with Lagrange remainder: \begin{align*} \exists H_1, \dots, H_P, \mbox{s.t., }\forall i,~~ g_i(Q(h(x; \theta)); \theta) = g_i(h(x, \theta); \theta) + J_i(x, \theta)\Delta h(x, \theta) +\Delta h(x, \theta)^\top H_i \Delta h(x, \theta), \end{align*} where $J_i(h(x; \theta), \theta):=\frac{\partial g_i(h(x; \theta); \theta)}{\partial h}$. By the assumption, there exists $P>0$, such that the linearization error is \begin{align*} \norm{g(Q(h(x; \theta)); \theta) - \hat g(h(x; \theta); h(x; \theta), \theta)}_1 =\sum_{i=1}^P \Delta h(x, \theta)^\top H_i \Delta h(x, \theta) \le \gamma P \norm{\Delta h(x, \theta)}^2. \end{align*} Taking expectation, \begin{align*} &\E{\norm{g(Q(h(x; \theta)); h(x; \theta), \theta) - \hat g(h(x; \theta); \theta)}_2}\le \E{\norm{g(Q(h(x; \theta)); \theta) - \hat g(h(x; \theta); h(x; \theta), \theta)}_1}\\ &\le \gamma P \Var{\Delta h(x, \theta)} =O(\Var{\Delta h(x, \theta)}) \end{align*} \end{proof} \subsection{Proposition 2: The Order of the Variance} \newtheorem{innercustomthm}{Proposition} \newenvironment{customthm}[1] {\renewcommand\theinnercustomthm{#1}\innercustomthm} {\endinnercustomthm} The following proposition is convenient for isolating the different noise sources. \begin{customthm}{A} (Law of Total Variance) $$\Var{X}=\E{\Varcond{X}{Y}} + \Var{\Econd{X}{Y}}.$$ \end{customthm} \begin{proof} By definition \begin{align*} \Var{\hat g(h(x; \theta_t); h(x; \theta), \theta_t)} =\Var{g(h(x, \theta); \theta)} + \Var{J(h(x; \theta), \theta)\Delta h(x, \theta)}, \end{align*} where $\Var{g(h(x, \theta); \theta)}$ is the noise introduced by subsampling the data $x$. By law of total variance, \begin{align*} \Var{J(h(x; \theta), \theta)\Delta h(x, \theta)}= \mathbb E_{\mathcal X}\left[\Vars{Q} {J(h(x; \theta); \theta_t)\Delta h(x, \theta)} \right]+ \underbrace{ \Vars{\mathcal X}{\Es{Q}{J(h(x; \theta); \theta_t)\Delta h(x, \theta)}}}_{=0}, \end{align*} where \begin{align*} \Vars{Q}{J(h(x; \theta); \theta_t)\Delta h(x, \theta)} =&\Es{Q}{\norm{J(h(x; \theta); \theta_t)\Delta h(x, \theta)}^2} \le \Es{Q}{\norm{J(h(x; \theta); \theta_t)}^2\norm{\Delta h(x, \theta)}^2} \\=& \norm{J(h(x; \theta); \theta_t)}^2 \Es{Q}{\norm{\Delta h(x, \theta)}^2} = O\left(\Var{\Delta h(x, \theta)}\right). \end{align*} \end{proof} \subsection{Proposition 3: The Structure of the Variance}\label{sec:appendix-var-structure} Before investigating the structure of $\Vars{Q} {J(x; \theta_t)\Delta h(x, \theta)}$, let's do some recap: the parameter $\theta_t$ is a $P$-dimensional vector; the context difference $\Delta h(x, \theta)$ is a $D$-dimensional vector, and $J(x; \theta_t)$ is a $P\times D$ matrix. Recall that $\Delta h(x, \theta)$ is the concatenation of $L$-vectors, $\Delta h^{(l)}(x, \theta)$, and let $J^{(l)}(x, \theta):=\frac{\partial g}{\partial h^{(l)}}g\left((h^{(l)}(x; \theta))_{l=1}^L, \theta\right)$, which is a $P\times D_l$ matrix. Furthermore, let $h^{(l)}_j(x, \theta)$ be the $j$-th dimension, and $J^{(l)}_j(x, \theta)$ be its $j$-th column. To proceed, we need to make the following assumptions to the compressor $Q(\cdot): \mathbb R^{D}\rightarrow \mathbb R^{D}$:\\ \noindent\textbf{B1: } The compressed result is element-wise uncorrelated. That is, for any $i\ne j$, $\Cov{Q(h)_i}{Q(h)_j}=0$.\\ \noindent\textbf{B2: } For compressing a vector $h$ to $b$ bits, the compression variance of each dimension can be written in the form $\Var{Q(h)_j}\le R_j(h)S(b)$, where $S(\cdot)$ is a known function. Both assumptions can be achieved by a stochastic rounding~\cite{courbariaux2015binaryconnect} quantizer, where \begin{align*} Q(h)_j = \begin{cases} T_{h, b}^{-1}\left(\ceil{T_{h, b}(h_j)}\right) & \mbox{w.p. } T_{h, b}(h_j)-\floor{T_{h, b}(h_j)} \\ T_{h, b}^{-1}\left(\floor{T_{h, b}(h_j)}\right) & \mbox{otherwise}\\ \end{cases}, \end{align*} where $T_{h, b}(h_j)=(2^b-1)\frac{h_j - \min_j h}{\max_j h-\min_j h}$. Since each dimension is quantized independently, B1 is met. Moreover, \begin{align*} \Var{Q(h)_j}\le \frac{1}{4} \left(\frac{\max_j h-\min_j h}{(h_j - \min_j h)}\right)^2 (2^b-1)^{-2}=R_j(h) S(b), \end{align*} where \begin{align*} R_j(h) = \frac{1}{4} \left(\frac{\max_j h-\min_j h}{(h_j - \min_j h)}\right)^2,~~~ S(b) = (2^b-1)^{-2}. \end{align*} \begin{proof} By definition, \begin{align*} J(h; \theta)\Delta h = \sum_{l=1}^L \sum_{j=1}^{D_l} J^{(l)}_j(h; \theta_t) \Delta h^{(l)}_j. \end{align*} Using Assumption B1, we have \begin{align*} \Vars{Q}{J(h; \theta)\Delta h} &= \Es{Q}{\norm{\sum_{l=1}^L \sum_{j=1}^{D_l} J^{(l)}_j(h; \theta_t) \Delta h^{(l)}_j}^2}\\ &=\sum_{l=1}^L \sum_{j=1}^{D_l} \Es{Q}{\norm{J^{(l)}_j(h; \theta_t) \Delta h^{(l)}_j}^2}.\\ &=\sum_{l=1}^L \sum_{j=1}^{D_l} \norm{J^{(l)}_j(h; \theta_t)}^2 \Vars{Q}{\Delta h^{(l)}_j} \end{align*} Using Assumption B2, we have \begin{align*} \Vars{Q}{J(h; \theta)\Delta h} &\le\sum_{l=1}^L \sum_{j=1}^{D_l} \norm{J^{(l)}_j(h; \theta_t)}^2 R_l(h) S(b_l) =\sum_{l=1}^L c_l(h, \theta) S(b_l), \end{align*} where $c_l(\theta, h) :=R_l(h) \norm{ J^{(l)}(h; \theta_t)}^2_F$. \end{proof} \section{Training Accuracy of Baselines} \label{sec:baseline-acc} For all the baselines we compared in Sec.~\ref{sec:mem-speed}, only ActNN, Mesa, and ZeRO-Offload are lossy methods. All other methods are lossless and have the same training accuracy as FP32. For ResNet-50 on ImageNet, the training accuracy for FP32, GACT\xspace, ActNN L2, and ActNN L3 are 77.3, 77.0, 77.4, and 76.9. For Bert-Large on SST-2, the accuracy for FP32, GACT\xspace, Mesa, and ZeRO-Offload are 93.7, 93.5, 93.8, and 93.3. For Swin-tiny on ImageNet, the training accuracy for FP32, GACT\xspace, and Mesa are 81.2, 81.0, and 81.3 respectively. \section{Experiment Setup} \label{sec:setup} \subsection{Node classification task on graphs} We conduct experiments on four node classification datasets with standard splits, including Flickr, Reddit, Yelp from GraphSAINT~\cite{zeng2019graphsaint}, and ogbn-arxiv from Open Graph Benchmark (OGB)~\cite{hu2020open}. The four datasets cover extensive downstream applications with different scales. We use accuracy as the evaluation metric for multi-class classification and micro-F1 for multi-label classification. We run ten seeds (0 to 9) and report the average accuracy across~runs. We evaluate GACT\xspace on three representative GNN models, including GCN~\cite{kipf2016semi}, GAT~\cite{velivckovic2017graph}, and GCNII~\cite{chen2020simple_gcnii} under the full-batch training setting. All three models are implemented by CogDL~\cite{cen2021cogdl}, a toolkit for graph neural networks. \subsection{Text classification task} We select four largest datasets, MNLI, QQP, SST-2, and QNLI, from the GLUE benchmark~\cite{wang2018glue}. The four datasets cover different aspects of natural language understanding, including sentiment classification, natural language inference and paraphrase detection. We use the mainstream transformer implementation~\cite{wolf-etal-2020-transformers} to train Bert-large~\cite{devlin2019bert}. We run three seeds (42, 43, 44) and report F1 for QQP, accuracy for the others. \section{Conclusion} This paper presents GACT\xspace, an ACT framework for generic NN architectures. We prove the convergence of GACT\xspace without prior knowledge about operator type or network architecture by analyzing a linearized approximation of ATC's gradients. With the adaptive algorithm, GACT\xspace achieves negligible accuracy loss on various tasks, reducing activation memory by up to 8.1$\times$ and enabling training with up to 24.7$\times$ batch size compared with full precision training. \section*{Acknowledgements} This work was supported by the National Key Research and Development Project of China (No. 2021ZD0110502); NSF of China Project (No. 62106120), by the National Science Foundation through grants IIS-1955488, IIS-2027575, CCF-1723352, ARO W911NF2110339, ONR N00014-21-1-2724, and DOE award DE-SC0016260. We would also like to acknowledge partial support from DARPA, IARPA, the Sloan Foundation, NSF, and ONR. Our conclusions do not necessarily reflect the position or the policy of our sponsors, and no official endorsement should be inferred.
\section{Introduction}\label{intro} Recently in connection with the study of some very interesting geometrical and dynamical properties of nilmanifolds, the simply connected nilpotent Lie groups or equivalently their Lie algebras have received the attention of many researchers. Among these nilpotent Lie algebras, the two-step ones are the simplest (after the abelian ones) and most widely studied \cite{D-M, DDM, De, D-D, D-V1, Dem, E1, E2, F1, F2, GorM, GM, L-W1, L-W2, P, P-T}. Let $k$ be a field with characteristic not equal to 2. We recall that a non-abelian finite dimensional Lie algebra $\n$ over $k$ is said to be a {\em two-step nilpotent Lie algebra} if $[\n, [\n, \n]] = \{0\}$. Every two-step nilpotent Lie algebra $\n$ over $k$ can be realized as a vector space direct sum $V \oplus (\bigwedge^2V)/W$ where $V$ is a finite dimensional $k$-vector space and $W$ is a subspace of the exterior power $\bigwedge^2V$. The Lie bracket structure on $\n$ is given by: \begin{enumerate} \item $[v_1, v_2] = v_1 \wedge v_2 \text { mod } W \,\,\text{ for } v_1, v_2 \in V,$ and \item $[x, y] = 0 \text{ for } x \in \n \text{ and } y \in (\bigwedge^2V)/W$ \end{enumerate} A combinatorial approach for construction of two-step nilpotent Lie algebras was described in \cite{D-M}. Subsequently this construction has been used by many authors \cite{F1,F2,P-T,L-W2}. We recall the construction. Let $(S, E)$ be a finite simple graph where $S$ is the set of vertices and $E$ is the set of edges, where it is assumed that there are no loops and no multiple edges connecting the same pair of vertices. We associate with $(S, E)$ a two-step nilpotent Lie algebra $\n = \n(S, E)$ over $k$ in the following way. The underlying vector space of $\n$ is $V \oplus (\bigwedge^2V)/W$ where $V$ is the $k$-vector space consisting of formal $k$-linear combinations of elements of $S$ (so that $S$ is a basis of $V$), and $W$ is the subspace of the exterior power $\bigwedge^2V$ spanned by the vectors $\alpha \wedge \beta$ where $\alpha \beta \notin E$. The Lie bracket structure on $\n$ is defined as above. We note that the space $(\bigwedge^2V)/W$ has dimension $|E|$. In fact it is spanned by $\{\alpha \wedge \beta \text{ mod } W \mid \alpha \beta \in E\}$. Consequently $\n(S, E)$ has dimension $|S| + |E|$. Note that the construction of $\n(S,E)$ form the graph $(S, E)$ is functorial in the sense that if $f : (S, E) \to (S', E')$ is an isomorphism of graphs, then we obtain an isomorphism $f_{*}: \n(S, E) \to \n(S', E')$ in the natural way: $f_*(\alpha) = f(\alpha)$ for each $\alpha \in S$ and for each edge $\alpha \beta \in E$, $f_*(\alpha \wedge \beta) = f(\alpha) \wedge f(\beta)$. It is easy to see that $f_*$ extends linearly to a Lie algebra isomorphism from $\n(S,E)$ to $\n(S', E')$. In this note, we consider the converse question and prove the following: \begin{theorem}\label{main} Let $(S, E)$ and $(S', E')$ be finite simple graphs. If the two Lie algebras $\n(S, E)$ and $\n(S', E')$ are isomorphic then the graphs $(S, E)$ and $(S', E')$ are also isomorphic. \end{theorem} This result has already found use in some recent investigations \cite{L-W2, P-T}. In \cite{L-W2}, the authors provide a method to construct Einstein solvmanifolds by using the Lie algebras $\n(S, E)$. Using our main theorem, these solvmanifolds are isometric if and only if the graphs are isomorphic, which gives us examples of nonisometric Einstein solvmanifolds. In \cite{P-T}, the authors construct symplectic two-step nilpotent Lie algebras associated with graphs. In this case our main result implies that {\em there are exactly five non-isomorphic two-step nilpotent Lie algebras of dimension six associated with graphs in the above manner.} Indeed, there are five non-isomorphic graphs $(S, E)$ such that $|S| + |E| = 6 $. Now, $\dim \n(S, E)= |S| + |E|$. Hence using our Theorem \ref{main}, there are exactly five non-isomorphic two-step nilpotent Lie algebras of dimension six associated with graphs. This fact was used in \cite[Remark 1]{P-T}. The group of Lie automorphisms of $\n(S, E)$ was determined in terms of the graph $(S, E)$ in \cite{D-M}. Also, Anosov and ergodic automorphisms on corresponding nilmanifolds were studied. In \cite{L-W2}, explicit examples and non-examples of Einstein solvmanifolds were constructed using the Lie algebras $\n(S,E)$ (see also \cite{F1}). A combinatorial construction of the first and second cohomology groups of $\n(S,E)$ was given in \cite{P-T}, and was used to construct symplectic and contact nilmanifolds. \vspace{0.1cm} \begin{remark}\label{groups} {\rm Suppose the underlying field $ k = \mathbb R$. Let $N(S, E)$ denote the unique simply connected nilpotent Lie group corresponding to the Lie algebra $\n(S, E)$. Then the Lie group exponential map $\exp : \n(S,E) \to N(S, E)$ is a diffeomorphism (see \cite{R}, p. 6). We note that the Baker-Campbell-Hausdorff formula (\cite{V}, p. 119)) gives the multiplication law in $N(S,E)$ as follows: \[\exp(x)\exp(y) = \exp (x + y + \frac{1}{2}[x, y])\] for all $x, y \in \n(S,E)$. More precisely, we can realize $N(S,E)$ as $\n(S,E)$ (via the exponential map) with the multiplication defined by \[ (v_1, x_1). (v_2, x_2) = (v_1 + v_2, x_1 + x_2 + \frac{1}{2}[v_1, v_2]) \] for all $v_1, v_2 \in V$ and $x_1, x_2 \in (\bigwedge^2V)/W$. Then our Theorem~\ref{main} implies that the simply connected Lie groups $N(S, E)$ and $N(S',E')$ are Lie isomorphic if and only if the graphs $(S,E)$ and $(S', E')$ are isomorphic. This can been seen by using the fact that the simply connected Lie groups are Lie isomorphic if and only if their Lie algebras are Lie isomorphic (see \cite{Warner}, p. 101). } \end{remark} \vspace{0.1cm} \begin{remark}\label{referee} {\rm There have been some other constructions of algebraic structures associated with a simple graph. Some, though not all of these are related with the construction considered here. In \cite{KLNR}, the authors consider the $K$-algebra associated with a simple graph $(S,E)$ over a field $K$ generated by the set of vertices $S$ and with relations $\alpha \beta = \beta \alpha$ if and only if $\alpha \beta \notin E$. They proved that these $K$-algebras are isomorphic if and only the corresponding graphs are isomorphic. This result was further used to prove an analogous result for graph groups in \cite{Dr1}. In \cite{Dr1}, the author considers the group associated with graph $(S, E)$ which is defined as the group generated by the set $S$ and with relations $\alpha \beta = \beta \alpha$ if and only if $\alpha \beta \in E$. Here we remark that the simply connected nilpotent Lie group $N(S, E)$ (as in Remark \ref{groups}) is not finitely generated. Later in \cite{DuK1}, the authors introduce the free partially commutative Lie algebra $\mathfrak{l}(S, E)$ associated with $(S,E)$ which is the quotient of the free Lie algebra on the set $S$ modulo the Lie ideal generated by $\{ [\alpha, \beta]\mid \alpha\beta \notin E \}$. In fact, the authors study these structures in more generality but we will not go into the details here. Following \cite{Dr1, DuK2, KLNR}, it can be shown that these free partially commutative Lie algebras are Lie isomorphic if and only if the corresponding graphs are isomorphic (see Theorem 1 in \cite{Dr1} for example). Here we note that the quotient of $\lf(S,E)$ by the Lie ideal $\lf^3$ generated by the set $\{[x, [y, z]]\mid x, y, z \in \lf(S,E)\}$, is Lie isomorphic to our two-step nilpotent Lie algebra $\n(S,E)$. The isomorphism result for the free partially commutative Lie algebras can be recaptured by using our Theorem~\ref{main} (if the characteristic of the field $K$ is not equal to 2) as follows: Let $\lf= \lf(S,E)$ and $\lf' = \lf(S',E')$ be Lie isomorphic free partially commutative Lie algebras associated with graphs $(S, E)$ and $(S', E')$ respectively. Then the quotient Lie algebras $\lf / \lf^3$ and $\lf'/\lf'^3$ are Lie isomorphic. That means the two-step nilpotent Lie algebras $\n(S, E)$ and $\n(S', E')$ are isomorphic and our Theorem~\ref{main} implies that the graphs $(S, E)$ and $(S', E')$ are isomorphic.} \end{remark} \vspace{0.3cm} \section{Some Preliminaries}\label{prelim} Before we prove Theorem~\ref{main}, we need a few facts regarding the structure of the automorphism group of $\n = \n(S, E)$. We denote by ${\rm Aut}(\n)$ the group of Lie automorphisms of $\n$ and $T$ the set of automorphisms $\tau \in {\rm Aut}(\n)$ such that $\tau(V) = V$ where we recall that $V$ is the $k$-vector space with basis $S$, and $\n = V \oplus (\bigwedge^2 V)/W$. We denote by $G$ the subgroup of ${\rm GL}(V)$ consisting of the linear automorphisms $\tau|_V$ where $\tau \in T$. Next we see that the group $G$ consists of $g \in {\rm GL }(V)$ such that the subspace $W \subseteq \bigwedge^2 V$ is invariant under the induced natural action $\wedge^2 g$ of $g$ on $\bigwedge^2V$. Indeed, if $g \in G$ ( i.e. if $g = \tau|_V$ for some $\tau \in T$) and if $\alpha \beta \notin E$, then \begin{align*} g(\alpha) \wedge g (\beta) \text{ mod } W &= [\tau(\alpha), \tau(\beta)] \\&= \tau[\alpha, \beta]\\ & = 0, \end{align*} since $\tau$ is an automorphism of $\n$ and $[\alpha, \beta] = 0$ in $\n$. Conversely if $g \in {\rm GL }(V)$ such that $g(\alpha) \wedge g(\beta) \in W$ for all $\alpha \beta \notin E$, then $\tau = g \oplus \wedge^2 g$ defines a Lie automorphism of $\n$ such that $\tau(V) = V$ and hence $g \in G$. Since the condition that an element of $G$ stabilizes $W$ under the natural action on $\bigwedge^2V$ is represented by polynomial equations, we have therefore the following: \begin{lemma}\label{algebraic} $G$ is an algebraic group. \end{lemma} In fact, for any two-step nilpotent Lie algebra $\mathfrak{m} = V \oplus (\bigwedge^2 V)/W$ one can analogously define the subgroup $G$ of ${\rm GL}(V)$ consisting of the restrictions of automorphisms of $\m$ fixing $V$. A similar argument shows that $G$ is an algebraic group. However our next lemma holds only for two-step nilpotent Lie algebras arising from graphs. We prove that all linear automorphisms of $V$ which can be represented as diagonal matrices with respect to the basis $S$ can be extended to Lie automorphisms of $\n(S, E)$. \begin{lemma}\label{diagonal} Let $D_{S}$ denote the subgroup of ${\rm GL}(V)$ consisting of all elements which can be represented as diagonal matrices with respect to the basis $S$ of $V$. Then $D_{S} \subseteq G$. \end{lemma} \proof We recall that $G$ is a subgroup of ${\rm GL}(V)$ consisting of those linear automorphisms of $V$ whose induced action on $\bigwedge^2 V$ leaves the subspace $W$ invariant where $W \subseteq \bigwedge^2V$ spanned by elements $\alpha \wedge \beta$ such that $\alpha \beta \notin E$. Let $d \in D_{S}$, say $d(\alpha) = d_{\alpha} \alpha$ for some nonzero $d_{\alpha}$'s in $k$ and for all $\alpha \in S$. Then if $\alpha \beta \notin E$, we have $d(\alpha) \wedge d(\beta) = d_{\alpha}d_{\beta}(\alpha \wedge \beta) \in W$. Hence $d \in G$ and $D_{S} \subseteq G$. \hfill$\square$ \vspace{.2cm} \section{Proof of Theorem~\ref{main}} Let $\overline{k}$ denote the algebraic closure of $k$. First we note that if $F$ is an isomorphism from the Lie algebra $\n = \n(S, E)$ to $\n'= \n(S', E')$, then $ \overline{F} = F \otimes_k {\rm id}_{\overline{k}}$ ~is an isomorphism of $\overline{k}$-Lie algebras $\overline{\n} = \n \otimes_k \overline{k}$ and $\overline{\n'} = \n' \otimes_k \overline{k}$. To see this, recall that the Lie bracket in $\overline{\n}$ is defined by \[[x \otimes a, y \otimes b] = [x, y] \otimes ab \text{ for all } x, y \in \n \text{ and } a, b \in \overline{k}.\] Also $\overline{F}$ is defined by \[\overline{F} (\sum_i x_i \otimes a_i) = \sum_i F(x_i) \otimes a_i \text{ for all } x_i \in \n \text{ and } a_i \in \overline{k}.\] Then $\overline{F}$ is a $\overline{k}$-vector space isomorphism which follows from the fact that $F$ is a $k$-vector space isomorphism. Furthermore, for $x, y \in \n$ and $a, b \in \overline{k}$ we have \begin{align*} \overline{F}[x \otimes a, y \otimes b] = \overline{F}([x, y] \otimes ab) &= (F[x, y] \otimes ab) \\&=[F(x), F(x)] \otimes ab \text{ since $F$ is a Lie algebra isomorphism.} \\& = [F(x) \otimes a , F(x) \otimes b] = [\overline{F}(x \otimes a), \overline{F}(y \otimes b)]. \end{align*} Now without loss of generality we can assume that the field $k$ is algebraically closed. Indeed if $k$ is not already algebraically closed, and $F$ is an isomorphism from the Lie algebra $\n = \n(S, E)$ to $\n'= \n(S', E')$, we can replace $\n$ by $\overline{\n}$, $\n'$ by $\overline{\n'}$ and $F$ by $\overline{F}$. Then the new $F$ is an isomorphism of $\overline{k}$-Lie algebras $\n$ and $\n'$ as discussed above. We need to show that the graphs $(S, E)$ and $(S', E')$ defining the Lie algebras $\n$ and $\n'$ are isomorphic. We recall that $\n = V \oplus (\bigwedge^2V)/W$ where $V$ is the $k$-vector space with basis $S$ and $W \subset \bigwedge^2 V$ is as defined in the previous section. Similarly $\n' = V' \oplus (\bigwedge^2V')/W'$ where $V'$ is the $k$-vector space with basis $S'$. The proof will be in three steps. First we construct a new graph $(S'', E'')$ isomorphic to the graph $(S, E)$. The set of vertices $S''$ will be a basis of the vector space $V'$. Then in the next step we look a group $D_{S''}$ analogous to the group $D_S$ considered in Lemma~\ref{diagonal}. The theorem will follow from the relation between $D_{S''}$ and $D_{S'}$ to be explained in the third step. We begin by constructing a new graph $(S'', E'')$ isomorphic to the graph $(S, E)$ in the following way. Recall that $V'$ denote the $k$-vector space with $S'$ as basis and let $W'$ denote the subspace of $\bigwedge^2 V'$ spanned by the wedge products $\alpha \wedge \beta$ where $\alpha$ and $ \beta$ are vertices in $S'$ not connected by an edge in $E'$. Let $\pi: \n' \to V'$ denote the canonical linear projection with respect to the decomposition $\n' = V' \oplus (\bigwedge^2V')/W'$. We define $S''$ denote the subset of $V'$ given by $ \{\pi(F(\alpha)) \mid \alpha \in S \}$ and $E''$ denote the set of unordered pairs $\pi(F(\alpha)) \pi(F(\beta))$ such that $\alpha \beta \in E$, where, recall that $F$ is the isomorphism given between $\n$ and $\n'$. It follows that $S''$ is a basis of $V'$. Indeed, if $ \sum_{i=1}^{n} a_i \pi(F(\alpha_i)) = 0$ for $a_i$'s $\in \mathbb R$ and $\alpha_i$'s $\in S$, then $ F(\sum_{i=1}^{n} a_i \alpha_i) \in [\n', \n']$. Since $F$ is a Lie algebra isomorphism, we can see that $ \sum_{i=1}^{n} a_i \alpha_i \in [\n, \n] $. On the other hand each $\alpha_i$ is in $S$ so $\sum_{i=1}^{n} a_i \alpha_i \in V \cap [\n, \n]$, so that $ \sum_{i=1}^{n} a_i \alpha_i = 0$. By the linear independence of $S$, each $a_i = 0$. Now the derived algebras $[\n, \n]$ and $[\n', \n']$ have the same dimension since $\n$ and $\n'$ are isomorphic. Hence $|E| = |E'|$ since $|E| = \dim [\n, \n]$ and $|E'| = \dim [\n', \n']$. Hence $|S| = |S'|$ because $\n$ is of dimension $|S| + |E|$ and $\n'$ is of dimension $|S'|+|E'|$, and $\n$ and $\n'$ are isomorphic. Thus the sets $S'$ and $S''$ have exactly the same number of elements and hence $S''$ is a basis of $V'$ since $S'$ is a basis of $V'$. Since the graph $(S'', E'')$ as constructed above is isomorphic to the graph $(S, E)$, to prove the theorem, it is enough to show that there exists an isomorphism of graphs $f : (S', E') \to (S'', E'')$. Now as in Lemma~\ref{diagonal}, $D_{S'}$ denote the subgroup of ${\rm GL}(V')$ consisting of all elements which can be represented as diagonal matrices with respect to the basis $S'$ of $V'$. By Lemma \ref{diagonal}, we have $D_{S'} \subset G'$. Similarly let $D_{S''}$ denote the subgroup of ${\rm GL}(V')$ consisting of all elements which can be represented as diagonal matrices with respect to the basis $S'' = \{\pi(F(\alpha)) \mid \alpha \in S \}$ of $V'$. We claim that $D_{S''} \subset G'$ as well. To see this, let $d \in D_{S''}$ and $d(\pi(F(\alpha)) = d_{\alpha} \pi(F(\alpha)$ for all $\alpha \in S$ and for some nonzero $d_{\alpha}$'s in $k$. It suffices to show that if $\gamma$ and $\delta$ are vertices in $S'$ not connected by an edge in $E'$, then $d(\gamma) \wedge d(\delta) \in W'$ (i.e. $[d(\gamma), d(\delta)] = 0$ in $\n'$.) Now since $\gamma, \delta \in V'$ and $S''$ is a basis for $V'$, we represent $\displaystyle \gamma= \sum_{\alpha \in S} a_\alpha \pi(F(\alpha))$ and $\displaystyle \delta = \sum_{\beta\in S} b_\beta \pi(F(\beta))$ where each $a_\alpha$, $b_\beta$ is in $k$. Since $[\gamma, \delta] = 0$, we have $\displaystyle\left[\pi\left(F\left(\sum_{\alpha \in S} a_\alpha \alpha\right)\right), \pi\left(F\left (\sum_{\beta\in S} b_\beta \beta\right)\right)\right] = 0.$ This means that \[ \left[F\left(\sum_{\alpha \in S} a_\alpha \alpha\right), F\left(\sum_{\beta\in S} b_\beta\beta\right)\right] = 0,\] because in $\n'$ $[v + w, v' + w'] = 0$ if and only if $[v, v'] = 0$ for $v, v' \in V'$ and $w, w' \in (\bigwedge^2V')/W'$. Hence $\displaystyle \left[\sum_ {\alpha \in S} a_\alpha \alpha, \sum_{\beta\in S} b_\beta \beta\right] = 0$ in $\n$ since $F$ is an isomorphism. Now we define $\sigma$, an element of ${\rm GL}(V)$ by $\sigma (\alpha) = d_{\alpha} \alpha$ for each $\alpha \in S$. Then $\sigma \in D_S$ where $D_S$ is the subgroup of ${\rm GL}(V)$ consisting of diagonal automorphisms of $V$ with respect to $S$ and hence $\sigma \in G$ by Lemma \ref{diagonal}. So $\displaystyle \left[\sum_{\alpha\in S} a_\alpha \sigma(\alpha), \sum_{\beta\in S} b_\beta \sigma(\beta)\right] = 0$ in $\n$ which means that $\displaystyle \left[\sum_{\alpha\in S} a_\alpha d_{\alpha}\alpha, \sum_{\beta\in S} b_\beta d_{\beta} \beta\right] = 0$. Since $F$ is an isomorphism, $\displaystyle \left[\sum_{\alpha\in S} a_\alpha d_{\alpha}F(\alpha), \sum_{\beta\in S} b_\beta d_{\beta} F (\beta)\right] = 0$ in $\n'$ and hence \[\left[\sum_{\alpha\in S} a_\alpha d_{\alpha}\pi(F(\alpha)), \sum_ {\beta\in S}b_\beta d_{\beta} \pi(F (\beta))\right]= 0.\] Thus $[d(\gamma), d(\delta)] = 0$ in $\n'$ as we wanted to show. By Lemma~\ref{algebraic}, $G'$ is an algebraic group. Moreover, $D_{S'}$ and $D_{S''}$ are maximal tori in the connected component in Zariski topology of identity of $G'$ . But maximal tori in a connected algebraic group over an algebraically closed field (such as our $k$) are conjugate (see \cite{S}, p. 108), and hence there exists $g \in G'$ such that $$D_{S'} = g\left( D_{S''}\right)g^{-1}.$$ To define an isomorphism of graphs $f: S' \to S''$, we construct a special element of $D_{S'}$. For each $\alpha \in S'$, choose a nonzero $d'_{\alpha}\in k$ such that unless the set $\{\alpha, \beta\}$ equals the set $\{\gamma, \delta\}$ we have $d'_{\alpha}d'_{\beta} \neq d'_{\gamma} d'_{\delta}$. Since by hypothesis the field $k$ is algebraically closed, it is in particular infinite, and such $d_\alpha$'s may be chosen. Define $d' \in D_{S'}$ be the element whose matrix representation with respect to $S'$ given by $d'(\alpha) = d'_{\alpha} \alpha$. As note above, there exists $d'' \in D_{S''}$ such that $$d'= g d'' g^{-1}.$$ Since $d'$ and $d''$ are similar, they have the same eigenvalues. Further since $d'$ is diagonal with respect to $S'$ and $d''$ is diagonal with respect to $S''$, it follows that these diagonal entries are the same up to a permutation. Hence there exists a bijection $f$ of $S'$ with $S''$ such that $$d''(f (\alpha)) = d'_{\alpha} f(\alpha) \text { for all } \alpha \in S'.$$ We claim that $f$ is in fact an isomorphism of the graph $(S', E')$ with the graph $(S'', E'')$. Establishing this claim will conclude the proof of our theorem. It suffices now to show that $\alpha \beta \in E'$ if and only if $f(\alpha)f(\beta) \in E''$. Now $\alpha \beta \in E' $ if and only if $[\alpha, \beta] \neq 0$ in $\n'$. Also $f(\alpha)f(\beta) \in E''$ if and only if $[f(\alpha), f(\beta)] \neq 0$ in $\n'$. This follows from the definition of $E''$ and using that $F$ is an isomorphism. Indeed $f(\alpha) = \pi(F(\alpha'))$ and $f(\beta) = \pi(F(\beta'))$ for some $\alpha', \beta' \in S$. Hence $f(\alpha)f(\beta) \in E''$ if and only if $\alpha' \beta' \in E$. Since $F$ is an isomorphism, $\alpha' \beta' \in E$ if and only if $[F(\alpha'), F(\beta')] \neq 0$ in $\n'$ i.e. $[\pi(F(\alpha')), \pi(F(\beta'))] \neq 0$ in $\n'$. We will show that for $\alpha, \beta \in S'$, $[\alpha, \beta] \neq 0$ if and only if $[f(\alpha), f(\beta)] \neq 0$. Since the diagonal entries of the chosen element $d' \in D_{S'}$ (defined as above) are such that the pairwise products of the diagonal entries are distinct, $d'_\alpha d'_\beta$ is an eigenvalue of an extended automorphism $d'$ of $\n'$ if and only if $[\alpha, \beta] \neq 0$. Similarly $d'_\alpha d'_\beta$ is an eigenvalue of an extended automorphism $d''$ of $\n'$ if and only if $[f(\alpha), f(\beta)] \neq 0$. Now we note that \[ \left[g^{-1}(d'(\alpha)), g^{-1}(d'(\beta))] = d'_{\alpha} d'_{\beta} [g^{-1} (\alpha), g^{-1} (\beta)\right].\] On the other hand, we have \[\left[ g^{-1}(d'(\alpha)), g^{-1}(d('\beta))\right]= d''\left[g^{-1} (\alpha), g^{-1}(\beta)\right] \text{ since } g^{-1} d' = d'' g^{-1}. \] Since $g \in G'$, $[g^{-1} (\alpha), g^{-1}(\beta)] \neq 0$ if and only if $[\alpha, \beta] \neq 0$. Hence $d'_{\alpha} d'_{\beta}$ is an eigenvalue of $d''$ if and only if $d'_{\alpha} d'_{\beta}$ is an eigenvalue of $d''$. Thus $\alpha \beta \in E'$ if and only if $f(\alpha)f(\beta) \in E''$. Hence $f: (S',E') \to (S'', E'')$ is an isomorphism of graphs, this complete the proof of Theorem~\ref{main}. \vspace{0.3cm} \noindent {\it Acknowledgments:} I am thankful to Prof. Jorge Lauret for raising this question. I am also thankful to Prof. S. G. Dani for his help and Dr. Debraj Chakrabarti for help with the final version. I also express my gratitude to the referee for helpful suggestions and pointing out the references \cite{Dr1, DuK1, DuK2, KLNR}. \vspace{0.3cm}
\section{Introduction} Dynamic linear elastic problems appear on many length scales. On the large scales we can find problems relating to earthquakes and other geophysics problems. On the small scales ($\mu$m to mm - scale), we can think of the application of various biomedical treatments with ultrasound or shockwaves, where the biomaterial is often regarded as simply acoustic, even though this assumption might not always be justified~\citep{Rapet2019}. Advances in the development of ultrasonics and microfluidics have also renewed the interest in this area~\citep{Dual2012}, such as cell trapping as well as ultrasonic inspection. With the availability of MHz acoustic transducers in recent years, applications in these areas are probably going to increase. At intermediate length scales, say 1 m, one can think of studies regarding sound and vibration reduction caused by moving parts of machinery. Although numerical methods can be employed to solve dynamic linear elastic problems, they do not necessary give physical insight which can only be obtained from analytical solutions~\citep[Chap 3]{hills2021}. Steady state analytical solutions are rare but do exist~\citep{Lim2006}. Analytical solutions for dynamic linear elastic problems are even rarer (or at least not very well known). Sneddon and Berry on page 126 wrote ``There are very few exact solutions even of these steady state equations and such as they are limited to spheres and cylinders''. Moreover, those analytical solutions are not only rare, but also they often do not show the `full' range of physics of real systems; that is, some severe limitations are being imposed on the solution, such as the solution for a radially oscillating sphere which only gives longitudinal waves without transverse waves~\citep{Grasso2012}. Solutions that show both longitudinal and transverse waves do exist though, for example the elastic scattering wave solution by~\cite{Hinders1991} which is very similar to the Mie theory~\citep{Mie1908} in electromagnetics. A drawback of this solution is that an infinite sum of Bessel functions is needed to calculate the solution. Although elegant, it is not easy to guess how many terms one needs in this sum when using Bessel functions (the accuracy can even go down again, when too many terms are included). Classical authors such as Lamb and Love~\citep{Love1892} already described various analytical solutions for example for the internal resonances for elastic spheres or the solution for the elastic material in between two concentric spheres. \cite{Papargyri2009} presented some analytical solutions for gradient elastic solids. \cite{KlaseboerJElast2019} presented an analytical solution for a rigid sphere vibrating in an infinite elastic medium. The current article can be considered as a significant extension of this theory, by adding an elastic shell to the rigid core. This `shell' solution turns out to be much richer in physical phenomena than the case without a shell and is the focus of the current work. The solution is free of infinite sums and is relatively easy to calculate and visualize (a clear advantage that the 19th century classical authors did not have), which makes it ideal to validate numerical tools. In Fig~\ref{Fig:Explanation}, an illustration of the problem is given, the core rigid sphere with radius $a$ oscillates with an amplitude $U^0$ and angular frequency $\omega$ inside a concentric spherical shell with radius $b$ and density $\rho_{\mathrm{sh}}$. In doing this emitted (`e') or outgoing waves are generated in the shell. Reflected waves (`r') can also occur. Finally in the external infinite domain with density $\rho_{\mathrm{out}}$ transmitted (`t') waves can be generated. The material constants in the shell and the infinite domain are not necessarily the same. As we will see in the latter part of this work, the solution for a rigid vibrating core with a shell around it shows some surprising physics (resonance peaks for T and/or L waves when the vibrating frequency is changed), which do not occur for a core without such a shell, where a very smooth spectrum is obtained. \begin{figure*}[t] \begin{center} \includegraphics[width=0.75\textwidth]{Fig1.jpg} \caption{Schematic illustration of the problem under consideration; a rigid core sphere with radius $a$ oscillates periodically with an amplitude $U^0$ with frequency $\omega$ and is surrounded by a concentric spherical shell with radius $b$ and material constants $k_L^{\mathrm{sh}}$ and $k_T^{\mathrm{sh}}$. Both spheres are embedded in an infinite material with material constants $k_L^{\mathrm{out}}$ and $k_T^{\mathrm{out}}$, which may or may not be identical to the shell constants. Elastic waves are being emitted (labelled `e'), reflected (labelled `r') and transmitted (labelled `t') towards infinity. Both longitudinal (L) waves and transverse (T) waves can exist in this system. } \label{Fig:Explanation} \end{center} \end{figure*} Also, the analytical solutions with different parameters can be used as non-trivial test cases for numerical methods, for instance, one example has been given for validating a boundary element method in \ref{App:BEM}. Furthermore, the vibrating spherical core-shell system could possibly be used as a simple and elegant template for practical applications, such as, to design spherical piezoelectrical actuators to emit or harvest energy which efficiency highly depends on the material properties and frequency response~\citep{Covaci2020}, possibly with multiple spherical core-shell structures placed in arrays. Another way to generate either longitudinal or transverse waves can be achieved by vibrating the metallic core sphere using magnetic means, which can make the spherical core-shell system become a heat generator when it converts magnetic energy to heat via relaxation processes and hysteresis losses~\citep{Schmidt2007}. Along this line, our analytical solution can be used to optimize the design of hyperthermia agents using magnetic beads for cancer treatments~\citep{Philippova2011}. There are likely other applications in which a spherical core-shell system is a good approximation of a real physical system. The structure of this work is organized as follows. In Sec.~\ref{sec:anasol}, we demonstrate the derivation of the analytical solution for a vibrating rigid core with a shell in an infinite elastic medium and the detailed steps are given in~\ref{App:AppA} and~\ref{App:AppB}. In Sec.~\ref{sec:results}, we study the elastic wave phenomena at different oscillation frequencies followed by some discussions in Sec.~\ref{sec:discussion} which shows the limit case of solution and pulsed time domain solutions using the fast Fourier transform. The conclusion is given in Sec.~\ref{sec:conclusion}. \section{Dynamic elastic waves} \label{sec:anasol} \subsection{General theory} Within the approximation of small deformations and small stresses, the Navier equation for dynamic linear elasticity in the frequency domain can be written as~\citep{KlaseboerJElast2019,Love1892,Pelissier2007} \begin{align} \label{eq:Navier} c^2_L\nabla (\nabla \cdot \boldsymbol{u}) - c^2_T \nabla \times \nabla \times \boldsymbol{u} + \omega^2 \boldsymbol{u} = \boldsymbol{0} \end{align} where $\boldsymbol{u}$ is the (complex valued) displacement vector, $\omega$ is the angular frequency, and the constants $c_L$ and $c_T$ are the longitudinal dilatation and transverse shear wave velocities, respectively, that are defined in terms of the Lam\'e constants $\lambda$, $\mu$ and the density $\rho$~\citep{LandauLifshitz,Bedford1994}: \begin{equation} \label{eq:cTcL} \begin{aligned} c^2_L = (\lambda + 2 \mu)/\rho\quad ; \quad c^2_T = \mu/\rho. \end{aligned} \end{equation} Eq.~(\ref{eq:Navier}) essentially expresses the equilibrium of the elastic forces (the first two terms) and the inertial forces (the third term)\footnote{Here we have ignored volume forces and thermoelastic effects~\citep{Ruimi2012}}. It is well known that the displacement $\boldsymbol{u}$ can be decomposed into a transverse part $\boldsymbol{u}_{T}$ and a longitudinal part $\boldsymbol{u}_L$ as: \begin{align} \label{eq:HelmDec} \boldsymbol{u} = \boldsymbol{u}_{L}+\boldsymbol{u}_{T}, \end{align} with $\boldsymbol{u}_T$ being divergence free and $\boldsymbol{u}_L$ being curl free, thus: \begin{align}\label{eq:divCurlZero} \boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_{T} = 0 \quad ; \quad \boldsymbol{\nabla} \times \boldsymbol{u}_{L} = \boldsymbol{0}. \end{align} In this work, we will refer to the longitudinal waves as ``L" and to the transverse waves as ``T". These two sorts of waves are also often referred to as pressure waves and shear waves, respectively. Introducing Eq. (\ref{eq:HelmDec}) into Eq. (\ref{eq:Navier}) and considering the relations in Eq.~(\ref{eq:divCurlZero}), we obtain \begin{align} \label{eq:HelmuTuL} \nabla^2 \boldsymbol{u}_T + k^2_T \boldsymbol{u}_T = \boldsymbol{0} \quad ;\quad \nabla^2 \boldsymbol{u}_L + k^2_L \boldsymbol{u}_L = \boldsymbol{0}, \end{align} where $k_T = \omega/ c_T$ and $k_L = \omega/ c_L$ are the transverse and longitudinal wavenumbers, respectively. Thus both the transverse displacement $\boldsymbol{u}_T$ and the longitudinal displacement $\boldsymbol{u}_L$ satisfy the Helmholtz equation, yet with different wavenumbers. As is obvious from Eq.~(\ref{eq:cTcL}), the longitudinal wave velocity is always greater than the transverse wave speed, thus $c_L^2 \ge 2 c_T^2$ or in terms of wavenumbers $k_T \ge \sqrt{2}k_L$. \subsection{Theory for vibrating spheres}\label{sec:coreshellsol} As shown in Fig.~\ref{Fig:Explanation}, we impose that the geometry under consideration consists of a rigid core with radius $r=a$, surrounded by another concentric sphere with radius $r=b$. The material in between the two spheres (the shell) is elastic and indicated with `sh'. The core-shell sphere combination is situated inside a different external outer elastic material referred to as `out'. Since the core only vibrates along the $z$-axis, due to symmetry, we look for a solution that has a zero azimuthal $\varphi$ component for both the displacement and the stress (a similar framework was used by the authors to calculate the acoustic boundary layer around a vibrating sphere, see \cite{KLaseboerPhFl2020} \footnote{\cite{KLaseboerPhFl2020} studied acoustic boundary layers around a sphere in fluid dynamics. The same governing equations appear as in elasticity, except for the difference that $k_L$ and $k_T$ are now complex numbers and that ‘$\boldsymbol u$’ is now the velocity instead of the displacement. The focus was there on the phenomenon of ‘streaming’ which is a second order effect which causes a slow mean flow on top of the flow caused by the oscillation of the sphere. This non linear effect doesn't appear here. }). Such a solution can be written as: \begin{equation} \label{eq:phi_h} \begin{aligned} \boldsymbol u = \boldsymbol{u}_L + \boldsymbol{u}_L&= \nabla \left((\boldsymbol x \cdot \boldsymbol u^0) \frac{\phi(r)}{r}\right) + \nabla \times \left( (\boldsymbol x \times \boldsymbol u^0) \frac{h(r)}{r}\right)\\ &= \nabla [\phi(r) \cos{\theta}] U^0 \quad \; - \nabla \times [h(r) \sin{\theta} \boldsymbol{e}_{\varphi}] U^0 \end{aligned} \end{equation} where we have used spherical coordinates $(r,\theta,\varphi)$, $\boldsymbol{e}_{\varphi}$ is the unit vector in the $\varphi$ direction and $\boldsymbol{x}$ is the position vector $\boldsymbol{x}=(x,y,z)$. Two radial functions, $h(r)$ and $\phi(r)$, are to be determined\footnote{The solution with $\phi$ and $h$ can only present solutions in the plane made by the vectors $\boldsymbol x$ and $\boldsymbol u^0$. There are other analytical solutions for a spherical configuration that cannot be described with the $h -\phi$ framework. For example for a sphere periodically rotating back and forth with frequency $\boldsymbol{\Omega}$ around the $z$-axis, the following analytical solution can be found $\boldsymbol u = \boldsymbol u_T = -\frac{a^3}{e^{\mathrm{i} k_Ta}} \frac{1}{(\mathrm{i} k_Ta-1)} \nabla \times \left[\frac{e^{\mathrm{i} k_Tr}}{r} \boldsymbol \Omega \right]$. The solution now only consists of a transverse part, while there is no longitudinal component. } in which the term $\phi\equiv \phi(r)$ is a potential function and the term $h\equiv h(r)$ is inspired by the $h$-function in electrophoresis problems~\citep{Jayaraman2019, Ohsima1983}. It can easily be seen that the term with $\phi$ corresponds to the curl free vector $\boldsymbol{u}_L$ and the term with $h$ to the divergence free vector $\boldsymbol{u}_T$ (remember that $\nabla \times \nabla = \boldsymbol{0}$ and $\nabla \cdot \nabla \times = 0$). A constant vector $\boldsymbol{u}^0$ is introduced with length $|\boldsymbol{u}^0|=U^0$. It represents the amplitude of the displacement of the core sphere in the frequency domain. For the time being we will take $\boldsymbol{u}^0 =(u_x,u_y,u_z) = (0,0,U^0)$, which means that in the time domain this vector oscillates as $(0,0,U^0\cos(\omega t))$. Analytical solutions exist, for example a sphere harmonically changing its volume has an analytical solution\footnote{For a radially volume changing sphere the solution for the displacement is: $\boldsymbol u = \boldsymbol u_L = \frac{e^{\mathrm{i} k_Lr}}{r^3}(\mathrm{i} k_L r -1) \boldsymbol x$ }, yet this solution only shows L-waves and does not have any T-waves. It is therefore desirable to have some analytical solutions that at least show both L and T waves simultaneously. Eq.~(\ref{eq:phi_h}) will turn out to be sufficient to describe the displacement field caused by the vibration of a rigid core sphere, surrounded by an elastic shell, situated in an infinite other elastic material. The function $\phi$ is related to the Helmholtz equation as $\nabla^2 (\phi \boldsymbol x/r) + k_L^2 (\phi \boldsymbol x/r)= \boldsymbol 0$, while $h$ satisfies $\nabla^2 (h \boldsymbol x/r) + k_T^2 (h \boldsymbol x/r) =\boldsymbol 0$. This essentially implies that both $\phi(r) \cos (\theta)$ and $h(r) \cos (\theta)$ satisfy the Helmholtz equation. The task at hand is now to determine the two functions $\phi$ and $h$. Since the material properties of the shell and the outer material are different, we will search for $\phi^{\mathrm{sh}}$ and $h^{\mathrm{sh}}$ for the shell solution and $\phi^{\mathrm{out}}$ and $h^{\mathrm{out}}$ for the external domain. We will describe two different paths, one using tensor notation and another approach for readers more familiar with spherical coordinate systems and Bessel functions. Both approaches of course will lead to the same answer. The problem we wish to solve is to get the displacement field caused by the motion $\boldsymbol{u}^0$. In order to do so, we need to satisfy that both displacements and stresses are continuous across boundaries, that is: there are no gaps or stress jumps in the material boundaries. Thus the displacement at $r=a$ must obey $\boldsymbol{u}=\boldsymbol{u}^0$ and at $r=b$, both $\boldsymbol{u}$ and the traction $\boldsymbol{f}$ must be continuous. Eq.~(\ref{eq:phi_h}) can be written in an alternative more convenient way (note that we have deliberately kept the terms $h/r$ and $\phi/r$) by separating the terms with $\boldsymbol{u}^0$ and $\boldsymbol{x}$. For the shell we get: \begin{equation} \begin{aligned} \label{eq:u_in} \boldsymbol{u}^{\mathrm{sh}} = \left[ -r \frac{\mathrm{d}}{\mathrm{d} r} \left(\frac{h^{\mathrm{sh}}}{r}\right) - 2 \frac{h^{\mathrm{sh}}}{r} + \frac{\phi^{\mathrm{sh}}}{r}\right]\boldsymbol{u}^0 + \frac{\mathrm{d}}{\mathrm{d} r}\left(\frac{h^{\mathrm{sh}}}{r} +\frac{\phi^{\mathrm{sh}}}{r} \right) \frac{\boldsymbol{x} \cdot \boldsymbol{u}^0}{r} \boldsymbol{x} \end{aligned} \end{equation} where the shell solution consist of an `expanding' (subscript `e') and a `reflected' (subscript `r') wave with \begin{equation} \begin{aligned} h^{\mathrm{sh}}(r)=h_e(r) + h_r(r)=-a C_e^T \exp(\mathrm{i} k_T^{\mathrm{sh}} r) G(k_T^{\mathrm{sh}} r) -a C_r^T \exp(-\mathrm{i} k_T^{\mathrm{sh}} r) G^*(k_T^{\mathrm{sh}} r), \\ \phi^{\mathrm{sh}}(r)=\phi_e(r) + \phi_r(r)=-a C_e^L \exp(\mathrm{i} k_L^{\mathrm{sh}} r) G(k_L^{\mathrm{sh}} r) -a C_r^L \exp(-\mathrm{i} k_L^{\mathrm{sh}} r) G^*(k_L^{\mathrm{sh}} r) \end{aligned} \end{equation} with $G^*$ the complex conjugate of $G$ (also note the `-' sign in the $C_r$ exponentials)\footnote{Note that in terms of spherical Bessel functions: $y_1(kr) = [G(kr) \exp(\mathrm{i} kr) + G^*(kr) \exp(-\mathrm{i} kr)]/2=-\cos (kr)/(kr)^2 - \sin (kr)/(kr)$ and $j_1(kr) = [-G(kr)\exp(\mathrm{i} kr) +G^*(kr) \exp(-\mathrm{i} kr)]/(2i) = \sin(kr)/(kr)^2 - \cos(kr)/(kr)$.}, and $G(x)=\mathrm{i}/x-1/x^2$. Here $C_e^T$, $C_r^T$, $C_e^L$ and $C_r^L$ are dimensionless complex valued constants. The term with $C_e^T$ will give an expanding T-wave, the term with $C_r^T$ a reflected (spherical incoming) T-wave. Similarly the terms $C_e^L$ and $C_t^L$ give expanding and reflected L-waves in the shell. For the (infinite) external domain there is only a `transmitted expanding' (subscript `t') wave with \begin{equation} \begin{aligned} \label{eq:u_out} \boldsymbol{u}^{\mathrm{out}} = \left[ -r \frac{\mathrm{d}}{\mathrm{d} r} \left(\frac{h^{\mathrm{out}}}{r}\right) - 2 \frac{h^{\mathrm{out}}}{r} + \frac{\phi^{\mathrm{out}}}{r}\right] \boldsymbol{u}^0 + \frac{\mathrm{d}}{\mathrm{d} r}\left(\frac{h^{\mathrm{out}}}{r} +\frac{\phi^{\mathrm{out}}}{r} \right) \frac{\boldsymbol{x} \cdot \boldsymbol{u}^0}{r} \boldsymbol{x}, \end{aligned} \end{equation} \begin{equation} \begin{aligned} h^{\mathrm{out}}(r)=h_t(r)=-a C_t^T \exp(\mathrm{i} k_T^{\mathrm{out}}r) G(k_T^{\mathrm{out}} r), \\ \phi^{\mathrm{out}}(r)=\phi_t(r)=-a C_t^L \exp(\mathrm{i} k_L^{\mathrm{out}}r) G(k_L^{\mathrm{out}} r) \end{aligned} \end{equation} with $k_T^{\mathrm{out}}$ and $k_L^{\mathrm{out}}$ the parameters for the external domain and $C_t^T$ and $C_t^L$ dimensionless constants. There are six unknown parameters: $C_e^T$, $C_r^T$, $C_e^L$, $C_r^L$, $C_t^T$ and $C_t^L$ which can be determined by matching the $u_i$ components of the displacement at $r=a$ (two equations), the displacement at $r=b$ (two equations), and finally the continuity of the shear stress at $r=b$ (two equations). The details of how to get these six parameters are described in \ref{App:AppA}. An alternative way of getting the solution using spherical coordinates and spherical Bessel and Hankel functions of the first kind is given in \ref{App:AppB}. Both approaches are equivalent and give the same result. This way of solving the problem corresponds more to the classical mathematical approach, yet the constants of \ref{App:AppA} correspond directly to the `emitted' and `reflected' waves in the elastic layer, while this is not explicitly the case in the approach of \ref{App:AppB}. \begin{figure*}[t] \begin{center} \includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3a.jpg} \includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3b.jpg} \includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3c.jpg}\\ \includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3d.jpg} \includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3e.jpg} \includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3f.jpg} \caption{Inner rigid core sphere `vibrating' periodically with amplitude $U^0$ in another concentric spherical shell ``sh'', the total embedded in an infinite external outer domain ``out'' with $b/a=2.0$. Parameters: $k_T^{\mathrm{sh}} a=2.5$, $k_L^{\mathrm{sh}} a=1.0$, $k_T^{\mathrm{out}}a=8.0$, $k_L^{\mathrm{out}}a=3.0$, $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}}=1.0$. The sphere oscillates from back to front of the figure. On the horizontal plane the total displacement vectors are plotted. A complicated pattern is formed due to the interaction of L and T waves. On the left of the plane the function $h(r) \cos(\theta)$ is plotted while on the right hand side $\phi(r) \cos(\theta)$ is plotted in color. Here time-snapshots are shown at 0/6, 1/6, 2/6, 3/6, 4/6 and 5/6 times the oscillation cycle. A moviefile is available for this case showing 30 time frames. }\label{Fig:sphereShell1} \end{center} \end{figure*} \section{Results} \label{sec:results} Some screenshots of the solution with parameters $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}} =1.0$, $b/a=2.0$, $\boldsymbol u^0 = (0,0,U^0)$, $k_T^{\mathrm{sh}} a =2.5$, $k_L^{\mathrm{sh}} a=1.0$, $k_T^{\mathrm{out}}a=8.0$ and $k_L^{\mathrm{out}} a=3.0$ are shown in Fig.~\ref{Fig:sphereShell1}. Since any solution $\boldsymbol{u} \exp(i \alpha)$, with $\alpha$ a phase factor, is also a solution of the problem, we can easily reconstruct the solution in the time domain, by choosing appropriate values for $\alpha$ for each time step. The inner core sphere vibrates front to back. The shell/outer medium boundary is indicated in transparent blue. A $40 \times 40$ grid is chosen on the horizontal plane and the displacements are indicated on this grid with arrows. An intricate pattern of displacements can be observed. Since $k_T^{\mathrm{out}} > k_T^{\mathrm{sh}}$ and $k_L^{\mathrm{out}} > k_L^{\mathrm{sh}}$, the waves in the outer domain are more densely packed than in the shell. In the outer domain waves can be seen to travel towards infinity, while in the shell complex interference patterns appear due to the interaction of the emitted and reflected waves. Next we wonder what will happen if we keep the physical system the same, but change the oscillation frequency $\omega$. Since $k=\omega/c$, this is essentially the same as multiplying all $k$ values (both L and T and the shell and the outer domain) by the same value. We choose an example with the following parameters: $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}} =3.0$, $b/a=2.0$, $\boldsymbol u^0 = (0,0,1)$. Take initially $k_T^{\mathrm{sh}} a =4.5$, $k_L^{\mathrm{sh}} a=2.0$, $k_T^{\mathrm{out}}a=2.0$ and $k_L^{\mathrm{out}} a=1.0$. Then gradually increase (or reduce) the frequency, i.e. multiply (or divide) each $k$ by $1.005$ and recalculate all $C$'s until $k_T^{\mathrm{sh}} a=33$. The results for the transmitted coefficients (in the outer domain) $|C_t^T|$ and $|C_t^L|$ are shown in Fig~\ref{Fig:solC_t}. A complicated spectrum of peaks and valleys appears. For some values of $k_T^{\mathrm{sh}} a$, $|C_t^T|$ is near zero, while for other values $|C_t^L|$ becomes near zero. The emitted and reflected coefficients $|C_e^T|$ and $|C_r^T|$ for the transverse waves in the shell are shown in Fig.~\ref{Fig:sol_erT}. The first peaks appear near $k_T^{\mathrm{sh}}=3.6$. Note that both $|C_e^T|$ and $|C_r^T|$ tend towards infinity when $k_T^{\mathrm{sh}} a=0$. Finally, the emitted and reflected coefficients $|C_e^L|$ and $|C_r^L|$ for the longitudinal waves in the shell are shown in Fig.~\ref{Fig:sol_erL}. Again both $|C_e^L|$ and $|C_r^L|$ tend towards infinity when $k_T^{\mathrm{sh}} a=0$. \begin{figure*}[t] \begin{center} \includegraphics[width=0.95\textwidth]{sweep1.jpg} \caption{Frequency response curve for a sphere with a shell. Transmitted T and L-coefficients $|C^T_t|$ and $|C^L_t|$ (i.e. into the external domain) as a function of the parameter $k_T^{\mathrm{sh}} a$. Here, we keep the material parameters the same, but the oscillation frequency $\omega$ is changed. The parameter are $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}} =3.0$, $b/a=2.0$, $\boldsymbol u^0/U^0 = (0,0,1)$. Take initially $k_T^{\mathrm{sh}} a =4.5$, $k_L^{\mathrm{sh}} a=2.0$, $k_T^{\mathrm{out}}a=2.0$ and $k_L^{\mathrm{out}} a=1.0$. Then change the frequency, thus multiply each $k$ by the same value and recalculate all constants. Note the rather chaotic character of this graph, with many maximum and minimum values for both the T and L-coefficients, however, not at the same frequencies. The `spectrum' shows remarkable peaks and valleys especially compared to the case when no shell as presented in Fig.~\ref{Fig:sol_noShell}. } \label{Fig:solC_t} \end{center} \end{figure*} Let us investigate what exactly happens in these peaks and valleys by investigating three cases. Based on Fig.~\ref{Fig:solC_t} or the zoom-in shown in Fig.~\ref{Fig:Cases}(a), around $k_T^{\mathrm{sh}} a =5.98$, both $|C^T_t|$ and $|C^L_t|$ are near a peak value. We will call this Case 1. Thus for Case 1, $k_L^{\mathrm{sh}} a = 5.98\times2.0/4.5$, $k_T^{\mathrm{out}} a =5.98\times2.0/4.5$ and $k_L^{\mathrm{out}} a =5.98/4.5$ (scale all wavenumbers with the same amount). The cases are indicated with blue large arrows for clarity in Fig.~\ref{Fig:Cases}(a). In Fig.~\ref{Fig:Cases}(b), the displacement pattern is shown with red arrows. We can see that both T- and L-waves appear in the outer domain. Around $k_T^{\mathrm{sh}} a = 8.18$ the constant $|C^T_t|$ becomes very small while $|C^L_t|$ becomes large. Take $k_L^{\mathrm{sh}} a = 8.18\times2.0/4.5$ and $k_T^{\mathrm{out}}a=8.18\times2.0/4.5$ and $k_L^{\mathrm{out}}a=8.18/4.5$, we call this Case 2. In Fig.~\ref{Fig:Cases}(c), the displacement pattern is shown with red arrows, we see that they are all pointing radially in or outwards, indicating that mainly L-waves occur in the outer domain. Finally, for Case 3, we take the value $k_T^{\mathrm{sh}} a = 12.70$, where a minimum in $|C^L_t|$ occurs, again we multiply all wavenumbers by the same amount as in Cases 2 and 3. Now we clearly see a T-wave in the outer domain (all displacement vectors in Fig.~\ref{Fig:Cases}(d) are 90 degrees rotated when compared to Case 2). In all three cases, $\phi(r)\cos(\theta)$ is shown on the right hand side of the horizontal plane and $h(r) \cos(\theta)$ is shown on the left hand side in color. \begin{figure*}[t] \begin{center} \includegraphics[width=0.95\textwidth]{sweep2.jpg} \caption{Frequency response for a sphere with shell. As Fig.\ref{Fig:solC_t}, but now for the emitted and reflected T-coefficients $|C^T_e|$ and $|C^T_r|$ (in the shell) as a function of $k_T^{\mathrm{sh}} a$. The peaks and valleys are mostly overlapping, but not always. Note that both coefficients diverge at $k_T^{\mathrm{sh}} a=0$.}\label{Fig:sol_erT} \end{center} \end{figure*} The constants $C_e^T$ and $C_r^T$, representing the emitted and reflected transverse waves in the shell, are shown in Fig.~\ref{Fig:sol_erT} and the $C_e^L$ and $C_r^L$ constants in Fig.~\ref{Fig:sol_erL} which show the emitted and reflected longitudinal waves in the shell. \begin{figure*}[t] \begin{center} \includegraphics[width=0.95\textwidth]{sweep3.jpg} \caption{Frequency response for a sphere with shell. As Fig.\ref{Fig:solC_t}, but now for the emitted and reflected L-coefficients: $|C^L_e|$ and $|C^L_r|$ (in the shell) as a function of $k_T^{\mathrm{sh}} a$. Note that both coefficients diverge at $k_T^{\mathrm{sh}} a=0$. } \label{Fig:sol_erL} \end{center} \end{figure*} \begin{figure*}[!ht] \begin{center} \subfloat[]{\includegraphics[width=0.4\textwidth]{sweep1b.jpg}} \quad\quad\quad \subfloat[]{\includegraphics[width=0.47\textwidth]{case1_kT6_0.jpg}}\\ \subfloat[]{\includegraphics[width=0.47\textwidth]{case2_kT8_18.jpg}}\quad \subfloat[]{\includegraphics[width=0.47\textwidth]{case3_kT12_67.jpg}} \caption{(a) Zoom in of Fig.~\ref{Fig:solC_t} with the three selected cases indicated by arrows. Vector plots in the horizontal plane: (b) Case 1: Both T-waves and L-waves are generated in the external domain. (c) Case 2: Mainly L-waves occur in the external domain. (d) Case 3: Mainly T-waves appear in the external domain. The function $h(r)\cos(\theta)$ is plotted on the left hand side of the horizontal plane and $\phi(r) \cos(\theta)$ is shown on the right hand side. It is thus possible, by a clever combination of materials and frequency to generate mainly L or mainly T waves or a combination of both. } \label{Fig:Cases} \end{center} \end{figure*} \section{Discussion} \label{sec:discussion} Note that $\boldsymbol{u^0}$ can be different from the (real valued) $\boldsymbol{u^0}/U^0=(0,0,1)$, it could assume a complex value as well (as long as it is a constant). For example $\boldsymbol{u^0}/U^0=(\mathrm{i},0,1)/\sqrt 2$ will give a circularly vibrating sphere (not shown here). The following six non-dimensional parameter space can be distinguished for the shell case: $b/a$, $k_T^{\mathrm{sh}} a$, $k_T^{\mathrm{sh}}/k_L^{\mathrm{sh}}$, $k_T^{\mathrm{sh}}/k_T^{\mathrm{out}}$, $k_T^{\mathrm{sh}}/k_L^{\mathrm{out}}$ and $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}}$ (or any combination of these parameters). For a typical $a=1$ mm application, with $c_L = 6000$ m/s and $\rho=1000$ kg/m$^3$, a value of $k_L^{\mathrm{sh}} a= 1.0$ would correspond to a frequency of $2 \pi \omega = 1$ MHz and for $k_L^{\mathrm{sh}} a =100$, one would need 100 MHz, a frequency that is now becoming available in acoustic transducers~\citep{Fei2016}. For an object with typical size $a=1$ m, the frequency will be 1 kHz for $k_L^{\mathrm{sh}} a=1$ and 100 kHz for $k_L^{\mathrm{sh}} a=100$. The current framework can easily be extended to multiple shells. For a core with a single shell, we had to solve a $6 \times 6$ matrix, with every additional shell we will have to add 4 more equations, thus a $10 \times 10$ matrix for a two-shells system for example. It is also possible to calculate the stresses caused by the movement of the sphere, although we have not shown them here. Although outside the scope of the current analytical solution, a real system could easily be built for example by embedding a steel sphere in an elastic material and exciting it by magnetic means, for example possibly to convert electrical to mechanical energy and to generate heat remotely via magnetic stimuli. On the other hand, the study of this relatively simple system, yet with complex behavior, opens the way to further study and better understand real systems with their associated resonances, noise generation, fatigue and failure behavior, frequency responses etc. As we have shown the current analytical solution exhibits non-trivial behavior. It has rich physical detail, for example the presence of both longitudinal and transverse waves, including interference between outgoing and reflected waves and is therefore ideally suited to test numerical solutions, for example those generated by finite element or boundary element codes. In \ref{App:BEM}, we have used the analytical solution to test a boundary element code based on the framework developed by~\cite{Rizzo1985}. Excellent agreement is achieved when the numerical solution is compared to the theory. \subsection{Vibrating sphere without a shell} A solution for a vibrating sphere without a shell was previously given by~\cite{KlaseboerJElast2019}\footnote{ In order to get back the same solutions the constants used there should be replaced by with $C^T_e = - 2 c_1$ and $C^L_e = k_L^2 a^2 [2 c_1/(k_T^2 a^2) + c_2]$, where $c_1$ and $c_2$ are the constants used by \cite{KlaseboerJElast2019}.}, which is a special case of the current work when the shell material is set to be the same as the outer material. For such a case which is the equivalent to no shell at all, the number of parameters mentioned in the previous section (six) reduces to two, namely $k_T^{\mathrm{out}} a$ and $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}$. \begin{figure*}[t] \begin{center} \includegraphics[width=0.95\textwidth]{sweepNoShell1.jpg} \caption{Frequency response curves for a sphere with no shell. $|C^T_t|$ (upper curves) and $|C^L_t|$ (lower curves) for various $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}$ ratios, from $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=\sqrt{2}$ (the smallest this ratio can be) to $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=2, 3, 5$ and $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=100$ in the inset (going towards the in-compressible limit). Note the smoothness of the curves when the parameter $k_T^{\mathrm{out}} a$ is changed (which is essentially the same as changing the driving frequency), which is in stark contrast to the curves shown in Fig.~\ref{Fig:solC_t}.} \label{Fig:sol_noShell} \end{center} \end{figure*} When we keep the material parameters constant and change the vibration frequency (thus changing $k_T^{\mathrm{out}} a$ and keeping $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}$ constant), we can calculate $|C_t^T|$ and $|C_t^L|$. The results are plotted in Fig.~\ref{Fig:sol_noShell}. When compared to Fig.~\ref{Fig:solC_t}, the smoothness of the curves in Fig.~\ref{Fig:sol_noShell} is immediately noticed. The larger the $k_T^{\rm\mathrm{out}}/k_L^{\mathrm{out}}$ ratio is, the smaller the longitudinal parameter $|C_t^L|$ becomes for low $k_Ta$ values $(k_Ta <<1)$. But $|C_t^T|$ and $|C_t^L|$ both converge towards a value of 1.0 for larger $k_T^{\mathrm{out}} a$ values. For a vibrating sphere with no shell, it seems not possible to have a zero L or zero T contribution according to Fig.~\ref{Fig:sol_noShell}. The only possibility to generate a near zero L-wave is to reduce the frequency to near zero values. For larger frequencies (thus larger $k_T^{\mathrm{out}} a$ values), all curves tend towards $|C_L^t|=|C_T^{t}|=1$. The $|C_t^L|$ curves all seem to be monotonously increasing. For $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=\sqrt{2}$ and $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=2$, the $|C_t^T|$ curves monotonously decrease. However, for $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=3$ and onward, these curves show a maximum value of $|C_t^L|$. For $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=3$, the maximum $|C_t^L|$ is at $k_T^{\mathrm{out}}a = 1.492$ with a value of $1.426$ ($|C_t^L|$ is $1.421$ at $k_T^{\mathrm{out}}a$ near zero). The maximum $|C_t^L|$ appears later for $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=5$ at $k_T^{\mathrm{out}}a = 3.592$ with a value of $1.498$. The maximum $|C_t^L|$ shifts to larger and larger values of $k_T^{\mathrm{out}}a$ when the ratio $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}$ increases further, but does not seem to go significantly above a value of $1.6$. For example, in the inset the curves for $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=100$ are shown where the maximum $|C_t^L|$ occurs around $k_T^{\mathrm{out}}a = 95.3$ with $1.607$. For even larger $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=1000$ (not shown) the maximum $|C_t^L|=1.612$ and occurs at $k_T^{\mathrm{out}}a = 970$ approximately\footnote{Fig.~\ref{Fig:sol_noShell} was generated by setting $b/a=2$, $k_L^{\mathrm{sh}}a=k_L^{\mathrm{out}} a$, $k_T^{\mathrm{sh}} a = k_T^{\mathrm{out}} a$ and $\rho_{\mathrm{sh}}=\rho_{\mathrm{out}}$ (thus setting the material of the shell identical to that of the outer domain). We then get $C_r^L=0$, $C_r^T=0$, $C_e^L=C_t^L$ and $C_e^T=C_t^T$, as it should be.}. \begin{figure*}[t] \begin{center} \includegraphics[width=0.75\textwidth]{FigPulse1.jpg} \caption{The used -/+ pulse for the FFT transform with width $W=5a$ and 128 points. The values from $i=57$ to $72$ are non-zero with a minus/plus pulse centered at $i=65$ as $\text{DATA}[i]=2(i+N-N_w/2)-1]=\sin(2x) \exp(-\alpha x/2)$ with $x = (i-1-N_w/2)\pi/N_w$, with $N=64$, $N_w=N/4$ and $\alpha=0.1$. Since the wave is antisymmetric, the lowest frequency of the 65 frequencies in the FFT corresponds to $k_T^{\mathrm{sh}}a=1.256$ and the highest to $k_T^{\mathrm{sh}}a=80.425$ (there is no need to calculate $k_T^{\mathrm{sh}}a=0$). } \label{Fig:FFT1Pulse} \end{center} \end{figure*} \subsection{Pulsed time domain solutions using the Fast Fourier Transform} Now that the response for each frequency can be calculated we can use the Fast Fourier Transform (FFT)~\citep{Bedford1994} to get the solution for the displacements, at each location and for each time instant, if we assume the core is exhibiting a pulsed vibration. The minus/plus pulse used is shown in Fig.~\ref{Fig:FFT1Pulse}. We have deliberately chosen an antisymmetric pulse in order to avoid issues with non vanishing displacements associated with $k_T^{\mathrm{sh}} a = 0$. The standard FFT procedure given in the book Numerical Recipes was used~\citep{Numrep}. \begin{figure*}[t] \begin{center} \includegraphics[width=0.32\textwidth]{PulseNoShell1.jpg} \includegraphics[width=0.32\textwidth]{PulseNoShell2.jpg} \includegraphics[width=0.32\textwidth]{PulseNoShell3.jpg}\\ \includegraphics[width=0.32\textwidth]{PulseNoShell4.jpg} \includegraphics[width=0.32\textwidth]{PulseNoShell5.jpg} \includegraphics[width=0.32\textwidth]{PulseNoShell6.jpg} \caption{Screenshots at different times, obtained with a FFT-transform of a single -/+ pulse (as shown in Fig.~\ref{Fig:FFT1Pulse}) for the case of a sphere with no shell. The sphere first moves to the right and then to the left before it stops moving. The resulting displacement patterns (in vectors) show the separation of the L-waves and the T-waves in the 4th and 5th image onward. The L-waves travel twice as fast for this particular case ($k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=2.0$). On the horizontal plane the scalar function $\phi(r) \cos \theta$ is also given, while on the vertical plane $h(r) \cos \theta$ is plotted in color. A moviefile is available for this case. } \label{Fig:FFT1} \end{center} \end{figure*} In Fig.~\ref{Fig:FFT1} a typical example of the separation of the T and L waves (where the L waves travel at twice the speed as the T waves) for the case with no shell associated with the pulse given in Fig.~\ref{Fig:FFT1Pulse}. Initially the T and L waves are interfering with each other, until from Frame 4 onward, the L wave (most outer wave) clearly separates from the T wave (inner wave). The material parameters for this case are $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=2.0$. \begin{figure*}[!ht] \begin{center} \includegraphics[width=0.32\textwidth]{PulseShell1.jpg} \includegraphics[width=0.32\textwidth]{PulseShell2.jpg} \includegraphics[width=0.32\textwidth]{PulseShell3.jpg}\\ \includegraphics[width=0.32\textwidth]{PulseShell4.jpg} \includegraphics[width=0.32\textwidth]{PulseShell5.jpg} \includegraphics[width=0.32\textwidth]{PulseShell6.jpg}\\ \includegraphics[width=0.32\textwidth]{PulseShell7.jpg} \includegraphics[width=0.32\textwidth]{PulseShell8.jpg} \includegraphics[width=0.32\textwidth]{PulseShell9.jpg}\\ \includegraphics[width=0.32\textwidth]{PulseShell10.jpg} \includegraphics[width=0.32\textwidth]{PulseShell11.jpg} \includegraphics[width=0.32\textwidth]{PulseShell12.jpg} \caption{Screenshots at different times, obtained with a FFT-transform of a single -/+ pulse (as shown in Fig.~\ref{Fig:FFT1Pulse}) for the case with a shell with $b/a=3$ (the second sphere is indicated in transparent yellow color). Multiple reflections and double reflections of the T and L-waves can be observed, which obviously do not occur for the case with no shell of Fig.~\ref{Fig:FFT1}. A moviefile is available for this case.} \label{Fig:FFTShell} \end{center} \end{figure*} Another case with a shell is shown next. The parameter chosen are $b/a=3$, $k_T^{\mathrm{sh}}a = 4.0$, $k_T^{\mathrm{sh}}/k_L^{\mathrm{sh}}=2.0$, $k_T^{\mathrm{sh}}/k_T^{\mathrm{out}}=4/7$, $k_T^{\mathrm{sh}}/k_L^{\mathrm{out}}=4/3$ and $\rho_{\mathrm{out}}/\rho_{\mathrm{in}}=1.0$. This parameter set will give results with not too many reflections. The results are shown in Fig.~\ref{Fig:FFTShell}. The shell/outer boundary is indicated with a transparent yellow sphere. Expanding waves can be observed caused by the pulse which is the same as the one used in Fig.~\ref{Fig:FFT1}. Also, reflected waves from the shell/outer boundary can be clearly distinguished and even doubly reflected waves (i.e. reflected waves that reflect once more on the inner rigid core sphere). In order to more easily differentiate the expanding and reflecting waves, they are indicated in each frame. \section{Conclusions} \label{sec:conclusion} The analytical solution for the dynamic elastic problem of a vibrating rigid sphere surrounded by an elastic shell, the total being immersed in another infinite medium is presented. The solution shows some surprisingly unexpected physics with various peaks for both the longitudinal (L) and transverse (T) response when the frequency of the vibration is changed. These do not appear for the simpler case of a sphere without an elastic shell layer, where the frequency response is a smooth line. In practice, this means that almost pure L or T waves can be generated by carefully choosing the material parameters and the frequency of the vibration of the core sphere. This complex behavior, which is not present for spheres without a shell, appears very similar to the unique properties that can be observed with mechanical metamaterials, see \cite{Kelkar2020} or \cite{Wang2014}. Since all the responses for multiple frequencies can be easily obtained in the frequency domain, we can use the FFT framework to predict the response to a pulsed vibration in the time domain. Some examples are shown for a narrow pulse, which shows the separation of the L and T waves which move out radially as clearly distinctive pulses after some time has passed. In this article we have just scratched the surface regarding the multitude of possible solutions; there are six dimensionless parameters that can be varied in the core-shell vibration problem. The solution could be considered as the first approximation of an oscillating body in an elastic material. The analytical solutions can also be used as benchmark cases to test numerical solutions obtained with for example the finite element method (where boundary conditions at infinity are not easy to implement) or a boundary element method (which have hyper-singular integrals that need to be treated with extreme precaution), even in the time domain when the fast Fourier transform framework is used (such as in the example of Fig.~\ref{Fig:FFTShell}). The implementation of the solution is relatively straightforward, without any infinite sums or other mathematical difficulties. The codes (in Fortran language) used to generate the plots in this article are available from the authors on request. \section*{Acknowledgments} Q.S. was supported by the Australian Research Council (ARC) through Grants DE150100169, FT160100357 and CE140100003.
\section{INTRODUCTION} Pathological diagnosis is used to precisely examine cancer, and the demand for its automation and a Computer Aided Diagnosis (CAD) system has increased in recent years. To meet this demand, many methods have been developed for segmenting the tumor regions or classifying the types of cancer. In such methods, a Whole Slide Image (WSI), which is a digital slide image captured by a scanner, is generally used for pathological diagnosis. WSI is a gigapixel image that contains the magnified images of the tissue. It enables us to observe precise features of tissues such as the shape or texture of an individual cell. In actual diagnosis, pathologists observe cellular features and staining degree at high magnification. On the other hand, they observe the distribution of features and overall appearance at low magnification. Through such multifaceted observations at various magnifications, they are able to diagnose the class and stage of cancer and identify the factors of their decisions. WSI is too large to be inputted into a Convolutional Neural Network (CNN), which has been widely used for many pattern recognition tasks including pathological image segmentation. To overcome this constraint, WSI is usually cropped into patch images. Each patch is inputted into the model, and then the class is predicted individually. However, this cropping process will lose the global information, such as the cropped location in the tissue and its adjacent features. The loss of this information may lead to a decrease in the performance of the classification model. Particularly, cervical cancer, which is the target of this paper, has different distribution trends in each class; the stage increases as malignancy develops in the epithelium (which is located at the boundary of tissue in WSI), and the cancer stage is reached when the malignancy invades the inside of the tissue. To be more specific, the early stage of cervical cancer only exists around the boundary and rarely exists inside. This distributional characteristic indicates that the location of the patch gives powerful prior information for classifying the cervical cancer stage. In this paper, we focus on the differences in the distribution characteristics between the stages. To quantify this characteristic, we used the Distance from the Boundary of tissue (DfB). A DfB is a value for each pixel that indicate the distance from the boundary of the tissue; a distant pixel from the boundary ({\it i.e.}, a pixel located near the center of the tissue) has a higher value as shown in Fig. \ref{fig:propsed_method}. In other words, the pixel value of DfB provides the information of the pixel location in the tissue. In our method, we simply introduce this DfB value into the patch-level classification task for pathological image segmentation. Our main contributions in this paper are as follows: (1) To deal with the problem of the global information lost by cropping it into patches, we introduce the DfB value into the patch classification task in pathological image segmentation, in which DfB contains the distribution characteristics that differ among classes. (2) We found that the effectiveness of the distance information in cervical cancer classification varies with the distance from the boundary. (3) Our proposed method improves the prediction performance for the Non-Neoplasm class. \begin{figure}[t] \begin{center} \includegraphics [keepaspectratio, width=0.93\linewidth]{figures/example.pdf} \caption{Examples of Whole Slide Image (WSI) and ground truth for cervical cancer. We can observe that the tumor regions (LSIL and HSIL) appear around the boundary of the tissue.} \label{fig:example_gt} \end{center} \vspace{-3mm} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics [keepaspectratio, width=0.95\linewidth]{figures/proposed_method.pdf} \vspace{-3mm} \caption{Overview of proposed method. DfB patch is cropped from the same region of Original patch. All pixels of DfB patch are converted into its averaged value. Method1 concatenates the averaged DfB patch with Original patch before inputting it into CNN. Method2 concatenates the single averaged value with feature values outputted from CNN. After concatenation, these feature values are inputted into Fully Connected (FC) layers to classify.} \label{fig:propsed_method} \end{center} \vspace{-3mm} \end{figure*} \begin{figure}[th] \begin{center} \includegraphics [keepaspectratio, width=0.9\linewidth]{figures/DFB_Histogram.pdf} \vspace{-3mm} \caption{Patch ratio per DfB in each class. The DfB ranges of Non-Neop., LSIL, and HSIL class data are 0 - 133, 0 - 83, and 0 - 88 respectively. LSIL and HSIL incline to locate at the lower DfB area (where is around the boundary).} \label{fig:patch_ratio} \end{center} \vspace{-3mm} \end{figure} \section{RELATED WORKS} Many methods have been proposed for pathological image analysis, where most of them take a patch-based approach. The patch-based methods segment the huge image (WSI) into small patches and then classify each patch image~\cite{AltunbayD2010}\cite{ChangH2014}\cite{CruzRoaA2014}\cite{MousaviHS2015}. However, patch-based approaches have a trade-off: the patch with high magnification has detailed features like the shape and texture of cells, while it loses the features of the surrounding area. To handle this problem, using multi-scale features enables the model to capture both detailed features and wide-ranging texture patterns, adaptively \cite{tokunaga2019adaptive}. Even if this \cite{tokunaga2019adaptive} approach uses a wide field of view with low resolution, the location information in the entire image it is still difficult to use since a WSI is significantly large. A distance map is used for individual cell segmentation tasks in pathological images. Cells in pathological images are densely distributed and their boundary is blurry, which may lead to segmenting some cells as one instance. To segment individual cells, Neylor et al. \cite{naylor2018segmentation} solved this segmentation task by multi-task learning that simultaneously predicts segmentation of cells and regression of the intra-nuclear distance maps. Multi-task learning facilitates the method to identify the boundary of individual cells. Their purpose for using a distance map is different from ours: we use distance information of an entire tissue. \section{DISTANCE FROM BOUNDARY OF TISSUE} In this study, we classify each patch image into three classes; 1. Non-Neoplasm (Non-Neop.) 2. Low Squamous Intraepithelial Lesion (LSIL), 3. High Squamous Intraepithelial Lesion (HSIL). We investigated the prior of the DfB value for each class. Fig. \ref{fig:patch_ratio} shows the distribution of the DfB value for each class, in which the horizontal and vertical axes indicate the DfB value and the normalized frequency of patches in each distance, respectively. In this graph, we can observe that LSIL and HSIL are biased toward the boundary of the tissue, whereas Non-Neop. is widely distributed over the whole area. Naturally, the frequency of smaller distances tends to be high, since the patches near the boundary have a larger circumference than the patches inside. We effectively use this information for classifying patch images. \section{PROPOSED METHOD} Fig. \ref{fig:propsed_method} shows an overview of the proposed method that utilizes the DfB as prior information. To segment the huge image (WSI), we follow a patch-based classification approach. To address the problem of losing the cropped position of the patches, we use the distance of the patch from the tissue's boundary as input since DfB gives powerful prior information for the classification as discussed above. To input DfB into the model, we proposed two methods. One concatenates a DfB patch with the original patch before inputting it into the CNN model. The other concatenates the mean of DfB value with feature values before inputting it into Fully Connected (FC) layers. \subsection{Distance transform} To make a DfB image, we first roughly segment the tissue region by thresholding. In this process, we make a scaled-down WSI (from magnification-40x to magnification-2.5x) since the original magnification of WSI is too large to process image transformation and the detailed distance information is not so important. Next, we set the thresholds in Hue, Saturation, Value (HSV) color space. The tissue area and background area are segmented into a binary image as shown in Fig. \ref{fig:propsed_method}. Then, small objects are removed to get rid of fragments of tissue and fill the small holes to avoid a negative effect on the distance representation. After obtaining the tissue mask, we make the DfB image by using distance transformation \cite{borgefors1986distance} from the tissue mask. The values of all the pixels in the background area are 0, and the value of the pixel inside of the tissue is the distance from the boundary of the foreground region in the tissue mask, in which the unit of the distance is the pixel distance in the resized image. In our dataset, the maximum value was 140. \subsection{Network} Our method has two ways to input DfB into the model: one is inputting a DfB patch into a CNN with an original patch (Method1 in Fig. \ref{fig:propsed_method}), and the other is inputting a DfB value into FC layers with feature values that are extracted from the CNN (Method2 in Fig. \ref{fig:propsed_method}). Before inputting DfB, a DfB patch is cropped from the position corresponding to its original patch. Here, the location where the patch is cropped is important information, while the detailed location in a patch is not necessary. Therefore, the DfB patch converts into the mean of its values. This model outputs the probabilistic score of each class through a Softmax function after FC layers. For the loss function of the CNN and FC layers, we used a categorical cross-entropy for the output of the Softmax function. In the inference process, we make a prediction map to visually understand which area is well classified. A prediction map is made as follows: First, WSI is cropped into patches with a stride equal to the patch size, and the corresponding DfB is concatenated in the same manner as the training step. The CNN predicts the class of each patch. Finally, prediction classes of each patch are merged and masked the area that is not subject to classification or background. \section{EXPERIMENTS} We evaluated our method on three-class (Non-Neop., LSIL, and HSIL) classification for WSIs of uterine cervix. For this experiment, we compared the confusion matrices and five metrics with four methods: Baseline, DfB + CNN, DfB + FC, and DfB + FC (Transfer). For the initial weights of DfB + FC (Transfer), we used the weights of the pre-trained baseline's model and updated all the weights using DfB. \subsection{Experimental setup} Images of a sliced uterine cervix stained by Hematoxylin and Eosin (H\&E) were captured by a virtual slide scanner with a maximum magnification of 40x. In the experiment, we used 282 WSIs. To generate the ground truth of the dataset, three pathologists manually annotated the regions of three cervical cancer grades: 1. Non-Neop., 2. LSIL, and 3. HSIL. To train our model, WSI was cropped into patches with 256 $\times$ 256 pixels window size, 256 pixels stride size, and 40x magnification. The patches were randomly flipped along the horizontal and vertical axes for data augmentation. To handle class imbalances, we randomly remove samples from the majority class (under-sampling) and add more samples by duplicating the existing one from the minority class (over-sampling). This data augmentation process was only executed for the training set. We experimented using five-fold cross-validation; 282 WSIs (194,425 patch images) were divided into 5 sets (56 or 57 WSIs per fold). One set was used in the test, and the other sets were used for training. 20\% of WSIs in the training set were used as validation data, and the rest was used as training data. Validation data was used as a criterion for selecting the best performed model. After that, each WSI was split into patches. Only patches that consisted of a single class were used for training and testing except for the process of making prediction maps. We used ResNet-50 \cite{he2015deep} for CNN and Adam \cite{kingma2017adam} for the optimizer with a learning-rate of $10^{-3}$. ResNet-50 was pretrained on ImageNet \cite{deng2009imagenet}. The optimization was suspended when the mRecall did not improve in five epochs. We evaluated five metrics: Accuracy, mean of the per-class Recall (mRecall), mean of the per-class Precision (mPrecision), F1-score (F1), and mean of the Intersection of Union (mIoU). They are defined as: Accuracy = $\frac{\sum_{c}TP_c}{\sum_{c}(TP_c + FN_c)}$, mRecall = $\frac{1}{M}\sum_{c}\frac{TP_c}{TP_c + FN_c}$, mPrecision = $\frac{1}{M}\sum_{c}\frac{TP_c}{TP_c + FP_c}$, F1 = $2 \times \frac{mRecall \times mPrecision}{mRecall + mPrecision}$, mIoU = $\frac{1}{M}\sum_{c}\frac{TP_c}{TP_c + FP_c + FN_c}$, where $M$ is the number of classes, and $TP_c$, $FP_c$, and $FN_c$ are the numbers of true positives, false positives, and false negatives for class $c$, respectively. \begin{figure*}[tbhp] \begin{center} \includegraphics [keepaspectratio, width=0.9\linewidth]{figures/confusion_matrix.pdf} \vspace{-2mm} \caption{Comparison of confusion matrices for three-class classification. Diagonal components of a matrix is a recall for each class. From left to right, Baseline, DfB+CNN, DfB+FC, and DfB+FC (Transfer).} \label{fig:confusion_matrix} \end{center} \vspace{-3mm} \end{figure*} \subsection{Experimental Results} Fig. \ref{fig:confusion_matrix} shows the confusion matrices of three-class classification. These are the aggregated results for each test set of cross-validation. Each element of the matrix is normalized by the total number of its true labels, and the diagonal components represent recall of each class. In DfB + CNN and DfB + FC (Transfer), the score of Non-Neop.-class was better than the baseline's score. Moreover, in DfB + FC (Transfer) improved LSIL score. Table \ref{table:compare_metrics} shows the performance of these metrics for each ablation method. Our model DfB + FC (Transfer) achieved the highest scores in all metrics. In particular, mPrecision, and F1 were improved by 7.3\% and 4.6\%, respectively. We consider that the prior about the location information of each patch facilitates improving the performance. In contrast, DfB + FC deteriorated in all scores. It is assumed to be difficult to simultaneously learn the combination of image features and DfB value without pretraining. \begin{table}[t] \begin{center} \caption{Comparison of four methods in three-class (Non-Neop., LSIL, and HSIL) classification.} \begin{tabular}{lccccc} \hline Method & Acc. & mRecall & mPrec. & F1 & mIoU \\ \hline \hline Baseline & 0.926 & 0.909 & 0.678 & 0.777 & 0.834 \\ DfB + CNN & 0.935 & 0.898 & 0.697 & 0.785 & 0.814 \\ DfB + FC & 0.917 & 0.896 & 0.658 & 0.758 & 0.811 \\ \multicolumn{1}{c}{DfB + FC (Transfer)} & \textbf{0.948} & \textbf{0.911} & \textbf{0.751} & \textbf{0.823} & \textbf{0.836} \\ \hline \end{tabular} \label{table:compare_metrics} \end{center} \end{table} \begin{figure}[t] \begin{center} \includegraphics [keepaspectratio, width=0.9\linewidth]{figures/Diff_of_Recall_DfB-Baseline_All.pdf} \vspace{0mm} \caption{Difference in mean recall between DfB+FC (Transfer) and Baseline in each distance. Higher than 0.00 in y-axis indicates that DfB+FC (Transfer) is better than Baseline.} \label{fig:acc_diff_all} \vspace{0mm} \end{center} \end{figure} Fig. \ref{fig:acc_diff_all} shows the improvement of mean recall by using DfB+FC (Transfer) in each value of DfB. The horizontal axis indicates the DfB value, and the vertical indicates the improvement over the baseline at a certain distance. If the value of the y-axis is greater than 0, our method improved on the DfB. We can observe that our method improved on the baseline at almost all distances, in particularly at the larger values. Since there are no patches with DfB greater than approximately 90 in LSIL and HSIL, the improvement of our method at the larger values was significant. Fig. \ref{fig:predmap} shows the examples of prediction results. This is the comparison of DfB + FC (Transfer) and Baseline. In (b) DfB image, the brighter area has larger DfB values, which means the pixel is far from the boundary. As shown in Fig. \ref{fig:acc_diff_all}, the number of misclassified patches of Non-Neop. decreased. The reason for misclassification as LSIL in (d) Baseline is considered to be that a similar image feature's patch labeled as LSIL is included in the training set. The prior information, ({\it i.e.}, LSIL patches rarely exist in the interior of the tissue) works for predicting correctly. \section{CONCLUSIONS} We proposed a segmentation method in pathological images (WSI) that utilizes the Distance from the Boundary of the tissue (DfB). The proposed method can address the problem of the patch-based methods that lose the cropped position of the patches by using the distance of the patch from the tissue’s boundary as input with the original patch. The experiments demonstrated the effectiveness of our method; it improved the F1 by 4.6\% compared with the baseline method. In particular, the classification performance of Non-Neop. and LSIL, which are difficult even for pathologists, had improved. The proposed method might be applied to other squamous cell carcinomas as well. \begin{figure}[t] \begin{center} \includegraphics [keepaspectratio, width=\linewidth]{figures/predmap.pdf} \caption{Examples of prediction results. (a) Original image, (b) DfB image, (c) Ground truth, (d) Baseline, and (e) DfB+FC (Transfer). Gray, blue, red, and black for (c), (d), (e) indicates Non-Neop., LSIL, HSIL, and No-label area, respectively.} \label{fig:predmap} \end{center} \vspace{-4mm} \end{figure} \vspace{8mm} \noindent {\bf Acknowledgments:} This work was supported by AMED under Grant Number JP20lk1010036h0002 and JSPS KAKENHI Grant Numbers JP20H04211.
\section{Introduction} Fast radio bursts (FRBs) are cosmological radio transients with millisecond duration whose origin is enigmatic \citep{Lorimer2007,Thornton2013}. In the last decade there has been a remarkable increase in our knowledge about FRBs from the observational perspective \citep{Petroff2019,Cordes2019}. Identification of the host galaxy with sub-arcsecond localization accuracy for a dozen of FRBs has placed these sources at redshifts between $0.03$ and $0.66$ \citep{Chatterjee2017,Bannister2019,Ravi2019b,Prochaska2019,Marcote2020,Bhandari2020,Heintz2020}, confirming their cosmological origin. While it has been argued that all FRB sources could potentially repeat \citep[e.g.,][]{Ravi2019a}, some sources, such as FRB 121102 \citep{Spitler2016}, are statistically more active than the others \citep[e.g.,][]{Law2017,Palaniswamy2018} and intrinsic burst widths for the repeating sources are known to be larger as compared to those for the so-far non-repeating sources \citep{CHIME2019,Fonseca2020}, supporting the notion of potentially different origins the two. The spectral and polarimetric properties of FRBs at high time resolution are crucial to understanding their emission mechanisms and local environments \citep[e.g.,][]{Farah2018,Hessels2019,Nimmo2020}. One of such sources, thus-far non-repeating FRB 181112, was detected in the Commensal Real-time ASKAP Fast Transients (CRAFT) survey at $1.3$ GHz, with a duration of $2.1$ ms and a fluence of $26\ {\rm Jy\,ms}$, as reported by \citet{Prochaska2019}. The burst was localized to a star-forming galaxy at redshift $z=0.48$. Nevertheless, it has a linear polarization at negligible Faraday rotation measure (RM) $\sim10\ {\rm rad\ m^{-2}}$, which may disfavor an extremely magneto-ionic environment, such as a supernova remnant. Recently, \citet{Cho2020} carried out a high time ($3$-ns) resolution analysis of this burst, and found out that the burst is composed of a train of four narrow pulses separated by submilliseconds each. More recently, \citet{Day2020} also found that double-peaked FRBs 190102 and 190611 detected by ASKAP, both share some phenomenological similarities with FRB 181112. In this {\it Letter}, we consider implications of the observations of narrow pulses in FRB 181112. Intriguingly, the time interval between the first and third pulses ($\sim0.8$ ms) coincides with that between the second and fourth pulses, implying that there would be an underlying neutron star (NS) spinning at extremely short period $\sim0.8$ ms, and such a fast rotation could be most naturally achieved by a coalescence of binary neutron stars (BNS). In this case, spinning magnetic fields of merging NSs \citep{Totani2013} or the interaction between the NS magnetospheres \citep{Wang2016} during the final stage of a BNS merger inspiral can produce an FRB, which would be hidden at $0.5$-$1$ ms after the merger due to the subsequent mass ejection \citep{Yamasaki2018}. Therefore, if our interpretation of the underlying periodicity between sub-pulses of FRB 181112 is correct, it would provide strong support to the BNS-merger origin for this FRB. We also examine the idea that future co-detection of the gravitational wave (GW) from such FRBs provide the completely new information on the NS that is complementary to the relatively poor sensitivity of current GW detectors right after the BNS merger. This {\it Letter} is organized as follows. In \S \ref{s:FRB181112}, we present our interpretation of the temporal properties of FRB 181112 on the basis of BNS merger model for FRBs. In \S \ref{s:constraints}, possible constraints on the NS equations of state is presented, followed by discussion in \S \ref{s:discussion}. Throughout this work, we use a geometrical unit with $c=G=1$, where $c$ and $G$ are the speed of light and the gravitational constant, respectively. \section{FRB 181112 from BNS merger?} \label{s:FRB181112} \subsection{Interpretation of Four Narrow Pulses} \label{ss:period} According to \cite{Cho2020}, the singly-detected FRB 181112 consists of four narrow pulses with signal-to-noise ratio (S/N) of $220$, $5$, $28$, and $8$ respectively, arriving at times $t_1=0$ ms, $t_2=0.48\pm0.01$ ms, $t_3=0.808\pm0.004$ ms, and $t_4=1.212\pm0.002$ ms, where $t_i$ refers to the peak of the profile of pulse $i$. Although no significant periodicity cannot be claimed because of the small number of pulses, the interval $t_{31}$ = 0.808 ms between the pulses 1 and 3 is interestingly close to $t_{42}$ = 0.732 ms between the pulses 2 and 4, implying a tempting possibility of a periodicity around 0.8 ms \citep{Cho2020}. This is consistent with the duration (width) of an individual pulse $\lesssim0.1$ ms and negligible temporal pulse broadening due to scattering $\sim20\,\mu$s. Assuming that the four pulses randomly distribute within the time window of $2P\sim1.6$ ms, we estimate by a simple Monte Carlo method that there is a non-negligible $\lesssim18$\% probability for the $|t_{31}-t_{42}|$ to be less than the observed difference $0.076$ ms by chance alone. Though this chance probability alone does not allow us to claim the existence of periodicity with a high level of confidence, there is other circumstantial evidence to support the hypothesis that pulses 1 and 3 are of the same origin (see, e.g., Table 1 and Figure 1 of \citealt{Cho2020}). First, pulses 1 and 3 have similarly high S/N, which is in contrast to the low S/N of weak pulses 2 and 4. Secondly, the Faraday rotations were measured only in pulses 1 and 3 with their polarization angles being consistent with each other within $20^{\circ}$, suggesting that a similar magnetic field geometry may have been achieved in these two pulses. Furthermore, there is a potential similarity in the time-frequency structures of pulses. The dynamic spectra of pulses 1 and 3 extend across the observing band up to $\sim1450$ MHz, whereas pulses 2 and 4 possibly have a spectral cutoff at around $\lesssim1300$ MHz (see Figure 2 of \citealt{Cho2020}). Therefore, these data might support the interpretation of the $0.8$ ms periodicity, in which case the four pulses are emitted over two rotational periods. If the submillisecond periodicity is real, it is reasonable to regard it as the rotation of an underlying compact star, such as a NS. \citet{Lattimer2007} showed that the minimum spin period for a uniformly rotating NS with non-rotating mass $M$ and radius $R$ in fully relativistic calculations employing realistic hadronic EOSs is approximated as $P_{\rm min}\approx(0.96\pm0.03)\left(M_{\odot}/M\right)^{1/2}\left(R/10\ {\rm km}\right)^{3/2}$ ms, which applies to an arbitrary NS mass as long as it is not close to the maximum non-rotating mass (whereas for the maximum mass configuration, the coefficient reduces to $0.83$). That is, for $M\gtrsim1.4\ M_{\odot}$ star with its radius $R=10$ km, the minimum spin period would be limited to $P_{\rm min}\lesssim0.8$ ms regardless of EOS. Thus, most NS EOSs, at least in theory, allow for short spin period at $P=0.8$ ms seen in FRB 181112. \subsection{The Origin of Most Rapidly Spinning NS} What is the possible progenitor of a NS with such a short spin period? The most common channel for the formation of NSs is core collapse supernovae (CCSNe). Current evolutionary models of progenitors combined with numerical simulations of core collapses and explosions \citep[e.g.,][]{Spruit1998,Heger2000,Heger2005,Thompson2005,Ott2006,Nakamura2014} show that the spin period of a newborn NS can be as small as a few milliseconds only if the spin rate of the progenitor is sufficiently high. Meanwhile, the initial spins of pulsars are not well constrained by observations but, most likely, they lie in the vicinity of tens to hundreds of milliseconds \citep[e.g.,][]{Narayan1987,Lorimer1993,Kaspi2002,Faucher2006,Miller2015}. The most rapidly rotating pulsar currently known is J1748--2446ad with $P=1.4$ ms \citep{Hessels2006}, which however is not an isolated pulsar but a recycled one in a binary system, and so far no submillisecond pulsars have been found, despite vigorous pulsar explorations \citep[e.g.,][]{Lorimer2008}. Moreover, depending on the mass of SN ejecta, it takes about $10$--$100$ years for the surrounding environment to become transparent to the radio waves \citep[e.g.,][]{Murase2016,Kashiyama2017,Metzger2017}. Thus, even though the remnant NS from a CCSN is born rapidly rotating, with an initial spin period of submilliseconds, it would have significantly spun down by the time when an FRB produced by its activity could be observed, unless the initial NS magnetic field is too low and/or the NS angular momentum significantly increases due to a fallback accretion \citep[e.g.,][]{Shigeyama2018}. Therefore, the explanation of the submillisecond rotation by CCSNe requires fine tuning of the parameters. Another possible channel for NS formation is the coalescence of binary neutron stars (BNSs). Such a remnant after the merger, called massive NS, would start out rapidly rotating and gradually slow down through emission of gravitational and electromagnetic radiations \citep{Shibata2019} mass and also on the NS equations of state (EOS), it could survive for hundreds of milliseconds and eventually collapse to a black hole \citep{Hotokezaka2013} or it could actually remain stable indefinitely \citep{Shibata2019}. Since the remnant NS inherits the large kinetic energy $\sim10^{53}\ {\rm erg}$ of the binary orbital motion, its initial spin period is typically about $0.5$--$1.0$ ms, which is suggested by the numerical relativity simulations with the plausible value of the binary mass of $2.5$--$2.7M_\odot$ \citep{Radice2018}. In this respect, the explanation of the $P=0.8$ ms rotation seen in FRB 181112 would be most naturally interpreted as the spin rate of the BNS merger remnant without fine tuning of parameters. Furthermore, in the framework of BNS merger scenarios for FRBs \citep{Totani2013,Wang2016}, the rotational energy budget available for FRB emission dramatically increases until the moment of coalescence. Meanwhile, the dynamical ejecta begin to screen the radio emission at times about $0.5$--$1$ ms after the merger \citep{Yamasaki2018}, which may limit the maximum duration of an FRB and thus one will not ``see'' the subsequent FRB sub-pulses if any\footnote{If the remnant NS survives for long time ($\gtrsim1$ year) after the merger, its rotational or magnetic activity may produce repeating FRBs (\citealt{Yamasaki2018}, see also \citealt{Margalit2019,WangFY2020}).}. Therefore, we conclude that FRB 181112 could be most naturally interpreted as the repeated radio emissions from the remnant NS around the moment of coalescence that have survived the absorption due to the subsequent expansion of dynamical ejecta. \section{Future Implications on Neutron Star Equations of State} \label{s:constraints} While the possible presence of submillisecond periodicity in FRB 181112 strengthens the support for the BNS merger origin for this FRBs as shown in \S \ref{s:FRB181112}, the most unambiguous confirmation is only achieved by detecting the GW emission simultaneously with an FRB \citep{Totani2013,Zhang2014,Yamasaki2018,WangMH2020}. In this section, we discuss the future implications of a simultaneous detection of an FRB 181112-like FRB and the associated GW for NS matter EOSs. For these purposes we make use of the latest numerical-relativity simulations of BNS mergers (\S \ref{ss:simulation}). Based on this, we demonstrate some relations among key BNS-merger properties and show how FRB and GW observations can be combined with such relations to constrain the NS properties (\S \ref{ss:relations} and \S \ref{ss:Lambda}). \subsection{Simulation Data and Physical Quantities of Interest} \label{ss:simulation} We use the numerical-relativity simulations of BNS mergers \citep{Kiuchi2017,Kiuchi2020} performed with five phenomenological EOSs (polytropic EOSs for dense nuclear matter with broken power law, see \citealt{Read2009}), which produce a wide range of spherical NS radii $R_{1.35}=10.96$--$13.69$ km for a $1.35\ M_{\odot}$ star. As shown below, our purposes are to demonstrate the qualitative dependence of the remnant spin period on the remnant mass and to obtain the relationship between compactnesses before and after the merger. Therefore, a choice of relatively simple EOSs is sufficient. Here we try the models with total mass $m_{\rm tot}=2.50$--$2.73\,M_{\odot}$ and mass ratio $q=m_1/m_2=0.7$--$1.0$, where $m_1+m_2=m_{\rm tot}$. The primary quantities of our interest are the minimum spin period of the remnant ($P_{\rm rem}$), the remnant mass ($M_{\rm rem}$), and the binary tidal deformability parameter ($\tilde{\Lambda}$), which are extracted from the simulations as follows. In general, the remnant is initially rotating differentially, which is characterized by a slowly rotating core surrounded by a rapidly rotating outer layer, and depending on the magnetorotational instabilities and/or the neutrino cooling , the rotational profile evolves into Keplerian one \citep[e.g.,][]{Shibata2005,Fujibayashi2020}. Namely, the rotational profile is highly unstable (hence not appreciable) around the time of merger. Thus, we extract the minimum spin period by examining the location of the peak in the angular velocity profile along the equator of the merged NS remnant (or orbital plane) at about $10$--$15$ ms after the merger. The errors in $P_{\rm rem}$ arising from simulations are estimated to be $\lesssim6$\%. We approximate the remnant mass by the total mass of the NSs for simplicity (i.e., $M_{\rm rem}\sim m_{\rm tot}$). Other potential systematic uncertainties in $P_{\rm rem}$ and $M_{\rm rem}$ will be discussed in \S \ref{s:discussion}. This is reasonable because the total mass of tidal and shock-driven dynamical ejecta during the early post-merger phase is typically $\lesssim10^{-2}\ M_{\odot}$ \citep[e.g.,][]{Shibata2019}, which is negligible compared to the total mass of the system. Last but not least, the binary tidal deformability $\tilde{\Lambda}$ is directly extracted from the inspiral GWs \citep[][]{LIGO2017,De2018,De2018Erratum,LIGO2020}. \begin{figure}[t] \centering \includegraphics[width=.48\textwidth]{P_M.pdf} \caption{The post-merger spin period of the remnant NS $P_{\rm rem}$ as a function of remnant mass $M_{\rm rem}$, which is approximated as a total mass of the NSs. The vertical error bars are coming from the simulation uncertainties arising from the differential rotation of the remnant, which are set to be $6$\%. The different markers represent different EOSs and the non-rotating spherical NS radii for a $1.35\,M_\odot$ could be regarded as an effective parameter of the EOS. The simulation with softest EOS (corresponding to $R_{1.35}=10.96$ km) with high total mass $m_{\rm tot}\gtrsim2.7M_{\odot}$ are not shown because they collapse to a black hole within a few ms after the merger and hence $P_{\rm rem}$ is unavailable. The horizontal blue region represents the potential spin period estimated from FRB 181112 observation with an error of $0.1$ ms (see \S \ref{ss:period}) and the vertical orange region indicates the total mass of the BNS system inferred from the GW170817 observations $2.74_{-0.01}^{+0.04}\,M_{\odot}$ \citep{LIGO2017}. } \label{figure_1} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=.75\textwidth]{P_Lambda.pdf} \caption{$P_{\rm rem}/M_{\rm rem}$ as a function of binary tidal deformability parameter ${\Tilde{\Lambda}}^{1/5}$ (corresponding ${\Tilde{\Lambda}}$ values are indicated above the upper horizontal axis). The best-fitted result is shown as a solid black line with grey shaded region being the 1-$\sigma$ errors. The horizontal green region represents the $P_{\rm rem}/M_{\rm rem}$ value estimated from the hypothetical detection of an FRB-GW event (i.e., FRB 181112 and GW170817). The vertical purple region represents the constraints on the tidal deformability coming from the $P_{\rm rem}/M_{\rm rem}$ constraints. } \label{figure_2} \end{figure*} \subsection{Period-Mass Relation} \label{ss:relations} Figure \ref{figure_1} shows the relation between $P_{\rm rem}$ and $M_{\rm rem}$ for different EOSs. One can see that the $P_{\rm rem}$-$M_{\rm rem}$ relation strongly depends on the EOS (or radius $R_{\rm rem}$), and for each EOS there is a mild dependence of $P_{\rm rem}$ on $M_{\rm rem}$. These trends could be qualitatively understood if the remnant has a quasi-uniform rotation and it rotates with the Keplerian velocity at the surface, i.e., $P_{\rm rem}\propto R_{\rm rem}^{3/2}\,M_{\rm rem}^{-1/2}$. This plot is useful when considering a case of detecting the inspiral GW and coincidentally seeing a high-time-resolved FRB with submillisecond periodicity. In this case, $P_{\rm rem}$ and $R_{\rm rem}$ are both measurable by the FRB and GW observations, respectively. For instance, let us consider a hypothetical FRB-GW detection by taking $P_{\rm rem}$ from FRB 181112 and $M_{\rm rem}$ from GW170817. Then, one can constrain the allowed parameter space on the $P_{\rm rem}$-$M_{\rm rem}$ plane (see the area where the horizontal and vertical shaded regions intersect in Figure \ref{figure_1}). This demonstrates that the simultaneous measurement of $P_{\rm rem}$ and $M_{\rm rem}$ would provide an important constraint on the EOS. \subsection{Period/Mass-Tidal Deformability Relation} \label{ss:Lambda} Additionally, we consider a case where the tidal deformability (as well as $M_{\rm rem}$) is measured by the GW observations observation of a BNS inspiral. The binary tidal deformability, $\Tilde{\Lambda}$, can be written as \citep[][]{Flanagan2008,Hinderer2007} \begin{eqnarray} \label{eq:Lambda} \Tilde{\Lambda}&=&\frac{8}{13}[(1+7\eta-31\eta^2)(\Lambda_1+\Lambda_2)\nonumber\\ &+&\sqrt{1-4\eta}(1+9\eta-11\eta^2)(\Lambda_1-\Lambda_2)], \end{eqnarray} where $\eta=m_1m_2/m_{\rm tot}^2$ is the symmetric mass ratio and $\Lambda_i$ ($i=1,\,2$) is the tidal deformability of each star, defined as \begin{equation} \label{eq:Lambda_i} \Lambda_i\equiv\frac{2}{3}k_2^{(i)}\left(\frac{R_i}{m_i}\right)^5, \end{equation} where $k_2^{(i)}$ is the quadrupolar Love numbers of each NS. For simplicity, we consider near-equal-mass NSs with $\eta\sim1/4$, in which case $\Tilde{\Lambda}\sim\Lambda_i$. As shown in Figure \ref{figure_2}, one can see that the remnant quantity $P_{\rm rem}/M_{\rm rem}$ is closely related to the binary tidal deformability by the relation\footnote{This is qualitatively similar to the so-called ``(approximate) universal relations'' between the post-merger gravitational wave frequency and tidal deformability \citep[e.g.][]{Bauswein2012,Read2013,Bernuzzi2015,Rezzolla2016,Zappa2018,Kiuchi2020}}. \begin{equation} \label{eq:P/M-Lambda} \log_{10}\left[\left(\frac{P_{\rm rem}}{\rm ms}\right)\left(\frac{M_{\odot}}{M_{\rm rem}}\right)\right]\simeq a_0 +a_1{\Tilde{\Lambda}}^{1/5}, \end{equation} where $a_0=-1.22^{+0.08}_{-0.08}$ and $a_1=0.18_{-0.02}^{+0.02}$ are numerical coefficients with errors of 1-$\sigma$. This may be qualitatively understood as follows. By assuming that the remnant has a Keplerian rotation, $P_{\rm rem}/M_{\rm rem} \propto(R_{\rm rem}/M_{\rm rem})^{3/2}={\cal C}_{\rm post}^{-3/2}$, where ${\cal C}_{\rm post}$ is the compactness of the remnant NS. Meanwhile, the NSs' tidal deformability is related to the compactness of NSs before the merger ${\cal C}_{\rm pre}$ as $\tilde{\Lambda}^{1/5}\sim\Lambda_i^{1/5}\propto R_i/m_i={\cal C}_{\rm pre}^{-1}$ (see Eq. [\ref{eq:Lambda_i}])\footnote{A slightly different relationship between the binary tidal deformability and the compactness parameter $\tilde{\Lambda}\propto C^{-6}$ has also been proposed \citep{De2018,De2018Erratum}. Yet, this barely affects our conclusions.}. Namely, each quantity could be expressed in terms of compactness parameter. Therefore, the clear $P/M$-$\tilde{\Lambda}^{1/5}$ correlation in Eq. \eqref{eq:P/M-Lambda} may imply the existence of hidden relationship between ${\cal C}_{\rm pre}$ and ${\cal C}_{\rm post}$, which could be only investigated through numerical-relativity simulations. Given the remnant spin period and mass inferred by FRB and GW observations, respectively, one can see that Eq. \eqref{eq:P/M-Lambda} will provide a constraint on the tidal deformability $\Tilde{\Lambda}_{\rm FRB}$, which is completely independent from that directly measured from the inspiral GW $\Tilde{\Lambda}_{\rm GW}$. For instance with a hypothetical FRB-GW detection (taking $P_{\rm rem}$ from FRB 181112 and $M_{\rm rem}$ from GW170817), one would obtain $\Tilde{\Lambda}_{\rm FRB}\sim600$--$1000$. This is actually consistent with the tidal deformability directly measured from the GW170817 $100\lesssim\Tilde{\Lambda}_{\rm GW}\lesssim800$ \citep{LIGO2017}. We note that the upper limit on the $\Tilde{\Lambda}_{\rm GW}$ is robustly set by the GW analysis, whereas the lower limit is rather dependent on the prior physical information about the NSs. In contrast, as our method of using $P/M$-$\tilde{\Lambda}^{1/5}$ relation provides a constraint on $\tilde{\Lambda}_{\rm FRB}$ (hence on NS radii) with an error bar, this would be qualitatively different estimate and thus of great importance. The error in $\tilde{\Lambda}_{\rm FRB}$ is subject to the accuracy of the FRB and GW observations ($P_{\rm rem}$ and $M_{\rm rem}$) and the variance of $P_{\rm rem}/M_{\rm rem}$-$\tilde{\Lambda}^{1/5}$ relation (see \S \ref{s:discussion}). Curiously, even the possible disagreement between $\Tilde{\Lambda}_{\rm FRB}$ and $\Tilde{\Lambda}_{\rm GW}$ may allow us to test whether the empirical relation in Eq. \eqref{eq:P/M-Lambda}, derived solely from numerical relativity simulations, actually holds. A phase transition from normal nuclear matter to quark matter that can take place inside the NSs around the moment of coalescence might modify the BNS merger process, and the tight correlation between $P_{\rm rem}/M_{\rm rem}$ and $\tilde{\Lambda}$ obtained for pure nucleonic stars (as done in this work) may not persist anymore \citep{Bauswein2019}. For instance, sharp phase transitions lead to the smaller tidal deformabilities and also induce discontinuities in the relation between tidal deformability and mass \citep{Han2019,Nandi2020}. Consequently, such phase transitions would lead to a deviation of $\Lambda_{\rm FRB}$-$\Lambda_{\rm GW}$ relation from that shown in Figure \ref{figure_2}. \section{Summary and Discussion} \label{s:discussion} In this {\it letter}, we investigated the possibility that the separation among the sub-pulses in FRB 181112 could represent the rotation period of an underlying NS, and the extremely short period of about $0.8$ ms could be a strong evidence for a BNS merger. Base on this picture, we have shown that such a high spin rate inferred from a high time-resolved FRB would offer a unique opportunity to study the nature of the BNS merger remnant, particularly if co-detected with GW. First of all, since the information on the remnant spin period $P_{\rm rem}$ is not yet readily available with the current GW observation, the newly proposed method of detecting it via the high time-resolved FRBs is complementary. Moreover, if combined with the remnant NS mass $M_{\rm rem}$ inferred from GW observation, it would place a new constraint on the nuclear matter EOS. Our numerical relativity simulation suggests that the post-merger quantity $P_{\rm rem}/M_{\rm rem}$, or the tidal deformability of the merger remnant, has a tight correlation with the binary tidal deformability parameter $\Tilde{\Lambda}$ of NSs before they merge. Given this empirical relation, a joint FRB-GW observation will establish a new limit on $\Tilde{\Lambda}$. Therefore, if $\Tilde{\Lambda}$ is also well measured by GW data, a comparison between these two will provide further insights into our understanding of nuclear matter and BNS merger process. Besides the errors related to the simulation described in \S \ref{s:constraints}, there may be additional systematic uncertainties in $M_{\rm rem}$ and $P_{\rm rem}$ that would also propagate to $P_{\rm rem}$-$M_{\rm rem}$ (Figure \ref{figure_1}) and $P_{\rm rem}/M_{\rm rem}$-$\tilde{\Lambda}^{1/5}$ (Figure \ref{figure_2} and Eq. [\ref{eq:P/M-Lambda}]) relations. In this work, we approximated the mass of the remnant NS by the total mass of the pre-merger binary system. Meanwhile, a number of simulations have shown that the post-merger system generally consists of a central core with differential rotation (corresponding to the remnant NS considered here) and an accretion disk that uniformly rotates around it \citep[e.g.][]{Shibata2019}. In this context, the spatial extent of the remnant NS (or the total mass) is not a well-defined concept, but simulations using typical binary masses of $2.5$--$2.7\,M_{\odot}$ suggest that, depending on the mass of the disk, the uncertainty in $M_{\rm rem}$ is up to $0.1$--$0.3\,M_{\odot}$ \citep{Fujibayashi2020}, which translates into a fractional error in $M_{\rm rem}$ of $\lesssim10$\%. Similarly, $P_{\rm rem}$ may have multiple systematic uncertainties. First, the spin period is not a gauge-independent quantity in general relativity, and therefore the derived relations could in principle change when choosing different simulation setups. Nevertheless, by comparing the frequency of the dominant quadrupole mode of GW radiation, which is gauge invariant, with the remnant spin frequency we confirm that the effect of gauge is negligible (see Appendix \ref{s:appendix}). Secondly, since our simulation covers only a limited mass range, we need a more comprehensive study to evaluate the variance of those relations. Third, as the rotational profile of the remnant is time-dependent, the minimum spin period may change depending on when one extracts it from a simulation. Finally, the shock-wave heating during the coalescence, which depends on a BNS model, may also affect the rotational profile. Since this work is the very first step toward probing BNS merger EOS by means of FRBs, we leave the exploration of these possibilities for future works. Based on the BNS merger models \citep{Totani2013,Yamasaki2018} and also hinted by the observation of FRB 181112 \citep{Cho2020}, we predict a unique population of non-repeating FRBs having multiple sub-pulses with submillisecond periodicity. The full duration of such FRBs may be determined by the dynamical timescale of ejecta that would hide the radio waves at times of about $0.5$--$1$ ms after the coalescence \citep{Yamasaki2018}. As a result, no subsequent FRB sub-pulse would be observed. The FRB 121002 \citep{Champion2016} is the first FRB sample that clearly shows double components. However, due to its somewhat large separation between two peaks $2.4\pm0.4$ ms, it cannot be a strong evidence for a BNS merger. Meanwhile, recently discovered double-peaked FRBs 190102 and 190611 \citep{Day2020} could be good candidates for this population. The peak separations for FRBs 190102 and 190611 are about $0.5$ ms and $1$ ms, respectively with small scattering timescales ($0.04$ ms and $0.18$ ms, respectively) and the rotation measure for two sub-pulses in each burst are comparable, sharing many phenomenological similarities to FRB 181112 \citep{Day2020}. Further in-depth modelling of the radio emission signature from merging BNSs \citep[e.g.,][]{Palenzuela2013,Carrasco2020,Most2020,Wada2020} as well as their possible connection to FRBs will be required to see if such models can account for the sub-pulse separations observed. A population of FRBs that is similar to FRB 181102 will be found by ongoing (ASKAP) and future (e.g., SKA) high-time resolution surveys. Also, there is a fascinating possibility that GWs and radio waves can be accurately observed simultaneously by forecasting the BNS merger with the space-based detector DECi-hertz Interferometer Gravitational wave Observatory (DECIGO, \citealt{Kawamura2006,Sato2017}). Ultimately, in the era of third-generation detectors, such as the Einstein Telescope \citep[][]{Hild2011} and the Cosmic Explorer \citep[][]{Abbott2017}, post-merger GWs and FRBs similar to FRB 181102 will be detected simultaneously, enabling a direct comparison between the remnant spin periods obtained by FRBs and GWs. \acknowledgments SY gratefully acknowledges the support from the Institute for Cosmic Ray Research during the course of this work. TT was supported by JSPS/MEXT KAKENHI Grant Numbers 18K03692 and 17H06362. KK was supported by 18H01213. The numerical computation was performed on Cray XC50 at cfca of National Astronomical Observatory of Japan, Oakforest-PACS at Information Technology Center of the University of Tokyo, and on Cray XC40 at Yukawa Institute for Theoretical Physics, Kyoto University.
\section{Introduction} Plagiarism refers to the unacknowledged use of others' text or ideas. Stopping plagiarism is an important topic in the field of academic publication. Plagiarism detection systems compare a submitted suspicious document against a collection of source documents to find the cases of text re-use. It has been proved that plagiarism detection systems are effective tools for discouraging researchers to commit plagiarism \cite{BrownPreventing}. \par There are two main categories for plagiarism detection (PD) systems. External plagiarism detection systems search for pattern of text re-use between suspicious document and a collection of source documents. On the other hand, intrinsic plagiarism detection systems exploit stylometry algorithms to find the changes in writing style of the author. \par There are different plagiarism detection tools that have been investigated in multiple surveys \cite{LancasterClassifications,LukashenkoComputer}. Crosscheck \cite{ZhangCrossCheck,Zhangsurvey} and Turnitin \cite{BataneTurning} are the most popular ones in academic community. Although there are many plagiarism detection systems in English, plagiarism detection is still a serious task in less resourced languages. \par In this paper, we propose \textit{Hamtajoo}, a Persian plagiarism detection framework for investigating patterns of text re-use in Persian academic papers. Persian belongs to Arabic script-based languages which share some common characteristics. This language family has some common properties such as absence of capitalization, right to left direction, encoding issues in computer environment and lack of clear word boundaries in multi-token words \cite{FarghalyComputer}. The system works on a document level at the first stage and then focuses of paragraph and sentence level in the second detailed comparison stage. We have prepared a reference collection of all of the Persian academic papers indexed by SID\footnote{ http://www.sid.ir/} (Scientific Information Database). \par The rest of the paper is organized as follows. Section~\ref{relatedwork} describes the related work on recent plagiarism detection tools and methods. In section~\ref{proposedapproach}, we explain the proposed approach and also describe the algorithms that have been developed and implemented in the system. Section~\ref{systemarchitecture} comes with system design, architecture and functionalities. In section~\ref{experimentsandevaluation}, an evaluation framework for examining the system is described. Conclusion and the directions for the future work are presented in the final section. \par \section{Related Work} \label{relatedwork} In this section we investigate some of the available plagiarism detection systems. \par An Arabic plagiarism detection tool called Aplag has been introduced in \cite{MenaiAPlag}. For developing the plagiarism checker system, they have extracted the fingerprints on document, paragraph and sentence levels for saving the computation time. If the similarity between hashes of the two documents is above a specific threshold, then the process continues to paragraph level and so on. \par In a work accomplished by Alzahrani et al., an intelligent plagiarism reasoned named as iPlag, has been designed \cite{AlzahraniiPlag}. Scientific publications in same fields usually share same general information and have some common knowledge. Besides, each publication should convey specific contributions. In this system, they have processed various parts of a manuscript and weighted them based on their importance in plagiarism detection. So, the PD system pays more attention to the parts of a manuscript that has more contributions and lower weights given to less important parts. \par Meuschke et al. proposed Citeplag, a plagiarism detection system based on citation pattern matching \cite{MeuschkeCitePlag}. They search similar patterns of citations between source and suspicious documents to find cases of plagiarism. \par In a work accomplished by Leong and Lau, they have developed a document plagiarism detection system named as Check \cite{SiCHECK}. They try to eliminate unnecessary comparisons between documents with different subjects and so reduced the computational costs. \par A word-similarity sentence-based plagiarism detection tool on Web documents, named as SimPaD has been developed in \cite{PeraSimPaD}. They measure the similarity between sentences by computing word correlation factors and then generate a graphical view of sentences that are similar. \par Collberg et al. developed a system for self-plagiarism detection named as SPLAT \cite{CollbergSPLAT}. The system crawls websites of top fifty computer science departments and downloads the research papers. Then a text comparison algorithm compares all of the papers for instances of text re-use. \par It should be noted that most of the tools for detection cases of plagiarism are only pay attention to instances of verbatim copy plagiarism and cannot identify paraphrased passages of text which require semantic similarity detection methods. Our contribution in this paper is to use specific features in Persian to detect cases of paraphrased plagiarism. \section{Proposed Approach} \label{proposedapproach} The proposed approach for developing \textit{Hamtajoo} system is thoroughly described in this section. \par Stein et al. proposed a generic three-step retrieval process for an external PD system \cite{SteinStrategies} that is depicted in Figure~\ref{fig:1}. Their proposed model consists of a heuristic retrieval step for extracting a set of source documents from the source collection, a detailed analysis step for comparing the suspicious document with candidate source documents for finding the similar passages, and a knowledge-based post-processing step to analyze identical passages and investigate whether they contain proper quotations. \par \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{6.png} \caption{Generic three-step retrieval process for an external plagiarism detection system \cite{SteinStrategies}} \label{fig:1} \end{figure} A similar approach has been used to develop \textit{Hamtajoo} PD system. \textit{Hamtajoo} includes two main components, including the candidate retrieval and the text alignment modules. From the evaluation point of view, candidate retrieval is a recall-oriented task, while precision is mainly focused by the subsequent text alignment step \cite{PotthastOverview}. \par In this section we describe main proposed methods for the candidate retrieval and the text alignment modules. The evaluation results on the proposed algorithms are presented in the next section. \par \subsection{Candidate Retrieval Module} Since the comparison of submitted suspicious document with the entire source documents in the system would be very time consuming, a candidate retrieval stage is considered to decrease the search space. The aim is to find most similar documents with the submitted suspicious document and also to reduce the number of documents for the subsequent text alignment stage. Our approach to retrieve candidate documents is divided into four main steps: \begin{itemize} \item Chunking the suspicious document \item Noun phrase and keyword phrase extraction \item Query formulation \item Search control \end{itemize} Before these main steps, suspicious documents are passed through Parsiver pre-processing \cite{MohtajParsivar} block that includes stop words removal and unification of punctuation. Parsiver is an integrated package written in python which performs different kinds of pre-processing tasks in Persian. Each of the mentioned steps is described in more details as follows. \par \textbf{Chunking the suspicious document:} In this step, the submitted suspicious document is segmented into some parts called chunks. These chunks will be used for query construction based on keyword phrase and noun phrase extraction. Therefore, their length should be long enough to extract meaningful queries. On the other hand, these chunks may contain unknown numbers of plagiarism cases from source documents. Based on experiments on \textit{Hamtajoo} to choose the system parameters, 500 words length has been chosen as the chunk length. In other words, the suspicious document would be divided into chunks of 500 words length and then each chunk is tokenized into individual sentences. \par \textbf{Noun phrase and keyword phrase extraction:} This step has the main role in candidate retrieval task. In other words, extracting appropriate keywords could lead the candidate retrieval to perform more efficiently. There are many previous studies that tried to extract keywords by investigating content \cite{MatsuoKeyword,WittenKEA}. In order to construct queries from the suspicious document in our approach, both keyword phrases and noun phrases are extracted. \par Before starting the extraction process, sentences with low information content are discarded. For this purpose, the input sentences have been ranked based on their length and the number of nouns, and then we discard the lower 20\% of the sentences. The resulting sentences are long enough and have rich content for keyword extraction. The TF-IDF (Term Frequency - Invert Document Frequency) weighting scheme has been used to extract important words from the sentences. The keywords are extracted from top 3 most important sentences that contain the highest TF-IDF words. The chosen keywords include nouns with high TF-IDF values, remaining nouns in the sentence, and also adjectives and verbs with high TF-IDF values. Moreover, the noun phrase extraction is accomplished by processing the remaining sentences. The formulation is deployed based on the formal Persian noun phrase structure. For each noun phrase, a score is calculated based on TF-IDF values. \par \textbf{Query formulation:} For top ranked sentences selected from previous step, the extracted keywords are simply placed next to each other based on their order in sentence and are passed to the next step as a query. The Apache Lucene\footnote{https://lucene.apache.org/} is used to index the source documents. The constructed queries in this section are passed to Lucene application programming interface (API) to search them within the indexed documents. \par \textbf{Search control:} In this step, some of the constructed queries are dropped based on the previously downloaded documents. The input query is compared against the downloaded documents that are gathered from previous rounds of source retrieval steps. \par The retrieved documents based on the constructed queries are passed to the text alignment sub-system for more detailed analysis and comparison. For each submitted documents, 25 source documents are chosen in the average for text alignment analysis in the next stage. \subsection{Text Alignment Module} The candidate documents from the previous stage are passed to a text alignment module for detail comparison of text passages between suspicious document and the source candidates. In this stage, the exact position of text re-use cases will be detected and will be reported to the end users. Different methods and algorithms have been tested to choose the best one from the accuracy and speed points of view. A standard PD corpus and also a plagiarism detection lab are designed to compare the performance of different methods. The detail description of the compiled corpus will be presented in the "Experiments and Evaluation" section. \par Figure~\ref{fig:2} shows a snapshot of our PD lab. As depicted in the figure, different methods including character n-gram, word n-gram, Vector Space Model (VSM) and Latent Semantic Analysis (LSA) are deployed in the tool. Moreover, different parameters such as the range of n for n-gram similarity, the similarity threshold for VSM and LSA and the parameters related to different parts of pre-processing are embedded in our PD lab to analyze their impact on plagiarism detection performance. \begin{figure}[b] \centering \includegraphics[width=0.44\textwidth]{5.png} \caption{Snapshot of plagiarism detection lab} \label{fig:2} \end{figure} As depicted in the figure, the PD lab includes a dot plot graph that shows the cases of similarity between pairs of documents. Among different methods which have been developed in our PD lab, the VSM similarity is chosen based on accuracy and runtime criteria. \par The proposed model for detecting exact position of text re-use cases includes the following steps; First, we split the pairs of suspicious and candidate source documents into sentences. In the second step, for each sentence in source and suspicious document, the relevant vectors have been created based on TF-IDF weighting schema. The IDF measure has been computed on a large collection of academic manuscript and the TF counts the number of target word in the document. Finally, in the third step, a pairwise cosine similarity has been computed between all of the sentences in two documents. Pairs of the sentences with similarity higher than predefined threshold are considered as cases of text re-use. \section{System Architecture} \label{systemarchitecture} In this section, we describe the main technologies that are used to develop the PD system. \textit{Hamtajoo} is a web application composed of two main subsystems; the core (back-end) and front-end subsystems. The main stages of \textit{Hamtajoo} are depicted in Figure~\ref{fig:3}. \begin{figure}[t] \centering \includegraphics[width=0.25\textwidth]{flow.PNG} \caption{The block diagram of \textit{Hamtajoo} PD system} \label{fig:3} \end{figure} \subsection{Core (back-end) Sub-system} The Django (1.8) web framework has been used to develop the core subsystem of \textit{Hamtajoo}. Django is a free and open-source web framework, written in Python, which follows the model-view-template (MVT) architectural pattern . The user authentication, raw text extraction from submitted documents and the candidate retrieval and text alignment tasks are done in the core subsystem. \par To make a standard format from both submitted suspicious document and the source documents in the system, the Parsiver pre-processing toolkit \cite{MohtajParsivar} is used. Parsiver has been used to normalize all of the character encodings into a unified format and also to tokenize words in text documents. Moreover, we used Parsiver stemmer for different functions in text alignment module of the system. \par The input documents to \textit{Hamtajoo} can be of various file formats including ".doc", ".docx" and ".txt" formats. To extract the raw text from .doc and .docx documents, the win32com.client and docx2txt modules are used, respectively. The docx2txt is a pure python-based utility to extract text from .docx files. The code is taken and adapted from python-docx\footnote{https://github.com/ankushshah89/python-docx2txt}. We have used these modules to extract the body of text from the submitted documents. \par In order to construct the collection of source documents, all of the papers from SID scientific database were fed into the \textit{Hamtajoo} system. To do so, the Apache Lucene platform has been used to index the text documents. While the python is used to develop the core subsystem, the PyLucene API is used to develop the indexing system. PyLucene is a Python extension for accessing Java Lucene. Its goal is to allow using Lucene text indexing and searching capabilities from Python\footnote{https://lucene.apache.org/pylucene/index.html}. Moreover, MySQL database is employed to store the metadata information (e.g. author, title and publishing year of paper) that are extracted from indexed papers in the system. \subsection{Front-end Sub-system} The front-end subsystem contains user interface which makes it possible for end users to use \textit{Hamtajoo} for investigating plagiarism in their documents. Moreover, users can create and manage user accounts using the system interface. We exploited the Bootstrap web framework to develop the front-end subsystem. Bootstrap is a free and open-source front-end Web framework for designing websites and Web applications. It contains HTML- and CSS-based design templates for typography, forms, buttons, navigation and other interface components, as well as optional JavaScript extensions. \par Figure~\ref{fig:4} shows the main parts of front-end section of \textit{Hamtajoo}. This figure shows the main page of the system, where users can submit their documents for plagiarism analysis. It is possible for users to submit the raw text directly into the system and to upload their documents in doc, docx and txt formats. \par \begin{figure}[!b] \centering \includegraphics[width=0.46\textwidth]{1.png} \caption{Manuscript submission page of \textit{Hamtajoo}} \label{fig:4} \end{figure} Fig~\ref{fig:5} shows the system output for a submitted suspicious document. The system highlights the section of the submitted text which contains cases of text re-use with different colors for different source papers. Moreover, some general statistical information related to submitted document (i.e. number of words, number of paragraphs and ratio of plagiarism in the document) are depicted in the bottom side of the page. List of the source papers which have cases of text similarity with the submitted document can be shown in system output (two papers in this example). Users can examine the detail text similarity between the submitted text and source papers by clicking on listed papers. \begin{table*} \caption{Overall detection performance of \textit{Hamtajoo} vs. the algorithms presented in Persian PlagDet 2016 \cite{AsghariAlgorithms}} \label{tab:1} \centering \begin{tabular}{l|c|c|c|c|c} \toprule Team & Recall & Precision & Granularity & F-Measure & Plagdet \\ \midrule \textbf{Hamtajoo} & \textbf{0.9221} & \textbf{0.9345} & \textbf{1} & \textbf{0.9282} & \textbf{0.9282} \\ Mashhadirajab & 0.9191 & 0.9268 & 1.0014 & 0.9230 & 0.9220 \\ Gharavi & 0.8582 & 0.9592 & 1 & 0.9059 & 0.9059 \\ Momtaz & 0.8504 & 0.8925 & 1 & 0.8710 & 0.8710 \\ Minaei & 0.7960 & 0.9203 & 1.0396 & 0.8536 & 0.8301 \\ Esteki & 0.7012 & 0.9333 & 1 & 0.8008 & 0.8008 \\ Talebpour & 0.8361 & 0.9638 & 1.2275 & 0.8954 & 0.7749 \\ Ehsan & 0.7049 & 0.7496 & 1 & 0.7266 & 0.7266 \\ Gillam & 0.4140 & 0.7548 & 1.5280 & 0.5347 & 0.3996 \\ Mansourizadeh & 0.8065 & 0.9000 & 3.5369 & 0.8507 & 0.3899 \\ \bottomrule \end{tabular} \end{table*} \begin{figure}[!t] \centering \includegraphics[width=0.46\textwidth]{2.png} \caption{System output for detail comparison of source and suspicious documents} \label{fig:5} \end{figure} \section{Experiments and Evaluation} \label{experimentsandevaluation} In order to evaluate \textit{Hamtajoo} plagiarism detection system, two different experiments have been accomplished to measure the performance of different subsystems. Our candidate retrieval module has been evaluated in PAN 2015 international competition on plagiarism detection \cite{HagenSource}. The results are presented in the following subsection. To evaluate the text alignment module, the standard PD dataset of Persian PlagDet shared tasks \cite{AsghariAlgorithms} on plagiarism detection has been used. The performance of text alignment module is compared with all of the proposed methods in Persian PlagDet competition. \subsection{Candidate Retrieval Evaluation} As mentioned in the previous section, in order to evaluate the proposed candidate retrieval approach, we participated in the source retrieval task of PAN 2015 international shared task on plagiarism detection. The candidate retrieval module of \textit{Hamtajoo} achieved the best results in "runtime" and "No detection" measures. Moreover, \textit{Hamtajoo} achieved the second rank in recall measure and also the number of queries among all of the participants. The results of source retrieval shared task are presented in detail in \cite{HagenSource}. \subsection{Text Alignment Evaluation} The Persian PlagDet shared task at PAN 2016 \cite{AsghariAlgorithms} has been organized to promote the comparative assessment of NLP techniques for plagiarism detection with a special focus on plagiarism that appears in a Persian text corpus. Since the shard task was focused on Persian, we have evaluated the performance of \textit{Hamtajoo} using standard Persian PlagDet 2016 evaluation corpus. Table~\ref{tab:1} shows the performance results of \textit{Hamtajoo} in comparison to participants of Persian PlagDet 2016. As mentioned in the table, \textit{Hamtajoo} outperforms other systems in different evaluation measures. \section{Conclusion and Future Works} In this paper, we introduced \textit{Hamtajoo}, a Persian plagiarism detection system. It has been built around a semantic-based method considering specific features of Persian language. The system contains a resource collection comprised of about 480000 journal papers in Persian. Pre-processing of text documents was done in order to transform them into a unified representation, normalizing the text (e.g. unification of various character encodings) and sentence/word tokenization. \par By exploiting a graph showing the distribution of plagiarized passages across the document, the expert could achieve a better view of re-used text. The experimental results show the effectiveness of \textit{Hamtajoo} and its competitiveness against the other plagiarism checker tools. For adapting the system to a commercial package, a web-based system is developed to present the system to the academic society. \par As a plan for the future works, we aim to conduct more experiments on larger texts to detect the text re-use in long documents such as theses and dissertations with a reasonable computational complexity. Another additional work is to deal with bilingual algorithms for detecting plagiarized passages translated from English to Persian. Further improvements can also be done by integrating \textit{Hamtajoo} and \textit{Maglet} \cite{MohtajMaglet} that is a Persian journal recommender system. It can facilitate the manuscript submission process for academicians by checking plagiarism and also by finding appropriate journals for their manuscripts in an integrated system. \section*{Acknowledgment} This work has been accomplished in ICT research institute, ACECR and funded by Vice Presidency for Science and Technology of Iran - Grant No. 1164331. We would like to thank all of the members of ITBM and AIS research groups of ICT research institute for their contribution in corpus construction. Especial credit goes to Javad Rafiei and Khadijeh Khoshnava for their help in development and testing the algorithms.
\section{Introduction} Bayesian inferences require the specification of a prior, which contains a priori knowledge about the parameter(s). If the selected prior, for instance, is flawed, this may yield erroneous inferences. The goal of this paper is to measure the sensitivity of inferences to a chosen prior (known as \emph{robustness}). Since, in most cases, it becomes very challenging to come up with only a sole prior distribution, we consider a class, $\Gamma$, of all possible priors over the parameter space. To construct $\Gamma$, a preliminary prior $\pi_0$ is elicited. Then robustness for all priors $\pi$ in a neighborhood of $\pi_0$ is intended. A common accepted way to construct neighborhoods around $\pi_0$ is through contamination. Specifically, we will consider two different classes of contaminated or mixture of priors, which are given by \begin{equation} \label{contaminated} \Gamma_a=\left\{\pi(\theta): \pi(\theta)=(1-\epsilon)\pi_0(\theta)+\epsilon q(\theta), q \in Q\right\} \end{equation} and \begin{equation} \label{geometric} \Gamma_g=\left\{\pi(\theta): \pi(\theta)=c(\epsilon) \pi_0^{1-\epsilon}(\theta)q^{\epsilon}(\theta), q \in Q\right\}, \end{equation} where $\pi_0$ is the elicited prior, $Q$ is a class of distributions, $c(\epsilon)$ is normalizing constant and $0 \le \epsilon \le 1$ is a small given number denoting the amount of contamination. For other possible classes of priors, see for instance, De Robertis and Hartigan (1981) and Das Gupta and Studden (1988a, 1988b). The class (\ref{contaminated}) is known as the $\epsilon$-contaminated class of priors. Many papers about the class (\ref{contaminated}) are found in the literature. For instance, Berger (1984, 1990), Berger and Berliner (1986), and Sivaganesan and Berger (1989) used various choices of $Q$. Wasserman (1989) used (\ref{contaminated}) to study robustness of likelihood regions. Dey and Birmiwal (1994) studied robustness based on the curvature. Al-Labadi and Evans (2017) studied robustness of relative belief ratios (Evans, 2015) under class (\ref{contaminated}). On the other hand, the class (\ref{geometric}) will be referred as geometric contamination or mixture class. This class was first studied, in the context of Bayesian Robustness, by Gelfand and Dey (1991), where the posterior robustness was measured using Kullback-Leibler divergence. Dey and Birmiwal (1994) generalized the results of Gelfand and Dey (1991) under (\ref{contaminated}) and (\ref{geometric}) by using the $\phi$-divergence defined by $d(\pi(\theta|x), \pi_{0}(\theta|x))=\int \pi_{0}(\theta|x) \phi (\pi(\theta|x)/\pi_{0}(\theta|x))d \theta$ for a smooth convex function $\phi$. For example, $\phi(x)=x\ln x$ gives Kullbak-Leibler divergence. In this paper, we extend the results of Gelfand and Dey (1991) and Dey and Birmiwal (1994) by applying R\'enyi divergence on both classes (\ref{contaminated}) and (\ref{geometric}). This will give local sensitivity analysis on the effect of small perturbation to the prior. R\'enyi entropy, developed by Hungarian mathematician Alfr\'ed R\'enyi in 1961, generalizes the Shannon entropy and includes other entropy measures as special cases. It finds applications, for instance, in statistics (Kanaya and Han, 1995), pattern recognition (Jenssen, Hild, Erdogmus, Principe and Eltoft, 2003), economics (Bentes, Menezes and Mendes, 2008) and biomedicine (Lake, 2006). An outline of this paper is as follows. In Section 2, we give definitions, notations and some properties of R\'enyi divergence. In Section 3, we develop curvature formulas for measuring robustness based on R\'enyi divergence. In Section 4, three examples are studied to illustrate the results numerically. Section 5 ends with a brief summary of the results. \section{Definitions and Notations} Suppose we have a statistical model that is given by the density function $f_\theta(x)$ (with respect to some measure), where $\theta$ is an unknown parameter that belongs to the parameter space $\Theta$. Let $\pi(\theta)$ be the prior distribution of $\theta$. After observing the data $x$, by Bayes' theorem, the posterior distribution of $\theta$ is given by the density \begin{equation*} \pi(\theta|x) = \frac{f_\theta(x)\pi(\theta)}{m(x|\pi)}, \end{equation*} where \begin{equation*} m(x|\pi) = \int f_\theta(x)\pi(\theta) d\theta \end{equation*} is the prior predictive density of the data. To measure the divergence between two posterior distributions, we consider R\'enyi divergence (R\'enyi, 1961). R\'enyi divergence of order $a$ between two posterior densities $\pi(\theta|x)$ and $\pi_0(\theta|x)$ is defined as: \begin{eqnarray*} d=d(\pi(\theta|x),\pi_{0}(\theta|x))&=&\frac{1}{a-1}\ln\left(\int{{{\left(\pi(\theta|x)\right)^a}{\left(\pi_{0}(\theta|x)\right)^{1-a}}d\theta}}\right)\nonumber\\ &=&\frac{1}{a-1}\ln \left(E_{\pi_{0}(\theta|x)}\left[\left(\frac{\pi(\theta|x)}{\pi_{0}(\theta|x)}\right)^a\right]\right),\label{renyi} \end{eqnarray*} where $a>0$ and $E_{\pi_{0}(\theta|x)}$ denotes the expectation with respect to the density $\pi_0(\theta|x)$. It is known that $d(\pi(\theta|x),\pi_{0}(\theta|x))\ge 0$ for all $\pi(\theta|x),\pi_{0}(\theta|x), a>0$ and $d(\pi(\theta|x),\pi_{0}(\theta|x))= 0$ if and only if $\pi(\theta|x)=\pi_{0}(\theta|x)$. Note that, the case $a = 1$ is defined by letting $a \to 1$. This leads to the Kullbak-Leibler divergence. For further properties of R\'enyi divergences consult, for example, Li and Turner (2016). Following the idea of McCulloch (1989) and Dey and Birmiwal (1994) for calibrating, respectively, the Kullback-Leibler divergence and the $\phi$ divergence, it is also possible to calibrate R\'enyi divergence as follows. Consider a biased coin where $X=1$ (heads) occurs with probability $p$. Then R\'enyi divergence between an unbiased and a biased coin is \begin{equation*} d(f_0,f_1)=\frac{1}{a-1}\ln\left[2^{a-1}\left(p^a+(1-p)^{a}\right)\right], \end{equation*} where, for $x=0,1$, $f_0(x)=0.5$ and $f_1(x)=p^x(1-p)^{1-x}$. Now, setting $d(f_0,f_1)=d_0$ gives \begin{equation} 2^{1-a}e^{(a-1)d_0}=p^a+(1-p)^{a}. \label{interpret1} \end{equation} Then the number $p$ is the calibration of $d$. In general, equation (\ref{interpret1}) needs to be solved numerically for $p$. Note that, for the case $a=1$ (i.e. the Kullback-Leibler divergence) one may use the following explicit formula for $p$ due to McCulloch (1989): \begin{equation} p=0.5+0.5\left(1-e^{-2d_0}\right)^{1/2}. \label{interpret2} \end{equation} Values of $p$ close to 1 indicate that $f_0$ and $f_1$ are quite different, while values of $p$ close to 0.5 implies that they are similar. It is restricted that $p$ is chosen so that it is between 0.5 and 1 there is a one-to-one correspondence between $p$ and $d_0$. A motivating key fact about R\'enyi divergence follows from its Taylor expansion. Let $$f(\epsilon)=d(\pi(\theta|x),\pi_{0}(\theta|x))= \frac{1}{a-1}\ln\left(\int{{{\left(\pi(\theta|x)\right)^a}{\left(\pi_{0}(\theta|x)\right)^{1-a}}d\theta}}\right),$$ where $\pi(\theta|x)$ is the posterior distribution of $\theta$ given the data $x$ under the prior $\pi$ defined in (\ref{contaminated}) and (\ref{geometric}). Assuming differentiability with respect to $\epsilon$, the Taylor expansion of $f(\epsilon)$ about $\epsilon=0$ is given by \begin{equation*} f(\epsilon)=f(0)+\epsilon \frac{\partial f(\epsilon)}{\partial \epsilon} \bigg|_{\epsilon=0}+\frac{\epsilon^2}{2} \frac{\partial^2 f(\epsilon)}{\partial\epsilon^2}\bigg|_{\epsilon=0}+ \cdots. \end{equation*} Clearly, $f(0)=0$. If integration and differentiation are interchangeable, we have \begin{eqnarray*} \frac{\partial f(\epsilon)}{\partial \epsilon}&=&\frac {a}{1-a}\frac{\int \left(\pi_0(\theta|x)\right)^{1-a} \left(\pi(\theta|x)\right)^{a-1} \frac{\partial \pi(\theta|x)}{\partial \epsilon} d\theta} {\int \left(\pi_0(\theta|x)\right)^{1-a} \left(\pi(\theta|x)\right)^{a} d\theta}. \end{eqnarray*} Hence, \begin{eqnarray*} \nonumber \frac{\partial f(\epsilon)}{\partial \epsilon}\bigg|_{\epsilon=0}&=&\frac {a}{1-a} \int \frac{\partial\pi (\theta|x)}{\partial \epsilon} d \theta\\ &=&\frac {a}{1-a} \frac{\partial}{\partial \epsilon} \left(\int \pi (\theta|x) d \theta\right)=\frac {a}{1-a} \frac{\partial}{\partial \epsilon} (1)=0. \label{eq1} \end{eqnarray*} On the other hand, \begin{eqnarray*} \frac{\partial^2 f(\epsilon)}{\partial \epsilon^2}&=& \frac{\partial }{\partial \epsilon} \left(\frac {a}{1-a}\frac{\int \left(\pi_0(\theta|x)\right)^{1-a} \left(\pi(\theta|x)\right)^{a-1} \frac{\partial \pi(\theta|x)}{\partial \epsilon} d\theta} {\int \left(\pi_0(\theta|x)\right)^{1-a} \left(\pi(\theta|x)\right)^{a} d\theta}\right), \end{eqnarray*} which, at $\epsilon=0$, reduces to \begin{eqnarray*} \frac{\partial^2 f(\epsilon)}{\partial\epsilon^2}\bigg|_{\epsilon=0}&=& -a \int \frac{\left(\frac{\partial\pi (\theta|x)}{\partial \epsilon}\right)^2}{\pi(\theta|x)} d \theta \bigg|_{\epsilon=0}\\ &=& -a \int \left(\frac{\frac{\partial\pi (\theta|x)}{\partial \epsilon}}{\pi(\theta|x)}\right)^2 \pi(\theta|x) d \theta \bigg|_{\epsilon=0}\\ &=& -a E_{\pi(\theta|x)}\left[\left(\frac{\partial \ln \pi (\theta|x)}{\partial \epsilon}\right)^2 \right]\bigg|_{\epsilon=0}\\ &=& -a I_{\pi(\theta|x)}(\epsilon)\bigg|_{\epsilon=0}. \end{eqnarray*} Here $I_{\pi(\theta|x)}(\epsilon)=E_{\pi(\theta|x)}\left[\left(\frac{\partial \ln \pi (\theta|x)}{\partial \epsilon}\right)^2 \right]\bigg|_{\epsilon=0}$ is the Fisher information function for $\pi(\theta|x)$ (Lehmann and Casella, 1998). Thus, for $\epsilon \approx 0$, we have \begin{equation} d(\pi(\theta|x),\pi_{0}(\theta|x)) \approx -\frac{a\epsilon ^2} {2} I_{\pi(\theta|x)}(\epsilon). \label{fisher} \end{equation} Note that, $\partial^2 f(\epsilon)/\partial \epsilon^2\bigg|_{\epsilon=0}=\partial^2 d/\partial \epsilon^2 \bigg|_{\epsilon=0}$ is known as the local \emph{curvature} at $\epsilon=0$ of R\'enyi divergence. Formula (\ref{fisher}) justifies the use of the curvature to measure the Bayesian robustness of the two classes of priors $\Gamma_a$ and $\Gamma_g$ as defined in (\ref{contaminated}) and (\ref{geometric}), respectively. Also this formula provide a direct relationship between Fisher's information and the curvature of R\'enyi divergence. \section{Measuring Robustness Using R\'enyi Divergence} In this section, we explicitly obtain the local curvature at $\epsilon=0$ of R\'enyi divergence (i.e. ${\partial^2 d}/{\partial \epsilon^2}\bigg|_{\epsilon=0}$), to measure the Bayesian robustness of the two classes of priors $\Gamma_a$ and $\Gamma_g$ as defined in (\ref{contaminated}) and (\ref{geometric}), respectively. The resulting quantities are presumably much easier to estimate than working directly with R\'enyi divergence. \begin{theorem} \label{theorem1} For the $\epsilon$-contaminated class defined in (\ref{contaminated}), the local curvature of R\'enyi divergence at $\epsilon=0$ is \begin{equation*} C_{a}^{\Gamma_{a}} =\frac{\partial^2 d}{\partial \epsilon^2}\bigg|_{\epsilon=0} =aVar_{\pi_0(\theta|x)} \bigg[ \frac{q(\theta)}{\pi_0(\theta)} \bigg], \end{equation*} where $Var_{\pi_0(\theta|x)}$ denotes the variance with respect to $\pi_0(\theta|x)$. \end{theorem} \proof Under the prior $\pi$ defined in (\ref{contaminated}), the marginal $m(\theta|x)$ and the posterior distribution $\pi(\theta|x)$ can be written as \begin{eqnarray*} m(x|\pi)=(1-\epsilon) m(x|\pi_0)+\epsilon m(x|q) \end{eqnarray*} and \begin{eqnarray} \pi(\theta|x)&=&\frac{f_{\theta}(x) \pi(\theta)}{m(x|\pi)}\nonumber\\ &=& \frac{f_{\theta}(x) \left((1-\epsilon) \pi_{0}(\theta)+\epsilon q(\theta)\right)}{m(x|\pi)}\nonumber\\ &=& \lambda(x) \pi_0(\theta|x)+(1-\lambda(x)) q(\theta|x),\label{posterior1} \end{eqnarray} where \begin{equation*} \lambda(x)=(1-\epsilon)\frac{m(x|\pi_0)}{m(x|\pi)}. \end{equation*} Define \begin{eqnarray*} f(\epsilon)&=&d\left(\pi(\theta|x),\pi_{0}(\theta|x)\right)\\ &=&\frac{1}{a-1}\ln\left[\int \left(\pi(\theta|x)\right)^a\left(\pi_0(\theta|x)\right)^{1-a} d\theta\right]=\frac{1}{a-1}\ln\left[\int \gamma d\theta\right], \end{eqnarray*} where \begin{equation*} \gamma = \left(\pi(\theta|x)\right)^a\left(\pi_0(\theta|x)\right)^{1-a}=\left(\lambda(x) \pi_0(\theta|x)+(1-\lambda(x)) q(\theta|x)\right)^a\left(\pi_0(\theta|x)\right)^{1-a}. \end{equation*} Clearly, \begin{equation} \gamma\bigg|_{\epsilon=0} = \pi_0(\theta|x) \ \ \mbox{and} \ \ \int\gamma\bigg|_{\epsilon=0}d\theta = 1. \label{gamma} \end{equation} We have \begin{equation*} \frac{\partial\gamma}{\partial\epsilon}=a\frac{m(x|q) m(x|\pi_0)\left(q(\theta|x)-\pi_0(\theta|x)\right)} {\left[\epsilon q(\theta|x) m(x|q)+(1-\epsilon)m(x|\pi_0)\pi_0(\theta|x)\right]\left[(1-\epsilon) m(x|\pi_0)+\epsilon m(x|q)\right]} \end{equation*} and \begin{equation*} \frac{\partial\gamma}{\partial\epsilon}\bigg|_{\epsilon = 0}=a\frac{m(x|q)\left(q(\theta|x)-\pi_0(\theta|x)\right)}{m(x|\pi_0)}. \end{equation*} Thus, \begin{equation} \int\frac{\partial\gamma}{\partial\epsilon}d\theta\bigg|_{\epsilon = 0}=0. \label{eq6} \end{equation} Now, \begin{align*} \frac{\partial^2d}{\partial \epsilon^2}=\frac{\partial}{\partial \epsilon}\left(\frac{1}{a-1} \frac{\int\frac{\partial\gamma}{\partial\epsilon}d\theta}{\int\gamma d\theta}\right)&=\frac{1}{a-1}\frac{[\int\gamma d\theta][\int\frac{\partial^2\gamma}{\partial\epsilon^2}d\theta]-[\int\frac{\partial\gamma}{\partial\epsilon}d\theta]^2}{[\int\gamma d\theta]^2}. \end{align*} By (\ref{gamma}) and (\ref{eq6}), \begin{align*} \frac{\partial^2d}{\partial \epsilon^2}\bigg|_{\epsilon = 0}&=\frac{1}{a-1}\int\frac{\partial^2\gamma}{\partial\epsilon^2}\bigg|_{\epsilon=0}d\theta. \end{align*} We have \begin{align} \begin{split} \frac{\partial^2\gamma}{\partial\epsilon^2}\mid_{\epsilon=0} =&\bigg(\frac{\pi_0(\theta|x)m(x|\pi_0)-q(\theta|x)m(x|q)}{\pi_0(\theta|x)m(x|\pi_0)}+\frac{m(x|\pi_0)-m(x|q)}{m(x|\pi_0)}+\\ &\frac{a\frac{m(x|q)}{m(x|\pi_0)}\left(q(\theta|x)-\pi_0(\theta|x)\right)}{\pi_0(\theta|x)}\bigg)\times\\ &\ a\frac{m(x|q)}{m(x|\pi_0)}\left(q(\theta|x)-\pi_0(\theta|x)\right). \label{2ndderivativ} \end{split} \end{align} Since \begin{align} \nonumber \frac{m(x|q)}{m(x|\pi_0)}&=\frac{\int f_{\theta}(x) q(\theta)d\theta}{m(x|\pi_0)}=\frac{\int f_{\theta}(x) \pi_{0}(\theta)\frac{q(\theta)}{\pi_{0}(\theta)}d\theta}{m(x|\pi_0)} \\ \nonumber &=\int \pi_0(\theta|x)\frac{q(\theta)}{\pi_{0}(\theta)}d\theta\\ &=E_{\pi_0(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right], \label{expectation} \end{align} from (\ref{2ndderivativ}), we get \begin{align*} \begin{split} \frac{\partial^2\gamma}{\partial\epsilon^2}\bigg|_{\epsilon=0} =&a\left(2-E_{\pi_0(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right]\right)E_{{\pi_0}(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right] \left(q(\theta|x)-\pi_0(\theta|x)\right)\\ &-a\left(E_{\pi_0(\theta|x)}\right)^2\left[\frac{q(\theta)}{\pi_0(\theta)}\right]\left(\frac{q(\theta|x)}{\pi_0(\theta|x)}\right) \left(q(\theta|x)-\pi_0(\theta|x)\right)\\ &+{a^2}\left(E_{\pi_0(\theta|x)}\right)^2\left[\frac{q(\theta)}{\pi_0(\theta)}\right]\frac{\left(q(\theta|x)-\pi_0(\theta|x)\right)^2}{\pi_0(\theta|x)}. \end{split} \end{align*} Therefore, \begin{eqnarray} \nonumber \frac{{\partial^2}d}{\partial\epsilon^2}\bigg|_{\epsilon=0}=a\bigg(\left(E_{\pi_0(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right]\right)^2 E_{\pi_0(\theta|x)}\left[\left(\frac{q(\theta|x)}{\pi_0(\theta|x)}\right)^2\right]\\ -\left(E_{\pi_0(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right]\right)^2\bigg). \label{eq9} \end{eqnarray} Note that, \begin{equation*} \left(\frac{q(\theta|x)}{\pi_0(\theta|x)}\right)^2=\left(\frac{q(\theta) f_{\theta}(x)/m(x|q)}{\pi(\theta)f_{\theta}(x)/m(x|\pi_0)}\right)^2 =\left(\frac{q(\theta)}{\pi(\theta)}\right)^2 \left(\frac{m(x|\pi_0)}{m(x|q)}\right)^2 \end{equation*} Hence, by (\ref{expectation}), \begin{equation} E_{\pi_0(\theta|x)}\left[\left(\frac{q(\theta|x)}{\pi_0(\theta|x)}\right)^2\right]= E_{\pi_0(\theta|x)}\left[\left(\frac{q(\theta)}{\pi_0(\theta)}\right)^2\right]\frac{1} {\left(E_{\pi_0(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right]\right)^2}. \label{eq10} \end{equation} Thus, by (\ref{eq9}) and (\ref{eq10}), \begin{align*} \frac{{\partial^2}d}{\partial{\epsilon}^2}\bigg|_{\epsilon=0}&= a\left(E_{\pi_0(\theta|x)}\left[\left(\frac{q(\theta)}{\pi_0(\theta)}\right)^2\right]- \left(E_{\pi_0(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right]\right)^2\right)\\ &=aVar_{\pi_0(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right]. \end{align*} \endproof \begin{theorem} \label{theorem2} For the geometric contaminated class defined in (\ref{geometric}), the local curvature of R\'enyi divergence at $\epsilon$ = 0 is\\ \begin{equation*} C_{a}^{\Gamma_{g}}=\frac{\partial^2 d}{\partial \epsilon^2}\bigg|_{\epsilon=0} =aVar_{{\pi_0}(\theta|x)}\left[\ln\left(\frac{q(\theta)}{\pi_0(\theta)}\right)\right], \end{equation*} $Var_{\pi_0(\theta|x)}$ denotes the variance with respect to $\pi_0(\theta|x)$. \end{theorem} \proof Define \begin{equation*} \gamma = \left(\pi(\theta|x)\right)^a\left(\pi_0(\theta|x)\right)^{1-a}. \end{equation*} Thus, \begin{eqnarray*} d&=&\frac{1}{a-1}\ln\left(\int \gamma d \theta\right). \end{eqnarray*} We have \begin{eqnarray*} \frac {\partial d}{\partial \epsilon}&=&\frac{1}{a-1} \times \frac{\int{\frac {\partial \gamma }{\partial \epsilon}d \theta}} {\int \gamma d \theta} \end{eqnarray*} and \begin{eqnarray} \frac {\partial^2 d}{\partial \epsilon^2}&=&\frac{1}{a-1} \times \frac{\int \gamma d \theta \int{\frac {\partial^2 \gamma }{\partial \epsilon^2}d \theta}-\left(\int \frac {\partial \gamma}{\partial \epsilon} d \theta\right)^2} {\left(\int \gamma d \theta\right)^2}. \label{2nd} \end{eqnarray} Since $\gamma\bigg|_{\epsilon=0}=\pi_0(\theta|x)$, \begin{eqnarray*} \frac{{\partial^2}d}{\partial{\epsilon^2}}\bigg|_{\epsilon=0}&=&\int\frac{\partial^2\gamma}{\partial\epsilon^2}d\theta\bigg|_{\epsilon=0} -\left(\int\frac{\partial\gamma}{\partial\epsilon }d\theta\right)^2\bigg|_{\epsilon=0}. \end{eqnarray*} For the geometric class defined in (\ref{geometric}), \begin{eqnarray}\label{posterior2} \pi(\theta|x)=\frac{f_\theta(x) \pi(\theta)}{m(x|\pi)}=\frac{f_\theta(x) c(\epsilon) (\pi_0(\theta))^{1-\epsilon} (q(\theta))^{\epsilon}}{m(x|\pi)} \ \ \mbox{and} \ \ \pi_0(\theta|x)=\frac{f_\theta(x) \pi_0(\theta)}{m(x|\pi_0)}. \end{eqnarray} Thus, \begin{eqnarray*} \gamma&=&\frac{f_\theta(x)(c(\epsilon))^a (\pi_0(\theta))^{1-a\epsilon}(q(\theta))^{a\epsilon}}{(m(x|\pi))^{a}(m(x|\pi_0))^{1-a}}. \end{eqnarray*} Therefore, \begin{eqnarray*} \ln\left(\gamma\right)&=&a\ln \left(\frac{c(\epsilon)}{m(x|\pi)}\right)- a\epsilon\ln \left(\frac{\pi_0(\theta)}{q(\theta)}\right)+\ln \frac{f_{\theta}(x) \pi_0(\theta)}{ (m(x|\pi_0))^{1-a}}. \end{eqnarray*} We have \begin{eqnarray} \frac{\partial \gamma}{\partial \epsilon}=\gamma \frac{\partial \ln \gamma}{\partial \epsilon}=a \gamma\left(\frac{{\partial}}{\partial{\epsilon}}\ln \left(\frac{c(\epsilon)}{m(x|\pi)}\right)-\ln\left(\frac{\pi_0(\theta)}{q(\theta)}\right)\right). \label{derivative1} \end{eqnarray} As $$\frac{{\partial}}{\partial{\epsilon}}\ln \left(\frac{c(\epsilon)}{m(x|\pi)}\right)=E_{\pi_0(\theta|x)}\left[\ln\left(\frac{\pi_0(\theta)}{q(\theta)}\right)\right]$$ (Dey and Birmiwal, 1994, Theorem 3.2), we get \begin{eqnarray*} \frac{\partial \gamma}{\partial \epsilon}=a \gamma\left(E_{\pi_0(\theta|x)}\left[\ln\left(\frac{\pi_0(\theta)}{q(\theta)}\right)\right]-\ln\left(\frac{\pi_0(\theta)}{q(\theta)}\right)\right). \end{eqnarray*} Since $\gamma\bigg|_{\epsilon=0}=\pi_0(\theta|x)$, by (\ref{2nd}) and (\ref{derivative1}), it follows that $\int\frac{\partial \gamma}{\partial\epsilon}d\theta\bigg|_{\epsilon=0}=0$ and \begin{eqnarray*} \frac{{\partial^2}d}{\partial{\epsilon^2}}\bigg|_{\epsilon=0}&=&\int\frac{\partial^2\gamma}{\partial\epsilon^2}d\theta\bigg|_{\epsilon=0}. \end{eqnarray*} Now, by (\ref{derivative1}), \begin{eqnarray*} \frac{\partial^2 \gamma}{\partial \epsilon^2}&=&\frac{\partial}{\partial \epsilon}\left(a \gamma\left(E_{\pi_0(\theta|x)}\left[\ln\left(\frac{\pi_0(\theta)}{q(\theta)}\right)\right]-\ln\left(\frac{\pi_0(\theta)}{q(\theta)}\right)\right)\right)\\ &=&a \gamma\left(E_{\pi_0(\theta|x)}\left[\ln\left(\frac{\pi_0(\theta)}{q(\theta)}\right)\right]-\ln\left(\frac{\pi_0(\theta)}{q(\theta)}\right)\right)^2. \end{eqnarray*} Using the $\gamma\bigg|_{\epsilon=0}=\pi_0(\theta|x)$ one more time, we obtain \begin{equation*} \frac{{\partial^2}d}{\partial{\epsilon}^2}\bigg|_{\epsilon=0}=\int\frac{{\partial}^2{\gamma}}{\partial{\epsilon}^2}\bigg|_{\epsilon=0}d\theta=aVar_{{\pi_0}(\theta|x)}\left[\ln\left(\frac{q(\theta)}{{\pi_0}(\theta)}\right)\right]. \end{equation*} \endproof \section{Examples} In this section, the derived results are explained through three examples: the Bernoulli model, the multinomial model and the location normal model. In each example, the curvature values for the two classes \eqref{contaminated} and \eqref{geometric} are reported. Additionally, in Example 1, we computed R\'enyi divergence between $\pi(\theta|x)$ and $\pi_{0}(\theta|x)$ and reported the calibrated value $p$ as described in (\ref{interpret1}) and (\ref{interpret2}). Recall that, curvature values close to zero indicate robustness of the used prior whereas larger values suggest lack of robustness. On the other hand, values of $p$ close to 0.5 suggest robustness whereas values of $p$ close to 1 means absence of robustness. \medskip \noindent {\textbf{Example 1 (Bernoulli Model).}} \label{example1} Suppose $x=(x_1, \ldots, x_n)$ is a sample from a Bernoulli distribution with a parameter $\theta$. Let the prior ${\pi_0}(\theta)$ be $B$eta$(\alpha,\beta)$. That is, $$\pi_0(\theta)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \theta^{\alpha-1}(1-\theta)^{\beta-1}.$$ Thus, ${\pi_0}(\theta|x_1, \ldots, x_n)$ is \begin{eqnarray} B\mbox{eta}\left(\alpha+t,\beta+n-t\right),\label{post_example1} \end{eqnarray} where $t=\sum_{i=1}^n x_i.$ Let $q(\theta)$ be $B$eta$(c\alpha,c\beta)$ for $c>0$. Now consider the sample $x=( 0, 0, 1, 1, 0, 1, 1, 1, 1,0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1)$ of size $n=20$ generated from $B$ernoulli$(0.5)$. For comparison purposes, we consider several values of $\alpha, \beta$ and $c$. Although it is possible to find exact formulas of the curvature by some algebraic manipulation, it looks more convenient to use a Monte Carlo approach in this example. First, we sample $\theta^{(s)}, s=1,\ldots, 10^6$, from the posterior distribution (\ref{post_example1}). Then we compute the variance of $q(\theta^{(s)})/\pi_0(\theta^{(s)})$ and the variance of $\ln\left(q(\theta^{(s)})/\pi_0(\theta^{(s)})\right)$. This can be implemented straightforwardly in \textbf{\textsf{R}}. The values of the curvature for both classes \eqref{contaminated} and \eqref{geometric} are reported in Table \ref{tab1}. Remarkably, for the cases when $\alpha=\beta=1$ (uniform prior on $[0,1]$) and $\alpha=\beta=0.5$ (Jeffreys' prior), the curvature values are prominently small. \begin{table}[htbp] \centering \setlength{\tabcolsep}{4.5 mm} \caption{Values of the local curvature for two classes $\Gamma_{a}$ and $\Gamma_{g}$ for a sample generated from Bernoulli(0.5).} \label{tab1} \scalebox{0.76}{ \begin{tabular}[c]{llllllll} \toprule \multirow{2}[3]{*}{$\begin{pmatrix} \alpha\\ \beta \end{pmatrix}$} &\multirow{2}[3]{*}{$c$}&\multicolumn{2}{c}{$a=0.5$}&\multicolumn{2}{c}{$a=1$}&\multicolumn{2}{c}{$a=2$} \\\cmidrule(lr){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8} &&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$ \\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 0.5\\ 0.5 \end{pmatrix}$}&0.5&$8\times10^{-5}$&$0.0002$&$0.0001$&$0.0004$&$0.0003$&$0.0008$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0003$&$0.0002$&$0.0006$&$0.0004$&$0.0013$&$0.0008$\\ &3&$0.0098$&$0.0033$&$0.0196$&$0.0067$&$0.0393$&$0.0135$\\ &5&$0.0531$&$0.0135$&$0.1062$&$0.0271$&$0.2125$&$0.0543$\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 1\\ 1 \end{pmatrix}$}&0.5&$0.0003$&$0.0007$&$0.0007$&$0.0015$&$0.0014$&$0.0030$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0010$&$0.0007$&$0.0021$&$0.0015$&$0.0042$&$0.0030$\\ &3&$0.0241$&$0.0121$&$0.0483$&$0.0243$&$0.0967$&$0.0486$\\ &5&$0.1065$&$0.0486$&$0.2130$&$0.0972$&$0.4260$&$0.1945$\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 1\\ 3 \end{pmatrix}$}&0.5&$0.0265$&$0.0235$&$0.0530$&$0.0470$&$0.1060$&$0.0941$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0171$&$0.0235$&$0.0342$&$0.0470$&$0.0684$&$0.0941$\\ &3&$0.1061$&$0.3767$&$0.2122$&$0.7535$&$0.4244$&$1.5070$\\ &5&$0.1660$&$1.5070$&$0.3320$&$3.0141$&$0.6641$&$6.0282$\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 3\\ 1 \end{pmatrix}$}&0.5&$0.0089$&$0.0113$&$0.0179$&$0.0227$&$0.0133$&$0.0454$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0108$&$0.0113$&$0.0216$&$0.0227$&$0.0433$&$0.0454$\\ &3&$0.1162$&$0.1819$&$0.2324$&$0.3638$&$0.4648$&$0.7277$\\ &5&$0.2774$&$0.7277$&$0.5548$&$1.4555$&$1.1096$&$2.9110$\\ \bottomrule \end{tabular} } % \end{table}% While it is easier to quantify the curvature based on Theorem \ref{theorem1} and Theorem \ref{theorem2}, in this example, for comparison purposes, we computed R\'enyi divergence between $\pi(\theta|x)$ and $\pi_0(\theta|x)$ under class \eqref{contaminated} and class \eqref{geometric}. It can be shown that, under class \eqref{contaminated} in \eqref{posterior1}, $\pi(\theta|x)=\lambda(x)B\mbox{eta}\left(\alpha+t,\beta+n-t\right)+(1-\lambda(x))B\mbox{eta}\left(c\alpha+t,c\beta+n-t\right),$ where \begin{align*} \lambda(x)=\frac{(1-\epsilon)\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\frac{\Gamma(\alpha+t)\Gamma(\beta-t+n)}{\Gamma(\alpha+\beta+n)}}{(1-\epsilon)\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\frac{\Gamma(\alpha+t)\Gamma(\beta-t+n)}{\Gamma(\alpha+\beta+n)}+\epsilon\frac{\Gamma(c\alpha+c\beta)}{\Gamma(c\alpha)\Gamma(c\beta)}\frac{\Gamma(c\alpha+t)\Gamma(c\beta-t+n)}{\Gamma(c\alpha+c\beta+n)}}. \end{align*} Also, from \eqref{posterior2}, it can be easily concluded that the posterior $\pi(\theta|x)$ under class \eqref{geometric} is obtained as \begin{align*} \pi(\theta|x)&=K \times \frac{\theta^{t}(1-\theta)^{n-t}\left[B\mbox{eta}\left(\alpha,\beta\right)\right]^{1-\epsilon}\left[B\mbox{eta}\left(c\alpha,c\beta\right)\right]^{\epsilon}}{\left[ \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \right]^{(1-\epsilon)}\left[ \frac{\Gamma(c\alpha+c\beta)}{\Gamma(c\alpha)\Gamma(c\beta)} \right]^{\epsilon}}, \end{align*} $$K=\frac{\Gamma(t+(1-\epsilon)(\alpha-1)+\epsilon(c\alpha-1)+1)\Gamma(n-t+(1-\epsilon) (\beta-1)+\epsilon(c\beta-1)+1)}{\Gamma((1-\epsilon)(\alpha+\beta-2)+\epsilon(c\alpha+c\beta-2)+n+2)}.$$ Note that, since $d(\pi(\theta|x),\pi_{0}(\theta|x))= \frac{1}{a-1} \ln\left(E_{\pi_{0}(\theta|x)}\left[\left(\frac{\pi(\theta|x)}{\pi_{0}(\theta|x)}\right)^a\right]\right)$, it possible to compute the distance based on a Monte Carlo approach. When $a=1$, $d(\pi(\theta|x),\pi_{0}(\theta|x))=E_{\pi_{0}(\theta|x)}\left[\frac{\pi(\theta|x)}{\pi_{0}(\theta|x)}\ln\left(\frac{\pi(\theta|x)}{\pi_{0}(\theta|x)}\right)\right]$, the Kullback-Leibler divergence. We also calibrated R\'enyi divergence as described in (\ref{interpret1}) and (\ref{interpret2}). The results based on class \eqref{contaminated} and \eqref{geometric} are reported, respectively, in Table \ref{exm1under1} and Table \ref{exm1under2}. \smallskip \begin{table}[htbp] \centering \setlength{\extrarowheight}{-1mm} \setlength{\tabcolsep}{2.5 mm} \caption{Values of $d_0$ and $p$ in \eqref{interpret1} (for $a\neq 1$) and \eqref{interpret2} (for $a=1$) under class \eqref{contaminated} for a sample generated from Bernoulli(0.5).} \label{exm1under1} \scalebox{0.7}{ \begin{tabular}[c]{llllllllllll} \toprule \multirow{2}[3]{*}{$\begin{pmatrix} \alpha\\ \beta \end{pmatrix}$} &\multirow{2}[3]{*}{$c$}&&\multicolumn{3}{c}{$a=0.5$}&\multicolumn{3}{c}{$a=1$}&\multicolumn{3}{c}{$a=2$} \\\cmidrule(lr){4-6}\cmidrule(lr){7-9}\cmidrule(lr){10-12} &&&$\epsilon=0.05$&$\epsilon=0.5$&$\epsilon=1$&$\epsilon=0.05$&$\epsilon=0.5$&$\epsilon=1$&$\epsilon=0.05$&$\epsilon=0.5$&$\epsilon=1$ \\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 0.5\\ 0.5 \end{pmatrix}$}&0.5&$d_0$&$2\times10^{-7}$&$4\times10^{-6}$&$9\times10^{-5}$&$5\times10^{-7}$&$3\times10^{-5}$&$0.0002$&$10^{-6}$&$7\times10^{-7}$&$0.0004$\\ &&$p$&(0.5003)&(0.5022)&(0.51)&(0.5005)&(0.5042)&(0.5107)&(0.5003)&(0.5041)&(0.5106)\\\cmidrule(lr){3-12} &1&$d_0$&0&0&0&0&0&0&0&0&0\\ &&$p$&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)\\\cmidrule(lr){3-12} &1.5&$d_0$&$2\times10^{-6}$&$4\times10^{-5}$&$0.0001$&$2\times10^{-7}$&$5\times10^{-5}$&$0.0001$&$3\times10^{-7}$&$0.0001$&$0.0003$\\ &&$p$&(0.5013)&(0.5068)&(0.5104)&(0.5003)&(0.5054)&(0.5098)&(0.5003)&(0.5053)&(0.5096)\\\cmidrule(lr){3-12} &3&$d_0$&$4\times10^{-6}$&$0.0004$&$0.0015$&$10^{-5}$&$0.0012$&$0.0028$&$3\times10^{-5}$&$0.0023$&$0.0054$\\ &&$p$&(0.5022)&(0.5204)&(0.5393)&(0.5031)&(0.5244)&(0.5379)&(0.5030)&(0.5239)&(0.5367)\\\cmidrule(lr){3-12} &5&$d_0$&$5\times10^{-5}$&$0.0019$&$0.0055$&$0.0001$&$0.0048$&$0.0102$&$0.0002$&$0.0090$&$0.0181$\\ &&$p$&(0.5071)&(0.5437)&(0.5741)&(0.5074)&(0.5493)&(0.5711)&(0.5074)&(0.5476)&(0.5676)\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 1\\ 1 \end{pmatrix}$}&0.5&$d_0$&$7\times10^{-7}$&$5\times10^{-5}$&$0.0003$&$10^{-6}$&$0.0001$&$0.0008$&$3\times10^{-6}$&$0.0002$&$0.0017$\\ &&$p$&(0.5007)&(0.5071)&(0.5193)&(0.5009)&(0.5083)&(0.5204)&(0.5007)&(0.5084)&(0.5207)\\\cmidrule(lr){3-12} &1&$d_0$&0&0&0&0&0&0&0&0&0\\ &&$p$&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)\\\cmidrule(lr){3-12} &1.5&$d_0$&$2\times10^{-7}$&$7\times10^{-5}$&$0.0003$&$10^{-6}$&$0.0002$&$0.0006$&$2\times10^{-6}$&$0.0003$&$0.0013$\\ &&$p$&(0.5003)&(0.5084)&(0.5193)&(0.5008)&(0.5100)&(0.5185)&(0.5007)&(0.51)&(0.5180)\\\cmidrule(lr){3-12} &3&$d_0$&$10^{-5}$&$0.0013$&$0.0050$&$5\times10^{-5}$&$0.0034$&$0.0092$&$0.0001$&$0.0065$&$0.0165$\\ &&$p$&(0.5042)&(0.5364)&(0.5706)&(0.5050)&(0.5416)&(0.5677)&(0.505)&(0.5405)&(0.5645)\\\cmidrule(lr){3-12} &5&$d_0$&$8\times10^{-5}$&$0.0050$&$0.0167$&0.0002&0.0124&0.0297&0.0004&0.0225&0.0494\\ &&$p$&(0.5092)&(0.5708)&(0.6279)&(0.5107)&(0.5785)&(0.6201)&(0.5106)&(0.5755)&(0.6125)\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 1\\ 3 \end{pmatrix}$}&0.5&$d_0$&$2\times10^{-5}$&$0.0032$&$0.0133$&$7\times10^{-5}$&$0.0067$&$0.0282$&$0.0001$&$0.0145$&$0.0623$\\ &&$p$&(0.5053)&(0.5565)&(0.6143)&(0.5059)&(0.5580)&(0.6171)&(0.5060)&(0.5604)&(0.6268)\\\cmidrule(lr){3-12} &1&$d_0$&0&0&0&0&0&0&0&0&0\\ &&$p$&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)\\\cmidrule(lr){3-12} &1.5&$d_0$&$2\times10^{-5}$&0.0023&0.0104&$3\times10^{-5}$&0.0045&0.0199&$7\times10^{-5}$&0.0088&0.0370\\ &&$p$&(0.505)&(0.5484)&(0.6015)&(0.5044)&(0.5476)&(0.5989)&(0.5044)&(0.5472)&(0.5971)\\\cmidrule(lr){3-12} &&$p$&(0.5081)&(0.5846)&(0.6878)&(0.5077)&(0.5834)&(0.6795)&(0.5077)&(0.5833)&(0.6793)\\\cmidrule(lr){3-12} &3&$d_0$&0.0001&0.0175&01213&0.0002&0.0349&0.2125&0.0005&0.0691&0.3421\\ &&$p$&(0.5119)&(0.6308)&(0.8181)&(0.5115)&(0.6299)&(0.7942)&(0.5117)&(0.6337)&(0.8193)\\\cmidrule(lr){3-12} &5&$d_0$&0.0002&0.0308&0.3423&0.0004&0.0638&0.5519&0.0008&0.1337&0.6003\\ &&$p$&(0.5145)&(0.6715)&(0.9536)&(0.5146)&(0.6731)&(0.9087)&(0.5144)&(0.6891)&(0.9535)\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 3\\ 1 \end{pmatrix}$}&0.5&$d_0$&$7\times10^{-6}$&0.0012&0.0063&$2\times10^{-5}$&0.0027&0.0135&$5\times10^{-5}$&0.0057&0.0295\\ &&$p$&(0.5026)&(0.5356)&(0.5791)&(0.5036)&(0.5369)&(0.5816)&(0.5034)&(0.5379)&(0.5866)\\\cmidrule(lr){3-12} &1&$d_0$&0&0&0&0&0&0&0&0&0\\ &&$p$&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)\\\cmidrule(lr){3-12} &1.5&$d_0$&$10^{-5}$&0.0013&0.0051&$2\times10^{-5}$&0.0025&0.0096&$4\times10^{-5}$&0.0048&0.0180\\ &&$p$&(0.5040)&(0.5364)&(0.5713)&(0.5034)&(0.5354)&(0.5692)&(0.5032)&(0.535)&(0.5674)\\\cmidrule(lr){3-12} &3&$d_0$&0.0001&0.0139&0.0600&0.0002&0.0286&0.1054&0.0005&0.0505&0.1711\\ &&$p$&(0.5125)&(0.6168)&(0.7342)&(0.5117)&(0.6143)&(0.7180)&(0.5119)&(0.6137)&(0.7160)\\\cmidrule(lr){3-12} &5&$d_0$&0.0003&0.0340&0.1724&0.0006&0.0657&0.2786&0.0012&0.1231&0.4062\\ &&$p$&(0.5196)&(0.68)&(0.865)&(0.5183)&(0.6754)&(0.8268)&(0.5177)&(0.6809)&(0.8539)\\ \bottomrule \end{tabular} } % \end{table}% \begin{table}[htbp] \centering \setlength{\extrarowheight}{-1mm} \setlength{\tabcolsep}{2.5 mm} \caption{Values of $d_0$ and $p$ in \eqref{interpret1} (for $a\neq 1$) and \eqref{interpret2} (for $a=1$) under class \eqref{geometric} for a sample generated from Bernoulli(0.5).} \label{exm1under2} \scalebox{0.7}{ \begin{tabular}[c]{llllllllllll} \toprule \multirow{2}[3]{*}{$\begin{pmatrix} \alpha\\ \beta \end{pmatrix}$} &\multirow{2}[3]{*}{$c$}&&\multicolumn{3}{c}{$a=0.5$}&\multicolumn{3}{c}{$a=1$}&\multicolumn{3}{c}{$a=2$} \\\cmidrule(lr){4-6}\cmidrule(lr){7-9}\cmidrule(lr){10-12} &&&$\epsilon=0.05$&$\epsilon=0.5$&$\epsilon=1$&$\epsilon=0.05$&$\epsilon=0.5$&$\epsilon=1$&$\epsilon=0.05$&$\epsilon=0.5$&$\epsilon=1$ \\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 0.5\\ 0.5 \end{pmatrix}$}&0.5&$d_0$&$4\times10^{-7}$&$2\times10^{-5}$&$9\times10^{-5}$&$10^{-6}$&$5\times10^{-5}$&0.0002&$2\times10^{-6}$&0.0001&0.0004\\ &&$p$&(0.5007)&(0.5043)&(0.51)&(0.5007)&(0.5054)&(0.5107)&(0.5007)&(0.5053)&(0.5106)\\\cmidrule(lr){3-12} &1&$d_0$&0&0&0&0&0&0&0&0&0\\ &&$p$&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)\\\cmidrule(lr){3-12} &1.5&$d_0$&$2\times10^{-6}$&$2\times10^{-5}$&0.0001&$3\times10^{-8}$&$4\times10^{-5}$&0.0001&$6\times10^{-8}$&$9\times10^{-5}$&0.0003\\ &&$p$&(0.5014)&(0.5053)&(0.5103)&(0.5001)&(0.5048)&(0.5098)&(0.5)&(0.505)&(0.5096)\\\cmidrule(lr){3-12} &3&$d_0$&$10^{-6}$&0.0004&0.0015&$6\times10^{-6}$&0.0007&0.0028&$10^{-5}$&0.0014&0.0054\\ &&$p$&(0.5013)&(00.5203)&(0.5390)&(0.5017)&(0.5195)&(0.5379)&(0.5014)&(0.5191)&(0.5367)\\\cmidrule(lr){3-12} &5&$d_0$&$9\times10^{-6}$&0.0015&0.0055&$2\times10^{-5}$&0.0028&0.0102&$5\times10^{-5}$&0.0054&0.0181\\ &&$p$&(0.5030)&(0.5390)&(0.5738)&(0.5038)&(0.5379)&(0.5711)&(0.5036)&(0.5367)&(0.5676)\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 1\\ 1 \end{pmatrix}$}&0.5&$d_0$&$10^{-6}$&$7\times10^{-7}$&0.0003&$2\times10^{-6}$&0.0002&0.0008&$5\times10^{-6}$&0.0004&0.0017\\ &&$p$&(0.5011)&(0.5087)&(0.5193)&(0.5012)&(0.5101)&(0.5204)&(0.5011)&(0.5103)&(0.5207)\\\cmidrule(lr){3-12} &1&$d_0$&0&0&0&0&0&0&0&0&0\\ &&$p$&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)\\\cmidrule(lr){3-12} &1.5&$d_0$&$6\times10^{-8}$&$5\times10^{-5}$&0.0003&$8\times10^{-7}$&0.0001&0.0006&$10^{-6}$&0.0003&0.0013\\ &&$p$&(0.5)&(0.5077)&(0.5193)&(0.5006)&(0.5093)&(0.5185)&(0.5007)&(0.5093)&(0.5180)\\\cmidrule(lr){3-12} &3&$d_0$&$8\times10^{-6}$&0.0009&0.0050&$2\times10^{-5}$&0.0026&0.0092&$5\times10^{-5}$&0.0048&0.0165\\ &&$p$&(0.5027)&(0.5309)&(0.5706)&(0.5035)&(0.5360)&(0.5677)&(0.5037)&(0.535)&(0.5645)\\\cmidrule(lr){3-12} &5&$d_0$&$3\times10^{-5}$&0.0035&0.0167&0.0001&0.0092&0.0297&0.0002&0.0165&0.0494\\ &&$p$&(0.5062)&(0.5596)&(0.6279)&(0.5074)&(0.5677)&(0.6201)&(0.5073)&(0.5645)&(0.6125)\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 1\\ 3 \end{pmatrix}$}&0.5&$d_0$&$2\times10^{-5}$&0.0030&0.0133&$6\times10^{-5}$&0.0064&0.0282&0.0001&0.0135&0.0623\\ &&$p$&(0.505)&(0.5555)&(0.6143)&(0.5056)&(0.5566)&(0.6171)&(0.5054)&(0.5583)&(0.6268)\\\cmidrule(lr){3-12} &1&$d_0$&0&0&0&0&0&0&0&0&0\\ &&$p$&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)\\\cmidrule(lr){3-12} &1.5&$d_0$&$3\times10^{-5}$&0.0028&0.0104&$5\times10^{-5}$&0.0053&0.0199&0.0001&0.0103&0.0370\\ &&$p$&(0.5059)&(0.5527)&(0.6015)&(0.5022)&(0.5517)&(0.5989)&(0.5053)&(0.5509)&(0.5971)\\\cmidrule(lr){3-12} &3&$d_0$&0.0004&0.0373&0.1213&0.0008&0.0690&0.2125&0.0017&0.1210&0.3421\\ &&$p$&(0.5216)&(0.6878)&(0.8181)&(0.5211)&(0.6795)&(0.7942)&(0.5209)&(0.6793)&(0.8193)\\\cmidrule(lr){3-12} &5&$d_0$&0.0018&0.1213&0.3423&0.0034&0.2125&0.5519&0.0067&0.3421&0.6003\\ &&$p$&(0.5425)&(0.8181)&(09536)&(0.5417)&(0.7942)&(0.9087)&(0.5411)&(0.8193)&(0.9535)\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 3\\ 1 \end{pmatrix}$}&0.5&$d_0$&$10^{-5}$&0.0014&0.0063&$3\times10^{-5}$&0.0031&0.0135&$6\times10^{-5}$&0.0065&0.0295\\ &&$p$&(0.5031)&(0.5381)&(0.5791)&(0.5040)&(0.5394)&(0.5816)&(0.5039)&(0.5403)&(0.5866)\\\cmidrule(lr){3-12} &1&$d_0$&0&0&0&0&0&0&0&0&0\\ &&$p$&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)&(0.5)\\\cmidrule(lr){3-12} &1.5&$d_0$&$10^{-5}$&0.0014&0.0052&$2\times10^{-5}$&0.0025&0.0096&$4\times10^{-5}$&0.0049&0.0180\\ &&$p$&(0.5041)&(0.5376)&(0.5720)&(0.5034)&(0.5359)&(0.5692)&(0.5033)&(0.5353)&(0.5674)\\\cmidrule(lr){3-12} &3&$d_0$&0.0002&0.0185&0.0604&0.0004&0.0338&0.1054&0.0008&0.0596&0.1711\\ &&$p$&(0.5153)&(0.6341)&(0.735)&(0.5145)&(0.6278)&(0.7180)&(0.5143)&(0.6239)&(0.7160)\\\cmidrule(lr){3-12} &5&$d_0$&0.0008&0.0604&0.1724&0.0016&0.1054&0.2786&0.0032&0.1711&0.4074\\ &&$p$&(0.53)&(0.735)&(0.865)&(0.5289)&(0.7180)&(0.8268)&(0.5284)&(0.7160)&(0.8545)\\ \bottomrule \end{tabular} } % \end{table}% Note that, from (\ref{fisher}), by multiplying the curvature value in Table \ref{tab1} by $\epsilon^2/2$, one may get the value of the corresponding distance in Table \ref{exm1under1} and Table \ref{exm1under2}. For instance, setting $\alpha=1, \beta =1, c=0.5, a=0.5$ in Table \ref{tab1}, gives $C_{a}^{\Gamma_{a}}=0.0265$. The corresponding distance is $0.0265 \times 0.5^2/2= 0.0033$, which close to the one reported in Table \ref{exm1under1}. Now we consider the Australian AIDS survival data, available in the \textbf{\textsf{R}} package ``\textsf{Mass}". There are 2843 patients diagnosed with AIDS in Australia before 1 July 1991. The data frame contains the following columns: state, sex, date of diagnosis, date of death at end of observation, status (``$A$" (alive) or ``$D$" (dead) at end of observation), reported transmission category, and age at diagnosis. Now, we consider the values of column status, then, under prior distribution given above, the values of the curvatures for two classes \eqref{contaminated} and \eqref{geometric} are summarized in Table \ref{tab-real}. \begin{table}[htbp] \centering \setlength{\tabcolsep}{4 mm} \caption{Values of the local curvature for the two classes $\Gamma_{a}$ and $\Gamma_{g}$ for the real data set AIDS.} \label{tab-real} \scalebox{0.76}{ \begin{tabular}[c]{llllllll} \toprule \multirow{2}[3]{*}{$\begin{pmatrix} \alpha\\ \beta \end{pmatrix}$} &\multirow{2}[3]{*}{$c$}&\multicolumn{2}{c}{$a=0.5$}&\multicolumn{2}{c}{$a=1$}&\multicolumn{2}{c}{$a=2$} \\\cmidrule(lr){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8} &&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$ \\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 0.5\\ 0.5 \end{pmatrix}$}&0.5&$9\times10^{-7}$&$2\times10^{-6}$&$10^{-6}$&$5\times10^{-6}$&$3\times10^{-6}$&$10^{-5}$\\ &1&0&0&0&0&0&0\\ &1.5&$4\times10^{-6}$&$2\times10^{-6}$&$8\times10^{-6}$&$5\times10^{-6}$&$10^{-5}$&$10^{-5}$\\ &3&0.0001&$4\times10^{-5}$&0.0003&$8\times10^{-5}$&0.0006&0.0001\\ &5&0.0009&0.0001&0.0019&0.0003&0.0038&0.0006\\ \hline \multirow{2}[3]{*}{$\begin{pmatrix} 1\\ 1 \end{pmatrix}$}&0.5&$4\times10^{-6}$&$10^{-5}$&$9\times10^{-6}$&$2\times10^{-5}$&$10^{-5}$&$4\times10^{-5}$\\ &1&0&0&0&0&0&0\\ &1.5&$10^{-5}$&$10^{-5}$&$3\times10^{-5}$&$2\times10^{-5}$&$6\times10^{-5}$&$4\times10^{-5}$\\ &3&0.0004&0.0001&0.0009&0.0003&0.0018&0.0006\\ &5&0.0025&0.0006&0.0051&0.0013&0.0102&0.0027\\ \hline \multirow{2}[3]{*}{$\begin{pmatrix} 1\\ 3 \end{pmatrix}$}&0.5&0.0005&0.0004&0.0010&0.0008&0.0021&0.0016\\ &1&0&0&0&0&0&0\\ &1.5&0.0002&0.0004&0.0004&0.0008&0.0008&0.0016\\ &3&0.0002&0.0064&0.0004&0.0129&0.0009&0.0259\\ &5&$10^{-5}$&0.0259&$3\times10^{-5}$&0.0518&$7\times10^{-5}$&0.1037\\ \hline \multirow{2}[3]{*}{$\begin{pmatrix} 3\\ 1 \end{pmatrix}$}&0.5&$2\times10^{-5}$&$5\times10^{-5}$&$5\times10^{-5}$&0.0001&0.0001&0.0002\\ &1&0&0&0&0&0&0\\ &1.5&$6\times10^{-5}$&$5\times10^{-5}$&0.0001&0.0001&0.0002&0.0002\\ &3&0.0014&0.0008&0.0029&0.0016&0.0058&0.0032\\ &5&0.0054&0.0032&0.0108&0.0064&0.0216&0.0129\\ \bottomrule \end{tabular} } \end{table}% \textbf{Example 2 (Multinomial model).} \label{example2} Suppose that $x=(x_1,x_2,\ldots, x_k)$ is an observation from a multinomial distribution with parameters $(N,(\theta_1,\ldots,\theta_k))$, where $\sum_{i=1}^{k}x_i=N$ and $\sum_{i=1}^{k}\theta_i=1$. Let the prior $\pi_0(\theta_1,\ldots,\theta_k)$ be $D$irichlet$(\alpha_1,$ $\ldots,\alpha_k)$. Then $\pi_0(\theta_1,\ldots,\theta_k|x)$ is $D\mbox{irichlet}(\alpha_1+x_1,\ldots,\alpha_k+x_k).$ Let $q(\theta_1,\ldots,\theta_k)\sim D\mbox{irichlet}(c\alpha_1,\ldots,c\alpha_k)$. We consider the observation $x=(6,4,5,5)$ generated from $M$ultinomial$(20,(1/4,1/4,1/4,1/4))$. As in Example 1, we use Monte Carlo approach to compute curvature values. Table \ref{tab2} reports values of the curvature for different values of $\alpha_1,\ldots,\alpha_k$ and $c$. Clearly, when $c=1$, the curvature values are 0. Also, for the cases when $\alpha_1=\alpha_2=\alpha_3=\alpha_4=1$ (uniform prior over $[0,1]^4$) and $\alpha_1=\alpha_2=\alpha_3=\alpha_4=0.5$ (Jeffreys' prior), the curvature values are prominently small. \begin{table}[htbp] \centering \setlength{\tabcolsep}{4.5 mm} \caption{Values of the local curvature for two classes $\Gamma_{a}$ and $\Gamma_{g}$ for a sample generated from Mn(20,(1/4,1/4,1/4,1/4)).}\label{tab2} \scalebox{0.76}{ \begin{tabular}[c]{llllllll} \toprule \multirow{2}[3]{*}{$\Biggl(\begin{smallmatrix} \alpha_{1}\\ \vdots\\ \alpha_{4} \end{smallmatrix}\Biggr)$} &\multirow{2}[3]{*}{$c$}&\multicolumn{2}{c}{$a=0.5$}&\multicolumn{2}{c}{$a=1$}&\multicolumn{2}{c}{$a=2$} \\\cmidrule(lr){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8} &&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$ \\\hline \multirow{4}[3]{*}{$\begin{pmatrix} 0.25\\ 0.25\\ 0.25\\ 0.25\\ \end{pmatrix}$}&0.5&$2\times10^{-5}$&$0.0006$&$5\times10^{-5}$&$0.0012$&$0.0001$&$0.0024$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0031$&$0.0006$&$0.0062$&$0.0012$&$0.0124$&$0.0024$\\ &3&$0.5285$&$0.0097$&$1.0570$&$0.0195$&$2.1141$&$0.0390$\\ &5&$8.4050$&$0.0301$&$16.816$&$0.0780$&$33.632$&$0.1560$\\\hline \multirow{4}[3]{*}{$\begin{pmatrix} 0.5\\ 0.5\\ 0.5\\ 0.5\\ \end{pmatrix}$}&0.5&$0.0001$&$0.0021$&$0.0003$&$0.0043$&$0.0004$&$0.0087$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0080$&$0.0021$&$0.0161$&$0.0043$&$0.0323$&$0.0087$\\ &3&$0.7706$&$0.0349$&$1.5413$&$0.0699$&$3.0826$&$0.1398$\\ &5&$8.0246$&$0.1398$&$16.049$&$0.2797$&$32.098$&$0.5595$\\\hline \multirow{4}[3]{*}{$\begin{pmatrix} 1\\ 1\\ 1\\ 1\\ \end{pmatrix}$}&0.5&$0.0008$&$0.0071$&$0.0017$&$0.0142$&$0.0035$&$0.0284$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0185$&$0.0071$&$0.0370$&$0.0142$&$0.0741$&$0.0284$\\ &3&$0.9799$&$0.1137$&$1.9598$&$0.2274$&$3.9196$&$0.4549$\\ &5&$6.7661$&$0.4549$&$13.532$&$0.9098$&$27.064$&$1.8197$\\\hline \multirow{4}[3]{*}{$\begin{pmatrix} 2\\ 1\\ 1\\ 1\\ \end{pmatrix}$}&0.5&$0.0018$&$0.0120$&$0.0037$&$0.0240$&$0.0074$&$0.0480$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0270$&$0.0120$&$0.0540$&$0.0240$&$0.1081$&$0.0480$\\ &3&$1.1052$&$0.1923$&$2.2104$&$0.3847$&$4.4209$&$0.7695$\\ &5&$6.3984$&$0.7695$&$12.796$&$1.5390$&$25.593$&$3.0780$\\ \bottomrule \end{tabular} } \end{table}% \smallskip \textbf{Example 3 (Location normal model).} \label{example3} Suppose that $x=(x_1,x_2,\ldots, x_n)$ is a sample from $N(\theta,1)$ distribution with $\theta \in \mathbb{R}^1$. Let the prior $\pi_{0}(\theta)$ of $\theta$ be $N(\theta_0,\sigma_0^2)$. Then \begin{align} \begin{split} \label{normal} \pi_{0}(\theta|x)\sim \mathcal{N}\left(\mu_x,\sigma^2_{x}\right), \end{split} \end{align} \begin{eqnarray*} \mu_x=\left(\frac{\theta_0}{\sigma_0^2}+n\bar{x}\right)\left(\frac{1}{\sigma_0^2}+n\right)^{-1}\ \ \text{and} \ \ \sigma^2_{x}=\left(\frac{1}{\sigma_0^2}+n\right)^{-1}. \end{eqnarray*} Let $q(\theta)\sim \mathcal{N}(c\theta_0,\sigma_{0}^{2})$, $c>0$. Due to some interesting theoretical properties in this example, we present the exact formulas of the curvature for class \eqref{contaminated} and class \eqref{geometric}. We have \begin{align} \begin{split} \nonumber \frac{q(\theta)}{\pi_0(\theta)}=\exp\left\{\frac{\theta_0\theta(c-1)+0.5\theta_0^2(1-c^2)}{\sigma_0^2}\right\}. \end{split} \end{align} Therefore, for the class (\ref{contaminated}), we have \begin{eqnarray*} Var_{{\pi_0}(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right]&=&E_{{\pi_0}(\theta|x)}\left[\left(\frac{q(\theta)}{\pi_0(\theta)}\right)^2\right] -\left(E_{{\pi_0}(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right]\right)^2\\ &=&\exp\left\{\frac{\theta_0^2(1-c^2)}{\sigma_0^2}\right\}\bigg[M_{{\pi_0}(\theta|x)}\left(\frac{2\theta_0(c-1)}{\sigma_0^2}\right)-\\ &&\left(M_{{\pi_0}(\theta|x)}\left(\frac{\theta_0(c-1)}{\sigma_0^2}\right)\right)^2\bigg], \end{eqnarray*} where $M_{{\pi_0}(\theta|x)}(t)$ is the moment generating function with respect to the density $\pi_{0}(\theta|x)$. Thus, \begin{eqnarray*} Var_{{\pi_0}(\theta|x)}\left[\frac{q(\theta)}{\pi_0(\theta)}\right]&=& \exp\left\{\frac{\theta_0^2(1-c^2)}{\sigma_0^2}\right\}\bigg[\exp\bigg\{\frac{2\theta_0(c-1)\mu_x}{\sigma_0^2}+\\ &&\frac{2\theta_0^2(c-1)^2 \sigma_x^2}{\sigma_0^4}\bigg\}-\exp\bigg\{\frac{2\theta_0(c-1)\mu_x}{\sigma_0^2}+\\ &&\frac{\theta_{0}^2(c-1)^2\sigma_x^2}{\sigma_0^4}\bigg\}\bigg]. \end{eqnarray*} On the other hand, for the geometric contaminated class, we have \begin{align*} \ln\left(\frac{q(\theta)}{\pi_0(\theta)}\right)=\frac{\theta_0\theta(c-1)+0.5\theta_0^2(1-c^2)}{\sigma_0^2}. \end{align*} Thus, by (\ref{normal}), we get \begin{eqnarray} \nonumber Var_{{\pi_0}(\theta|x)}\left[\ln\left(\frac{q(\theta)}{\pi_0(\theta)}\right)\right]&=& \frac{\theta_0^2(c-1)^2}{\sigma_0^4} Var_{{\pi_0}(\theta|x)}\left[\theta\right]\\ \nonumber &=&\frac{\theta_0^2(c-1)^2}{\sigma_0^4} \sigma^2_x\\ &=&\frac{\theta_0^2(c-1)^2}{\sigma_0^4} \left(\frac{1}{\sigma_0^2}+n\right)^{-1}. \label{example2_geometric} \end{eqnarray} Interestingly, from (\ref{example2_geometric}), $Var_{{\pi_0}(\theta|x)}\left[\ln\left(\frac{q(\theta)}{\pi_0(\theta)}\right)\right]$ depends on the sample only through its size $n$. As $n \to \infty$ or $\sigma_0 \to \infty$, $Var_{{\pi_0}(\theta|x)}\left[\ln\left(\frac{q(\theta)}{\pi_0(\theta)}\right)\right] \to 0,$ which indicates robustness. Also, as $\theta_{0} \to \infty$, $Var_{{\pi_0}(\theta|x)}\left[\ln\left(\frac{q(\theta)}{\pi_0(\theta)}\right)\right] \to \infty$ and no robustness will be found. Now we consider a numerical example by generating a sample of size $n=20$ from $N(4,1)$ distribution. We obtain \begin{quote} $x=(3.37, 4.18, 3.16, 5.59, 4.32, 3.17, 4.48, 4.73, 4.57, 3.69, 5.51, 4.38, 3.37,\newline 1.78, 5.12, 3.95, 3.98, 4.94, 4.82, 4.59)$ \end{quote} (with $t=\bar{x}=4.1905$). Table \ref{tab3} reports the values of the curvature for different values of $\theta_0, \sigma_0$ and $c$. \begin{table}[htbp] \centering \setlength{\tabcolsep}{4.5 mm} \caption{Values of the local curvature for two classes $\Gamma_{a}$ and $\Gamma_{g}$ for a sample generated from N(4,1).} \label{tab3} \scalebox{0.76}{ \begin{tabular}[c]{llllllll} \toprule \multirow{2}[3]{*}{$\begin{pmatrix} \theta_{0} \\ \sigma_{0}^{2} \end{pmatrix}$} &\multirow{2}[3]{*}{$c$}&\multicolumn{2}{c}{$a=0.5$}&\multicolumn{2}{c}{$a=1$}&\multicolumn{2}{c}{$a=2$} \\\cmidrule(lr){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8} &&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$&$C_{a}^{\Gamma_{a}}$&$C_{a}^{\Gamma_{g}}$ \\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 0.1\\ 0.1 \end{pmatrix}$}&0.5&$0.0001$&$0.0059$&$0.0002$&$0.0119$&$0.0004$&$0.0238$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.2908$&$0.0059$&$0.5816$&$0.0119$&$1.1633$&$0.0238$\\ &3&$498033.7$&$0.0953$&$996067.4$&$0.1907$&$1992135$&$0.3814$\\ &5&$8\times10^{12}$&$0.3814$&$10^{13}$&$0.7629$&$3\times10^{13}$&$1.5258$\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 0.5\\ 1 \end{pmatrix}$}&0.5&$0.0002$&$0.0014$&$0.0004$&$0.0029$&$0.0009$&$0.0059$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0081$&$0.0014$&$0.0162$&$0.0029$&$0.0325$&$0.0059$\\ &3&$10.629$&$0.0238$&$21.258$&$0.0476$&$42.517$&$0.0953$\\ &5&$2964.9$&$0.0935$&$2929.8$&$0.1907$&$11859.7$&$0.3814$\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 0.5\\ 5 \end{pmatrix}$}&0.5&$4\times10^{-5}$&$5\times10^{-5}$&$8\times10^{-5}$&$0.0001$&$0.0001$&$0.0002$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$8\times10^{-5}$&$5\times10^{-5}$&$0.0001$&$0.0001$&$0.0003$&$0.0002$\\ &3&$0.0031$&$0.0009$&$0.0063$&$0.0019$&$0.0127$&$0.0038$\\ &5&$0.0288$&$0.0038$&$0.0576$&$0.0076$&$0.1152$&$0.0152$\\\hline \multirow{2}[3]{*}{$\begin{pmatrix} 4\\ 5 \end{pmatrix}$}&0.5&$0.0001$&$0.0038$&$0.0029$&$0.0076$&$0.0059$&$0.0152$\\ &1&$0$&$0$&$0$&$0$&$0$&$0$\\ &1.5&$0.0020$&$0.0038$&$0.0040$&$0.0076$&$0.0080$&$0.0152$\\ &3&$3\times10^{-7}$&$0.0610$&$7\times10^{-7}$&$0.1220$&$10^{-6}$&$0.2441$\\ &5&$9\times10^{-23}$&$0.2441$&$10^{-22}$&$0.4882$&$3\times10^{-22}$&$0.9765$\\ \bottomrule \end{tabular} } \end{table}% Clearly, for large values of $\sigma^2_0$, the value of the curvature is small, which is an indication of robustness. For instance, for $\mu_0=0.5$ in Table \ref{tab3}, that value of the curvature when $\sigma^2_0=5$ is much smaller than the value of the curvature when $\sigma^2_0=1$. \smallskip \section{Conclusions} Measuring Bayesian robustness of two classes of contaminated priors is studied. The approach is based on computing the curvature of R\'enyi divergence between posterior distributions. The method does not require specifying values for $\epsilon$ and its computation is straightforward. Examples illustrating the approach are considered.
\section{Introduction} \vspace{0.5cm} The tracking of single particles is a powerful tool to probe physical and biological processes at the level of one macromolecule. In particular, the accumulation of experimental data in recent years has allowed to test models of diffusive transport in cells \cite{saxtonannurev,pederson}. Within aqueous compartments, {\it e.g.} the cell cytoplasm, Brownian diffusion is the basic transport mechanism for proteins \cite{dix}. Other studies, however, have reported subdiffusive behavior both in membranes \cite{saxtonannurev} and in the cytoplasm \cite{cox}, although the microscopic origin of anomalous diffusion remains unclear in this context. Crowded environments of the cell may cause slower diffusion than in pure water or other solvents, although not necessarily subdiffusion \cite{dix}. Conflicting results have generated a debate on the methodology for determining diffusion laws from single particle data, even for simple diffusion \cite{saxton}. In experiments, trajectories of high temporal and spatial resolution are often obtained at the expanse of statistical sample size. Trajectories may be few and short due to observation windows limited in space, a rapid decay of fluorescent markers or particle denaturation \cite{goulian}. These limitations complicate the determination of the nature of diffusion, {\it i.e.} a precise estimate of the diffusion constant or an anomalous exponent. In any case, time averaged quantities associated to a trajectory may be subjected to large fluctuations among trajectories. In the continuous-time random walk model of subdiffusive motion, time-averages of particle's observables generally are random variables distinct from their ensemble averages \cite{barkai}. For instance, the square displacement (after a time lag $t$) time-averaged along a given trajectory differs from the ensemble average \cite{barkai2}. By analyzing time-average displacements of a particular realization, subdiffusive motion can actually look normal, although with strongly differing diffusion constants from one trajectory to an other \cite{sokolov}. The Brownian case is different, but not as straightforward as often thought. Ergodicity, namely, the equivalence of time and ensemble-averages of the square displacement, only holds in this case in the infinite sample size limit. In practice, standard fitting procedures applied to finite (although long) trajectories of a same particle unavoidably lead to fluctuating estimates of the diffusion constant. Indeed, variations by orders of magnitude have been observed in experiments and simple random walk simulations \cite{goulian}. To our knowledge, no analytical results are available on the properties of these diffusion constant distributions. In this article, we present analytical and numerical results on the distributions of the diffusion constants estimated from single trajectories. We consider a standard fitting method based on time-averaged square displacements as well as other similar procedures amenable to analytical calculations. Generally we show that the problem consists of finding the distribution of a quadratic functional of Brownian motion with a time dependent measure. The first studies of the quadratic functionals of Brownian motion date back to a classic paper of Cameron and Martin in 1945 \cite{cam1945} and the problem has received much interest in the probability community ever since \cite{borodin, don1993, chan1994, rev1999,shi}. The formulation of path integrals for quantum mechanics provided a powerful tool to analyze this set of problems using methods more familiar to physicists \cite{fey1965, klein}, here the problem appears as the computation of the partition function of a quantum-harmonic oscillator with time dependent frequency. Various quadratic functionals of Brownian motion have been intensely studied by physicists \cite{khan1986} using a variety of methods. They arise in a plethora of physical contexts, for polymers in elongational flows \cite{dean1995}, a variety of problems related to Casimir/van der Waals interactions and general fluctuation induced interactions \cite{dean2005, pars2006, dean2007, dean2009, dean2010}, where, in harmonic oscillator language, both the frequency and mass depend on time. Quadratic functionals of Brownian motion also arise in the theory of electrolytes when one computes the one-loop or fluctuation corrections to the mean field Poisson-Boltzmann theory \cite{att1988, rudi1, rudi2, 1dc}. Finally we mention that functionals of Brownian motion also turn out to have applications in computer science \cite{majumdar}. In this paper we use the Feynman-Kac theorem to show that the generating function, or Laplace transform, of the probability density function of the estimators for diffusion coefficients can be expressed as a solution to an imaginary time Schr\"odinger equation. This Schr\"odinger equation describes a particle in a quadratic potential, whose frequency is time dependent. For the choices of time dependent frequency arising in the problem of estimated diffusion constants the resulting Schr\"odinger equation can be solved exactly. The inversion of the resulting Laplace transform to obtain the full distribution cannot be carried out exactly, however we are able to analyze the behavior of the distribution in both the lower and upper tails, thus giving a rather complete analytical description of its behavior. In general we find that the main characteristics of the distribution of the estimated diffusion coefficient depend little on the fitting procedure used and in all cases its most probable value is much smaller than the correct (average) diffusion constant. The probability of measuring a diffusion constant lower than average is actually larger than 1/2 (close to 2/3). \vspace{1cm} \section{Fits for the diffusion constant of a single trajectory} \vspace{0.5cm} Consider a one-dimensional Brownian process $B_t$ of variance $\langle B_t^2\rangle=2D_0 t\equiv a_0t$. Without restricting generality, we set $a_0=1$ and $0\le t\le 1$ in the following. If a particular trajectory $B_t$ is available but $a_0$ not known {\it a priori}, an estimate $a$ of this parameter can be obtained by performing a fit to the diffusion law. Several fitting procedure have been discussed in the context of molecule tracking within cells \cite{saxton}. Below, we consider 4 of them. One of the simplest method consists in calculating a least squares estimate based on the minimization of the sum \begin{equation}\label{F} F=\int_0^1 [B_t^2-l(t)]^2dt, \end{equation} where the diffusion law $l(t)$ can be taken as linear, \begin{equation} l(t)=a_Lt, \end{equation} or affine, \begin{equation} l(t)=a_At+b_A, \end{equation} typically. Given $B_t$, the minimization of (\ref{F}) with respect to the constant(s) yields the least squares estimate \begin{equation}\label{fit1} a_L=3\int_0^1 tB_t^2dt \quad\quad({\rm FIT 1}) , \end{equation} for the linear fit, and \begin{eqnarray}\label{fit2} a_A&=&6\int_0^1 (2t-1)B_t^2 dt\quad\quad({\rm FIT 2}) \label{f21}\\ b_A&=&-2\int_0^1 (3t-2)B_t^2 dt\label{f22} \end{eqnarray} for the affine one. \begin{figure} \begin{center} \epsfig{figure=example.ps,width=2.2in,angle=-90} \end{center} \vspace{0.0cm} \caption{{\bf Left panel:} Square position of a Brownian motion with $a_0=1$ as a function of time and the corresponding diffusion laws obtained with the fitting methods 1, 2 and 4. For this example, $a_L=0.318$, $a_A=0.397$ and $a_{MLE}=0.338$, three values significantly smaller than unity. {\bf Right panel:} time-average displacement calculated for the same trajectory, where Fit 3 gives $a_L^{(\delta)}=0.274$. Only at very short times $\overline{\delta^2}_t$ follows the ensemble average $a_0t$. The trajectory is a random walk of $N=50,000$ steps, with positions $x_n=\sum_{i=1}^n l_i$ where $1\le n\le N$ and $l_i=\pm1$. In the notation of the text, $n/N\rightarrow t$ and $x_n^2/N\rightarrow B_t^2$.} \label{fig:example} \end{figure} Another related method, often used in particle tracking experiments \cite{goulian} and numerical studies \cite{saxton}, consists in least-squares fitting the time-averaged square displacement, $\overline{\delta^2}_t$. For a finite trajectory, this quantity is defined as \begin{equation}\label{defdelta} \overline{\delta^2}_t= \frac{1}{1-t}\int_0^{1-t}(B_{t+s}-B_{s})^2 ds. \end{equation} Due to the ergodicity of normal diffusion processes, at times short compared to 1 the above average coincides with the ensemble average $\langle B_t^2\rangle$ \cite{barkai2}, {\it i.e.}, $\overline{\delta^2}_t\simeq t$ as $t\rightarrow 0$. However, due to practical limitations, experimental trajectories often have a small number of positions and $\overline{\delta^2}_t$ is analyzed for all (or a large fraction) of the available intervals $t$, like in ref. \cite{goulian}. Similarly, we do not restrict here to $t\ll1$ but fit over the whole time domain $0\le t\le 1$ instead. As shown by the numerical example of Figure (\ref{fig:example}-right) for a random walk with $N=50,000$ positions, the expected small $t$ behavior of $\overline{\delta^2}_t$ can be restricted to a very small interval compared to the total walk duration. Substituting $B_t^2$ by $\overline{\delta^2}_t$ in Eq.(\ref{F}) and adopting the linear fit, the new estimate simply reads: \begin{equation}\label{fit3} a^{(\delta)}_{L}=3\int_0^1 t\ \overline{\delta^2}_t\ dt \quad\quad({\rm FIT 3}). \end{equation} Yet another fitting method consists in maximizing the unconditional probability of observing the whole trajectory $B_t$, assuming that it is drawn from a Brownian process with mean-square displacement $at$. Namely, the maximum likelihood estimate (MLE), denoted as $a_{MLE}$, is the value of $a$ that maximizes the likelihood of $B_t$, defined as: \begin{equation} L=\prod_{t=0}^1P_a(B_t,t) =\prod_{t=0}^1 (2\pi at)^{-1/2}\exp\left(-\frac{B_t^2}{2at}\right), \end{equation} where $P_a(x,t)$ is the probability density of the Brownian process with constant $a$. By equating $\partial\ln L/\partial a$ to zero, one obtains \begin{equation}\label{fit4} a_{MLE}=\int_0^1dt\ \frac{B_t^2}{t}\quad\quad({\rm FIT 4}). \end{equation} The estimates given by the four methods above are represented in an example, see Figure (\ref{fig:example}). The numerical values are comparable but can differ significantly from unity. \section{Numerical results} The numerical distributions of the random variables $a_L$, $a_{MLE}$, $a_L^{(\delta)}$ and $a_A$ are displayed in Figure (\ref{fig:distrib}). The distributions are highly asymmetric and peaked near $X=0$, far from the average value $\langle X\rangle=1$. The most probable $X$ is a small positive number in each case, see Table 1. Although estimates of $X\sim 10$ can be sometimes observed, the median of the distribution is lower than $\langle X\rangle$. Namely, the probability of measuring a diffusion constant lower that the correct value is not 1/2, but close to 2/3 in all four cases. The probability of measuring a negative $a_A$ is not zero in the affine method (as already noticed in ref. \cite{goulian}) but close to 0.175. Table 1 summarizes the main properties of the distribution functions. Importantly, $a_L$ and $a_L^{(\delta)}$ practically obey the same distribution (Figure (\ref{fig:distrib}-right)), which is somewhat unexpected as $\overline{\delta^2}_t$ is a much smoother function than $B^2_t$. Thanks to this similitude, the analytical study of the simpler functional (\ref{fit1}), exposed in the next Section, brings many insights on the behavior of $a_L^{(\delta)}$. Distributions similar to ours for $a_L^{(\delta)}$ were determined in ref. \cite{goulian}, both numerically from random walk simulations and experimentally using R-phycoerythrin proteins in mammalian cells. \begin{figure} \begin{center} \epsfig{figure=distrib.ps,width=2.2in,angle=-90} \end{center} \vspace{0.0cm} \caption{{\bf Left panel:} Distributions of the parameters $X=a_L$ ($\bullet$ symbol) and $a_{MLE}$ ($\Box$ symbol). Inset: zoom of the same plot, where the solid lines represent the analytical expressions (\ref{pLan}) and (\ref{pMLEan}) valid for small $X$. {\bf Right panel:} Distributions of the parameters $a_L$ ($\bullet$ symbol) along with $a^{(\delta)}_L$ (solid line) and $a_A$ ($\circ$ symbol). Except for $a^{(\delta)}_L$, these results are obtained by averaging over $2\ 10^5$ random walks with $N=5\ 10^5$ steps.} \label{fig:distrib} \end{figure} \vspace{0.5cm} \begin{center} \begin{tabular}{l|c|c|c|c|} $\ X$ & $a_L$ & $a_L^{(\delta)}$ & $a_{MLE}$ & $a_A$ \\ \hline $\langle X\rangle$ & 1 & 1 & 1 & 1 \\ most probable $X$ & 0.11 & 0.16 & $0.25-0.3$& 0.01 \\ median & 0.54 & 0.56 & 0.66 & 0.42 \\ lower 5$\%$ & 0.086 & 0.12 & 0.17 & -0.20 \\ upper 5$\%$ & 3.43 & 3.33 & 2.97 & 4.08 \\ Prob$[X<\langle X\rangle]$ & 0.683 & 0.681 & 0.668 & 0.683 \\ \end{tabular} \vspace{0.5cm} Table 1: Main properties of the diffusion constant distributions. \end{center} \vspace{1cm} \section{Feynman-Kac formalism for the generating function } \vspace{0.5cm} In general the estimated fit parameters discussed above (FIT1, 2 and 4) are quadratic functionals of Brownian motion and take the form \begin{equation} X=\int_0^1 B^2_s\ w(s) ds. \end{equation} When $w(s)>0$ on $[0,1]$ the quadratic functional is positive and its generating function of $X$, is defined by \begin{equation} G(\sigma)=\int_0^\infty p(x)\exp(-\sigma x) dx = {\mathbb E}\left[\exp(-\sigma X)\right], \end{equation} where $p(x)$ is the probability density function of $X$. In order to compute $G$ we consider the following average of a quadratic functional of Brownian motion: \begin{equation} \Psi(x,t) = {\mathbb E}^x\left[\exp(-\sigma\int_t^1 B_s^2 w(s)ds)\right], \end{equation} where the expectation above is for a Brownian motion starting at $x$ at time $t$. Clearly in this notation we have $G(\sigma)=\Psi(0,0)$. We now write a Feynman-Kac type formula for $\Psi(x,t)$ by considering how the functional evolves in the the time interval $(t,t+dt)$. During this interval the Brownian motion moves from $x$ to $x+dB_t$, where $dB_t$ is an infinitesimal Brownian increment such that $\langle dB_t\rangle=0 $ and $\langle dB_t^2\rangle=dt$. Taking into account this evolution we can write to order $dt$ \begin{equation} \Psi(x,t) =\langle {\mathbb E}^{x+dB_t}\left[\exp(-\sigma\int_{t+dt}^1 B_s^2 w(s)ds)\right](1-dt\sigma w(t)x^2)\rangle \end{equation} where the brackets on the right hand side denote the average over $dB_t$. The above may now be written as \begin{equation} \Psi(x,t) =\langle \Psi(x+dB_t,t+dt)(1-dt\sigma w(t)x^2)\rangle. \end{equation} Expanding to second order in $dB_t$ and $dt$, taking the average over $dB_t$ and equating the terms of $O(1)$ and $O(dt)$ we obtain \begin{equation} {\partial \Psi\over \partial t}=-{1\over 2}{\partial ^2 \Psi\over \partial x^2}+\sigma w(t)x^2\Psi, \label{FK} \end{equation} which looks like a Schr\"odinger equation in a harmonic, time-dependent potential. The boundary condition for this equation is given by $\Psi(x,1)=1$ for all $x$. It is easy to see that the solution of equation (\ref{FK}) is given by \begin{equation} \Psi(x,t)= f(t)\exp(-{1\over 2}g(t)x^2) \end{equation} where \begin{eqnarray} {d f\over dt }&=& {1\over 2} fg \\ {dg\over dt} &=& g^2 -2\sigma w, \end{eqnarray} with the boundary conditions $g(1)=0$ and $f(1)=1$. Now we can eliminate the nonlinearity in the second equation by setting $g=-dh/dt/h$ which gives \begin{eqnarray} h{{d f\over dt}}+{1\over 2} f{{dh\over dt}}&=&0 \\ {d^2h\over dt^2} -2\sigma w h&=&0, \label{eqh1} \end{eqnarray} with the boundary conditions $h(1)=1$ and $dh/dt(t=1) = 0$. In terms of these functions the Laplace transform is now given by $G(\sigma)=f(0)=1/\sqrt{h(0)}$. We now make a change of time variable writing \begin{equation} {d\tau\over dt} = \sqrt{2 w(t) \sigma}, \end{equation} assuming for the moment that $w(t)$ is positive. In terms of this new temporal variable equation (\ref{eqh1}) can now be written as \begin{equation} {d^2h\over d\tau^2} +{{d^2\tau\over dt^2}\over \left( {d\tau\over dt}\right)^2}{dh\over d\tau}-h=0. \label{eqh2} \end{equation} In the class of problems we study in this paper (see Eqs.(\ref{fit1}), (\ref{fit2}) and (\ref{fit4})) the form of $w$ is \begin{equation} w(t) = (At+C)^{\alpha}, \end{equation} with $A$ and $C$ two constants. From this we can choose $\tau$ to be \begin{equation} \tau= {\sqrt{8\sigma}\over |A| (\alpha+2)}(At+C)^{\alpha+2\over 2} \end{equation} and equation ({\ref{eqh2}) becomes \begin{equation} {d^2h\over d\tau^2} +{\alpha\over (\alpha +2)\tau}{dh\over d\tau}-h=0. \label{eqh2b} \end{equation} The general solution to this equation can be shown to be \begin{equation} h(\tau) = \tau^{1\over \alpha+2}\left(DK_{1\over \alpha+2}(\tau) +EI_{1\over \alpha+2}(\tau)\right), \end{equation} where $K_\nu$ and $I_\nu$ are modified Bessel functions \cite{abrom}. The coefficients $D$ and $E$ are determined from the boundary conditions $h(\tau_1)=1$ and $dh/d\tau=0$ at $\tau_1=\tau(1) = \sqrt{8\sigma}(A+C)^{\alpha+2\over 2}/|A|(\alpha+2)$. Solving for $D$ and $E$ and using standard identities for Bessel functions \cite{abrom} we find that at $\tau_0=\tau(0)=\sqrt{8\sigma}C^{\alpha+2\over 2}/|A|(\alpha+2)$ \begin{equation}\label{h} h(\tau_0) =\tau_0^{1\over \alpha+2}\tau_1^{\alpha+1\over \alpha+2} \left(I_{-{\alpha+1\over \alpha+2}}(\tau_1)K_{1\over \alpha +2}(\tau_0) +K_{-{\alpha+1\over \alpha+2}}(\tau_1)I_{1\over \alpha +2}(\tau_0)\right), \end{equation} and thus \begin{equation} G(\sigma) = \left[ \tau_0^{1\over \alpha+2}\tau_1^{\alpha+1\over \alpha+2} \left(I_{-{\alpha+1\over \alpha+2}}(\tau_1)K_{1\over \alpha +2}(\tau_0) +K_{-{\alpha+1\over \alpha+2}}(\tau_1)I_{1\over \alpha +2}(\tau_0)\right)\right]^{-{1\over 2}}.\label{gen} \end{equation} \section{Asymptotic analysis for the probability density function} The general result equation (\ref{gen}) simplifies in the case where $\tau_0=0$, {\em i.e.} when $C=0$, which is the case for FIT1 (linear) and FIT4 (MLE). In this case the probability density function of the estimator of the diffusion coefficient $p(x)$ has support on $[0,\infty)$. We start by analyzing the behavior of $p(x)$ at small $x$. We proceed by using the small argument expansion of $K_\nu$ for $\nu>0$: \begin{equation} K_\nu(z)\sim {1\over 2} \Gamma(\nu)({1\over 2}z)^{-\nu} \end{equation} to obtain the exact result \begin{equation}\label{G1} G(\sigma)=\left[\Gamma({1\over \alpha + 2}) \left({\sqrt{2\sigma A^\alpha}\over \alpha +2}\right)^{\alpha+1\over \alpha +2} I_{-{\alpha+1\over \alpha+2}} \left({\sqrt{8\sigma A^\alpha}\over \alpha +2}\right)\right]^{-{1\over 2}}.\label{geng} \end{equation} The moments of $X$ can then be extracted using the series expansion for modified Bessel functions \cite{abrom} which gives \begin{equation}\label{G2} G(\sigma)=\left[\Gamma({1\over \alpha + 2}) \sum_{k=0}^\infty {1\over k!} {\left({2\sigma A^\alpha\over (\alpha +2)^2}\right)^k\over \Gamma({1\over \alpha + 2}+k)}\right]^{-{1\over 2}}. \end{equation} Without loss of generality we set $A=1$ and find the first two moments of $X$ to be given by \begin{eqnarray} \langle X\rangle &=& {1\over \alpha+2} \\ \langle X^2 \rangle &=& {3\alpha +7\over (\alpha+2)^2(\alpha+3)} \end{eqnarray} and thus \begin{equation} \langle X^2\rangle_c = {2\over (\alpha+2)(\alpha+3)} \end{equation} In FIT1 and FIT4, a single estimator for the diffusion constant has the form \begin{equation}\label{Xalpha} X_\alpha \equiv (\alpha+2)X=(\alpha+2)\int_0^1 t^\alpha B_t^2dt, \end{equation} with $\alpha=1$ and $-1$, respectively, which gives \begin{equation} \langle X_\alpha^2\rangle_c = 2(1- {1 \over \alpha+3}), \end{equation} From this we see that the MLE estimate of the diffusion coefficient has a variance $\langle X_{-1}^2\rangle= 1$ where as the simple linear fit has a larger variance $\langle X_{1}^2\rangle= 3/2$. Of course these variances can be computed directly and the above analysis serves as a check on our formalism to compute the full probability density function. An interesting comparison can be made with the estimator $X_{ep}$ which uses just the final value of the mean squared displacement \begin{equation} X_{ep} =B_1^2, \end{equation} here we find the variance \begin{equation} \langle X^2_{ep}\rangle _c =2, \end{equation} which is clearly bigger than all the integral estimators above. Before embarking on inversion of the generating function $G(\sigma)$ to obtain the probability density function $p(x)$, a simple check of our results is to numerically compute $G(\sigma)$ from our simulation data. In Figure (\ref{fig:laplace}) are shown the Laplace transforms $G(\sigma)$ obtained from both Eq.(\ref{G1}) [or (\ref{G2})] and the numerical distributions $p(x)$, we see that the agreement is perfect. \begin{figure} \begin{center} \epsfig{figure=laplace.ps,width=2.6in,angle=-90} \end{center} \vspace{-0.0cm} \caption{Laplace transforms of the distributions of $X=a_L$ and $a_{MLE}$ (cases $\{A=3$, $\alpha=1\}$ and $\{A=1$, $\alpha=-1\}$, respectively). The solid lines are given by Eq.(\ref{G1}); the points represent the simulations results.} \label{fig:laplace} \end{figure} The behavior of $X$ at small values (when it is always positive) can be extracted by examining the characteristic function, or equivalently the Laplace transform of the probability density function $p(x)$ of $X$. Using the large $z$ asymptotic expansion \begin{equation} I_\nu(z)\simeq {1\over \sqrt{2\pi z}}\exp(z) \end{equation} and setting $A=1$, we find for large $\sigma$: \begin{equation} G(\sigma)\simeq (4\pi)^{1\over 4}\Gamma^{-{1\over 2}}({1\over \alpha + 2}) \left( {2\sigma\over (\alpha +2)^2}\right)^{-{\alpha\over 8(\alpha+2)}} \exp\left(-{\sqrt{2\sigma}\over (\alpha+2)}\right) \end{equation} The behavior of $p(x)$ at small $x$ can now be extracted by noticing that the integral \begin{equation} I= \int_0^\infty\exp(-\sigma x)\exp(-{d\over x})x^c \ dx \end{equation} is dominated by its value at small $x$ and thus can be evaluated by the saddle point method as \begin{equation} I\simeq \sqrt{\pi\over \sigma} \exp(-2\sqrt{\sigma d}) \left({d\over \sigma}\right)^{2c+1\over 4} \end{equation} from which we deduce that for small $x$ \begin{equation} p(x)\simeq \pi^{- {1\over 4}} \Gamma^{-{1\over 2}}({1\over \alpha + 2}) (\alpha+2)^{-{\alpha+4\over 2(\alpha+2)}}\ x^{-{5\alpha+12\over 4(\alpha+2)}} \exp\left(-{1\over 2 (\alpha+2)^2 x}\right). \end{equation} From this we obtain the probability density of $X=X_{\alpha}$ [Eq.(\ref{Xalpha})] at small $x$ to be: \begin{equation}\label{px} p_\alpha (x)\simeq \pi^{- {1\over 4}} \Gamma^{-{1\over 2}}({1\over \alpha + 2}) (\alpha+2)^{-{\alpha+4\over 4(\alpha+2)}}\ x^{-{5\alpha+12\over 4(\alpha+2)}} \exp\left(-{1\over 2 (\alpha+2)x}\right). \end{equation} The distribution exhibits an essential singularity at $x=0$, as expected from the general asymptotic result of Shi \cite{shi}. For the linear fit estimate ($\alpha=1$), Eq.(\ref{px}) gives \begin{equation}\label{pLan} p_1(x)\simeq c_1\ x^{-{17\over 12}}\exp\left(-{1\over 6x} \right) \end{equation} with \begin{equation} c_1=3^{-{5\over 12}}\pi^{- {1\over 4}}\Gamma({1\over 3})^{-{1\over 2}} \approx 0.29035..., \end{equation} and for the MLE ($\alpha=-1$) \begin{equation}\label{pMLEan} p_{-1}(x)\simeq c_{-1}\ x^{-{7\over 4}}\exp\left(-{1\over 2x} \right) \end{equation} with \begin{equation} c_{-1}=\pi^{- {1\over 4}}\approx 0.75112... \end{equation} The expressions above compare well with the simulation results at small $x$ (Figure (\ref{fig:distrib}-left), inset). The distributions (\ref{pLan}) and (\ref{pMLEan}) actually present a maximum at $x^*=2/17\approx 0.118$ and $x^{*}=2/7\approx 0.286$, respectively. Despite that the asymptotic results start to fail when $x$ becomes too large, these values are still in good agreement with the most probable values of Table 1. A more detailed comparison in the small $x$ regime is displayed in Figure (\ref{fig:scaling}), where $p(x)x^{\beta}$ obtained from the numerics is plotted as a function of $1/x$, with $\beta=17/12$ and $7/4$. The behaviors at large arguments are nearly indistinguishable from the exponential laws predicted by Eqs.(\ref{pLan}) and (\ref{pMLEan}). \begin{figure} \begin{center} \epsfig{figure=scaling.ps,width=2.6in,angle=-90} \end{center} \vspace{-0.0cm} \caption{Rescaled numerical distributions $p(x)x^{\beta}$ with $\beta=17/12$ (linear fit, black dots) and $\beta=7/4$ (MLE fit, diamonds) as a function of $1/x$. The solid lines are the analytical forms $c_1\exp(- {1 \over 6x})$ and $c_{-1}\exp(- {1\over 2x})$ from Eqs.(\ref{pLan}) and (\ref{pMLEan}), respectively.} \label{fig:scaling} \end{figure} In order to extract the behavior of the probability distribution for large $x$ we need to examine the singularities of the generating function $G(\sigma)$ for $\sigma < 0$, in this regime \begin{equation}\label{largeg0} G(\sigma)=\left[ \Gamma(\frac{1}{\alpha+2}) \left(\frac{\sqrt{2|\sigma|A^{\alpha}}}{\alpha+2}\right)^{\frac{\alpha+1}{\alpha+2}} J_{-\frac{\alpha+1}{\alpha+2}} \left(\frac{\sqrt{8|\sigma|A^{\alpha}}}{\alpha+2}\right) \right]^{-1/2}, \end{equation} from the identity $J_{\nu}(z) =\sum_{k=0}^{\infty}(-1)^{k}(z/2)^{2k+\nu}/[k!\Gamma(k+\nu+1)]$. This Bessel function of the first kind oscillates and has simple zeros, at these zeros $G$ diverges. Let us denote $u^*$ as the lowest positive zero of $J_{-\frac{\alpha+1}{\alpha+2}}(u)$. When $u\equiv\sqrt{8|\sigma|A^{\alpha}}/(\alpha+2)\rightarrow u^*$ from below, \begin{equation}\label{largeg1} \left[J_{-\frac{\alpha+1}{\alpha+2}}(u)\right]^{-1/2}\simeq \sqrt{ \frac{2|\sigma^*|}{u^*|J_{-\frac{\alpha+1}{\alpha+2}}^{\prime}(u^*)|} } (\sigma-\sigma^*)^{-1/2} \end{equation} where $\sigma\rightarrow\sigma^*=-u^{*2}(\alpha+2)^2/(8A^{\alpha})$ from above. We now note that \begin{equation} \int_0^\infty dx \ {\exp(-\omega x)\over \sqrt{x}} \exp(\sigma x) = \sqrt{ {\pi\over \omega-\sigma}}\label{largel1} \end{equation} for $\omega > \sigma$. Comparing Eqs.(\ref{largeg1}) and (\ref{largel1}), one deduces from (\ref{largeg0}) the large $x$ behavior: \begin{equation} p(x)\simeq \frac {2\left(\frac{u^*}{2}\right)^{-\frac{\alpha+1}{2(\alpha+2)}} |\sigma^*|^{\frac{1}{2}}} {\sqrt{u^*\Gamma(\frac{1}{\alpha+2}) |J_{-\frac{\alpha+1}{\alpha+2}}^{\prime}(u^*)|} } \ \frac {e^{-|\sigma^*|x}} {\sqrt{2\pi x}}. \label{bigx} \end{equation} For the linear fit ($\alpha=1$, $A=3$), one finds $u^*=1.2430...$ and \begin{equation}\label{largexlinear} p_{1}(x)\approx 1.1675 \frac{e^{-0.5794x}}{\sqrt{2\pi x}}, \end{equation} whereas for the MLE ($\alpha=-1$, $A=1$), $u^*=2.4048...$ and \begin{equation}\label{largexmle} p_{-1}(x)\approx 1.5212 \frac{e^{-0.7228x}}{\sqrt{2\pi x}}. \end{equation} These asymptotic expressions are compared with the numerical results in Figure (\ref{fig:largex}-left). \begin{figure} \begin{center} \epsfig{figure=figlargex.ps,width=2.in,angle=-90} \epsfig{figure=figlargexfit2.ps,width=2.in,angle=-90} \end{center} \vspace{-0.0cm} \caption{{\bf Left panel:} Rescaled numerical distributions $p(x)x^{1/2}$ for the linear (black dots) and MLE (diamonds) fits. The solid lines are the exponential laws from Eqs.(\ref{largexlinear}) and (\ref{largexmle}). {\bf Right panel:} Same quantity for the affine fit (FIT2). The solid lines are the asymptotic forms at large $x$ and large $-x$, see Eqs.(\ref{largexfit2}) and (\ref{largemxfit2}).} \label{fig:largex} \end{figure} The interpretation of this result is rather straight forward, if we consider a Gaussian random $Y$ variable of mean zero and variance $\gamma^2$ then the probability distribution is \begin{equation} p_Y(x) ={1\over \sqrt{2\pi \gamma^2}}\exp(-{x^2\over 2\gamma^2}). \end{equation} Now defining $Z=Y^2$ we find that the the probability density function of $Z$ is \begin{equation} p_Z(x) = {\exp(-{x\over 2 \gamma^2})\over \sqrt{2\pi\gamma^2 x}}, \end{equation} which has the same functional form as equation (\ref{bigx}). This means that for large values of $x$ the random variable $X$ has the same distribution as a squared Gaussian random variable. This is not surprising as the variable $X$ can be viewed as an infinite sum of Gaussian random variables. Note that the full probability density function for the end point estimator $X_{ep}$ is given by (as $\gamma^2=2$) \begin{equation} p_{ep}(x) = {\exp(-{0.25 x})\over \sqrt{4\pi x}}, \end{equation} and so the distribution of this simple estimator decays much more slowly that the two integral estimators discussed above. In the case of the affine fit, FIT2, both the estimators $a_A$ and $b_A$, defined in equations (\ref{f21}) and (\ref{f22}), can be negative as the respective functions $w$ change sign. The probability density function is thus two sided. When $\tau_0$ becomes imaginary in Eq.(\ref{h}), this solution must be modified by substituting $I_{1/3}(\tau_0)$ and $I_{-1/3}(\tau_0)$ by $-J_{1/3}(|\tau_0|)$ and $J_{-1/3}(|\tau_0|)$, respectively \cite{abrom}. In turn, when $\tau_1$ becomes imaginary, $I_{2/3}(\tau_1)$ and $I_{-2/3}(\tau_1)$ are replaced by $J_{2/3}(|\tau_1|)$ and $J_{-2/3}(|\tau_1|)$, respectively. For large $x>0$ the probability density function can be obtained from the closest zero of $h(\sigma)$ from zero in the negative direction, denoted by $\sigma^*_-$, and the analysis above goes through to give \begin{equation}\label{largexfit2} p(x)\approx A_-\frac {e^{-|\sigma^*_-|x}} {\sqrt{2\pi x}},\quad {\rm with\ } |\sigma^*_-|=0.4596...\ {\rm and}\ A_-=0.9239..., \end{equation} in the case $X=a_A$. For the variable $X=b_A$, one finds $|\sigma^*_-|=3.2229...$ and $A_-=1.4734...$ As $X$ can become negative we also have zeros of $h(\sigma)$ for positive values of $\sigma$; now if the first of these zeros from the origin is $\sigma^*_+$ then the same analysis as above implies, for $x<0$: \begin{equation}\label{largemxfit2} p(x)\approx A_+\frac {e^{\sigma^*_+x}}{\sqrt{2\pi |x|}},\quad {\rm with\ } \sigma^*_+=4.2439...\ {\rm and}\ A_+=0.8381..., \end{equation} in the case $X=a_A$. For $X=b_A$, one obtains $\sigma^*_+=2.4485...$\ and $A_+=1.1886...$ These asymptotic results are tested in Figure (\ref{fig:largex}-right) on the two sided distribution arising for both the coefficients $a_A$ and $b_A$, showing very good agreement. \section{Conclusion} We have shown that a general class of statistical estimators that can be used to extract diffusion constants from the squared displacement of single Brownian trajectories are in fact quadratic functionals of Brownian motion. Numerically we have seen that such estimators have a tendency to yield values which are typically lower than the correct average value. In addition we have seen that the statistics of the estimated diffusion constants from these trajectories resemble closely those obtained from fitting the time averaged squared displacement $\overline{\delta^2}_t$, defined in equation (\ref{defdelta}), despite the fact that the resulting trajectory appears much more regular than an unaveraged Brownian squared displacement, as demonstrated in Figure (\ref{fig:example}-right). An interesting and outstanding problem would be to carry out our analysis for estimators of type $\delta^2_t$. Such an extension is clearly desirable as it deals with a quantity more commonly used in single particle tracking experiments. However from a technical point of view the resulting path integrals, while being for quadratic functionals of Gaussian processes, are highly non-local in time and it is probable that their evaluation will require the introduction of new mathematical methods. Our final analysis was only limited by the problem of carrying out a full Laplace inversion of the generating function $G(\sigma)$ to obtain the full probability density function. However we point out that the generating function is actually easy to estimate from numerical data for the purpose of comparison with our analytical results, as demonstrated in Figure (\ref{fig:laplace}). In addition the generating function can be inverted analytically in certain asymptotic regimes. When the estimator is always positive, and consequently $p(x)=0$ for $x<0$, the behavior of $p(x)$ for small $x$ can be extracted. We find that it has an essential singularity at $x=0$ and a maximum value, this estimate of the maximum value is in good agreement for the most likely value of $x$ coming from the full probability density function. For positive estimators the large $x$ behavior of $p(x)$ turns out to be that of a squared Gaussian random variable, reflecting that fact that the estimator itself is an infinite sum of Gaussian random variables. This remains true when the estimator can have negative values, {\em i.e} when $w(t)$ can change sign. In this case the probability density function for $X$ is that of a Gaussian squared for large $x$ as is that of $-X$ for large negative $x$. Finally new methods are being introduced into single particle trajectory analysis to estimate diffusion constants and exponents associated with anomalous diffusion, for instance methods based on the mean maximal excursion \cite{mme}, and it would be interesting to examine the distributions associated with such estimators. \vskip 1 truecm \noindent{\bf Acknowledgements:} We would like to than Alain Comtet and Marc M\'ezard for useful discussions on the subject of this paper. DB would like to thank the Universit\'e de Toulouse (Paul Sabatier) for an invited Professor's position during which this work was initiated. DSD acknowledges support from the Institut Universitaire de France. \section*{References}
\section{Introduction} Recall that a subset $K$ in a metric space ${\spc{U}}$ is called \emph{weakly convex} if any two points $x,y\in K$ can be connected by a minimizing geodesic in $K$. Let ${\spc{U}}$ be a metric space and $K\subset {\spc{U}}$. A distance nonexpanding map $f\:{\spc{U}}\to K$ such that $f(x)=x$ for any $x\in K$ is called a \emph{short retraction to $K$}. If in addition a local Lipschitz constant of $f$ is strictly less than 1 at any point $x\notin K$, then we say that $f$ is a \emph{strictly short retraction from ${\spc{U}}$ to $K$}. \begin{thm}{Theorem}\label{thm:retraction:Phi} Let $\spc{U}$ be a complete length $\CAT(\kappa)$ space. Suppose $K$ is a weakly convex closed subset in $\spc{U}$ and there is $p\in \spc{U}$ such that $|p-x|\le \tfrac\pi2$ for any point $x\in K$. \begin{enumerate}[(a)] \item If $p\in K$ and $\kappa\le 1$, then there is a short retraction $\Phi\:\spc{U}\to K$. \item If $\kappa<1$, then there is a strictly short retraction $\Phi\:\spc{U}\to K$. \end{enumerate} \end{thm} This statement is a generalization of the following well known statement about $\CAT(0)$ spaces: \emph{If ${\spc{U}}$ is a complete length $\CAT(0)$ space and $K$ is a closed convex subset in ${\spc{U}}$, then the closest point projection ${\spc{U}}\to K$ is a short retraction. Moreover, if $\spc{U}$ is a $\CAT(\kappa)$ space for some $\kappa<0$, then the closest point projection is a strictly short retraction}. The theorem and a small trick imply the following: \begin{thm}{Corollary}\label{cor} Let $\spc{U}$ be a complete length $\CAT(\kappa)$ space. Denote by $\Delta$ the diagonal in $\spc{U}\times \spc{U}$; that is, $\Delta=\{\,(x,x)\in \spc{U}\times \spc{U}\,\}$. Suppose there is a point $p\in \spc{U}$ such that $|p-x|\le \tfrac\pi2$ for any point $x\in \spc{U}$. \begin{enumerate}[(a)] \item If $\kappa\le 1$, then there is a short retraction $\Psi\:\spc{U}\times \spc{U}\to \Delta$. \item If $\kappa<1$, then there is a strictly short retraction $\Psi\:\spc{U}\times \spc{U}\to \Delta$. \end{enumerate} \end{thm} It is well known that if $\spc{U}$ is a complete length $\CAT(0)$ space, then the midpoint map $\spc{U}\times \spc{U}\to \spc{U}$ is $\tfrac1{\sqrt{2}}$-Lipschitz and therefore it induces a short retraction $\spc{U}\times \spc{U}\to\Delta$. The corollary provides an analogous statement for $\CAT(1)$ spaces. \parbf{Motivation.} In \cite[(4.1)]{kendall}, Wilfrid Kendall observed that if $\spc{B}$ is a regular geodesic ball of radius $r<\tfrac\pi2$ in a manifold with sectional curvature at most 1, then, for an appropriate choice of constant $\lambda$, the function \[(x,y)\mapsto \frac{1+\lambda-\cos|x-y|_{\spc{B}}}{\cos|p-x|_{\spc{B}}\cdot \cos|p-y|_{\spc{B}}} \] has convex level sets in ${\spc{B}}\times {\spc{B}}$. He also shows the existence of a nonnegative convex function on ${\spc{B}}\times {\spc{B}}$ that vanishes only on the diagonal \cite[(4.2)]{kendall}. These observations became a useful tool to study the Dirichlet problem and its relatives; they allowed to extend a number of results from Hadamard manifolds to Riemannian manifolds of small size and more generally to $\CAT(1)$ spaces~\cite{yokota,BFHMSZ,fuglede,serbinowski,lytchak-stadler}. Our original goal was to make this tool transparent for geometers. Corollary~\ref{cor} can be considered as a more geometric version of this tool. While Kendall's condition is optimal for uniqueness and regularity questions, the existence statements can be derived from Theorem~\ref{thm:retraction:Phi} in a slightly greater generality, as we are going to explain now. We will need the following definition, introduced by Stefan Wenger \cite{Wenger-1comp}; for the definitions of ultrafilters and ultracompletions we refer to \cite{Wenger-1comp,guo-wenger,akp}. A metric space ${\spc{U}}$ is called \emph{$1$-complemented} if for some \emph{non-principal ultrafilter} $\omega$ there exists a short retraction of the ultracompletion ${\spc{U}}^{\omega}$ to~${\spc{U}}$. Examples of $1$-complemented spaces include all proper spaces, all CAT(0) spaces and all $L^p$ spaces for $1\leq p\leq \infty$ \cite[Proposition 2.1]{guo-wenger}. Recall that if ${\spc{U}}$ is $\CAT(\kappa)$, then so is~${\spc{U}}^\omega$. Applying these observations together with Theorem~\ref{thm:retraction:Phi}, we obtain \begin{thm}{Theorem}\label{thm:complemented} Let ${\spc{U}}$ be a complete length $\CAT(1)$ space. Assume there exists some $p\in {\spc{U}}$ such that $|p-x|\le \tfrac\pi2$ for any point $x\in {\spc{U}}$. Then ${\spc{U}}$ is $1$-complemented. \end{thm} Let us list a few existence results which follow from the theorem, assuming that the space ${\spc{U}}$ is as above: \begin{enumerate}[(a)] \item\label{dirichlet} The existence of a solution $u$ of Dirichlet problem on the minimization of energy in $W^{1,2} (\Omega, {\spc{U}})$ on any Lipschitz domain $\Omega$ in a Riemannian manifold with prescribed trace $tr(u)$; see \cite{KS} and \cite[Theorem 1.4]{guo-wenger}. \item\label{current} The existence of a minimal integral $k$-current filling any prescribed boundary in ${\spc{U}}$; see \cite{Ambrosio} and \cite[Theorem 3.3]{Wenger-1comp}. \item The existence of a conformally parametrized disc $u:D\to {\spc{U}}$ of minimal area for a given boundary curve $\gamma$, which is a Jordan curve of finite length in ${\spc{U}}$; see \cite{LWplateau} and \cite[Theorem 1.2]{guo-wenger}. \item\label{center} For any Radon measure $\mu$ on ${\spc{U}}$ there exists a center of mass $x\in {\spc{U}}$ for the measure~$\mu$ \cite{Sturm, yokota}. \end{enumerate} If in the theorem we assume strict inequality $|p-x|< \tfrac\pi2$, then the existence results are known in all the cases (\ref{dirichlet}--\ref{center}). Moreover, the uniqueness holds true under this stronger assumption in the cases (\ref{dirichlet}), (\ref{current}), and (\ref{center}); see \cite{yokota,serbinowski}. In our boundary case uniqueness definitely fails; for example geodesics between points in a round hemisphere are not unique. The uniqueness in each case can be shown using Corollary \ref{cor}. Indeed if there are different solutions of one of these problems, then their product in ${\spc{U}}\times {\spc{U}}$ does not lie in the diagonal. The latter contradicts the existence of the strictly short retraction $\Psi$ provided by Corollary \ref{cor}. \parbf{About the proofs.} We use a new tool which we call $r$-\emph{tractrix flow}, a special time-dependent gradient flow. It gives a family of maps $\phi_t$ for a given rectifiable curve $t\z\mapsto\gamma(t)$. The important properties of the tractrix flow are collected in Proposition~\ref{prop-def}. In particular, (1) if $\spc{U}$ is $\CAT(1)$ and $r\le \tfrac\pi2$, then $\phi_t$ is short for any $t$, and (2) if $r< \tfrac\pi2$, then the local Lipschitz constant of $\phi_t$ at $p$ is strictly less than 1 if $p\ne \phi_t(p)$. In the proof of Theorem~\ref{thm:retraction:Phi}, the tractrix flow is applied in a space obtained by gluing to $\spc{U}$ a spherical cone over $K$; this space is $\CAT(1)$ by Reshetnyak's gluing theorem. In Appendix~\ref{Another way} we indicate another way of proving Theorem~\ref{thm:retraction:Phi}. In the proof of Corollary~\ref{cor} the additional trick consists in identifying the product space $\spc{U}\z\times \spc{U}$ with a subset of the spherical join $\spc{U}\star \spc{U}$ and applying Theorem~\ref{thm:retraction:Phi} to the latter. The tricks in both proofs show that it is useful to consider singular spaces even in the case when the original space $\spc{U}$ is smooth; this is a powerful freedom of Alexandrov's world. More involved examples of such arguments are given by Dmitry Burago, Sergei Ferleger, and Alexey Kanonenko \cite{BFK}, Paul Creutz~\cite{creutz}, and Stephan Stadler~\cite{stadler}. \parbf{Acknowledgements.} We thank Christine Brenier, Nicola Gigli, and the anonymous referee for helpful comments. A. Lytchak was partially supported by DFG grant SPP 2026. A. Petrunin was partially supported by Simons foundation collaboration grant for mathematicians 584781. \section{Tractrix flow}\label{sec:Tractrix flow} For $\CAT(\kappa)$ spaces, we will follow the conventions in \cite{akp}. First let us describe the tractrix flow informally. Suppose that two points $p$ and $q$ in ${\spc{U}}$ are connected to each other by a thread of fixed length $r$. Imagine that the point $q$ follows the curve $\gamma$ and drags $p$ if the thread is tight; if the thread is not tight, then $p$ does not move. Then the trajectory of the point $p$ will be called $r$-\emph{tractrix of $p$ with respect to~$\gamma$}. The family of maps $\phi_t$ that send the initial position of $p$ to its position at the time $t$ will be called the $r$-\emph{tractrix flow} defined by $\gamma$. More formally, suppose $\gamma\:[a,b]\to \spc{U}$ is a 1-Lipschitz curve. An $r$-tractrix with respect to $\gamma$ is defined as a gradient curve for the time-dependent family of functions \[f_t=- \max\{r,\dist_{\gamma(t)}\};\] here $\dist_{x}$ denotes the distance function from the point $x$. We also assume that the initial point lies in $\bar B(\gamma(a),r)$. (We denote by $\bar B(x,r)$ and $B(x,r)$ the closed and open balls of radius $r$ centered at $x$.) The $r$-\emph{tractrix flow} with respect to $\gamma$ is defined as a family of maps \[\phi_t\:\bar B(\gamma(a),r)\z\to\bar B(\gamma(t),r)\] whose trajectory $t\mapsto \phi_t(p)$ is the $r$-tractrix starting at $p\in \bar B(\gamma(a),r)$. The following proposition includes the properties of the tractrix flow which will be used further in the paper. \begin{thm}{Proposition}\label{prop-def} Let $\gamma\:[a,b]\to \spc{U}$ be a 1-Lipschitz curve in a complete length $\CAT(\kappa)$ space $\spc{U}$ for some $\kappa\le 1$ and $r<\pi$. Set $\bar B_t=\bar B(\gamma(t),r)$. Then the $r$-tractrix flow $\phi_t\:\bar B_a\to\bar B_t$ is uniquely defined. Moreover \begin{enumerate}[(a)] \item \label{approx} For any $t$, the map $\phi_t$ is a limit of compositions $\theta_{t_n} \circ \dots\circ \theta_{t_0}$ for $\delta \to 0$, where $a=t_0<\dots<t_n=t$ is any partition of $[a,t]$ with $|t_i-t_{i-1}|<\delta$ and where $\theta_{t_i} : \bar B_{t_{i-1}}\to \bar B_{t_i}$ denotes the closest point projection. \item \label{sharafutdinov} If the family of balls $\bar B_t$ is decreasing in $t$ (that is, if $\bar B_{t_1}\supset \bar B_{t_2}$ for $t_1<t_2$), then $\phi_b$ is a strong deformation retraction from $\bar B_a$ to $\bar B_b$. \item \label{non-strict} If $r=\tfrac\pi2$, then $\phi_t$ is short for any $t$; \item\label{strict} If $r=\tfrac\pi2$ and $\kappa<1$, then there is a positive constant $\eps$ such that the local Lipschitz constant of $\phi_t$ at $p$ is bounded above by $\exp(-\eps\cdot\ell)$, where $\ell=|p-\phi_t(p)|_{\spc{U}}$. \end{enumerate} \end{thm} Historically the first relative of the tractrix flow is the so called \emph{Sharafutdinov's retraction} \cite{sharafutdinov} --- a family of maps associated to a continuous family of convex sets (in our case these sets are the balls $\bar B_t$). Second relative is the \emph{pursuer flow} introduced and studied by Stephanie Alexander, Richard Bishop, Robert Ghrist and Chanyoung Jun \cite{ABG,jun-thesis,jun,jun:grad}. Time-dependent gradient flows were studied by Chanyoung Jun \cite{jun-thesis,jun:grad}, by Lucas C. F. Ferreira and Julio C. Valencia-Guevara \cite{ferreira-valencia}, and by Alexander Mielke, Riccarda Rossi, and Giuseppe Savar\'{e} \cite{mielke-rossi-savare}. Unfortunately Proposition~\ref{prop-def} does not follow directly from the results in these papers; for this reason we provide in an appendix a short proof of the existence of gradient flows of Lipschitz time-dependent family of semiconcave functions in $\CAT(\kappa)$ spaces. \parit{Proof.} Consider $f_t=- \max\{r,\dist_{\gamma(t)}\}$ as a family of functions defined in $B(\gamma(t),r\z+\delta)$ for sufficiently small $\delta>0$. Note that the family $f_t$ is Lipschitz. By $\CAT(1)$-comparison, each $f_t$ is $\lambda$-concave for a fixed $\lambda$. Moreover if $r<\tfrac\pi2$, then $\lambda=0$ and if $r=\tfrac\pi2$, then $\lambda\to 0$ as $\delta\to 0$. Consider the map $\phi_t\:\alpha(a)\mapsto \alpha(t)$, where $\alpha$ is a $f_t$-gradient curve. By \ref{prop:time-dependent}, if $\phi_t(p)$ is defined, then it is unique. Consider the function $\ell(t)\z=|\phi_t(p)-\gamma(t)|$. By the definition of the flow, we have that $\ell'\le 0$ if $\ell> r$. It follows that $\phi_t$ is defined for all $t$ and maps $\bar B_a$ to $\bar B_t$. \parit{(\ref{approx}).} Given a partition $a=t_0<t_1<\dots<t_n=t$ with $|t_i -t_{i-1}|<\delta$, consider a locally constant approximation $\hat f_t$ of the family $f_t$ defined by $\hat f_t=f_{t_i}$ if $t_i\le t < t_{i+1}$. Denote by $\hat\phi_t$ the corresponding flow. Given $p\in \bar B_a$, set $p_i=\hat\phi_{t_i}(p)$. Observe that $p_{i}=\theta_{t_i}(p_{i-1})$ for each $i$. By the distance estimate (\ref{Distance estimate}) the flow $\hat\phi_t$ converges to $\phi_t$ as the partition gets finer and finer, hence the result. \parit{(\ref{sharafutdinov}).} It is sufficient to notice that $\phi_t(p)=p$ if $|p-\gamma(s)| \leq r$ for all $a\leq s \leq t$. \parit{(\ref{non-strict}).} Applying the distance estimate (\ref{Distance estimate}) for $s=0$, we get that \[|\phi_t(p)-\phi_t(q)|\le |p-q|\cdot e^{\lambda\cdot (t-a)}\] for any $p,q\in \bar B_a$, $t\ge a$. If $r\le \tfrac\pi2$, then the inequality holds for arbitrary $\lambda> 0$; hence (\ref{non-strict}) follows. \parit{(\ref{strict}).} The proof of the strict inequality follows directly from (\ref{approx}) and the following general consequence of the $\CAT(\kappa)$ comparison: There exists some $\eps>0$ such that the closest point projection $\theta:\bar B(w,\frac \pi 2 +\eps)\z\to\bar B(w,\frac \pi 2)$ in any $\CAT(\kappa)$ is strictly short and satisfies \[|\theta (p) -\theta (q)| \leq e^{-\eps\cdot |\theta (p) -p|}\cdot|p-q|.\] \qedsf \section{Proofs}\label{sec:proofs} Recall that spherical join $\spc{U}\star\spc{V}$ of two metric spaces $\spc{U}$ and $\spc{V}$ is defined as the unit sphere (equipped with the angle metric) in the product of Euclidean cones $\Cone \spc{U}\times \Cone\spc{V}$. If diameters of $\spc{U}$ and $\spc{V}$ do not exceed $\pi$, then $\spc{U}\star\spc{V}$ can be defined as a metric space that admits an onto map $\iota\:\spc{U}\times\spc{V}\times[0,\tfrac\pi2]\to \spc{U}\star\spc{V}$ such that \[ \begin{aligned} |\iota(u_1,v_1,t_1)&-\iota(u_2,v_2,t_2)|_{\spc{U}\star\spc{V}}= \\ &=\arccos[\sin t_1\cdot\sin t_2\cdot \cos|u_1-u_2|_{\spc{U}}+\cos t_1\cdot \cos t_2\cdot \cos|v_1-v_2|_{\spc{V}}]. \end{aligned} \eqlbl{join-formula} \] Recall that the join of two $\CAT(1)$ spaces is $\CAT(1)$ \cite[Corollary 3.14]{bridson-haefliger}. \parit{Proof of \ref{thm:retraction:Phi}.} Consider the join of $K$ with a one-point space, $\spc{J}=K\star \{s\}$. Since $\spc{J}$ is a $\CAT(1)$ space, by Reshetnyak's gluing theorem \cite[8.9.1]{akp}, the space $\spc{W}$ glued from ${\spc{U}}$ and $\spc{J}$ along $K$ is a $\CAT(1)$ space; moreover ${\spc{U}}$ and $\spc{J}$ are convex subsets in $\spc{W}$. \begin{wrapfigure}{o}{50 mm} \vskip-0mm \centering \includegraphics{mppics/pic-2} \end{wrapfigure} Let $\gamma$ be the geodesic in $\spc{W}$ from $p$ to the pole $s$ of $\spc{J}$. Set $\bar B_t=\bar B(\gamma(t),\tfrac\pi2)_{\spc{W}}$, then \begin{itemize} \item $\bar B_0=\bar B(p,\tfrac\pi2)_{\spc{U}}\cup \spc{J}$, \item $\bar B_{\frac\pi2}=\spc{J}$, in particular $\bar B_{\frac\pi2}\cap \spc{U}=K$, \item the family $\bar B_t$ is decreasing in $t$. \end{itemize} To see the last statement, note that $\spc{J}\subset \bar B_0$. Further, by definition of a spherical join we have $|x-p|_{\spc{W}} \le |x\z-\gamma (s)|_{\spc{W}}$ for any $x\in K$. By construction of the metric on $\spc{W}$ the same inequality holds for any $x\in \spc{U}$, Therefore $\bar B_s\subset \bar B_0$. Finally, by $\CAT(1)$ comparison, $\bar B_t\supset \bar B_s\cap \bar B_0=\bar B_s$ for any $s\ge t \ge 0$. According to \ref{prop-def}(\ref{sharafutdinov}), the $\tfrac\pi2$-tractrix flow $\phi_t$ is a strong deformation retraction of $\bar B_0$ to $\bar B_{\frac\pi2}$. By \ref{prop-def}(\ref{non-strict}) $\phi_{\frac\pi2}$ is a short. If $\kappa<1$, then by \ref{prop-def}(\ref{strict}), $\phi_{\frac\pi2}$ is a strictly short retraction. Since ${\spc{U}}$ is $\CAT(1)$, given a point $x\in B(p,\pi)_{\spc{U}}$ there is unique geodesic $\gamma_x$ parametrized by its length from $p$ to $x$. By $\CAT(1)$ comparison, the map \[\Theta(x)= \begin{cases} p&\text{if\ }|p-x|_{\spc{U}}\ge \pi, \\ \gamma_x(\pi-|p-x|_{\spc{U}})&\text{if\ }|p-x|_{\spc{U}}< \pi. \end{cases} \] is a short retraction of ${\spc{U}}$ to $\bar B(p,\tfrac\pi2)_{\spc{U}}=\bar B_0\cap \spc{U}$. Moreover $\Theta$ is strictly short retraction if $\kappa<1$. Therefore the composition $\Phi=\phi_{\frac\pi2}\circ\Theta$ induces a short retraction of ${\spc{U}}$ to $K$ which is strictly short if $\kappa<1$. Finally, we need to take care of the case $\kappa<1$ and $p\notin K$. Denote by $\bar p\in K$ the closest point to $p$; by $\CAT(\kappa)$ comparison it exists and unique. Note that $|\bar p-x|_{\spc{U}}< |p-x|_{\spc{U}}$ for any $x\in K$; therefore $K\subset B(\bar p,\tfrac\pi2)$. It remains to apply the construction above with $\bar p$ instead $p$. \qeds \parit{Proof of \ref{cor}.} Consider the spherical join $\spc{U}\star\spc{U}$ and the map $\iota$ described at the beginning of the section. Note that \ref{join-formula} implies that the map $(u,v)\mapsto \iota(u,v,\tfrac\pi4)$ induces a length preserving map \[\Theta\:{\tfrac1{\sqrt{2}}}\cdot(\spc{U}\times\spc{U})\to\spc{U}\star\spc{U}.\] In particular, $\Theta$ is short. Note that the diagonal $\tfrac1{\sqrt{2}}\cdot\Delta$ is a convex set in $\tfrac1{\sqrt{2}}\cdot(\spc{U}\times\spc{U})$. Moreover \ref{join-formula} implies that the restriction of $\Theta$ to $\tfrac1{\sqrt{2}}\cdot\Delta$ is distance preserving. In particular, the image $K=\Theta(\tfrac1{\sqrt{2}}\cdot\Delta)$ is a weakly convex set in $\spc{U}\star\spc{U}$. Further note that $|q-y|_{\spc{U}\star\spc{U}}\le \tfrac\pi2$ for any $y\in \spc{U}\star\spc{U}$ and $q=\Theta(p,p)$. Applying \ref{thm:retraction:Phi}, we get a short retraction $\Phi\:\spc{U}\star\spc{U}\to K$. Since $\Theta$ is short, it induces the needed short retraction $\Psi\:\spc{U}\times\spc{U}\to \Delta$. Finally, by \ref{thm:retraction:Phi}, if $\kappa<1$, then $\Phi$ is a strictly short retraction and therefore so is $\Psi$. \qeds
\section{Introduction} This article is about efficiently lifting algebraic curves over finite fields to characteristic zero, in a genus and gonality preserving way, with an application to $p$-adic point counting. Throughout, our curves are always understood to be geometrically irreducible, but not necessarily non-singular and/or complete. By the genus of a curve we mean its geometric genus, unless otherwise stated. As for the gonality of a curve over a field $k$, we make a distinction between two notions: by its \emph{$k$-gonality} we mean the minimal degree of a non-constant $k$-rational map to the projective line, while by its \emph{geometric gonality} we mean the $\bar{k}$-gonality, where $\bar{k}$ denotes an algebraic closure of $k$. We also make a notational distinction between projective, affine or toric (= affine minus coordinate hyperplanes) $n$-space in characteristic zero, in which case we write $\PPK^n, \AAK^n, \TTK^n$, and their finite characteristic counterparts, where we opt for $\PPq^n, \AAq^n, \TTq^n$. Apart from that we avoid reference to the base field, which should always be clear from the context. Similarly we write $\QQ$ for the field of rational numbers and $\FF_q$ for the finite field with $q$ elements, where $q$ is a power of a prime number $p$. For each such $q$ we fix a degree $\log_pq$ extension $K \supset \QQ$ in which $p$ is inert, and let $\mathcal{O}_K$ denote its ring of integers. We then identify $\FF_q$ with the residue field $\mathcal{O}_K / (p)$. Our lifting problem is as follows: \begin{problem} \label{liftingproblem} Given a curve $\overline{C}$ over $\FF_q$, find an efficient algorithmic way of producing a polynomial $f \in \mathcal{O}_K[x,y]$ such that \begin{enumerate} \item[(i)] its reduction mod $p$ defines a curve that is birationally equivalent to $\overline{C}$, \item[(ii)] the curve $C \subset \AAK^2$ it defines has the same genus as $\overline{C}$, \item[(iii)] its degree in $y$ equals the $\FF_q$-gonality of $\overline{C}$. \end{enumerate} \end{problem} Note that these conditions imply that the $K$-gonality of $C$ equals the $\FF_q$-gonality of $\overline{C}$, because the gonality cannot increase under reduction mod $p$; see e.g.\ \cite[Thm.\,2.5]{derickx}. We are unaware of whether an $f$ satisfying (i-iii) exists in general. Grothendieck's existence theorem~\cite{illusie} implies that in theory one can achieve (i) and (ii) over the ring of integers $\ZZ_q$ of the $p$-adic completion $\QQ_q$ of $K$, but, firstly, it is not clear that we can always take $f$ to be defined over $\mathcal{O}_K$ and, secondly, we do not know whether it is always possible to incorporate (iii), let alone in an effective way. To give a concrete open case, we did not succeed in dealing with Problem~\ref{liftingproblem} for curves of genus four having $\FF_q$-gonality five, which can only exist if $q \leq 7$. (However, as we will see, among all curves of genus at most five, the only cases that we cannot handle are pathological examples of the foregoing kind.) We are intentionally vague about what it means to be \emph{given} a curve $\overline{C}$ over $\FF_q$. It could mean that we are considering the affine plane curve defined by a given absolutely irreducible polynomial $\overline{f} \in \FF_q[x,y]$. Or it could mean that we are considering the affine/projective curve defined by a given more general system of equations over $\FF_q$. In all cases we will ignore the cost of computing the genus $g$ of $\overline{C}$. Moreover, in case $g = 0$ we assume that it is easy to realize $\overline{C}$ as a plane conic (using the anticanonical embedding) and if $g = 1$ we ignore the cost of finding a plane Weierstrass model. By the Hasse-Weil bound every genus one curve over $\FF_q$ is elliptic, so this is indeed possible. If $g \geq 2$ then we assume that one can easily decide whether $\overline{C}$ is hyperelliptic or not (note that over finite fields, curves are hyperelliptic iff they are geometrically hyperelliptic, so there is no ambiguity here). If it is then we suppose that it is easy to find a generalized Weierstrass model. If not then it is assumed that one can effectively compute a canonical embedding \[ \kappa : \overline{C} \hookrightarrow \PPq^{g-1} \] along with a minimal set of generators for the ideal of its image. The latter will usually be our starting point. Most of the foregoing tasks are tantamount to computing certain Riemann-Roch spaces. There is extensive literature on this functionality, which has been implemented in several computer algebra packages, such as Magma~\cite{magma} and Macaulay2~\cite{macaulay}. The idea is then to use the output polynomial $f$ as input to a recent algorithm due to the second author~\cite{tuitman1,tuitman2} for computing the Hasse-Weil zeta function of $\overline{C}$. This algorithm uses $p$-adic cohomology, which it represents through the map $\pi : C \rightarrow \PPK^1 : (x,y) \mapsto x$. The algorithm only works if $C$ and $\pi$ have appropriate reduction modulo $p$, in a rather subtle sense for the precise description of which we refer to \cite[Ass.\,1]{tuitman2}. This condition is needed to be able to apply a comparison theorem between the (relative) $p$-adic cohomology of $\overline{C}$ and the (relative) de Rham cohomology of $C \otimes \QQ_q$, which is where the actual computations are done. For such a theorem to hold, by dimension arguments it is necessary that $C$ and $\overline{C}$ have the same genus, whence our condition (ii). This may be insufficient, in which case $f$ will be rejected, but for $p > 2$ our experiments show that this is rarely a concern as soon as $q$ is sufficiently large. Moreover, in many cases below, our construction leaves enough freedom to retry in the event of a failure. The algorithm from~\cite{tuitman1,tuitman2} has a running time that is sextic in $\deg \pi$, which equals the degree in $y$ of $f$, so it is important to keep this value within reason. Because the $\FF_q$-gonality of $\overline{C}$ is an innate lower bound, it is natural to try to meet this value, whence our condition (iii). At the benefit of other parameters affecting the complexity, one could imagine it being useful to allow input polynomials whose degree in $y$ exceeds the $\FF_q$-gonality of $\overline{C}$, but in all cases that we studied the best performance results were indeed obtained using a gonality-preserving lift. At the same time, looking for such a lift is a theoretically neat problem. \begin{remark} For the purpose of point counting, it is natural to wonder why we lift to $\mathcal{O}_K$, and not to the ring $\ZZ_q$, which is a priori easier. In fact, most computations in the algorithm from~\cite{tuitman1,tuitman2} are carried out to some finite $p$-adic precision~$N$, so it would even be sufficient to lift to $\mathcal{O}_K/(p^N) = \ZZ_q/(p^N)$. A first reason for lifting to $\mathcal{O}_K$ is simply that this turns out to be possible in the cases that we studied, without additional difficulties. A second more practical reason is that at the start of the algorithm from~\cite{tuitman1,tuitman2} some integral bases have to be computed in the function field of the curve. Over a number field $K$ this is standard and implemented in Magma, but to finite $p$-adic precision it is not clear how to do this, and in particular no implementation is available. Therefore, the integral bases are currently computed to exact precision, and we need $f$ to be defined over $\mathcal{O}_K$. \end{remark} \paragraph*{Contributions} As explained in Section~\ref{section_preliminary} the cases where $\overline{C}$ is rational, elliptic or hyperelliptic are straightforward. In this article we give a recipe for tackling Problem~\ref{liftingproblem} in the case of curves of $\FF_q$-gonality $3$ and $4$. Because of their practical relevance, our focus lies on curves having genus at most five, which is large enough for the main trigonal and tetragonal phenomena to be present. The details can be found in Section~\ref{section_lowgenus}; more precisely in Sections~\ref{section_genus3},~\ref{section_genus4} and~\ref{section_genus5} we attack Problem~\ref{liftingproblem} for curves of genus three, four and five, respectively, where we restrict ourselves to finite fields $\FF_q$ having odd characteristic. Each of these sections is organized in a stand-alone way, as follows: \begin{itemize} \item In a first part we classify curves by their $\FF_q$-gonality $\gamma$ and solve Problem~\ref{liftingproblem} in its basic version (except for some pathological cases such as pentagonal curves in genus four or hexagonal curves in genus five, which are irrelevant for point counting because these can only exist over extremely small fields). If the reader is interested in such a basic solution only, he/she can skip the other parts, which are more technical. \item Next, in an optimization part we take into account the fact that the actual input to the algorithm from~\cite{tuitman1,tuitman2} must be monic when considered as a polynomial in $y$. This is easily achieved: if we write \[ f = f_0(x)y^\gamma + f_1(x)y^{\gamma - 1} + \dots + f_{\gamma - 1}(x)y + f_\gamma(x), \] then the birational transformation $y \leftarrow y / f_0(x)$ gives \begin{equation} \label{mademonic} y^\gamma + f_1(x)y^{\gamma - 1} + \dots + f_{\gamma - 1}(x)f_0(x)^{\gamma - 2}y + f_\gamma(x)f_0(x)^{\gamma - 1}, \end{equation} which still satisfies (i), (ii) and (iii). But one sees that the degree in $x$ inflates, and this affects the running time. We discuss how our basic solution to Problem~\ref{liftingproblem} can be enhanced such that (\ref{mademonic}) becomes a more compact expression. \item We have implemented the algorithms from this paper in the computer algebra system Magma. The resulting package is called \verb{goodmodels{ and can be found at the webpage \url{http://perswww.kuleuven.be/jan_tuitman}. In a third part we report on this implementation and on how it performs in composition with the algorithm from~\cite{tuitman1,tuitman2} for computing Hasse-Weil zeta functions. We give concrete runtimes, memory usage and failure rates, but avoid a detailed complexity analysis, because in any case the lifting step is heavily dominated by the point counting step. All computations were carried out with Magma v2.22 on a single Intel Core i7-3770 CPU running at 3.40 GHz. The code used to generate the tables with running times, memory usage and failure rates can be found in the subdirectory \verb{./profiling{ of \verb{goodmodels{. \end{itemize} As we will see, the case of trigonal curves of genus five provides a natural transition to the study of general curves of $\FF_q$-gonality $3$ and $4$. These are discussed in Section~\ref{section_lowgonality}, albeit in a more sketchy way. \paragraph*{Consequences} The main consequences of our work are that \begin{itemize} \item computing Hasse-Weil zeta functions using $p$-adic cohomology has now become practical on virtually all curves of genus at most five over finite fields $\FF_q$ of (small) odd characteristic, \item the same conclusion for curves of $\FF_q$-gonality at most four looms around the corner, even though some hurdles remain, as explained in Section~\ref{section_lowgonality}, \item we have a better understanding of which $\FF_q$-gonalities can occur for curves of genus at most five, see the end of Section~\ref{section_firstfacts} for a summarizing table. \end{itemize} We stress that the general genus five curve, let alone the general tetragonal curve of any given genus, cannot be tackled using any of the previous Kedlaya-style point counting algorithms, that were designed to deal with elliptic curves~\cite{satoh}, hyperelliptic curves~\cite{denefvercauteren,harrison,harveylarger,hubdeform,kedlaya}, superelliptic curves~\cite{gaudrygurel,minzlaff}, $C_{ab}$ curves~\cite{CHV,DVCab,walker} and nondegenerate curves~\cite{CDV,tuitmanthesis}, in increasing order of generality. We refer to~\cite{CV} for a discussion of which classes of curves do admit a nondegenerate model. \paragraph*{A reference problem ($\dagger$)} At sporadic places in this article, we refer to a paper that develops its theory over $\CC$ only, while in fact we need it over other fields, such as $\overline{\FF}_q$. This concern mainly applies to the theory of genus five curves due to Arbarello, Cornalba, Griffiths and Harris~\cite[VI.\S4.F]{cornalba}. We are convinced that most of the time this is not an issue (the more because we rule out even characteristic) but we did not sift every one of these references to the bottom to double-check this: we content ourselves with the fact that things work well in practice. In our concluding Section~\ref{section_lowgonality} on trigonal and tetragonal curves, the field characteristic becomes a more serious issue, for instance in the Lie algebra method developed by de Graaf, Harrison, P\'ilnikov\'a and Schicho~\cite{GHPS}. More comments on this will be given there. Each time we cite a $\CC$-only (or characteristic zero only) reference whose statement(s) we carry over to finite characteristic without having verified the details, we will indicate this using the dagger symbol $\dagger$. \paragraph*{Acknowledgements} We would like to thank Arnaud Beauville, Tom De Medts, Jeroen Demeyer, Steve Donnelly and Josef Schicho for answering several of our questions. A large part of this paper was prepared while the first author was affiliated with the University of Ghent. The second author is a postdoctoral research fellow of the Research Foundation Flanders (FWO). Further support for this research was received from the project G093913N of the Research Foundation Flanders (FWO) and from the European Commission through the European Research Council under the FP7/2007-2013 programme with ERC Grant Agreement 615722 MOTMELSUM. \section{Background} \label{section_preliminary} \subsection{First facts on the gonality} \label{section_firstfacts} Let $k$ be a field and let $C$ be a curve over $k$. The geometric gonality $\gamma_\text{geom}$ of $C$ is a classical invariant. It is $1$ if and only if the genus of $C$ equals $g = 0$, while for curves of genus $g \geq 1$, by Brill-Noether theory $\gamma_\text{geom}$ lies in the range \[ 2, \dots, \lceil g / 2 \rceil + 1. \] For a generic curve the upper bound $\lceil g / 2 \rceil + 1$ is met~\cite{CDPR}, but in fact each of the foregoing values can occur: inside the moduli space of curves of genus $g \geq 2$ the corresponding locus has dimension $\min \{ 2g - 5 + 2\gamma_\text{geom}, 3g - 3 \}$; see~\cite[\S8]{arbarello}${}^\dagger$. From a practical point of view, determining the geometric gonality of a given curve is usually a non-trivial computational task, although in theory it can be computed using so-called scrollar syzygies~\cite{weimann}. In the arithmetic (= non-geometric) case the gonality has seen much less study, even for classical fields such as the reals~\cite{coppensmartens}. Of course $\gamma_\text{geom}$ is always less than or equal to the $k$-gonality $\gamma$, but the inequality may be strict. In particular the Brill-Noether upper bound $\lceil g / 2 \rceil + 1$ is no longer valid. For curves of genus $g = 1$ over certain fields $\gamma$ can even be arbitrarily large~\cite{clark}. As for the other genera, using the canonical or anticanonical linear system one finds \begin{itemize} \item if $g = 0$ then $\gamma \leq 2$, \item if $g \geq 2$ then $\gamma \leq 2g-2$. \end{itemize} These bounds can be met. We refer to~\cite[Prop.\,1.1]{poonengonality} and the references therein for precise statements, along with some additional first facts. If $k = K$ is a number field then the notion of $K$-gonality has enjoyed more attention, both from a computational~\cite{derickx,derickxvanhoeij} and a theoretical~\cite{poonengonality} point of view, especially in the case where $C$ is a modular curve. This is due to potential applications towards effective versions of the uniform boundedness conjecture; see~\cite{sutherland} for an overview. In the non-modular case not much literature seems available, but our rash guess would be that almost all (in any honest sense) curves of genus $g \geq 2$ over $K$ meet the upper bound $\gamma \leq 2g - 2$. This is distantly supported by the Franchetta conjecture; see again~\cite[Prop.\,1.1]{poonengonality} and the references therein for a more extended discussion. Over finite fields $k = \FF_q$ the notion has attracted the attention of coding theorists in the context of Goppa codes~\cite{tsfasman}. They proved the following result: \begin{lemma} If the $\overline{C}$ is a curve over a finite field $\FF_q$ then its $\FF_q$-gonality is at most $g + 1$. Moreover, if equality holds then $g \leq 10$ and $q \leq 31$. \end{lemma} \begin{proof} See \cite[\S4.2]{tsfasman}. \end{proof} In \cite[\S4.2]{tsfasman} it is stated as an open problem to find tighter bounds for the $\FF_q$-gonality. In fact we expect the sharpest possible upper bound to be $\lceil g/2 \rceil + 1 + \varepsilon$ for some small $\varepsilon$; maybe $\varepsilon \leq 1$ is sufficient as soon as $q$ is large enough. A byproduct of this paper is a better understanding of which $\FF_q$-gonalities can occur for curves of genus at most five, in the cases where $q$ is odd (the cases where $q$ is even should be analyzable in a similar way). The following table summarizes this. \begin{center} \small \begin{tabular}{c|c|c|c|c} $g$ & Brill-Noether & possible $\FF_q$-gonalities & possible $\FF_q$-gonalities & $B$ \\ & upper bound & (union over all odd $q$) & (for a given odd $q > B$) & \\ \hline $0$ & $1$ & $1$ & 1 & 1 \\ $1$ & $2$ & $2$ & 2 & 1 \\ $2$ & $2$ & $2$ & 2 & 1 \\ $3$ & $3$ & $2,3,4$ & $2,3$ & $29$\\ $4$ & $3$ & $2,3,4,5$ & $2,3,4$ & $7$\\ $5$ & $4$ & $2,3,4,5,6^?$ & $2,3,4,5$ & $3$\\ \end{tabular} \end{center} For background we refer to Section~\ref{section_preliminary_comments} (for $g \leq 2$), Lemma~\ref{genus3gonality} (for $g=3$), Lemma~\ref{genus4gonality} (for $g=4$), and Lemma~\ref{genus5gonality}, Remark~\ref{remarkjeroen} and Remark~\ref{remarkgon6} (for $g=5$). The question mark indicates that over $\FF_3$ there might exist curves of genus $g=5$ having $\FF_3$-gonality $6$, but there also might not exist such curves, see Remark~\ref{remarkgon6}. \subsection{Baker's bound} \label{section_bakersbound} Throughout a large part of this paper we will use the convenient language of Newton polygons. Let \hfill \phantom{x} \begin{wrapfigure}{r}{3.1cm} \hfill \includegraphics[width=3cm]{newt_polygon.pdf} \end{wrapfigure} \[ f = \sum_{(i,j) \in \ZZ_{\geq 0}^2} c_{i,j} x^iy^j \in k[x, y] \] be an irreducible polynomial over a field $k$. Then its Newton polygon $ \Delta(f)$ is defined as $\text{conv} \left\{ \, \left. (i,j) \in \ZZ_{\geq 0}^2 \, \right| \, c_{i,j} \neq 0 \, \right\} \subset \RR^2 $. Note that $\Delta(f)$ lies in the first quadrant and meets the coordinate axes in at least one point each, by the irreducibility of $f$. Let $C$ be the affine curve that is cut out by $f$. Then one has the following bounds on the genus and the gonality of $C$, purely in terms of the combinatorics of $\Delta(f)$. \paragraph*{Genus} The genus of $C$ is at most \emph{the number of points in the interior} of $\Delta(f)$ having integer coordinates: this is Baker's theorem. See~\cite[Thm.\,2.4]{beelen} for an elementary proof and~\cite[\S10.5]{coxlittleschenck} for a more conceptual version (using adjunction theory on toric surfaces). If one fixes the Newton polygon then Baker's bound on the genus is generically attained, i.e.\ meeting the bound is a non-empty Zariski-open condition; this result is essentially due to Khovanskii \cite{khovanskii}. An explicit sufficient generic condition is that $f$ is nondegenerate with respect to its Newton polygon~\cite[Prop.\,2.3, Cor.\,2.8]{CDV}. \paragraph*{Gonality} The $k$-gonality is at most the \emph{lattice width} $\lwidth(\Delta(f))$ of $\Delta(f)$. By definition, the lattice width is the minimal height $d$ of a horizontal strip \[ \left\{ \left. \, (a,b) \in \RR^2 \, \right| \, 0 \leq b \leq d \, \right\} \] inside which $\Delta(f)$ can be mapped using a unimodular transformation, i.e.\ an affine transformation of $\RR^2$ with linear part in $\GL_2(\ZZ)$ and translation part in $\ZZ^2$. \begin{center} \includegraphics[width=9cm]{unimodulartransf.pdf} \end{center} This is discussed in \cite[\S2]{caco}, but briefly the argument goes as follows. By applying the same transformation to the exponents, which is a $k$-rational birational change of variables, our polynomial $f$ can be transformed along with its Newton polygon. When orienting $f$ in this way one obtains $\deg_y f = \lwidth(\Delta(f))$, and the gonality bound follows by considering the $k$-rational map $(x,y) \mapsto x$. If a unimodular transformation can be used to transform $\Delta(f)$ into \begin{center} \polfig{2Upsilon.pdf}{2.4}{2.4}{$2 \Upsilon$} \begin{minipage}[b]{1cm} \begin{center} or \end{center} \vspace{0.65cm} \end{minipage} \polfig{dSigma.pdf}{2.4}{2.4}{$d \Sigma$} \end{center} for $d \geq 2$, then the \emph{geometric} gonality enjoys the sharper bound $\lwidth(\Delta(f)) - 1$ (amounting to $3$ resp.\ $d-1$); see \cite[Thm.\,3]{caco}. If one fixes the Newton polygon then the sharpest applicable foregoing upper bound on the geometric gonality, i.e.\ \begin{itemize} \item $\lwidth(\Delta(f)) - 1$ in the exceptional cases $2\Upsilon$, $d\Sigma$ ($d \geq 2$), \item $\lwidth(\Delta(f))$ in the non-exceptional cases, \end{itemize} is generically met, and again nondegeneracy is a sufficient condition~\cite[Cor.\,6.2]{linearpencils}. In fact, the slightly weaker condition of meeting Baker's genus bound is already sufficient \cite[\S4]{linearpencils}. \begin{remark} The results from~\cite{linearpencils} are presented in characteristic zero only, but~\cite[Cor.\,6.2]{linearpencils} holds in finite characteristic too, as can be seen as follows. Assume for simplicity that $\Delta(f)$ is not of the form $2\Upsilon$ or $d\Sigma$ for some $d \geq 2$, these cases are easy to deal with separately. Suppose that $C$ meets Baker's bound but that the gonality of $C$ is strictly less than $\lwidth(\Delta(f))$, say realized by a map $\pi : C \rightarrow \PPq^1$. We split this map in the usual way into a purely inseparable and a separable part \[ C \stackrel{F_q}{\longrightarrow} C^{F_q} \stackrel{\pi_s}{\longrightarrow} \PPq^1, \] where $F_q$ denotes an appropriate Frobenius power and $C^{F_q}$ is the curve defined by $f^{F_q}$, the polynomial obtained by applying $F_q$ to each coefficient of $f$. Note that $\Delta(f) = \Delta(f^{F_q})$, so one sees that $C^{F_q}$ also meets Baker's bound because Frobenius preserves the genus~\cite[Prop.\,IV.2.5]{hartshorne}. Clearly $\deg \pi_s < \lwidth(\Delta(f^{F_q}))$. Now the crucial ingredient in the proof of~\cite[Cor.\,6.2]{linearpencils} is a theorem due to Serrano on the possibility of extending morphisms from curves to ambient surfaces, which assumes $\chr k = 0$. However as Serrano points out~\cite[Rmk.\,3.12]{serrano} his theorem also holds in finite characteristic, provided that the morphism is separable, the ambient surface $S$ is rational, and $h^0(\mathcal{O}_S(C))$ is large enough compared to the degree of the morphism to be extended. The reader can verify that these conditions are satisfied when applying the proof of~\cite[Thm.\,6.1]{linearpencils} to $\pi_s$, leading to the conclusion that it is necessarily of the form $(x,y) \mapsto x^ay^b$ for some pair of coprime integers $a,b$. This contradicts that $\deg \pi_s < \lwidth(\Delta(f^{F_q}))$. \end{remark} Summing up in the non-geometric case, if we are not in the exceptional cases $2 \Upsilon, d\Sigma$ ($d \geq 2$) then meeting Baker's bound is sufficient for the $k$-gonality to equal $\lwidth(\Delta(f))$. In the exceptional cases the $k$-gonality is either $\lwidth(\Delta(f))$ or $\lwidth(\Delta(f)) - 1$.\\ \noindent This yields a large class of defining polynomials $\overline{f} \in \FF_q[x,y]$ for which finding an $f \in \mathcal{O}_K[x,y]$ satisfying (i), (ii) and (iii) is easy. Indeed, by semi-continuity the genus cannot increase under reduction modulo $p$. Therefore if $\overline{f}$ attains Baker's upper bound on the genus, then it suffices to pick any $f \in \mathcal{O}_K[x,y]$ that reduces to $\overline{f}$ mod $p$, in such a way that $\Delta(f) = \Delta(\overline{f})$: the corresponding curve $C / K$ necessarily attains Baker's upper bound, too. If moreover we are not in the exceptional cases $2\Upsilon$ and $d\Sigma$ ($d \geq 2$), then from the foregoing discussion we know that both the $\FF_q$-gonality of $\overline{C}$ and the $K$-gonality of $C$ are equal to $\lwidth(\Delta(\overline{f})) = \lwidth(\Delta(f))$. A unimodular transformation then ensures that $\deg_y f = \lwidth(\Delta(f))$ as desired; such a transformation is computationally easy to find~\cite{feschet}. It is therefore justifiable to say that conditions (i), (ii) and (iii) are easy to deal with for almost all polynomials $\overline{f} \in \FF_q[x,y]$. But be cautious: this does not mean that almost all \emph{curves} $\overline{C} / \FF_q$ are defined by such a polynomial. In terms of moduli, the locus of curves for which this is true has dimension $2g+1$, except if $g=7$ where it is $16$; see \cite[Thm.\,12.1]{CV}. Recall that the moduli space of curves of genus $g$ has dimension $3g - 3$, so as soon as $g \geq 5$ the defining polynomial $\overline{f}$ of a plane model of a generic curve $\overline{C}/ \FF_q$ of genus $g$ can never attain Baker's bound. For such curves, the foregoing discussion becomes \emph{counterproductive}: if we take a naive coefficient-wise lift $f \in \mathcal{O}_K[x,y]$ of $\overline{f}$, then it is very likely to satisfy Baker's bound, causing an increase of genus. This shows that $f$ has to be constructed with more care, which is somehow the main point of this article. \subsection{Preliminary discussion} \label{section_preliminary_comments} We will attack Problem~\ref{liftingproblem} in the cases where the genus $g$ of $\overline{C}$ is at most five (in Section~\ref{section_lowgenus}) or the $\FF_q$-gonality $\gamma$ of $\overline{C}$ is at most four (in Section~\ref{section_lowgonality}), where we recall our overall assumption that $q$ is odd. In this section we quickly discuss the cases where $g$ and/or $\gamma$ are at most $2$. \begin{remark} Note that for the purpose of computing the Hasse-Weil zeta function using the algorithm from~\cite{tuitman1,tuitman2}, the characteristic $p$ of $\FF_q$ should moreover not be too large: this restriction is common to all $p$-adic point counting algorithms. For the lifting methods described in the current paper, the size of $p$ does not play a role. \end{remark} If $\overline{C}$ is a curve of genus $g = 0$ then we can assume that $\overline{C} = \mathbb{P}^1$, because every plane conic carries at least one $\FF_q$-point, and projection from that point gives an isomorphism to the line. In particular $\gamma = 1$ if and only if $g = 0$, in which case Problem~\ref{liftingproblem} can be addressed by simply outputting $f = y$. Next, if $g = 1$ then we can assume that $\overline{C}$ is defined by a polynomial $\overline{f} \in \FF_q[x,y]$ in Weierstrass form, i.e.\ $\overline{f} = y^2 - \overline{h}(x)$ for some squarefree cubic $\overline{h}(x) \in \FF_q[x]$. In this case $\gamma = 2$, and any $f \in \mathcal{O}_K[x,y]$ for which $\Delta(f) = \Delta(\overline{f})$ will address Problem~\ref{liftingproblem} (for instance because Baker's bound is attained, or because a non-zero discriminant must lift to a non-zero discriminant). Finally, if $g \geq 2$ then $\overline{C}$ is geometrically hyperelliptic if and only if $\kappa$ realizes $\overline{C}$ as a degree $2$ cover of a curve of genus zero \cite[IV.5.2-3]{hartshorne}. By the foregoing discussion the latter is isomorphic to $\PPq^1$, and therefore every geometrically hyperelliptic curve $\overline{C} / \FF_q$ admits an $\FF_q$-rational degree $2$ map to $\PPq^1$. In particular, one can unambiguously talk about hyperelliptic curves over $\FF_q$. In this case it is standard how to produce a defining polynomial $\overline{f} \in \FF_q[x,y]$ that is in Weierstrass form, i.e.\ $\overline{f} = y^2 - \overline{h}(x)$ for some squarefree $\overline{h}(x) \in \FF_q[x]$. Then again any $f \in \mathcal{O}_K[x,y]$ for which $\Delta(f) = \Delta(\overline{f})$ will address Problem~\ref{liftingproblem}. \begin{remark} \label{remark_gonalityoverFq} Let $g^1_d$ be a complete base-point free $\FF_q$-rational linear pencil of degree~$d$ on a non-singular projective curve $\overline{C} / \FF_q$. Then from standard arguments in Galois cohomology (that are specific to finite fields) it follows that this $g^1_d$ automatically contains an $\FF_q$-rational effective divisor, which can be used to construct an $\FF_q$-rational map to $\PPq^1$ of degree $d$. See for instance the proof of~\cite[Lem.\,6.5.3]{gilles}. This gives another way of seeing that a geometrically hyperelliptic curve over $\FF_q$ is automatically $\FF_q$-hyperelliptic, because the hyperelliptic pencil $g^1_2$ is unique, hence indeed defined over $\FF_q$. The advantage of this argument is that it is more flexible: for instance it also shows that a geometrically trigonal curve $\overline{C} / \FF_q$ of genus $g \geq 5$ always admits an $\FF_q$-rational degree $3$ map to $\PPq^1$, again because the $g^1_3$ on such a curve is unique. So we can unambiguously talk about trigonal curves from genus five on. \end{remark} Summing up, throughout the paper, it suffices to consider curves of $\FF_q$-gonality $\gamma > 2$, so that the canonical map $\kappa: \overline{C} \rightarrow \PPq^{g-1}$ is an embedding. In particular we have $g \geq 3$. From the $p$-adic point counting viewpoint, all omitted cases are covered by the algorithms of Satoh~\cite{satoh} and Kedlaya~\cite{harrison,kedlaya}. \section{Curves of low genus} \label{section_lowgenus} \subsection{Curves of genus three} \label{section_genus3} \subsubsection{Lifting curves of genus three} \label{basicsolutiontogenus3} Solving Problem~\ref{liftingproblem} in genus three in its basic version is not hard, so we consider this as a warm-up discussion. We first analyze which $\FF_q$-gonalities can occur: \begin{lemma} \label{genus3gonality} Let $\overline{C} / \FF_q$ be a non-hyperelliptic curve of genus $3$ and $\FF_q$-gonality $\gamma$, and assume that $q$ is odd. If $\# \overline{C}(\FF_q) = 0$ then $\gamma = 4$, while if $\# \overline{C}(\FF_q) > 0$ (which is guaranteed if $q > 29$) then $\gamma = 3$. \end{lemma} \begin{proof} Using the canonical embedding we can assume that $\overline{C}$ is a smooth plane quartic. It is classical that such curves have geometric gonality $3$, and that each gonal map arises as projection from a point on the curve. For a proof see \cite[Prop.\,3.13]{serrano}, where things are formulated in characteristic zero, but the same argument works in positive characteristic; alternatively one can consult~\cite{homma}. In particular if there is no $\FF_q$-point then there is no rational gonal map and $\gamma > 3$. But then a degree $4$ map can be found by projection from an $\FF_q$-point outside the curve. By \cite[Thm.\,3(2)]{howe} there exist pointless non-hyperelliptic curves of genus three over $\FF_q$ if and only if $q \leq 23$ or $q = 29$. \end{proof} We can now address Problem~\ref{liftingproblem} as follows. As in the proof we assume that $\overline{C}$ is given as a smooth quartic in $\PPq^2$. First suppose that $\# \overline{C}(\FF_q) = 0$. Because this is possible for $q \leq 29$ only, the occurrence of this event can be verified exhaustively. In this case the Newton polygon of the defining polynomial $\overline{f} \in \FF_q[x,y]$ of the affine part of $\overline{C}$ equals: \begin{center} \polfig{genus3_standard.pdf}{2.4}{2.4}{$\Delta_3^{0,0}$} \end{center} In particular Baker's bound is attained, and a naive Newton polygon preserving lift $f \in \mathcal{O}_K[x,y]$ automatically addresses (i), (ii) and (iii). If $\#\overline{C}(\FF_q)>0$ then one picks a random $\FF_q$-point $P$ (which can be found quickly) and one applies a projective transformation that maps $P$ to $(0 : 1 : 0)$. After doing so the Newton polygon of $\overline{f} \in \FF_q[x,y]$ becomes contained in (and typically equals): \begin{center} \polfig{genus3_chopped.pdf}{2.4}{2}{$\Delta_3^{1,0}$} \end{center} Again Baker's bound is attained, and a naive Newton polygon preserving lift $f \in \mathcal{O}_K[x,y]$ satisfies (i), (ii) and (iii). It is important to transform the curve \emph{before} lifting to characteristic $0$. Indeed, if one would immediately lift our input quartic to a curve $C \subset \PPK^2$ then it is highly likely that $C(K) = \emptyset$, and therefore that the $K$-gonality equals $4$ (by the same proof as above). This type of reasoning plays an important role throughout the paper, often in a more subtle way than here. \begin{remark}[purely notational] The indices $i,j$ in $\Delta_3^{i,j}$ refer to the multiplicities of intersection of $\overline{C}$ with the line at infinity at the coordinate points $(0:1:0)$ and $(1:0:0)$, assuming that it is defined by a polynomial having Newton polygon $\Delta_3^{i,j}$. Note that $\Delta_3^{0,0}$ is just another way of writing $3\Sigma$. \end{remark} \noindent \varhrulefill[0.4mm] \vspace{-0.3cm} \begin{algo} \label{algorithm_genus3} Lifting curves of genus $3$: basic solution \vspace{-0.2cm} \noindent \varhrulefill[0.4mm] \noindent \textbf{Input:} non-hyperelliptic genus $3$ curve $\overline{C}$ over $\FF_q$ \noindent \textbf{Output:} lift $f \in \mathcal{O}_K[x,y]$ satisfying (i), (ii), (iii) that is supported \noindent \qquad \qquad \qquad $\bullet$ on $\Delta_3^{0,0}$ if $\overline{C}(\FF_q) = \emptyset$, or else \noindent \qquad \qquad \qquad $\bullet$ on $\Delta_3^{2,0}$ \vspace{-0.2cm} \noindent \varhrulefill[0.4mm] \noindent \small 1 \normalsize: $\overline{C} \gets \text{CanonicalImage}(\overline{C})$ in $\PPq^2 = \proj \FF_q[X,Y,Z]$ \noindent \small 2 \normalsize: \textbf{if} $q > 29$ or $\overline{C}(\FF_q) \neq \emptyset$ (verified exhaustively) \textbf{then} \noindent \small 3 \normalsize: \qquad $P := \text{Random}(\overline{C}(\FF_q))$ \noindent \small 4 \normalsize: \qquad apply automorphism of $\PPq^2$ transforming $T_P(\overline{C})$ into $Z=0$ \noindent \small 5 \normalsize: \qquad \textcolor{white}{apply automorphism of $\PPq^2$ transforming} and $P$ into $(0:1:0)$ \noindent \small 6 \normalsize: \textbf{return} NaiveLift(Dehomogenization${}_Z$(DefiningPolynomial($\overline{C}$))) \vspace{-0.2cm} \noindent \varhrulefill[0.4mm] \end{algo} \subsubsection{Optimizations} \label{optim_genus3} For point counting purposes we can of course assume that $q > 29$, so that $\gamma = 3$. By applying (\ref{mademonic}) to a polynomial with Newton polygon $\Delta_3^{1,0}$ one ends up with a polynomial that is monic in $y$ and that has degree $4 + (\gamma - 1) = 6$ in $x$. This can be improved: in addition to mapping $P$ to $(0:1:0)$, we can have its tangent line $T_P(\overline{C})$ sent to the line at infinity. If we then lift $\overline{f}$ to $\mathcal{O}_K[x,y]$ we find an $f$ whose Newton polygon is contained in (and typically equals): \begin{center} \polfig{genus3_rationalpoint.pdf}{2.4}{2}{$\Delta_3^\text{2,0}$} \end{center} In particular $f$ is monic (up to a scalar) and $\deg_x f \leq 4$. We can in fact achieve $\deg_x f = 3$ in all cases of practical interest. Indeed, with an asymptotic chance of $1/2$ our tangent line $T_P(\overline{C})$ intersects $\overline{C}$ in two other rational points. The above construction leaves enough freedom to position one of those points $Q$ at $(1:0:0)$. The resulting lift $f$ then becomes contained in (and typically equals) \begin{center} \polfig{genus3_tangentpluspoint.pdf}{2}{2}{$\Delta_3^\text{2,1}$} \end{center} In the case of failure we retry with another $P$. If $q > 59$ (say) then there are enough $\FF_q$-points $P \in \overline{C}$ for this approach to work with near certainty, although there might exist sporadic counterexamples well beyond that point. \begin{remark}[non-generic optimizations] \label{genus3flexremark} For large values of $q$ one might want to pursue a further compactification of the Newton polygon. Namely, if one manages to choose $P \in \overline{C}(\FF_q)$ such that it is an ordinary flex or such that $T_P(\overline{C})$ is a bitangent, then $T_P(\overline{C})$ meets $\overline{C}$ in a unique other point $Q$, which is necessarily defined over $\FF_q$. By proceeding as before one respectively ends up inside the first and second polygon below. If one manages to let $P \in \overline{C}(\FF_q)$ be a non-ordinary flex, i.e.\ a hyperflex, then positioning it at $(0:1:0)$ results in a polygon of the third form: \begin{center} \polfig{genus3_flex.pdf}{2}{2}{$\Delta_3^\text{3,1}$} \hspace{-1cm} \polfig{genus3_bitangent.pdf}{2}{2}{$\Delta_3^\text{2,2}$} \hspace{-1cm} \polfig{genus3_C34.pdf}{2.4}{2}{$\Delta_3^\text{4,0}$} \end{center} Heuristically, as $q \rightarrow \infty$ we expect to be able to realize the first two polygons with probablities $1-1/e$ and $1 - 1/\sqrt{e}$, respectively; more background can be found in an \texttt{arXiv} version of our paper (\texttt{1605.02162v2}). In contrast the hyperflex case $\Delta_3^\text{4,0}$ is very exceptional, but we included it in the discussion because it corresponds to the well-known class of $C_{3,4}$ curves: even though $\deg_x f = 4$ here, the corresponding point count is slightly faster. \end{remark} \begin{comment} Even though $\deg_x f = \deg_y f = 3$ is optimal, for large values of $q$ it is worth pursuing a further compactification of the Newton polygon. We claim that for a substantial percentage of all smooth plane quartics $\overline{C}$ over $\FF_q$ it is possible to end up inside one of the following polygons: \begin{center} \polfig{genus3_flex.pdf}{2}{2}{$\Delta_3^\text{3,1}$} \polfig{genus3_bitangent.pdf}{2}{2}{$\Delta_3^\text{2,2}$} \end{center} Following a heuristic that is explained below, we estimate that this is feasible in approximately $1 - 1/e^{3/2} \approx 77.6 \%$ of the cases. The idea is to choose $P$ such that \begin{itemize} \item either it is a flex, meaning that $T_P(\overline{C})$ intersects the curve at $P$ with multiplicity three. The fourth point $Q$ of intersection with $\overline{C}$ is necessarily rational, and the above procedure results in a Newton polygon of the first type. \emph{How to find such a $P$?} The flexes of $\overline{C}$ are cut out by its Hessian \[ \det \begin{pmatrix} \frac{\partial^2 F}{\partial X^2} & \frac{\partial^2 F}{\partial X \partial Y} & \frac{\partial^2 F}{\partial X \partial Z} \\ \frac{\partial^2 F}{\partial X \partial Y} & \frac{\partial^2 F}{\partial Y^2} & \frac{\partial^2 F}{\partial Y \partial Z} \\ \frac{\partial^2 F}{\partial X \partial Z} & \frac{\partial^2 F}{\partial Y \partial Z} & \frac{\partial^2 F}{\partial Z^2} \end{pmatrix} \] and therefore form a zero-dimensional scheme of degree $24$. So if there is indeed a rational flex then it can be found rapidly using a (non-involved) Gr\"obner basis computation. \begin{heuristic} The chance that this zero-dimensional scheme has a rational point is comparable to the chance that a univariate polynomial of degree $24$ has a rational root. This is approximately $1 - 1/e$; the exact limiting expression for $q \rightarrow \infty$ is \[ \frac{1}{2!} - \frac{1}{3!} + \frac{1}{4!} - \dots + \frac{1}{24!}. \] See for instance \cite[Ex.\,11.2.28]{mullenpanario}. \end{heuristic} \begin{remark} \todo{iets zeggen over karakteristiek $3$, mss funny curve vermelden} \end{remark} \item or $T_P(\overline{C})$ is a bitangent, meaning that $T_P(\overline{C}) = T_Q(\overline{C})$ for another, necessarily rational point $Q$. Then the above construction results in a Newton polygon of the second type. \emph{How to find such a $P$?} It is known that a smooth plane quartic $\overline{C}$ has exactly $28$ bitangents, corresponding to the ordinary nodes of its dual curve, which is the image of the morphism \[ \overline{C} \rightarrow \mathbb{P}^2 : ( \alpha : \beta : \gamma) \mapsto \left( \frac{\partial F}{\partial X} (\alpha), \frac{\partial F}{\partial Y} (\beta), \frac{\partial F}{\partial Z} (\gamma) \right) \] and can therefore be computed using Gr\"obner bases. Since we are only interested in tangent lines intersecting $\overline{C}$ in two rational points, we are looking for nodes of the dual curve with rational branches. \begin{heuristic} The chance that the dual curve has a node with rational branches is comparable to the chance that a univariate polynomial of degree $28$ has a rational root that is a square. This is about $1 - 1/\sqrt{e}$ by an argument similar to that in \cite[Ex.\,11.2.28]{mullenpanario}. \end{heuristic} \end{itemize} Assuming independence of events, we expect one of these approaches to work for roughly $1 - 1/e^{3/2}$ of all smooth plane quartics. This is confirmed by experiment. \begin{remark} One could also try to let $P$ be a hyperflex, meaning that $T_P(\overline{C})$ intersects the curve at $P$ with multiplicity $4$. This will work if and only if the zero-dimensional scheme cut out by the Hessian has a singular point, which it most likely does not. But if it does then the hyperflex $P$ can be found easily, and one obtains a Newton polygon of the form \begin{center} \polfig{genus3_C34.pdf}{2.4}{2}{$\Delta_3^\text{4,0}$} \end{center} i.e.\ $\overline{C}$ is a $C_{3,4}$ curve. This increases $\deg_x f$ to $4$ again, but nevertheless results in a faster point count; see below. \end{remark} \end{comment} \subsubsection{Implementation} \label{section_genus3implementationandtimings} We now report on timings, memory usage and failure rates of our implementation of the algorithms in this section for various values of $p$ and $q=p^n$. The first column in each table contains the time used to compute the lift to characteristic~$0$ averaged over $1000$ random examples. Then the second column gives the time used by the point counting code \verb{pcc{ from \cite{tuitman1,tuitman2} averaged over $10$ different random examples. Next, the third column contains the total memory used in the computation. Finally, the last column gives the number of examples out of the $1000$ where we did not find a lift satisfying \cite[Ass.\,1]{tuitman2}, which each time turned out to be $0$, i.e.\ we always found a good lift. \bigskip \noindent \scriptsize \tabcolsep=0.11cm \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $p$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $11$ & 0.2 &$0.2$ & $ 32$ & 0 \\ $67$ & 0.2 &$0.6$ & $ 32$ & 0 \\ $521$ & 0.2 &$4.2$ & $ 64$ & 0 \\ $4099$ & 0.2 & $41$ & $165$ & 0 \\ $32771$ & 0.2 &$590$ & $1124$ & 0 \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $3^5$ & $0.4$ &$2.4$ & $ 64$ & 0 \\ $7^5$ & $0.4$ &$6.6$ & $ 64$ & 0 \\ $17^5$ & $0.4$ &$12$ & $ 76$ & 0 \\ $37^5$ & $0.4$ &$26$ & $124$ & 0 \\ $79^5$ & $0.4$ &$66$ & $241$ & 0 \end{tabular} \quad \begin{tabular}{r||r|r|r|r} &time &time & space & fails \\ $q$ &lift(s)&pcc(s) & (Mb) & /1000 \\ \hline \hline $3^{10}$ &$0.5$ &$15$ & $76$ & 0 \\ $7^{10}$ &$0.6$ &$40$ &$118$ & 0 \\ $17^{10}$ &$0.7$ &$82$ &$241$ & 0 \\ $37^{10}$ &$0.7$ &$181$ &$403$ & 0 \\ $79^{10}$ &$0.8$ &$473$ &$831$ & 0 \end{tabular} \bigskip \normalsize Alternatively, without using the methods from this section, we can just make any plane quartic monic using~(\ref{mademonic}), then lift naively to characteristic~$0$ and try to use this lift as input for \verb{pcc{. This way, we obtain the following three tables. \bigskip \scriptsize \tabcolsep=0.235cm \noindent \begin{tabular}{r||r|r|r} & time & space & fails \\ $p$ & pcc(s) & (Mb) & /1000 \\ \hline \hline $11$ & $0.4$ & $ 32$ & 225 \\ $67$ & $1.3$ & $ 32$ & 52 \\ $521$ & $8.7$ & $ 76$ & 5 \\ $4099$ & $83 $ & $ 307$ & 1 \\ $32771$ &$1153$ & $2086$ & 0 \end{tabular} \quad \begin{tabular}{r||r|r|r} & time & space & fails \\ $q$ & pcc(s) & (Mb) & /1000 \\ \hline \hline $3^5$ & $6.1$ & $ 32$ & $13$ \\ $7^5$ & $14$ & $ 32$ & $0$ \\ $17^5$ & $32$ & $ 80$ & $0$ \\ $37^5$ & $71$ & $156$ & $0$ \\ $79^5$ &$161$ & $288$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r} & time & space & fails \\ $q$ & pcc(s) & (Mb) & /1000 \\ \hline \hline $3^{10}$ & $42$ & $76$ & $0$ \\ $7^{10}$ & $94$ & $124$ & $0$ \\ $17^{10}$ & $248$ & $320$ & $0$ \\ $37^{10}$ & $524$ & $589$ & $0$ \\ $79^{10}$ &$1296$ &$1311$ & $0$ \end{tabular} \bigskip \normalsize Comparing the different tables, we see that the approach described in this section saves a factor of about $3$ in runtime and a factor of about $2$ in memory usage. Moreover, for small fields the naive lift of a plane quartic sometimes does not satisfy \cite[Ass.\,1]{tuitman2}, while this never seems to be the case for the lift constructed using our methods. \subsection{Curves of genus four} \label{section_genus4} \subsubsection{Lifting curves of genus four} \label{section_genus4lifting} By \cite[Ex.\,IV.5.2.2]{hartshorne} the ideal of a canonical model $\overline{C} \subset \PPq^3 = \text{Proj} \, \FF_q[X,Y,Z,W]$ of a non-hyperelliptic genus $g = 4$ curve is generated by a cubic $\overline{S}_3$ and a unique quadric $\overline{S}_2$. Since $q$ is assumed odd, the latter can be written as \[ \begin{pmatrix} X & Y & Z & W \end{pmatrix} \cdot \overline{M} \cdot \begin{pmatrix} X & Y & Z & W \end{pmatrix}^t, \qquad \overline{M} \in \FF_q^{4 \times 4}, \ \overline{M}^t = \overline{M}. \] Let $\chi_2 : \FF_q \rightarrow \{0, \pm 1\}$ denote the quadratic character on $\FF_q$. Then $\chi_2(\det \overline{M})$ is an invariant of $\overline{C}$, which is called the discriminant. If we let $S_2, S_3 \in \mathcal{O}_K[X,Y,Z,W]$ be homogeneous polynomials that reduce to $\overline{S}_2$ and $\overline{S}_3$ modulo $p$, then by \cite[Ex.\,IV.5.2.2]{hartshorne} these define a genus $4$ curve $C \subset \PPK^3$ over $K$, thereby addressing (i) and (ii). However as mentioned in Section~\ref{section_firstfacts} we expect the $K$-gonality of $C$ to be typically $2g - 2 = 6$. This exceeds the $\FF_q$-gonality of $\overline{C}$: \begin{lemma} \label{genus4gonality} Let $\overline{C} / \FF_q$ be a non-hyperelliptic curve of genus $4$ and $\FF_q$-gonality $\gamma$, and assume that $q$ is odd. If the discriminant of $\overline{C}$ is $0$ or $1$ then $\gamma = 3$. If it is $-1$ and $\# \overline{C}(\FF_{q^2}) > 0$ (which is guaranteed if $q > 7$) then $\gamma = 4$. Finally, if it is $-1$ and $\# \overline{C}(\FF_{q^2}) = 0$ then $\gamma = 5$. \end{lemma} \begin{proof} By \cite[Ex.\,IV.5.5.2]{hartshorne} our curve carries one or two geometric $g^1_3$'s, depending on whether the quadric $\overline{S}_2$ is singular (discriminant $0$) or not. In the former case the quadric is a cone, and the $g^1_3$ corresponds to projection from the top. This is automatically defined over $\FF_q$. In the latter case the quadric is $\FF_{q^2}$-isomorphic to the hyperboloid $\PPq^1 \times \PPq^1 \subset \PPq^3$ and the $g^1_3$'s correspond to the two rulings of the latter. If the isomorphism can be defined over $\FF_q$ (discriminant $1$) then the $g^1_3$'s are $\FF_q$-rational. In the other case (discriminant $-1$) the smallest field of definition is $\FF_{q^2}$. So we can assume that the discriminant of $\overline{C}$ is $-1$, and therefore that $\gamma > 3$. Now suppose that $\# \overline{C}(\FF_{q^2}) > 0$, which is guaranteed if $q > 7$ by \cite[Thm.\,2]{howe}. If there is an $\FF_q$-point then let $\overline{\ell}$ be the tangent line to $\overline{C}$ at it. In the other case we can find two conjugate $\FF_{q^2}$-points, and we let $\overline{\ell}$ be the line connecting both. In both cases $\overline{\ell}$ is defined over $\FF_q$, and the pencil of planes through $\overline{\ell}$ cuts out a $g^1_4$, as wanted. The argument can be reversed: if there exists a $g^1_4$ containing an effective $\FF_q$-rational divisor $D$, then by Riemann-Roch we find that $|K - D|$ is non-empty. In particular there exists an effective $\FF_q$-rational divisor of degree $\deg (K-D) = 2$ on $\overline{C}$, and $\# \overline{C}(\FF_{q^2}) > 0$. So if $\# \overline{C}(\FF_{q^2}) = 0$ then $\gamma > 4$. Now note that $\# \overline{C}(\FF_{q^5}) > 0$ by the Weil bound. So $\overline{C}$ carries an effective divisor $D$ of degree $5$. The linear system $|K-D|$ must be empty, for otherwise there would exist an $\FF_q$-point on $\overline{C}$. But then Riemann-Roch implies that $\dim |D| = 1$, i.e.\ our curve carries an $\FF_q$-rational $g^1_5$. \end{proof} \begin{remark} An example of a genus four curve $\overline{C}/\FF_3$ having $\FF_3$-gonality five can be found in an \texttt{arXiv} version of our paper (\texttt{1605.02162v2}). \end{remark} To address Problem~\ref{liftingproblem} in the non-hyperelliptic genus $4$ case we make a case-by-case analysis. \paragraph*{\underline{$\chi_2(\det \overline{M}_2) = 0$}} In this case $\overline{S}_2$ is a cone over a conic. A linear change of variables takes $\overline{S}_2$ to the form $ZW - X^2$, which we note is one of the standard realizations inside $\PPq^3$ of the weighted projective plane $\PPq(1,2,1)$. It is classical how to find such a linear change of variables (diagonalization, essentially). Projecting from $(0:0:0:1)$ on the $XYZ$-plane amounts to eliminating the variable $W$, to obtain \begin{equation} \label{genus4conic} Z^3 \overline{S}_3(X, Y , Z, \frac{X^2}{Z}) = \overline{S}_3(XZ,YZ,Z^2,X^2). \end{equation} After dehomogenizing with respect to $Z$, renaming $X \leftarrow x$ and $Y \leftarrow y$ and rescaling if needed, we obtain an affine equation $\overline{f} = y^3 + \overline{f}_2(x)y^2 + \overline{f}_4(x)y + \overline{f}_6(x)$, with $\overline{f}_i \in \FF_q[x]$ of degree at most $i$. Its Newton polygon is contained in (and typically equals): \begin{center} \polfig{genus4_conical.pdf}{3.2}{2}{$\Delta_{4,0}^0$} \end{center} So Baker's bound is attained and we take for $f \in \mathcal{O}_K[x,y]$ a naive coefficient-wise lift. \paragraph*{\underline{$\chi_2(\det \overline{M}_2) = 1$}} In this case $\overline{S}_2$ is a hyperboloid. A linear change of variables takes $\overline{S}_2$ to the standard form $XY - ZW$, which we note is the image of $\PPq^1 \times \PPq^1$ in $\PPq^3$ under the Segre embedding. Projection from $(0:0:0:1)$ on the $XYZ$-plane amounts to eliminating the variable $W$, to obtain \[ Z^3 \overline{S}_3(X,Y,Z,\frac{XY}{Z} ) = \overline{S}_3(XZ, YZ, Z^2, XY).\] After dehomogenizing with respect to $Z$ and renaming $X \leftarrow x$ and $Y \leftarrow y$ we obtain an affine equation $\overline{f} = \overline{f}_0(x) y^3 + \overline{f}_1(x) y^2 + \overline{f}_2(x) y + \overline{f}_3(x)$ with all $\overline{f}_i \in \FF_q[x]$ of degree at most $3$. Its Newton polygon is contained in (and typically equals) \begin{center} \polfig{genus4_hyperboloidal.pdf}{2}{2}{$\Delta_{4,1}^{0}$} \end{center} So Baker's bound is attained and we can take for $f \in \mathcal{O}_K[x,y]$ a coefficient-wise lift of $\overline{f}$. \paragraph*{\underline{$\chi_2(\det \overline{M}_2) = -1$}} This is our first case where in general no plane model can be found for which Baker's bound is attained \cite[\S6]{CV}. If $\overline{C}(\FF_{q^2}) = \emptyset$, or in other words if $\gamma = 5$, then unfortunately we do not know how to address Problem~\ref{liftingproblem}. We therefore assume that $\overline{C}(\FF_{q^2}) \neq \emptyset$ and hence that $\gamma = 4$. This is guaranteed if $q > 7$, so for point counting purposes this is amply sufficient. We follow the proof of Lemma~\ref{genus4gonality}: by exhaustive search we find a point $P \in \overline{C}(\FF_{q^2})$ along with its Galois conjugate $P'$ and consider the line $\overline{\ell}$ connecting both (tangent line if $P = P'$). This line is defined over $\FF_q$, so that modulo a projective transformation we can assume that $\overline{\ell} : X = Z = 0$. When plugging in $X = Z = 0$ in $\overline{S}_2$ we find a non-zero quadratic expression in $Y$ and $W$. Indeed: $\overline{S}_2$ cannot vanish identically on $\overline{\ell}$ because no three points of $\overline{S}_2(\FF_q)$ are collinear. Because $\overline{C}$ intersects $\overline{\ell}$ in two points (counting multiplicities) we find that \[ \overline{S}_3(0,Y,0,W) = (\overline{a}Y + \overline{b}W) \overline{S}_2(0,Y,0,W) \] for certain $\overline{a}, \overline{b} \in \FF_q$ that are possibly zero. Lift $\overline{S}_2$ coefficient-wise to a homogenous quadric $S_2 \in \mathcal{O}_K[X,Y,Z,W]$ and let $a,b \in \mathcal{O}_K$ reduce to $\overline{a},\overline{b}$ mod $p$. We now construct $S_3 \in \mathcal{O}_K[X,Y,Z,W]$ as follows: for the coefficients at $Y^3, Y^2W, YW^2, W^3$ we make the unique choice for which \[ S_3(0,Y,0,W) = (aY + bW) S_2(0,Y,0,W), \] while the other coefficients are randomly chosen lifts of the corresponding coefficients of $\overline{S}_3$. Then the genus $4$ curve $C \subset \PPK^3$ defined by $S_2$ and $S_3$ is of gonality $4$. Indeed, it is constructed such that the line $\ell : X = Z = 0$ intersects the curve in two points (possibly over a quadratic extension), and the pencil of planes through this line cuts out a $g^1_4$. Now we project our lift $C \subset \PPK^3$ from $(0:0:0:1)$ to a curve in $\PPK^2$. This amounts to eliminating $W$ from $S_2$ and $S_3$. By dehomogenizing the resulting sextic with respect to $Z$, and by renaming $X \leftarrow x$ and $Y \leftarrow y$ we end up with a polynomial $f \in \mathcal{O}_K[x,y]$ whose Newton polygon is contained in (and typically equals): \begin{center} \polfig{genus4_gon4.pdf}{3.2}{2.4}{$\Delta_{4,-1}^{6}$} \end{center} Geometrically, what happens is that the points of $C$ on $\ell$ are both mapped to $(0:1:0)$ under projection from $(0:0:0:1)$, creating a singularity there, which in terms of the Newton polygon results in $6\Sigma$ with its top chopped off. The polynomial $f$ satisfies (i), (ii) and (iii) from Problem~\ref{liftingproblem}. Note that Baker's bound is usually \emph{not} attained here: it gives an upper bound of $9$, while $C$ has genus $4$. So it is crucial to lift the equations to $\mathcal{O}_K$ \emph{before} projecting on the plane. \noindent \varhrulefill[0.4mm] \vspace{-0.3cm} \begin{algo} \label{algorithm_genus4} Lifting curves of genus $4$: basic solution \vspace{-0.2cm} \noindent \varhrulefill[0.4mm] \noindent \textbf{Input:} non-hyperelliptic genus $4$ curve $\overline{C}/\FF_q$ of $\FF_q$-gonality $\gamma \leq 4$ \noindent \textbf{Output:} lift $f \in \mathcal{O}_K[x,y]$ satisfying (i), (ii), (iii) that is supported \noindent \qquad \qquad \qquad $\bullet$ on $\Delta_{4,0}^0$ if the discriminant is $0$, or else \noindent \qquad \qquad \qquad $\bullet$ on $\Delta_{4,1}^0$ if the discriminant is $1$, or else \noindent \qquad \qquad \qquad $\bullet$ on $\Delta_{4,-1}^6$ \vspace{-0.2cm} \noindent \varhrulefill[0.4mm] \noindent \small \phantom{0}1 \normalsize: $\overline{C} \gets \text{CanonicalImage}(\overline{C})$ in $\PPq^3 = \proj \FF_q[X,Y,Z,W]$ \noindent \small \phantom{0}2 \normalsize: $\overline{S}_2 \gets \text{unique quadric in Ideal}(\overline{C})$; $\overline{M}_2 \gets \text{Matrix}(\overline{S}_2)$; $\chi \gets \chi_2(\det \overline{M}_2)$ \noindent \small \phantom{0}3 \normalsize: $\overline{S}_3 \gets \text{cubic that along with } \overline{S}_2 \text{ generates Ideal}(\overline{C})$ \noindent \small \phantom{0}4 \normalsize: \textbf{if} $\chi = 0$ \textbf{then} \noindent \small \phantom{0}5 \normalsize: \quad apply automorphism of $\PPq^3$ transforming $\overline{S}_2=0$ into $ZW - X^2=0$ \noindent \small \phantom{0}6 \normalsize: \quad \textbf{return} NaiveLift(Dehomogenization${}_Z$($\overline{S}_3(XZ,YZ,Z^2,X^2)$)) \noindent \small \phantom{0}7 \normalsize: \textbf{else if} $\chi = 1$ \textbf{then} \noindent \small \phantom{0}8 \normalsize: \quad apply automorphism of $\PPq^3$ transforming $\overline{S}_2 = 0$ into $XY-ZW = 0$ \noindent \small \phantom{0}9 \normalsize: \quad \textbf{return} NaiveLift(Dehomogenization${}_Z$($\overline{S}_3(XZ,YZ,Z^2,XY)$)) \noindent \small 10 \normalsize: \textbf{else} \noindent \small 11 \normalsize: \quad $P := \text{Random}(\overline{C}(\FF_{q^2}))$; $P' := \text{Conjugate}(P)$ \noindent \small 12 \normalsize: \quad $\overline{\ell} \gets \text{line through $P$ and $P'$ (tangent line if $P = P'$)}$ \noindent \small 13 \normalsize: \quad apply automorphism of $\PPq^3$ transforming $\overline{\ell}$ into $X = Z = 0$ \noindent \small 14 \normalsize: \quad $S_2 \leftarrow \text{NaiveLift}(\overline{S}_2)$ \noindent \small 15 \normalsize: \quad $S_3 \leftarrow$ lift of $\overline{S}_3$ satisfying $S_3(0,Y,0,W) = (aY + bW)S_2(0,Y,0,W)$ for $a,b \in \mathcal{O}_K$ \noindent \small 16 \normalsize: \quad \textbf{return} Dehomogenization${}_Z$(res${}_W$($S_2,S_3$)) \vspace{-0.2cm} \noindent \varhrulefill[0.4mm] \end{algo} \subsubsection{Optimizations} \label{optim_genus4} \paragraph*{\underline{$\chi_2(\det \overline{M}_2) = 0$}} By applying (\ref{mademonic}) to a polynomial with Newton polygon $\Delta_{4,0}^{0}$ one ends up with a polynomial that is monic in $y$ and that has degree $6$ in $x$. This can be improved as soon as $\overline{C}(\FF_q)\neq 0$, which is guaranteed if $q > 49$ by \cite[Thm.\,2]{howe}. Namely we can view (\ref{genus4conic}) as the defining equation of a smooth curve in the weighted projective plane $\PPq(1, 2, 1)$. Using an automorphism of the latter we can position a given $\FF_q$-rational point $P$ at $(1:0:0)$ and the corresponding tangent line at $X = 0$, in order to end up with a Newton polygon that is contained in (and typically equals): \begin{center} \polfig{genus4_Xtangent.pdf}{2.4}{2}{$\Delta_{4,0}^{1}$} \end{center} See Remark~\ref{autsofP121} below for how to do this in practice. So we find $\deg_x f = 4$, which is optimal because the $g^1_3$ is unique in the case of a singular $\overline{S}_2$. There is a caveat here, in that the tangent line at $P$ might exceptionally be vertical, i.e.\ $P$ might be a ramification point of our degree $3$ map $(x,y) \mapsto x$. In this case it is impossible to position this line at $X = 0$, but in practice one can simply retry with another $P$. But in fact having a vertical tangent line is an even slightly better situation, as explained in Remark~\ref{genus4chi0remark} below. \begin{remark}~\label{autsofP121} The automorphisms of $\PPq(1,2,1)$ can be applied directly to $\overline{f}$. They correspond to \begin{itemize} \item substituting $y \leftarrow \overline{a} y + \overline{b} x^2 + \overline{c} x + \overline{d}$ and $x \leftarrow \overline{a}' x + \overline{b}'$ in $ \overline{f}$ for some $\overline{a},\overline{a}' \in \FF_q^\ast$ and $\overline{b},\overline{b}',\overline{c},\overline{d} \in \FF_q$, \item exchanging the line at infinity for the $y$-axis by replacing $\overline{f}$ by $x^6 \overline{f}(x^{-1},x^{-2}y)$, \end{itemize} or to a composition of both. For instance imagine that an affine point $P = (\overline{a},\overline{b})$ was found with a non-vertical tangent line. Then $\overline{f} \leftarrow \overline{f}(x + \overline{a}, y + \overline{b})$ translates this point to the origin, at which the tangent line becomes of the form $y = \overline{c} x$. Substituting $\overline{f} \leftarrow \overline{f}(x,y + \overline{c}x)$ positions this line horizontally, and finally replacing $\overline{f}$ by $x^6 \overline{f}(x^{-1},x^{-2}y)$ results in a polynomial with Newton polygon contained in $\Delta_{4,0}^1$. \end{remark} \begin{remark}[non-generic optimizations] \label{genus4chi0remark} If $P$ has a vertical tangent line then positioning it at $(1:0:0)$ results in a Newton polygon that is contained in (and typically equals) the first polygon below: \begin{center} \polfig{genus4_tangentruling.pdf}{2.8}{2}{$\Delta_{4,0}^{2}$} \polfig{genus4_C35.pdf}{2.8}{2}{$\Delta_{4,0}^{3}$} \end{center} Even though $\deg_x f = 5$ here, this results in a slightly faster point count. Such a $P$ will exist if and only if the ramification scheme of $(x,y) \mapsto x$ has an $\FF_q$-rational point. Following the same heuristic as in Remark~\ref{genus3flexremark} we expect that this works in about $1 - 1/e$ of the cases. If there exists a point of ramification index $3$ then one can even end up inside the second polygon. This event is highly exceptional, but we include it in our discussion because this corresponds to the well-known class of $C_{3,5}$ curves. \end{remark} \paragraph*{\underline{$\chi_2(\det \overline{M}_2) = 1$}} By applying (\ref{mademonic}) to a polynomial with Newton polygon $\Delta_{4,1}^{0}$ one ends up with a polynomial that is monic in $y$ and that has degree $3 + (\gamma - 1)3 = 9$ in $x$. This can be improved as soon as $\overline{C}(\FF_q) \neq 0$, which is guaranteed if $q > 49$ by \cite[Thm.\,2]{howe}. Assume as before that $\overline{S}_2$ is in the standard form $XY - ZW$. So it is the image of the Segre embedding \begin{equation} \label{segre} \PPq^1 \times \PPq^1 \hookrightarrow \PPq^3 : ( (X_0 : Z_0), (Y_0 : W_0) ) \mapsto (X_0W_0 : Y_0Z_0 : Z_0W_0 : X_0Y_0 ). \end{equation} That is: we can view $\overline{C}$ as the curve in $\mathbb{P}^1 \times \mathbb{P}^1$ defined by the bihomogeneous polynomial \[ \overline{S}_3(X_0W_0,Y_0Z_0,Z_0W_0,X_0Y_0) \] of bidegree $(3,3)$. Remark that if we dehomogenize with respect to both $Z_0$ and $W_0$ and rename $X_0 \leftarrow x$ and $Y_0 \leftarrow y$ then we get the polynomial $\overline{f}$ from before. Now if our curve has a rational point $P$, by applying an appropriate projective transformation in each component we can arrange that $P = ((1:0), (1:0))$. If we then dehomogenize we end up with a Newton polygon that is contained in (and typically equals): \begin{center} \polfig{genus4_hyperboloidal1.pdf}{2}{2}{$\Delta_{4,1}^{1}$} \end{center} So Baker's bound is attained and we take for $f \in \mathcal{O}_K[x,y]$ a naive coefficient-wise lift. Now applying (\ref{mademonic}) typically results in a polynomial of degree $3 + (\gamma - 1)2 = 7$ in $x$. \begin{remark}~\label{autsofP1P1} The automorphisms of $\PPq^1 \times \PPq^1$ can again be applied directly to $\overline{f}$. They correspond to \begin{itemize} \item substituting $y \leftarrow \overline{a} y + \overline{b}$ and $x \leftarrow \overline{a}' x + \overline{b}'$ in $ \overline{f}$ for some $\overline{a},\overline{a}' \in \FF_q^\ast$ and $\overline{b},\overline{b}' \in \FF_q$, \item exchanging the $x$-axis for the horizontal line at infinity by replacing $\overline{f}$ by $y^3 \overline{f}(x,y^{-1})$, \item exchanging the $y$-axis for the vertical line at infinity by replacing $\overline{f}$ by $x^3 \overline{f}(x^{-1},y)$, \end{itemize} or to a composition of these. For instance imagine that an affine point $P = (\overline{a}, \overline{b})$ was found, then $\overline{f} \leftarrow \overline{f}(x + \overline{a}, y + \overline{b})$ translates this point to the origin, and subsequently replacing $\overline{f}$ by $x^3y^3\overline{f}(x^{-1}, y^{-1})$ results in a polynomial with Newton polygon contained in $\Delta_{4,1}^1$. \end{remark} \begin{remark}[non-generic optimizations] If one manages to let $P$ be a point with a horizontal tangent line, i.e.\ if $P$ is a ramification point of the projection map from $\overline{C}$ onto the second component of $\PPq^1 \times \PPq^1$, then the Newton polygon even becomes contained in (and typically equals): \begin{center} \polfig{genus4_hyperboloidal2.pdf}{2}{2}{$\Delta_{4,1}^{2}$} \end{center} This eventually results in a polynomial $f \in \mathcal{O}_K[x,y]$ of degree $3 + (\gamma - 1)1 = 5$ in $x$. As in the discriminant $0$ case, we heuristically expect the probability of success to be about $1 - 1/e$. However, it is also fine to find a ramification point of the projection of $\overline{C}$ onto the first component of $\PPq^1 \times \PPq^1$, because we can change the role of $(X_0,Z_0)$ and $(Y_0,W_0)$ if wanted. Assuming independence of events, the percentage of non-hyperelliptic genus $4$ curves with discriminant $1$ that admit a Newton polygon of the form $\Delta_{4,1}^2$ should be approximately $1 - 1/e^2$. \end{remark} \begin{comment} \begin{remark} One could also try to let $P$ be a ramification point of index three. Such a point will usually not exist, but if it does then it can be found easily, and one obtains a Newton polygon that is contained in (and typically equals) \begin{center} \polfig{genus4_hyperboloidal3.pdf}{2}{2}{$\Delta_{4,1}^{3}$} \end{center} In particular $\overline{f} \in \FF_q[x,y]$ and its lift $f \in \mathcal{O}_K[x,y]$ are already monic in $y$, and so the resulting degree in $x$ is $3$. \end{remark} \end{comment} \paragraph*{\underline{$\chi_2(\det \overline{M}_2) = -1$}} By applying (\ref{mademonic}) to a polynomial with Newton polygon $\Delta_{4,-1}^{6}$ we end up with a polynomial that is monic in $y$ and that has degree $3 + (\gamma - 1)2 = 9$. This can be improved as soon as $\overline{C}(\FF_q) \neq 0$, which is guaranteed if $q > 49$ by \cite[Thm.\,2]{howe}. In this case we redo the construction with $\overline{\ell}$ the tangent line to a point $P \in \overline{C}(\FF_q)$. As before we apply a projective transformation to obtain $\overline{\ell} : X = Z = 0$, but in addition we make sure that $P = (0:0:0:1)$. This implies that $\overline{S}_2(0,Y,0,W) = Y^2$, possibly after multiplication by a scalar. We now proceed as before, to find lifts $S_2, S_3 \in \mathcal{O}_K[X,Y,Z,W]$ that cut out a genus $4$ curve $C \subset \PPK^3$, still satisfying the property of containing $(0:0:0:1)$ with corresponding tangent line $\ell : X = Z = 0$. If we then project from $(0:0:0:1)$ we end up with a quintic in $\PPK^2$, rather than a sextic. The quintic still passes through the point $(0:1:0)$, which is now non-singular: otherwise the pencil of lines through that point would cut out a $K$-rational $g^1_3$. We can therefore apply a projective transformation over $K$ that maps the corresponding tangent line to infinity, while keeping the point at $(0:1:0)$. After having done so, we dehomogenize to find a polynomial $f \in \mathcal{O}_K[x,y]$ whose Newton polygon is contained in (and typically equals) \begin{center} \polfig{genus4_gon4deg5.pdf}{2.8}{2.4}{$\Delta_{4,-1}^{5}$} \end{center} It still satisfies (i), (ii) and (iii), while here $\deg_xf \leq 5$. \subsubsection{Implementation} The tables below contain timings, memory usage and failure rates for $\chi_2=0,1,-1$ and various values of $p$ and $q=p^n$. For the precise meaning of the various entries in the table see Section~\ref{section_genus3implementationandtimings}.\\ \vspace{-0.2cm} \noindent \textbf{$\mathbf{\chi_2=0}$}\\ \scriptsize \tabcolsep=0.11cm \noindent \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $p$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $11$ & $0.01$ & $0.3$ & $32$ & $159$ \\ $67$ & $0.01$ & $1.4$ & $32$ & $2$ \\ $521$ & $0.01$ & $13$ & $73$ & $2$ \\ $4099$ & $0.01$ & $189$ & $323$ & $0$ \\ $32771$ & $0.01$ &$2848$ & $2396$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $3^5$ & $0.04$ &$6.6$ & $64$ & $2$ \\ $7^5$ & $0.05$ & $13$ & $73$ & $0$ \\ $17^5$ & $0.1$ & $32$ &$118$ & $0$ \\ $37^5$ & $0.1$ & $73$ &$197$ & $0$ \\ $79^5$ & $0.1$ &$183$ &$371$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s)&pcc(s) & (Mb) & /1000 \\ \hline \hline $3^{10}$ & $0.3$ & $34$ &$112$ & $0$ \\ $7^{10}$ & $0.4$ & $76$ &$156$ & $0$ \\ $17^{10}$ & $0.6$ &$205$ &$320$ & $0$ \\ $37^{10}$ & $0.7$ &$537$ &$653$ & $0$ \\ $79^{10}$ & $0.9$ &$1392$ &$1410$ & $0$ \end{tabular}\\ \normalsize \vspace{0.3cm} \noindent \textbf{$\mathbf{\chi_2=1}$}\\ \noindent \scriptsize \tabcolsep=0.11cm \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $p$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $11$ & $0.01$ & $0.4$ & $32$ & $169$ \\ $67$ & $0.02$ & $1.8$ & $32$ & $1$ \\ $521$ & $0.02$ & $14$ & $76$ & $0$ \\ $4099$ & $0.02$ & $230$ & $508$ & $0$ \\ $32771$ & $0.02$ &$2614$ &$3616$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $3^5$ & $0.1$ & $7.5$ & $64$ & $0$ \\ $7^5$ & $0.1$ & $16$ & $112$ & $0$ \\ $17^5$ & $0.2$ & $41$ & $197$ & $0$ \\ $37^5$ & $0.2$ & $94$ & $320$ & $0$ \\ $79^5$ & $0.2$ &$241$ & $589$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s)&pcc(s) & (Mb) & /1000 \\ \hline \hline $3^{10}$ & $0.7$ & $41$ & $150$ & $0$ \\ $7^{10}$ & $1.2$ & $102$ & $320$ & $0$ \\ $17^{10}$ & $2.1$ & $276$ & $556$ & $0$ \\ $37^{10}$ & $2.8$ & $736$ & $1070$ & $0$ \\ $79^{10}$ & $3.9$ & $1904$ & $2016$ & $0$ \end{tabular}\\ \normalsize \vspace{0.3cm} \noindent \textbf{$\mathbf{\chi_2=-1}$}\\ \noindent \scriptsize \tabcolsep=0.11cm \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $p$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $11$ & $0.06$ & $2.4$ & $73$ & $0$ \\ $67$ & $0.02$ & $4.3$ & $73$ & $0$ \\ $521$ & $0.02$ & $32$ & $124$ & $0$ \\ $4099$ & $0.03$ &$503$ & $815$ & $0$ \\ $32771$ & $0.02$ &$5958$ & $6064$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $3^5$ & $0.15$ & $20$ & $76$ & $0$ \\ $7^5$ & $0.3$ & $46$ &$156$ & $0$ \\ $17^5$ & $0.4$ &$108$ &$241$ & $0$ \\ $37^5$ & $0.6$ &$243$ &$403$ & $0$ \\ $79^5$ & $0.8$ &$570$ &$749$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $3^{10}$ & $1.3$ & $130$ & $273$ & $0$ \\ $7^{10}$ & $2.8$ & $312$ & $416$ & $0$ \\ $17^{10}$ & $5.0$ & $815$ & $813$ & $0$ \\ $37^{10}$ & $6.5$ &$1939$ & $1463$ & $0$ \\ $79^{10}$ & $8.4$ &$4609$ & $2942$ & $0$ \\ \end{tabular} \normalsize \bigskip \par Contrary to the genus~$3$ case, we see that for very small $p$ or $q=p^n$, sometimes we do not find a lift satisfying \cite[Ass.\,1]{tuitman2}. However, in these cases we can usually compute the zeta function by counting points naively, so not much is lost here in practice. Note that the point counting is considerably slower for $\chi_2=-1$ than for $\chi_2=0,1$ which is due to the map from the curve to $\PPq^1$ having degree $4$ instead of $3$ in this case. \subsection{Curves of genus five} \label{section_genus5} \subsubsection{Lifting curves of genus five} \label{section_genus5lifting} By Petri's theorem \cite{saintdonat} a minimal set of generators for the ideal of a canonical model \[ \overline{C} \subset \PPq^4 = \text{Proj} \, \FF_q[X,Y,Z,W,V] \] of a non-hyperelliptic genus $5$ curve consists of \begin{itemize} \item three quadrics $\overline{S}_{2,1}, \overline{S}_{2,2}, \overline{S}_{2,3}$ and two cubics $\overline{S}_{3,1}, \overline{S}_{3,2}$ in the trigonal case, \item just three quadrics $\overline{S}_{2,1}, \overline{S}_{2,2}, \overline{S}_{2,3}$ in the non-trigonal case. \end{itemize} So given such a minimal set of generators, it is straightforward to decide trigonality. We denote the space of quadrics in the ideal of $\overline{C}$ by $\mathcal{I}_2(\overline{C})$. Then in both settings $\mathcal{I}_2(\overline{C})$ is a three-dimensional $\FF_q$-vector space of which $\overline{S}_{2,1}, \overline{S}_{2,2}, \overline{S}_{2,3}$ form a basis. \paragraph*{Trigonal case} \hfill Here Petri's theorem moreover tells us that $\mathcal{I}_2(\overline{C})$ cuts out a smooth ir- \noindent \begin{minipage}[b]{9.5cm} reducible surface $\overline{S}$ that is a rational normal surface scroll of type $(1,2)$. This means that up to a linear change of variables, it is the image $\overline{S}(1,2)$ of \[ \PPq^1 \times \PPq^1 \hookrightarrow \PPq^4 : ((s:t),(u:v)) \mapsto (vst:ut:vt^2:us:vs^2),\] i.e.\ \hfill it is the ruled surface obtained by simultaneously pa- \end{minipage} \ \ \begin{minipage}[b]{4.1cm} \begin{center} \includegraphics[width=5cm]{trigscroll.pdf}\\ \vspace{0.4cm} \end{center} \end{minipage} \noindent rameterizing a line in the $YW$-plane (called the directrix) and a conic in the $XZV$-plane, each time drawing the rule through the points under consideration (each of these rules intersects our trigonal curve in three points, counting multiplicities). In other words, modulo a linear change of variables the space $\mathcal{I}_2(\overline{C})$ admits the basis \begin{equation} \label{defpols_scroll} X^2 - ZV, \qquad XY - ZW, \qquad XW - YV. \end{equation} Note that these are (up to sign) the $2 \times 2$ minors of \[ \left( \begin{array}{cc} X & V \\ Z & X \\ \end{array} \right| \hspace{-0.1cm} \left. \begin{array}{c} W \\ Y \\ \end{array} \right). \] It is not trivial to \emph{find} such a linear change of variables. A general method using Lie algebras for rewriting Severi-Brauer surfaces in standard form was developed by de Graaf, Harrison, P\'ilnikov\'a and Schicho~\cite{GHPS}, and a Magma function \texttt{ParametrizeScroll} for carrying out this procedure in the case of rational normal surface scrolls was written by Schicho. Unfortunately this was intended to work over fields of characteristic zero only, and indeed the function always seems to crash when invoked over fields of characteristic three; see also Remark~\ref{blinduse} below. We do not know how fundamental this flaw is, or to what extent it is an artefact of the implementation, but to resolve this issue we have implemented an ad hoc method that is specific to scrolls of type $(1,2)$. It can be found in \texttt{convertscroll.m}; more background on the underlying reasoning can be read in an \texttt{arXiv} version of this paper (1605.02162v2). Once our quadrics $\overline{S}_{2,1}, \overline{S}_{2,2}, \overline{S}_{2,3}$ are given by \eqref{defpols_scroll} we project from the line $X = Y = Z = 0$, which amounts to eliminating the variables $V$ and $W$, in order to obtain the polynomials \[ \overline{S}_{3,i}^\text{pr} = Z^3 \overline{S}_{3,i}(X,Y,Z,\frac{X^2}{Z},\frac{XY}{Z}) = \overline{S}_{3,i}(XZ,YZ,Z^2,X^2,XY) \] for $i=1,2$. Dehomogenizing with respect to $Z$ and renaming $X \leftarrow x$ and $Y \leftarrow y$ we obtain two polynomials $\overline{f}_1, \overline{f}_2 \in \FF_q[x,y]$, whose zero loci intersect in the curve defined by $\overline{f} = \gcd(\overline{f}_1,\overline{f}_2)$. The Newton polygon of $\overline{f}$ is contained in (and typically equals): \begin{center} \polfig{genus5_trigonal.pdf}{2.8}{2}{$\Delta_{5,\text{trig}}^{0,0}$} \end{center} Note that in particular $\overline{f}$ attains Baker's bound, and a naive Newton polygon preserving lift $f \in \mathcal{O}_K[x,y]$ satisfies (i), (ii) and (iii). An alternative (namely, toric) viewpoint on our construction of $\overline{f}$, along with more background on the claims above, is given in Section~\ref{section_trigonal}. \paragraph*{Non-trigonal case} In the non-trigonal case, let us write the quadrics as \[ \overline{S}_{2,i} = \begin{pmatrix} X & Y & Z & W & V \end{pmatrix} \cdot \overline{M}_i \cdot \begin{pmatrix} X & Y & Z & W & V \end{pmatrix}^t, \qquad \overline{M}_i \in \FF_q^{5 \times 5}, \ \overline{M}_i^t = \overline{M}_i. \] The curve $\mathfrak{D}(\overline{C})$ in $\PPq^2 = \text{Proj} \, \FF_q[\lambda_1, \lambda_2, \lambda_3]$ defined by \[ \det (\lambda_1 \overline{M}_1 + \lambda_2 \overline{M}_2 + \lambda_3 \overline{M}_3) = 0 \] parameterizes the singular members of $\mathcal{I}_2(\overline{C})$. It is a possibly reducible curve called the discriminant curve of $\overline{C}$, known to be of degree $5$ and having at most nodes as singularities \cite{cornalba}${}^\dagger$. The non-singular points correspond to quadrics of rank $4$, while the nodes correspond to quadrics of rank $3$. For a point $P \in \mathfrak{D}(\overline{C})(\FF_q)$, let us denote by $\overline{M}_P$ the corresponding $(5 \times 5)$-matrix and by $\overline{S}_P$ the corresponding quadric, both of which are well-defined up to a scalar. We define \[ \chi : \mathfrak{D}(\overline{C})(\FF_q) \rightarrow \{ 0, \pm 1 \} : P \mapsto \left\{ \begin{array}{ll} \chi_2(\pdet( \overline{M}_P) ) & \text{if $P$ is non-singular,} \\ 0 & \text{if $P$ is singular,} \\ \end{array} \right. \] where $\pdet$ denotes the pseudo-determinant, i.e.\ the product of the non-zero eigenvalues. If we let $S_{2,i} \in \mathcal{O}_K[X,Y,Z,W,V]$ be homogeneous polynomials that reduce to $\overline{S}_{2,i}$ modulo $p$, then by \cite[Ex.\,IV.5.5.3]{hartshorne} these define a genus $5$ curve $C \subset \PPK^4$ over $K$, thereby addressing (i) and (ii). But as mentioned in Section~\ref{section_firstfacts} we expect the $K$-gonality of $C$ to be typically $2g - 2 = 8$, which exceeds the $\FF_q$-gonality of $\overline{C}$: \begin{lemma} \label{genus5gonality} Let $\overline{C} / \FF_q$ be a non-hyperelliptic non-trigonal curve of genus $5$ and $\FF_q$-gonality $\gamma$, and assume that $q$ is odd. If there is a point $P \in \mathfrak{D}(\overline{C})(\FF_q)$ for which $\chi(P) \in \{0, 1 \}$ then $\gamma = 4$. If there does not exist such a point and $\# \overline{C}(\FF_{q^3}) > 0$ (which is guaranteed if $q > 3$) then $\gamma = 5$. If there does not exist such a point and $\# \overline{C}(\FF_{q^3}) = 0$ then $\gamma = 6$. \end{lemma} \begin{proof} By \cite[VI.Ex.\,F]{cornalba}${}^\dagger$ the geometric $g^1_4$'s are in correspondence with the singular quadrics containing $\overline{C}$. More precisely: \begin{itemize} \item Each rank $4$ quadric is a cone over $\PPq^1 \times \PPq^1$. By taking its span with the top, each line on $\PPq^1 \times \PPq^1$ gives rise to a plane intersecting the curve in $4$ points. By varying the line we obtain two $g^1_4$'s, one for each ruling of $\PPq^1 \times \PPq^1$. \item Each rank $3$ quadric is a cone with a $1$-dimensional top over a conic. By taking its span with the top, every point of the conic gives rise to a plane intersecting the curve in $4$ points. By varying the point we obtain a $g^1_4$. \end{itemize} There are no other geometric $g^1_4$'s. Over $\FF_q$, we see that there exists a rational $g^1_4$ precisely \begin{itemize} \item when there is a rank $4$ quadric that is defined over $\FF_q$, such that the base of the corresponding cone is $\FF_q$-isomorphic to $\PPq^1 \times \PPq^1$, or \item when there is a rank $3$ quadric that is defined over $\FF_q$. \end{itemize} In terms of the discriminant, this amounts to the existence of a $P \in \mathfrak{D}(\overline{C})$ for which $\chi(P) \in \{0, 1 \}$. So let us assume that $\gamma > 4$. If $\# \overline{C}(\FF_{q^3}) > 0$, which by the Serre-Weil bound is guaranteed for $q > 3$, then there exists an effective $\FF_q$-rational degree $3$ divisor $D$ on $\overline{C}$. Because our curve is non-trigonal we find $\dim |D| = 0$, so by the Riemann-Roch theorem we have that $\dim | K - D| = 1$, and because $\deg (K-D) = 5$ we conclude that there exists a rational $g^1_5$ on $\overline{C}$. (Remark: geometrically, this $g^1_5$ is cut out by the pencil of hyperplanes through the plane spanned by the support of $D$, taking into account multiplicities.) The argument can be reversed: if there exists a $g^1_5 \ni D$ for some $\FF_q$-rational divisor $D$ on $\overline{C}$, then Riemann-Roch implies that $|K-D|$ is non-empty, yielding an effective divisor of degree $3$, and in particular $\# \overline{C}(\FF_{q^3}) > 0$. So it remains to prove that if $\# \overline{C}(\FF_{q^3}) = 0$ then there exists a rational $g^1_6$. We make a case distinction: \begin{itemize} \item If $\# \overline{C}(\FF_{q^2}) > 0$ then there exists a rational effective divisor $D$ of degree $2$, and Riemann-Roch implies that $ \dim |K-D| = 2$, yielding the requested rational $g^1_6$ (even a $g^2_6$, in fact). \item If $\# \overline{C}(\FF_{q^2}) = 0$ then at least $\# \overline{C}(\FF_{q^6}) > 0$ by the Weil bound, so there exists a rational effective divisor $D$ of degree $6$. Then $K-D$ is of degree $2$ and by our assumption $|K - D|$ is empty. But then Riemann-Roch asserts that $\dim |D| = 1$, and we have our rational $g^1_6$. \end{itemize} This ends the proof. \end{proof} \begin{remark} \label{remarkjeroen} If $q$ is large enough then it is very likely that $\mathfrak{D}(\overline{C})(\FF_q)$ will contain a point $P$ with $\chi(P) \in \{0,1\}$, and therefore that $\gamma = 4$; a more precise discussion is given below. There do however exist counterexamples for every value of $q$, as is shown by a construction explained in an \texttt{arXiv} version of this paper (\texttt{1605.02162v2}). \end{remark} \begin{remark} \label{remarkgon6} We do not know whether gonality $6$ actually occurs or not. For this one needs to verify the existence of a non-trigonal genus five curve over $\FF_3$ which is pointless over $\FF_{27}$ and whose discriminant curve has no $\FF_3$-rational points $P$ for which $\chi(P) \in \{0,1\}$. We ran a naive brute-force search for such curves, but did not manage to find one. \end{remark} If $q$ is large enough and $\mathfrak{D}(\overline{C})$ has at least one (geometrically) irreducible component that is defined over $\FF_q$, then a point $P \in \mathfrak{D}(\overline{C})(\FF_q)$ with $\chi(P) \in \{0,1\}$ exists and therefore $\overline{C}$ has $\FF_q$-gonality $4$. To state a precise bound on $q$, let us analyze the (generic) setting where $\mathfrak{D}(\overline{C})$ is a non-singular plane quintic. In this case the `good' points $P$ are in a natural correspondence with pairs of $\FF_q$-points on an unramified double cover of $\mathfrak{D}(\overline{C})$; we refer to~\cite[\S2(c)]{beauville} and the references therein for more background. By Riemann-Hurwitz this cover is of genus $11$, for which the lower Serre-Weil bound is positive from $q > 467$ on. The presence of singularities or of absolutely irreducible $\FF_q$-components of lower degree can be studied in a similar way and leads to smaller bounds. There are two possible ways in which $\mathfrak{D}(\overline{C})$ does \emph{not} have an absolutely irreducible $\FF_q$-component: either it could decompose into two conjugate lines over $\FF_{q^2}$ and three conjugate lines over $\FF_{q^3}$, or it could decompose into five conjugate lines over $\FF_{q^5}$. But in the former case the $\FF_q$-rational point $P$ of intersection of the two $\FF_{q^2}$-lines satisfies $\chi(P) = 0$, so here too our curve $\overline{C}$ has $\FF_q$-gonality $4$. Thus the only remaining case is that of five conjugate lines over $\FF_{q^5}$, which can occur for every value of $q$. Let us now address Problem~\ref{liftingproblem}. First assume that $\gamma = 4$, i.e.\ that there exists a point $P \in \mathfrak{D}(\overline{C})(\FF_q)$ with $\chi(P) \in \{0, 1 \}$. This can be decided quickly: if $q \leq 467$ then one can proceed by exhaustive search, while if $q > 467$ it is sufficient to verify whether or not $\mathfrak{D}(\overline{C})$ decomposes into five conjugate lines. To \emph{find} such a point, we first look for $\FF_q$-rational singularities of $\mathfrak{D}(\overline{C})$: these are exactly the points $P$ for which $\chi(P) = 0$. If no such singularities exist then we look for a point $P \in \mathfrak{D}(\overline{C})(\FF_q)$ for which $\chi(P) = 1$ by trial and error. Once our point has been found, we proceed as follows. \paragraph*{\underline{$\chi(P) = 0$}} In this case $P$ corresponds to a rank $3$ quadric, which using a linear change of variables we can assume to be in the standard form $\overline{S} = ZW - X^2$. Choose homogeneous \begin{wrapfigure}{r}{5cm} \includegraphics[width=4.9cm]{genus5_projection2.pdf} \vspace{0.3cm} \end{wrapfigure} quadratic polynomials \[ \overline{S}_2, \overline{S}_2' \in \FF_q[X,Y,Z,W,V] \] that along with $\overline{S}$ form a basis of $\mathcal{I}_2(\overline{C})$. (In practice one can usually take $\overline{S}_2 = \overline{S}_{2,1}$ and $\overline{S}'_2 = \overline{S}_{2,2}$.) Let $S_2, S_2' \in \mathcal{O}_K[X,Y,Z,W,V]$ be quadrics that reduce to $\overline{S}_2, \overline{S}_2'$ modulo $p$. Along with \[ S = ZW - X^2 \in \mathcal{O}_K[X,Y,Z,W,V] \] these cut out a canonical genus $5$ curve $C \subset \PPK^4$. We view the quadric defined by $S$ as a cone over the weighted projective plane $\PPK(1,2,1)$ with top $(0:0:0:0:1)$. Our curve is then an intersection of two quadrics inside this cone, and by projecting from the top we obtain a curve $C^\mathrm{pr}$ in $\PPK(1,2,1)$. In terms of equations this amounts to eliminating $V$ from $S_2$ and $S_2'$ by taking the resultant $S_2^\mathrm{pr} := \text{res}_V (S_2,S_2')$, which is a homogeneous quartic. Now as in \eqref{genus4conic} we further eliminate the variable $W$ to end up with $S_2^\mathrm{pr}(XZ,YZ,Z^2,X^2)$. After dehomogenizing with respect to $Z$, renaming $X \leftarrow x$ and $Y \leftarrow y$ and rescaling if needed, we obtain an affine equation $ f = y^4 + f_2(x)y^3 + f_4(x)y^2 + f_6(x)y + f_8(x)$, with $f_i \in \mathcal{O}_K[x]$ of degree at most $i$. Its Newton polygon is contained in (and typically equals): \begin{center} \polfig{genus5_conical.pdf}{4}{2.4}{$\Delta_{5,0}^{0}$} \end{center} Note that Baker's genus bound reads $9$, so this exceeds the geometric genus by $4$. Thus it was important to lift $\overline{S}_2, \overline{S}_2'$ before projecting. \paragraph*{\underline{$\chi(P) = 1$}} In this case $P$ corresponds to a rank $4$ quadric whose pseudo-determinant is a square. Using a linear change of variables we can assume it to be in the standard form $\overline{S} = XY - ZW$, which is a cone over $\PPq^1 \times \PPq^1$ with top $(0:0:0:0:1)$. Choose homogeneous quadratic polynomials \[ \overline{S}_2, \overline{S}_2' \in \FF_q[X,Y,Z,W,V] \] that along with $\overline{S}$ form a basis of $\mathcal{I}_2(\overline{C})$. (In practice one can usually take $\overline{S}_2 = \overline{S}_{2,1}$ and $\overline{S}'_2 = \overline{S}_{2,2}$.) Let $S_2, S_2' \in \mathcal{O}_K[X,Y,Z,W,V]$ be quadrics that reduce to $\overline{S}_2, \overline{S}_2'$ modulo $p$. Along with \[ S = XY - ZW \in \mathcal{O}_K[X,Y,Z,W,V] \] these cut out a canonical genus $5$ curve $C \subset \PPK^4$, which can be viewed as an intersection of two quadrics inside a cone over $\PPK^1 \times \PPK^1$ with top $(0:0:0:0:1)$. We first project from \begin{wrapfigure}{l}{6cm} \includegraphics[width=5.9cm]{genus5_projection.pdf} \vspace{-1.3cm} \end{wrapfigure} this top, to obtain a curve $C^\mathrm{pr}$ in $\PPK^1 \times \PPK^1$. In terms of equations, this amounts to eliminating $V$ from $S_2$ and $S_2'$ by taking the resultant $S_2^\mathrm{pr} := \text{res}_V (S_2,S_2')$, which is a homogeneous quartic. As in the discussion following (\ref{segre}), we conclude that $C^\mathrm{pr} $ is defined by the bihomogeneous polynomial \begin{equation} \label{genus5bihomogeneous} S_2^\mathrm{pr}(X_0W_0,Y_0Z_0,Z_0W_0,X_0Y_0) \end{equation} of bidegree $(4,4)$. Let $f \in \mathcal{O}_K[x,y]$ be the polynomial obtained from (\ref{genus5bihomogeneous}) by dehomogenizing with respect to $Z_0$ and $W_0$ and by renaming $X_0 \leftarrow x$ and $Y_0 \leftarrow y$. Then the Newton polygon of $f$ is contained in (and typically equals): \begin{center} \polfig{genus5_bideg44.pdf}{2.4}{2.4}{$\Delta_{5,1}^0$} \end{center} In particular $\deg_y f = 4$, as wanted. Here again Baker's bound reads $9$, which exceeds the geometric genus by $4$. \paragraph*{\underline{$\forall P \in \mathfrak{D}(\overline{C})(\FF_q): \chi(P) = -1$}} This case is very rare, so we will be rather sketchy here. If $\gamma = 6$ then we do not know how to address Problem~\ref{liftingproblem}, which for point counting purposes is not an issue because this could only occur when $q = 3$. If $\gamma = 5$ then one can try to address Problem~\ref{liftingproblem} by following the proof of Lemma~\ref{genus5gonality}, similar to the way we treated the $ \chi(\det \overline{M}_2) = -1$ case in genus four. For instance this works as follows if $\overline{C}(\FF_q)$ has at least three non-collinear points, which is guaranteed as soon as $\#\overline{C}(\FF_q) \geq 4$, which in turn is guaranteed if $q > 101$ by the Serre-Weil bound. Apply a transformation of $\PPq^4$ to position these points at $(0:1:0:0:0)$, $(0:0:0:1:0)$ and $(0:0:0:0:1)$, so that the plane they span is $X = Z = 0$. This implies that the defining quadrics have no terms in $Y^2$, $W^2$ and $V^2$, a property which is of course easily preserved when lifting to $\mathcal{O}_K[X,Y,Z,W,V]$, resulting in a curve $C \subset \PPK^4$ again passing through $(0:1:0:0:0)$, $(0:0:0:1:0)$ and $(0:0:0:0:1)$. Eliminating $W$ and $V$, which geometrically amounts to projecting from the line $X = Y = Z = 0$, results in a sextic in $\PPK^2 = \proj K[X,Y,Z]$ passing through $(0:1:0)$ in a non-singular way (otherwise the pencil of lines through that point would cut out a $K$-rational $g^1_4$). We can therefore apply a projective transformation that maps the corresponding tangent line to infinity, while keeping the point at $(0:1:0)$. Then by dehomogenizing with respect to $Z$ and renaming $X \leftarrow x$ and $Y \leftarrow y$ we end up with a polynomial $f \in \mathcal{O}_K[x,y]$ whose Newton polygon is contained in (and typically equals): \begin{center} \polfig{genus5_gon5.pdf}{3.2}{2.8}{$\Delta^5_5$} \end{center} We omit a further discussion.\\ \noindent \varhrulefill[0.4mm] \vspace{-0.3cm} \begin{algo} \label{algorithm_genus5} Lifting curves of genus $5$: basic solution \vspace{-0.2cm} \noindent \varhrulefill[0.4mm] \noindent \textbf{Input:} non-hyperelliptic genus $5$ curve $\overline{C}/\FF_q$ of $\FF_q$-gonality $\gamma \leq 5$ \noindent \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad or of $\FF_q$-gonality $\gamma = 5$ and $\# \overline{C}(\FF_q) \geq 4$ \noindent \textbf{Output:} lift $f \in \mathcal{O}_K[x,y]$ satisfying (i), (ii), (iii) that is supported \noindent \qquad \qquad \qquad $\bullet$ on $\Delta_{5,\text{trig}}^{0,0}$ if $\overline{C}$ is trigonal, or else \noindent \qquad \qquad \qquad $\bullet$ on $\Delta_{5,0}^0$ if $\exists P \in \mathfrak{D}(\overline{C}): \chi(P) = 0$, or else \noindent \qquad \qquad \qquad $\bullet$ on $\Delta_{5,1}^0$ if $\exists P \in \mathfrak{D}(\overline{C}): \chi(P) = 1$, or else \noindent \qquad \qquad \qquad $\bullet$ on $\Delta_5^5$ \vspace{-0.2cm} \noindent \varhrulefill[0.4mm] \noindent \small \phantom{0}1 \normalsize: $\overline{C} \gets \text{CanonicalImage}(\overline{C})$ in $\PPq^4 = \proj \FF_q[X,Y,Z,W,V]$ \noindent \small \phantom{0}2 \normalsize: \textbf{if} $\text{Ideal}(\overline{C})$ is generated by quadrics \textbf{then} \noindent \small \phantom{0}3 \normalsize: \quad $\overline{S}_{2,1}, \overline{S}_{2,2}, \overline{S}_{2,3} \gets \text{quadrics that generate $\text{Ideal}(\overline{C})$}$ \noindent \small \phantom{0}4 \normalsize: \quad $\overline{M}_i \gets \text{Matrix}(\overline{S}_{2,i})$ ($i=1,2,3$) \noindent \small \phantom{0}5 \normalsize: \quad $\mathfrak{D}(\overline{C}) \gets$ curve in $\PPq^2 = \proj \FF_q[\lambda_1, \lambda_2, \lambda_3]$ defined by $\det(\lambda_1 \overline{M}_1 + \lambda_2 \overline{M}_2 + \lambda_3 \overline{M}_3)$ \noindent \small \phantom{0}6 \normalsize: \quad \textbf{if} $q \leq 467$ and $\forall P \in \mathfrak{D}(\overline{C})(\FF_q): \chi(P) = -1 $ (verified exhaustively) \noindent \small \phantom{0}7 \normalsize: \quad \quad \textbf{or} $q > 467$ and $\mathfrak{D}(\overline{C})$ decomposes into five conjugate lines \textbf{then} \noindent \small \phantom{0}8 \normalsize: \quad \quad goodpoints $\gets$ false \noindent \small \phantom{0}9 \normalsize: \quad \textbf{else} \noindent \small 10 \normalsize: \quad \quad goodpoints $\gets$ true \noindent \small 11 \normalsize: \quad \textbf{if} goodpoints \textbf{then} \noindent \small 12 \normalsize: \quad \quad \textbf{if} $\mathfrak{D}(\overline{C})$ has $\FF_q$-rational singular point $P$ \textbf{then} \noindent \small 13 \normalsize: \quad \quad \quad $\overline{S}_2, \overline{S}_2' \gets \text{quadrics such that } \langle \overline{S}_P, \overline{S}_2, \overline{S}_2' \rangle_{\FF_q} = \langle \overline{S}_{2,1}, \overline{S}_{2,2}, \overline{S}_{2,3} \rangle_{\FF_q}$ \noindent \small 14 \normalsize: \quad \quad \quad apply automorphism of $\PPq^4$ transforming $\overline{S}_P$ into $WZ - X^2$ \noindent \small 15 \normalsize: \quad \quad \quad $S_2 \gets \text{NaiveLift}(\overline{S}_2)$; $S_2' \gets \text{NaiveLift}(\overline{S}_2')$; $S_2^\text{pr} \gets \text{res}_V(S_2, S_2')$ \noindent \small 16 \normalsize: \quad \quad \quad \textbf{return} Dehomogenization${}_Z(S_2^\text{pr}(XZ,YZ,Z^2,X^2))$ \noindent \small 17 \normalsize: \quad \quad \textbf{else} \noindent \small 18 \normalsize: \quad \quad \quad \textbf{repeat} $P \gets \text{Random}(\mathfrak{D}(\overline{C})(\FF_q))$ \textbf{until} $\chi(P) = 1$ \noindent \small 19 \normalsize: \quad \quad \quad $\overline{S}_2, \overline{S}_2' \gets \text{quadrics such that } \langle \overline{S}_P, \overline{S}_2, \overline{S}_2' \rangle_{\FF_q} = \langle \overline{S}_{2,1}, \overline{S}_{2,2}, \overline{S}_{2,3} \rangle_{\FF_q}$ \noindent \small 20 \normalsize: \quad \quad \quad apply automorphism of $\PPq^4$ transforming $\overline{S}_P$ into $XY - ZW$ \noindent \small 21 \normalsize: \quad \quad \quad $S_2 \gets \text{NaiveLift}(\overline{S}_2)$; $S_2' \gets \text{NaiveLift}(\overline{S}_2')$; $S_2^\text{pr} \gets \text{res}_V(S_2, S_2')$ \noindent \small 22 \normalsize: \quad \quad \quad \textbf{return} Dehomogenization${}_Z(S_2^\text{pr}(XZ,YZ,Z^2,XY))$ \noindent \small 23 \normalsize: \quad \textbf{else} \noindent \small 24 \normalsize: \quad \quad $P_1, P_2, P_3 \leftarrow$ distinct random points of $\overline{C}(\FF_q)$ \noindent \small 25 \normalsize: \quad \quad apply automorphism of $\PPq^4$ sending $P_1$, $P_2$, $P_3$ to $(0:1:0:0:0)$, \noindent \small \phantom{25} \normalsize \hfill $(0:0:0:1:0)$, $(0:0:0:0:1)$ \noindent \small 26 \normalsize: \quad \quad $S_{2,i} \leftarrow \text{NaiveLift}(\overline{S}_{2,i})$ $(i = 1,2,3)$ \noindent \small 27 \normalsize: \quad \quad $C^\text{pr} \leftarrow \text{res}_{W,V}(S_{2,1},S_{2,2},S_{2,3})$ \noindent \small 28 \normalsize: \quad \quad apply automorphism of $\PPK^2$ transforming $T_{(0:1:0)}(C^\text{pr})$ into $Z=0$ \noindent \small 29 \normalsize: \quad \quad \textbf{return} Dehomogenization${}_Z(C^\text{pr})$ \noindent \small 30 \normalsize: \textbf{else} \noindent \small 31 \normalsize: \quad apply automorphism of $\PPq^4$ transforming space of quadrics in $\text{Ideal}(\overline{C})$ to \noindent \small \phantom{31} \normalsize \hfill $\langle X^2 - ZV, XY - ZW, XW - YV \rangle_{\FF_q}$ \noindent \small 32 \normalsize: \quad $\overline{S}_{3,1}, \overline{S}_{3,2} \gets \text{cubics that along with quadrics generate $\text{Ideal}(\overline{C})$}$ \noindent \small 33 \normalsize: \quad $\overline{f}_i \gets \text{Dehomogenization}_{Z}(\overline{S}_{3,i}(XZ,YZ,Z^2,X^2,XY))$ ($i=1,2$) \noindent \small 34 \normalsize: \quad \textbf{return} NaiveLift($\gcd(\overline{f}_1, \overline{f}_2)$) \vspace{-0.2cm} \noindent \varhrulefill[0.4mm] \end{algo} \subsubsection{Optimizations} \label{optim_genus5} \paragraph*{Trigonal case} By applying (\ref{mademonic}) to a polynomial with Newton polygon $\Delta_{5,\text{trig}}^{0,0}$ we end up with a polynomial $f \in \mathcal{O}_K[x,y]$ that is monic in $y$ and that has degree $5 + (\gamma - 1)2 = 9$ in $x$. This can be improved as soon as our curve $\overline{C} / \FF_q$ has a rational point $P$, which is guaranteed if $q > 89$ by the Serre-Weil bound (probably this bound is not optimal). The treatment below is very similar to the genus four case where $\chi_2(\det \overline{M}_2) = 0$, as elaborated in Section~\ref{optim_genus4}. The role of $\PPq(1,2,1)$ is now played by our scroll $\overline{S}(1,2)$. Recall that the latter is a ruled surface spanned by a line (the directrix) and a conic that are being parameterized simultaneously. Using an automorphism of $\overline{S}(1,2)$ we can position $P$ at the point at infinity of the spanning conic, in such a way that the curve and the conic meet at $P$ with multiplicity at least two. This results in a Newton polygon that is contained in (and typically equals): \begin{center} \polfig{genus5_trigonal_tangenttoconic.pdf}{2.4}{2}{$\Delta_{5,\text{trig}}^{0,1}$} \end{center} See Remark~\ref{autsofS12} below for how this can be done in practice. Here an application of (\ref{mademonic}) typically results in $\deg_x f = 3 + (\gamma - 1)2 = 7$. There are two caveats here: our curve might exceptionally be tangent at $P$ to a rule of the scroll, in which case it is impossible to make it tangent to the conic at that point. Or worse: our point $P$ might lie on the directrix, in which case it is just impossible to move it to the spanning conic. In these cases one can most likely just retry with another $P$. But in fact these two situations are better, as explained in Remark~\ref{genus5trigonalcaveatremark} below. \begin{remark}~\label{autsofS12} The automorphisms of $\overline{S}(1,2)$ can be applied directly to $\overline{f}$. They correspond to \begin{itemize} \item substituting $y \leftarrow \overline{a} y + \overline{b} x + \overline{c}$ and $x \leftarrow \overline{a}' x + \overline{b}'$ in $ \overline{f}$ for some $\overline{a},\overline{a}' \in \FF_q^\ast$ and $\overline{b},\overline{b}',\overline{c} \in \FF_q$, \item exchanging the rule at infinity for the $y$-axis by replacing $\overline{f}$ by $x^5 \overline{f}(x^{-1},x^{-1}y)$, \end{itemize} or to a composition of both. For instance imagine that an affine point $P = (\overline{a},\overline{b})$ was found with a non-vertical tangent line. Then $\overline{f} \leftarrow \overline{f}(x + \overline{a}, y + \overline{b})$ translates this point to the origin, at which the tangent line becomes of the form $y = \overline{c} x$. Substituting $\overline{f} \leftarrow \overline{f}(x,y + \overline{c}x)$ positions this line horizontally, and finally replacing $\overline{f}$ by $x^5 \overline{f}(x^{-1},x^{-1}y)$ results in a polynomial with Newton polygon contained in $\Delta_{5,\text{trig}}^{0,1}$. \end{remark} \begin{remark}[non-generic optimizations] \label{genus5trigonalcaveatremark} As for the first caveat, if $\overline{C}$ turns out to be tangent at $P$ to one of the rules of the scroll then moving $P$ to the point at infinity of the spanning conic results in a Newton polygon that is contained in (and typically equals): \begin{center} \polfig{genus5_trigonal_ramification.pdf}{2.4}{2}{$\Delta_{5,\text{trig}}^{0,2}$} \end{center} Even though this yields $\deg_x f = 4 + (\gamma - 1)2 = 8$, the corresponding point count is slightly faster. Such a $P$ will exist if and only if the ramification scheme of $(x,y) \mapsto x$ has an $\FF_q$-rational point. Following the heuristics from Remark~\ref{genus3flexremark} we expect that this works in about $1 - 1/e$ of the cases. As for the second caveat, if $P$ is a point on the directrix of the scroll, we can move it to its point at infinity. This results in a Newton polygon that is contained in (and typically equals) the left polygon below. \begin{center} \polfig{genus5_trigonal_rationalpointondirectrix.pdf}{2.8}{2}{$\Delta_{5,\text{trig}}^{1,0}$} \polfig{genus5_trigonal_rationalpointondirectrixandonconic.pdf}{2.4}{2}{$\Delta_{5,\text{trig}}^{1,1}$} \end{center} This again gives us $\deg_x f = 5 + (\gamma - 1)1 = 7$, but here too the corresponding point count is faster. As explained in an \texttt{arXiv} version of our paper (\texttt{1605.02162v2}), the probability of being able to realize this polygon is about $1/2$, and one can even end up inside the right polygon with a probability of about $3/8$, yielding $\deg_x f = 4 + (\gamma - 1)1 = 6$. \end{remark} \paragraph*{Non-trigonal case} For point counting purposes it is advantageous to give preference to the case $\chi(P) = 0$, i.e.\ to use a singular point $P \in \mathfrak{D}(\overline{C})(\FF_q)$ if it exists. Some optimizations over the corresponding discussion in Section~\ref{optim_genus5} are possible, for instance generically one can replace $\Delta_{5,0}^0$ with the left polygon below: \begin{center} \polfig{genus5_conical_rat.pdf}{3.2}{2.4}{$\Delta_{5,0}^{1}$} \quad \polfig{genus5_conical_sing.pdf}{3.2}{2.4}{$\Delta_{5,0}^{2}$} \end{center} With an estimated probability of about $1 - (3/8)^\rho$ one can even end up inside the right polygon. Here $10 \geq \rho \geq 1$ denotes the number of singular points $P \in \mathfrak{D}(\overline{C})(\FF_q)$. We will spend a few more words on this in Remark~\ref{genus5conicaloptimizationremark} below, after having discussed the $\chi(P) = 1$ case. However usually such a singular $\FF_q$-point $P$ does not exist, i.e.\ $\rho = 0$. More precisely we expect that the proportion of curves for which $\mathfrak{D}(\overline{C})$ is a smooth plane quintic tends to $1$ as $q \rightarrow \infty$. Indeed, in terms of moduli the locus of (non-hyperelliptic, non-trigonal) genus five curves having a singular point on its discriminant curve has codimension one; see \cite{teixidor,looijenga}${}^\dagger$. For this reason we will focus our attention on the case $\chi(P) = 1$, and leave it to the interested reader to elaborate the remaining details. As for the case $\chi(P) = 1$, note that by applying (\ref{mademonic}) to a polynomial with Newton polygon $\Delta_{5,1}^{0}$ one ends up with a polynomial that is monic in $y$ and that has degree $4 + (\gamma - 1)4 = 16$ in $x$. With near certainty this can be reduced to $10$, as we will explain now. The idea is to exploit the fact that in practice the discriminant curve $\mathfrak{D}(\overline{C})$ contains enough $\FF_q$-rational points for there to be considerable freedom in choosing a $P$ for which $\chi(P) = 1$. We want to select a suited such $P$, by which we mean the following. As before, assume that an automorphism of $\PPq^4$ has been applied such that $\overline{S}_P = \overline{S} = XY - ZW$ and let $\overline{S}_2, \ \overline{S}_2' \in \FF_q[X,Y,Z,W,V]$ be quadrics that along with $\overline{S}$ cut out our curve $\overline{C}$. Now suppose that we would have projected $\overline{C}$ from the point $(0:0:0:0:1)$ \emph{before} lifting to characteristic $0$. Then we would have ended up with a curve $\overline{C}^\text{pr}$ in \[ \PPq^1 \times \PPq^1 : \overline{S} = 0 \quad \text{in } \PPq^3 = \proj \FF_q[X,Y,Z,W]. \] This curve has arithmetic genus $9$, because in fact that is what Baker's bound measures. Since the excess in genus is $9 - 5 = 4$ we typically expect there to be $4$ nodes. Our point $P$ is `suited' as soon as one of the singular points $Q$ of $\overline{C}^\text{pr}$ is $\FF_q$-rational. If $P$ is not suited, i.e.\ if there is no such $\FF_q$-rational singularity, then we retry with another $P \in \mathfrak{D}(\overline{C})(\FF_q)$ for which $\chi(P) = 1$. Heuristically we estimate the probability of success to be about $5/8$. In particular if there are enough candidates for $P$ available, we should end up being successful very quickly with overwhelming probability. Given such a singular point $Q \in \overline{C}^\text{pr}(\FF_q) \subset \PPq^1 \times \PPq^1$ we can move it to the point $((1:0),(1:0))$, similar to what we did in the genus $4$ case where $\chi_2(\det \overline{M}_2) = 1$. In terms of the coordinates $X,Y,Z,W$ of the ambient space $\PPq^3$ this means moving the point to $(0:0:0:1)$. Let's say this amounts to the change of variables \[ \begin{pmatrix} X \\ Y \\ Z \\ W \\ \end{pmatrix} \leftarrow A \cdot \begin{pmatrix} X \\ Y \\ Z \\ W \\ \end{pmatrix} \] where $A \in \FF_q^{4 \times 4}$. Then we can apply the change of variables \[ \begin{pmatrix} X \\ Y \\ Z \\ W \\ V \\ \end{pmatrix} \leftarrow \begin{pmatrix} A & 0 \\ 0 & 1 \\ \end{pmatrix} \cdot \begin{pmatrix} X \\ Y \\ Z \\ W \\ V \\ \end{pmatrix} \] directly to the defining polynomials $\overline{S}, \overline{S}_1, \overline{S}_2$ of $\overline{C}$ to obtain the curve $\overline{C}_\text{tr}$ cut out by \[ \overline{S} = XY - ZW, \ \overline{S}_{2,\text{tr}}, \ \overline{S}_{2',\text{tr}} \in \FF_q[X,Y,Z,W,V]. \] Indeed the transformation affects $\overline{S}$ at most through multiplication by a non-zero scalar. If we would now project from $(0:0:0:0:1)$ as before, we would end up with a curve $\overline{C}_\text{tr}^\text{pr} \subset \PPq^1 \times \PPq^1$ having a singularity at $((1:0),(1:0))$, which is at $(0:0:0:1)$ in the coordinates $X,Y,Z,W$. Recall that inside $\PPq^4$ we view $\overline{S}$ as the defining equation of a cone over $\PPq^1 \times \PPq^1$ with top $(0:0:0:0:1)$. The fact that the projected curve has a singularity at $(0:0:0:1)$ implies that the line $X = Y = Z = 0$ meets the curve at least twice, counting multiplicities (these points of intersection need not be $\FF_q$-rational). Thus after multiplying $\overline{S}_{2,\text{tr}}$ by a scalar if needed we find that \[ \overline{S}_{2,\text{tr}}(0,0,0,W,V) = \overline{S}'_{2,\text{tr}}(0,0,0,W,V) = \overline{a}W^2 + \overline{b}WV + \overline{c}V^2 \] for some $\overline{a}, \overline{b}, \overline{c} \in \FF_q$. Now lift $\overline{S}_{2,\text{tr}}$ and $\overline{S}_{2',\text{tr}}$ in a consistent way, in order to obtain quadrics $S_2, S_2' \in \mathcal{O}_K[X,Y,Z,W,V]$ satisfying \begin{equation*} S_2(0,0,0,W,V) = S_2'(0,0,0,W,V) = aW^2 + bWV + cV^2 \end{equation*} for elements $a,b,c \in \mathcal{O}_K$ that reduce to $\overline{a}, \overline{b}, \overline{c}$ modulo $p$. If we then proceed as before, we end up with a curve $C^\text{pr}$ in $\PPK^1 \times \PPK^1$ having a singularity at $((1:0),(1:0))$. This eventually results in a defining polynomial $f \in \mathcal{O}_K[x,y]$ whose Newton polygon is contained in (and typically equals): \begin{center} \polfig{genus5_bideg44_chop.pdf}{2.4}{2.4}{$\Delta_{5,1}^2$} \end{center} Applying \eqref{mademonic} to $f$ results in a polynomial having degree at most $4 + (\gamma - 1)2 = 10$ in $x$, as announced. \begin{comment} \begin{remark}[potential non-generic optimizations] Geometrically, the suited points $P$ form a one-dimensional family, which seems to leave us with an unexploited degree of freedom. For instance, it is natural to guess that for a zero-dimensional subscheme of points $P \in \mathfrak{D}(\overline{C})$ the corresponding curve $\overline{C}^\text{pr}$ has a singular point $Q$ with a horizontal (or vertical) branch. Then heuristically, with a chance of about $1 - 1/e \approx 63.2 \%$ this works out in an $\FF_q$-rational way, eventually leading to a lift $f \in \mathcal{O}_K[x,y]$ whose Newton polygon is contained in (and typically equals): \begin{center} \polfig{genus5_bideg44_chop3.pdf}{2.4}{2.4}{$\Delta_{5,1}^3$} \end{center} This eventually leads to degree at most $8$ in $x$. Unfortunately we could not come up with an efficient way of deciding the occurrence of this event. \end{remark} \end{comment} \begin{remark} \label{genus5conicaloptimizationremark} The same ideas apply to the case $\chi(P) = 0$, with the role of $\PPq^1 \times \PPq^1$ replaced by $\PPq(1,2,1)$. If the projection $\overline{C}^\text{pr}$ of $\overline{C}$ to $\PPq(1,2,1)$ has an $\FF_q$-rational singular point, then it can be arranged that the resulting curve $C^\text{pr} \subset \PPK(1,2,1)$ has a singularity at $(1:0:0)$, eventually yielding a polynomial $f \in \mathcal{O}_K[x,y]$ whose Newton polygon is contained in $\Delta^2_{5,0}$. As in the $\chi(P) = 1$ case we expect that the probability that this works out for a given $P$ is about $5/8$. But unlike the $\chi(P) = 1$ case there is not much freedom to retry in the case of failure: we have $\rho$ chances only. This explains our expected probability of $1 - (3/8)^\rho$ to be able to realize $\Delta^2_{5,0}$. If the foregoing fails every time then we can play the same game with a non-singular $\FF_q$-rational point $Q$ on $\overline{C}^\text{pr}$ (guaranteed to exist if $q > 89$ because then $\overline{C}$ has an $\FF_q$-rational point by the Serre-Weil bound). The result is a curve $C^\text{pr} \subset \PPK(1,2,1)$ containing the point $(1:0:0)$. We can then use an automorphism of $\PPK(1,2,1)$ to make $C^\text{pr}$ tangent to $X=0$ at that point (unless the tangent line is vertical, in which case we simply retry with another $Q$). This is done similarly to the way we handled the case $\chi_2(\det \overline{M}_2) = 0$ in Section~\ref{optim_genus4}: see in particular Remark~\ref{autsofP121}. In this way one ends up in $\Delta^1_{5,0}$. \end{remark} \begin{comment} \paragraph*{\underline{$\chi(P) = 0$}} By applying (\ref{mademonic}) to a polynomial with Newton polygon $\Delta_{5,0}^{0}$ one ends up with a polynomial that is monic in $y$ and that has degree $8$ in $x$. This can most likely be improved. Namely, suppose that we would have projected our curve $\overline{C}$ defined by \begin{equation} \label{defeqsofCchi0} \overline{S} = WZ - X^2, \ \overline{S}_2, \ \overline{S}_2' \in \FF_q[X,Y,Z,W,V] \end{equation} from the point $(0:0:0:0:1)$ \emph{before} lifting to characteristic $0$. Then we would have ended up with a curve $\overline{C}^\text{pr}$ in \[ \PPq(1,2,1) : \overline{S} = 0 \quad \text{in } \PPq^3 = \proj \FF_q[X,Y,Z,W]. \] We claim that the degree in $x$ can be reduced as soon as this projected curve has an $\FF_q$-rational point $Q$, which it necessarily does if $\overline{C}$ does (guaranteed if $q > 89$ by the Serre-Weil bound). This is achieved as follows: \begin{itemize} \item First assume that $Q$ is non-singular. It can be moved to the point $(1:0:0)$ of our weighted projective plane $\PPq(1,2,1)$, which in terms of the coordinates $X,Y,Z,W$ means moving it to $(0:0:0:1)$. Let's say this amounts to the change of variables \[ \begin{pmatrix} X \\ Y \\ Z \\ W \\ \end{pmatrix} \leftarrow A \cdot \begin{pmatrix} X \\ Y \\ Z \\ W \\ \end{pmatrix} \] where $A \in \FF_q^{4 \times 4}$. Then we can apply the change of variables \[ \begin{pmatrix} X \\ Y \\ Z \\ W \\ V \\ \end{pmatrix} \leftarrow \begin{pmatrix} A & 0 \\ 0 & 1 \\ \end{pmatrix} \cdot \begin{pmatrix} X \\ Y \\ Z \\ W \\ V \\ \end{pmatrix} \] directly on the defining polynomials \eqref{defeqsofCchi0} of $\overline{C}$ to obtain the curve $\overline{C}_\text{tr}$ cut out by \[ \overline{S} = WZ - X^2, \ \overline{S}_{2,\text{tr}}, \ \overline{S}_{2',\text{tr}} \in \FF_q[X,Y,Z,W,V]. \] Indeed the transformation affects $\overline{S}$ at most through multiplication by a non-zero scalar. If we would now project from $(0:0:0:0:1)$ as before, we would end up with a curve $\overline{C}_\text{tr}^\text{pr} \subset \PPq(1,2,1)$ passing through $(1:0:0)$, which is through $(0:0:0:1)$ in the coordinates $X,Y,Z,W$. Recall that inside $\PPq^4$ we view $\overline{S}$ as the defining equation of a cone over $\PPq(1,2,1)$ with top $(0:0:0:0:1)$. The fact that the projected curve passes through $(0:0:0:1)$ in a non-singular way implies that the line $X = Y = Z = 0$ meets the curve exactly once, counting multiplicities. Thus after multiplying by a scalar if needed, we find that \[ \left\{ \begin{array}{l} \overline{S}_{2,\text{tr}}(0,0,0,W,V) = ( \overline{a}W + \overline{b}V)(\overline{c}W + \overline{d}V) \\ \overline{S}'_{2,\text{tr}}(0,0,0,W,V) = (\overline{a}W + \overline{b}V)(\overline{c}'W + \overline{d}'V) \end{array} \right. \] for some $\overline{a}, \overline{b}, \overline{c}, \overline{d}, \overline{c}', \overline{d}' \in \FF_q$. Now lift $\overline{S}_{2,\text{tr}}$ and $\overline{S}_{2',\text{tr}}$ to obtain quadrics $S_2, S_2' \in \mathcal{O}_K[X,Y,Z,W,V]$ satisfying \begin{equation} \label{factorbyfactor} \left\{ \begin{array}{l} S_2(0,0,0,W,V) = (aW + bV)(cW + dV) \\ S_2'(0,0,0,W,V) = (aW + bV)(c'W + d'V) \end{array} \right. \end{equation} for elements $a,b,c,d,c',d' \in \mathcal{O}_K$ that reduce to $\overline{a}, \overline{b}, \overline{c}, \overline{d}, \overline{c}', \overline{d}'$ modulo $p$ (this determines how to lift the terms at $W^2, WV, V^2$; the other terms can be lifted naively). If we then proceed as before, we end up with a curve $C^\text{pr}$ in $\PPK(1,2,1)$ that passes through the $K$-rational point $(1:0:0)$. As in the genus four case $\chi_2(\det \overline{M}_2) = 0$ we can now apply an automorphism of $\PPK(1,2,1)$ to position the corresponding tangent line at $X = 0$, unless that tangent line is vertical in which case we leave the curve as it is. This eventually results in a defining polynomial $f \in \mathcal{O}_K[x,y]$ whose Newton polygon is contained in (and typically equals) one of \begin{center} \polfig{genus5_conical_rat.pdf}{3.2}{2.4}{$\Delta_{5,0}^{1}$} \quad \polfig{genus5_conical_tangent.pdf}{3.6}{2.4}{$\Delta_{5,0}^{2}$} \qquad \end{center} depending on whether the tangent line was vertical or not. \item If there is an $\FF_q$-rational singular point $Q$ then we can obtain a further compactification of the Newton polygon. We estimate the probability of this event as follows: \begin{heuristic} The chance that $\overline{C}^\text{pr}$ has a rational singular point $Q$ is comparable to the chance that a univariate polynomial of degree $4$ has a rational root, which is approximately $5/8 = 62.5 \%$. \end{heuristic} This is motivated by the fact that $\overline{C}^\text{pr}$ has arithmetic genus $9$, because that is in fact what Baker's bound measures; thus the excess in genus is $9 - 5 = 4$. In the case of failure we might be able to retry with another point $P \in \mathfrak{D}(\overline{C})(\FF_q)$ for which $\chi(P) = 0$. Assuming independence of events we obtain a probability of about $1 - (3/8)^\rho$ with $\rho \leq 10$ the number of $\FF_q$-rational singularities of $\mathfrak{D}(\overline{C})$. Given a singular point $Q \in \overline{C}^\text{pr}$ we proceed as above to end up with a curve $\overline{C}_\text{tr} \subset \PPq^4$ defined by \[ \overline{S} = WZ - X^2, \ \overline{S}_{2,\text{tr}}, \ \overline{S}_{2',\text{tr}} \in \FF_q[X,Y,Z,W,V] \] such that after projection from $(0:0:0:0:1)$ we find a curve $\overline{C}_\text{tr}^\text{pr} \subset \PPq(1,2,1)$ having a singularity at $(1:0:0)$, which is at $(0:0:0:1)$ in the coordinates $X,Y,Z,W$. This now implies that the line $X = Y = Z = 0$ meets the curve twice, counting multiplicities. Thus after multiplying $\overline{S}_{2,\text{tr}}$ with a non-zero scalar if needed, we find that \[ \overline{S}_{2,\text{tr}}(0,0,0,W,V) = \overline{S}'_{2,\text{tr}}(0,0,0,W,V), \] i.e.\ the terms in $W^2, WV, V^2$ are the same. If we now lift $\overline{S}_{2,\text{tr}}$ and $\overline{S}_{2',\text{tr}}$ to quadrics $S_2, S_2' \in \mathcal{O}_K[X,Y,Z,W,V]$ in a consistent way, it will remain true that \begin{equation} \label{samequadraticexpression} S_2(0,0,0,W,V) = S_2'(0,0,0,W,V). \end{equation} If we then proceed as before we end up with a curve $C^\text{pr}$ in $\PPK(1,2,1)$ having $(1:0:0)$ as a singular point. This eventually results in a defining polynomial $f \in \mathcal{O}_K[x,y]$ whose Newton polygon is contained in (and typically equals) \begin{center} \polfig{genus5_conical_sing.pdf}{3.2}{2.4}{$\Delta_{5,0}^{\text{sing},0}$} \end{center} In particular this yields $\deg_xf \leq 6$. If the quadratic expression in \eqref{samequadraticexpression} factors over $\FF_q$, which it does with an estimated chance of $50 \%$, then by lifting the linear forms separately as in \eqref{factorbyfactor}, we even find that $(1:0:0)$ is a singular point of $C^\text{pr}$ having $K$-rational branches.\todo{nog eens nadenken of dit klopt}\ In the unlikely event where one of the branches is vertical we end up inside a polygon of the right form below: \begin{center} \polfig{genus5_conical_horizontalbranch.pdf}{2.8}{2.4}{$\Delta_{5,0}^{\text{sing},1}$} \quad \polfig{genus5_conical_verticalbranch.pdf}{3.2}{2.4}{$\Delta_{5,0}^{\text{sing},2}$} \qquad \end{center} If not we can position a branch at $X = 0$ to end up inside a polygon of the left form. \end{itemize} \end{comment} \subsubsection{Implementation} The tables below contain timings, memory usage and failure rates for the trigonal and non-trigonal case and various values of $p$ and $q=p^n$. For the precise meaning of the various entries in the tables see Section~\ref{section_genus3implementationandtimings}.\\ \vspace{-0.2cm} \noindent \textbf{Trigonal}\\ \noindent \scriptsize \tabcolsep=0.11cm \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $p$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $11$ & $0.02$ & $0.6$ & $96$ & $206$ \\ $67$ & $0.02$ & $2.4$ & $96$ & $45$ \\ $521$ & $0.02$ & $23$ & $112$ & $4$ \\ $4099$ & $0.02$ &$358$ &$548$ & $1$ \\ $32771$ & $0.02$ &$4977$ &$3982$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $3^5$ & $0.1$ & $17$ & $108$ & $6$ \\ $7^5$ & $0.1$ & $33$ & $150$ & $0$ \\ $17^5$ & $0.2$ & $76$ &$556$ & $0$ \\ $37^5$ & $0.2$ & $186$ &$1070$ & $0$ \\ $79^5$ & $0.3$ & $452$ &$1716$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s)&pcc(s) & (Mb) & /1000 \\ \hline \hline $3^{10}$ & $1.2$ & $82$ & $188$ & $0$ \\ $7^{10}$ & $2.0$ & $214$ & $621$ & $0$ \\ $17^{10}$ & $3.6$ & $587$ &$1366$ & $0$ \\ $37^{10}$ & $4.5$ &$1584$ &$2453$ & $0$ \\ $79^{10}$ & $6.3$ &$4039$ &$4176$ & $0$ \end{tabular}\\ \normalsize \vspace{0.3cm} \noindent \textbf{Non-trigonal}\\ \noindent \scriptsize \tabcolsep=0.11cm \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $p$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $11$ & $0.1$ & $2.0$ & $64$ & $14$ \\ $67$ & $0.1$ & $7.2$ & $76$ & $0$ \\ $521$ & $0.2$ &$65$ & $165$ & $0$ \\ $4099$ & $0.2$ &$1326$ &$1326$ & $0$ \\ $32771$ & $0.2$ &$21974$ &$10329$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s) &pcc(s) & (Mb) & /1000 \\ \hline \hline $3^5$ & $2.5$ & $59$ & $229$ & $0$ \\ $7^5$ & $5.3$ & $114$ & $352$ & $0$ \\ $17^5$ & $10$ & $261$ & $556$ & $0$ \\ $37^5$ & $14$ & $662$ & $919$ & $0$ \\ $79^5$ & $19$ & $1552$ & $1494$ & $0$ \end{tabular} \quad \begin{tabular}{r||r|r|r|r} & time &time & space & fails \\ $q$ & lift(s)&pcc(s) & (Mb) & /1000 \\ \hline \hline $3^{10}$ & $16$ &$504$ & $780$ & $0$ \\ $7^{10}$ & $40$ &$1191$ & $1304$ & $0$ \\ $17^{10}$ & $89$ &$2946$ & $2231$ & $0$ \\ $37^{10}$ & $128$ &$7032$ & $3679$ & $0$ \\ $79^{10}$ & $193$ &$15729$ & $6267$ & $0$ \end{tabular} \bigskip \normalsize \section{Curves of low gonality} \label{section_lowgonality} \subsection{Trigonal curves} \label{section_trigonal} Recall from Remark~\ref{remark_gonalityoverFq} that from genus five on a curve $\overline{C} / \FF_q$ is trigonal iff it is geometrically trigonal. It is known~\cite{saintdonat} that a minimal set of generators for the ideal of a canonical model $\overline{C} \subset \PPq^{g-1} = \proj \FF_q[X_1,X_2,\dots,X_g] $ of a non-hyperelliptic curve of genus $g \geq 4$ over $\FF_q$ consists of \begin{itemize} \item $(g-2)(g-3)/2$ quadrics \[ \overline{S}_{2,1}, \overline{S}_{2,2}, \dots, \overline{S}_{2,(g-2)(g-3)/2} \] and $g-3$ cubics \[ \overline{S}_{3,1}, \overline{S}_{3,2}, \dots, \overline{S}_{3,g-3} \] if $\overline{C}$ is trigonal or $\FF_q$-isomorphic to a smooth curve in $\PPq^2$ of degree five, \item just $(g-2)(g-3)/2$ quadrics in the other cases. \end{itemize} So given such a minimal set of generators, it is straightforward to decide trigonality, unless $g=6$ in which case one might want to check whether $\overline{C}$ is isomorphic to a smooth plane quintic or not. See Remark~\ref{veronese} below for how to do this. From now on assume that we are given a trigonal curve $\overline{C} / \FF_q$ in the above canonical form. Then the quadrics $\overline{S}_{2,i}$ spanning $\mathcal{I}_2(\overline{C})$ are known to define a rational normal surface scroll $\overline{S}$ of type $(a,b)$, where $a,b$ are non-negative integers satisfying \begin{equation} \label{maroniconditions} a \leq b, \qquad a + b = g - 2, \qquad b \leq (2g-2)/3, \end{equation} called the Maroni invariants\footnote{The existing literature is ambiguous on this terminology. Some authors talk about \emph{the} Maroni invariant of a trigonal curve, in which case they could mean either $a = \min(a,b)$, or $b-a$.} of $\overline{C}$. This means that up to a linear change of variables, it is the image $\overline{S}(a,b)$ of \[ \PPq^1 \times \PPq^1 \hookrightarrow \PPq^{g-1} : ((s:t),(u:v)) \mapsto (ut^a : ut^{a-1}s : \dots : us^a : vt^b : vt^{b-1}s : \dots : vs^b), \] i.e.\ it is the ruled surface obtained by simultaneously parameterizing \begin{itemize} \item a rational normal curve of degree $a$ in the $\PPq^a$ corresponding to $X_1, X_2, \dots, X_{a+1}$, and \item a rational normal curve of degree $b$ in the $\PPq^b$ corresponding to $X'_1, X'_2, \dots, X'_{b+1}$, where $X'_i$ denotes the variable $X_{a+1+i}$, \end{itemize} each time drawing the rule through the points under consideration (each of these rules intersects our trigonal curve in three points, counting multiplicities). As a consequence, modulo a linear change of variables, the space $\mathcal{I}_2(\overline{C})$ admits the $2 \times 2$ minors of \begin{equation} \label{generalscrolleqs} \left( \begin{array}{cccc} X_1 & X_2 & \dots & X_a \\ X_2 & X_3 & \dots & X_{a + 1} \\ \end{array} \right| \hspace{-0.1cm} \left. \begin{array}{cccc} X'_1 & X'_2 & \dots & X'_b \\ X'_2 & X'_3 & \dots & X'_{b+1} \\ \end{array} \right) \end{equation} as a basis, for some $a,b$ satisfying \eqref{maroniconditions}. We assume that we have a function \texttt{ConvertScroll} at our disposal that upon input of $\mathcal{I}_2(\overline{C})$ and a pair $(a,b)$ satisfying \eqref{maroniconditions}, either \emph{finds} such a linear change of variables, or outputs `wrong type' in case the surface cut out by $\mathcal{I}_2(\overline{C})$ is not a scroll of type $(a,b)$. \begin{remark} \label{blinduse} If $g=5$ then $(1,2)$ is the only pair of integers satisfying \eqref{maroniconditions}, and one can use our ad hoc method from mentioned in Section~\ref{section_genus5lifting} to find the requested linear change of variables as above. For higher genus we have written an experimental version of \texttt{ConvertScroll} in Magma, which can be found in the file \texttt{convertscroll.m}. It blindly relies on Schicho's function \texttt{ParametrizeScroll}, which implements the Lie algebra method from~\cite{GHPS}. Unfortunately the latter is only guaranteed to work in characteristic zero, and indeed one runs into trouble when naively applying \texttt{ParametrizeScroll} over finite fields of very small characteristic; empirically however, we found that $p > g$ suffices for a slight modification of \texttt{ParametrizeScroll} to work consistently. We remark that it is an easy linear algebra problem to verify the correctness of the output, in case it is returned. In any case further research is needed to turn this into a more rigorous step. \end{remark} \begin{remark} \label{maroniorder} If `wrong type' is returned then one retries with another pair $(a,b)$ satisfying \eqref{maroniconditions}. From a moduli theoretic point of view~\cite{stohr}${}^\dagger$ the most likely case is $a = b = (g-2)/2$ if $g$ is even, and $a + 1 = b = (g-1)/2$ if $g$ is odd, so it is wise to try that pair first, and then to let $a$ decrease gradually. According to~\cite{schichosevilla}${}^\dagger$ the Lie algebra method implicitly computes the Maroni invariants, so it should in fact be possible to get rid of this trial-and-error part; recall that we just use the function \texttt{ConvertScroll} as a black box. \end{remark} \begin{remark}[$g=6$] \label{veronese} If `wrong type' is returned on input $(2,2)$ as well as on input $(1,3)$, then we are in the smooth plane quintic case and therefore $\overline{C}$ is not trigonal. Here $\mathcal{I}_2(\overline{C})$ cuts out a Veronese surface in $\PPq^5$, rather than a scroll. We will revisit this case at the end of the section. \end{remark} Once our quadrics $\overline{S}_{2,i}$ are given by the minors of \eqref{generalscrolleqs}, we restrict our curve $\overline{C}$ to the embedded torus \[ \TTq^2 \hookrightarrow \PPq^{g-1} : (x,y) \mapsto (y: xy: \dots : x^ay : 1 : x : \dots : x^b) \] by simply substituting \[ X_1 \leftarrow y, \ X_2 \leftarrow xy, \ \dots, \ X_{a+1} \leftarrow x^ay \quad \text{and} \quad X'_1 \leftarrow 1, \ X'_2 \leftarrow x, \ \dots, \ X'_{b+1} \leftarrow x^b. \] This makes the quadrics vanish identically, while the cubics become \[ \overline{f}_1, \overline{f}_2, \dots, \overline{f}_{g-3} \in \FF_q[x,y]. \] The ideal generated by these polynomials is principal, i.e.\ of the form $(\overline{f})$, where the Newton polygon of $\overline{f} = \gcd(\overline{f}_1, \overline{f}_2, \dots, \overline{f}_{g-3})$ is contained in (and typically equals): \begin{center} \begin{minipage}[b]{6.4cm} \begin{center} \includegraphics[height=2.7cm]{trig_general.pdf} \end{center} \end{minipage} \end{center} The correctness of these claims follows for instance from~\cite[\S3]{cacocanonical}. Note that in particular $\overline{f}$ attains Baker's bound, so a naive Newton polygon preserving lift $f \in \mathcal{O}_K[x,y]$ satisfies (i), (ii) and (iii). \begin{remark} It should be clear that the above is a generalization of the corresponding method from Section~\ref{section_genus5lifting}, where we dealt with trigonal curves of genus five. But the method also generalizes the genus four cases $\chi_2(\det \overline{M}_2) = 0$ and $\chi_2(\det \overline{M}_2) = 1$ from Section~\ref{section_genus4lifting}, where the scrolls are $\overline{S}(0,2) = \PPq(1,2,1)$ and $\overline{S}(1,1) = \PPq^1 \times \PPq^1$, respectively. \end{remark} \begin{remark} Here too one could try to compress the Newton polygon by clipping off boundary points, similar to what we did in Section~\ref{optim_genus5}. But as the genus grows the resulting speed-ups become less and less significant, and we omit a further discussion. \end{remark} \paragraph*{Example} Let us carry out the foregoing procedure for the curve defined by \[ (x^3 + x + 1)y^3 + 42(2x^4 + x^3 + 3x^2 + 3x + 1)y^2 + (x+1)(x^4 + 2x^2 + x + 1)y + 42(x^2+1) = 0 \] over $\FF_{43}$. This is the reduction mod $43$ of the modular curve $X_0^+(164)$, or rather an affine model of it, whose equation we took from~\cite{modular}. It is of genus $6$, while we note that Baker's bound reads $7$, so it is not met here. Using the intrinsic~\texttt{CanonicalMap} one computes that \vspace{-0.2cm} \scriptsize \[ \left\{ \begin{array}{l} X_1^2X_2 + 42X_1^2X_5 + 40X_1^2X_6 + 40X_1X_2X_6 + X_1X_3^2 + 2X_1X_3X_6 + 42X_1X_4^2 + 40X_1X_4X_5 + X_1X_4X_6 \\ \ \qquad \qquad + 6X_1X_5X_6 + 7X_1X_6^2 + 42X_2X_3^2 + 2X_2X_3X_6 + 41X_2X_6^2 + 42X_3^3 + 40X_3X_6^2 + 2X_4^2X_5 + 4X_4^2X_6 \\ \ \qquad \qquad + 4X_4X_5X_6 + X_4X_6^2 + 38X_5X_6^2 + 39X_6^3 \\ X_1^2X_3 + 42X_1^2X_6 + 39X_1X_2X_6 + X_1X_3^2 + 38X_1X_3X_6 + 42X_1X_4X_5 + X_1X_5X_6 + 7X_1X_6^2 + X_2X_3^2 \\ \ \qquad \qquad + 41X_2X_3X_6 + 8X_2X_6^2 + 42X_3^2X_6 + 4X_3X_6^2 + X_4^2X_6 + 5X_4X_5X_6 + X_4X_6^2 + 40X_5X_6^2 + 37X_6^3 \\ 42X_1^2X_6 + X_1X_2X_3 + 42X_1X_2X_6 + 39X_1X_3X_6 + 42X_1X_4X_5 + 42X_1X_5X_6 + 6X_1X_6^2 + X_2X_3^2 + 39X_2X_3X_6 \\ \ \qquad \qquad + 7X_2X_6^2 + X_3^3 + 42X_3^2X_6 + 5X_3X_6^2 + 42X_4^2X_6 + 5X_4X_5X_6 + 41X_4X_6^2 + X_5X_6^2 + 36X_6^3 \\ 42 X_1X_3 + 42X_1X_5 + X_2^2 + X_2X_6 + X_3X_6 + 42X_4^2 + 42X_4X_6 + X_5X_6 \\ 42X_1X_5 + X_2X_4 + X_2X_6 + 42X_4^2 + 42X_4X_6 + X_5X_6 \\ 42X_1X_6 + X_3X_4 + X_3X_6 + 42X_4X_5 + X_6^2 \\ 42X_1X_6 + X_2X_5 + 42X_4X_5 + X_6^2 \\ 42X_2X_6 + X_3X_5 \\ 42X_4X_6 + X_5^2 + 42X_6^2 \\ \end{array} \right. \] \normalsize is a minimal set of generators for the ideal $\mathcal{I}(\overline{C})$ of a canonical model $\overline{C} \subset \PPq^5$. We are clearly in the trigonal case, so the six quadrics must cut out a rational normal surface scroll. According to \eqref{maroniconditions} the type of the latter is either $(1,3)$ or $(2,2)$. Following Remark~\ref{maroniorder} we first try $(2,2)$, so we search for a linear change of variables taking $\mathcal{I}_2(\overline{C})$ to the space of quadrics spanned by the $2 \times 2$ minors of \begin{equation*} \left( \begin{array}{cc} X_1 & X_2 \\ X_2 & X_3 \\ \end{array} \right| \hspace{-0.1cm} \left. \begin{array}{cc} X_4 & X_5 \\ X_5 & X_6 \\ \end{array} \right). \end{equation*} Our experimental version of the function \texttt{ConvertScroll} turns out to work here, and the type $(2,2)$ was a correct guess: the change of variables returned by Magma reads \[ \begin{pmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5 \\ X_6 \\ \end{pmatrix} \leftarrow \begin{pmatrix} 40 & 3 & 42 & 0 & 30 & 33 \\ 0 & 12 & 35 & 40 & 42 & 2 \\ 0 & 9 & 4 & 30 & 29 & 42 \\ 20 & 37 & 5 & 2 & 8 & 22 \\ 22 & 19 & 11 & 28 & 32 & 14 \\ 38 & 29 & 16 & 21 & 33 & 36 \\ \end{pmatrix} \cdot \begin{pmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5 \\ X_6 \\ \end{pmatrix}. \] Applying this transformation to our generators of $\mathcal{I}(\overline{C})$ and then substituting \[ X_1 \leftarrow y, \ \ X_2 \leftarrow xy, \ \ X_3 \leftarrow x^2y, \ \ X_4 \leftarrow 1, \ \ X_5 \leftarrow x, \ \ X_6 \leftarrow x^2 \] annihilates the quadrics, while the cubics become \[ 6(x+27)(x+32)\overline{f}, \ \ 39(x+13)(x+20)\overline{f}, \ \ 2(x+13)^2\overline{f}\] respectively, where \[ \begin{array}{rcl} \overline{f} \hspace{-0.2cm} & = & \hspace{-0.2cm} x^4y^3 + 8x^4y^2 + 31x^4y + 29x^4 + 37x^3y^3 + 23x^3y^2 + 16x^3y + x^3 + 12x^2y^3 + 18x^2y^2 \\ & & \qquad \qquad \quad + 12x^2y + 25x^2 + 10xy^3 + 7xy^2 + 30xy + 11x + 13y^3 + 36y^2 + 3y + 2. \\ \end{array} \] For this polynomial Baker's bound is attained, so a naive lift to $f \in \mathcal{O}_K[x,y]$ satisfies (i), (ii), (iii). After making $f$ monic using \eqref{mademonic} it can be fed to the algorithm from \cite{tuitman1,tuitman2} to find the numerator \begin{multline*} 43^6 T^{12} + 43^5 \cdot 8 T^{11} + 43^4 \cdot 154 T^{10} + 43^3 \cdot 1032 T^9 + 43^2 \cdot 9911 T^8 + 43 \cdot 62496 T^7 \\ + 444940 T^6 + 62496 T^5 + 9911 T^4 + 1032 T^3 + 154 T^2 + 8 T + 1 \end{multline*} of the zeta function $Z_{\overline{C} / \FF_{43}}(T)$ in a couple of seconds. \begin{comment} We carry out the foregoing procedure for the plane affine curve over $\FF_{23}$ defined by \vspace{-0.2cm} \scriptsize \[ x^{12} + 3x^{11}y + 3x^{10}y^2 + x^9y^3 + 3x^8 + 9x^7y + 9x^6y^2 + 3x^5y^3 + 22x^4 + 22x^3y + x^3 + 3x^2y + x^2 + 3xy^2 + xy + y^3 + 1 \] \normalsize \noindent which is of genus $6$, as is easily verified using Magma. Note that Baker's genus bound reads $19$, so it is certainly not met here. Using the intrinsic~\texttt{CanonicalMap} one computes that \scriptsize \[ \left\{ \begin{array}{l} X_1^2X_2 + 3X_1X_2X_5 + X_3X_5^2 + 6X_1^2X_6 + 2X_1X_5X_6 + X_2X_6^2 + X_4X_6^2 + 3X_6^3 \\ X_1^2X_3 + 3X_1X_3X_5 + X_4X_5^2 + 3X_1X_2X_6 + 16X_2X_5X_6 + 16X_1X_6^2 + X_3X_6^2 + 2X_5X_6^2 \\ X_1^2X_4 + 3X_1X_4X_5 + X_5^3 + 3X_1X_3X_6 + 16X_3X_5X_6 + 7X_2X_6^2 + X_4X_6^2 + 22X_6^3 \\ X_2^2 + 22X_1X_3 + 16X_2X_6 + 16X_6^2 \\ X_2X_3 + 22X_1X_4 + 13X_3X_6 \\ X_3^2 + 22X_1X_5 + 10X_4X_6 \\ X_2X_4 + 22X_1X_5 + 13X_4X_6 \\ X_3X_4 + 22X_2X_5 + 20X_5X_6 \\ X_4^2 + 22X_3X_5 \\ \end{array} \right. \] \normalsize is a minimal set of generators for the ideal $\mathcal{I}(\overline{C})$ of its canonical model $\overline{C} \subset \PPq^5$. Thus we are indeed in the trigonal case, and the six quadrics must cut out a rational normal surface scroll. According to \eqref{maroniconditions} the type of the latter is either $(1,3)$ or $(2,2)$. We first try $(1,3)$: by naively running the Magma command~\texttt{ParametrizeScroll} we search for a linear change of variables taking $\mathcal{I}_2(\overline{C})$ to the space of quadrics spanned by the minors of \scriptsize \begin{equation*} \left( \begin{array}{c} X_1 \\ X_2 \\ \end{array} \right| \hspace{-0.1cm} \left. \begin{array}{ccc} X_3 & X_4 & X_5 \\ X_4 & X_5 & X_6 \\ \end{array} \right). \end{equation*} \normalsize \noindent This immediately works: the change of variables returned by Magma reads \scriptsize \[ \begin{pmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5 \\ X_6 \\ \end{pmatrix} \leftarrow \begin{pmatrix} 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 15 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 7 & 13 & 0 & 0 & 0 \\ \end{pmatrix} \cdot \begin{pmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5 \\ X_6 \\ \end{pmatrix}. \] \normalsize \begin{remark} In the case of a failure we would have simply retried with $(2,2)$. However according to~\cite{schichosevilla}${}^\dagger$ the Lie algebra method implicitly computes the Maroni invariants, so it should in principle be possible to get rid of this trial-and-error part (as mentioned, here we just use the function \texttt{ParametrizeScroll} as a black box). \end{remark} Applying the above transformation to our generators of $\mathcal{I}(\overline{C})$ and then substituting \scriptsize \[ X_1 \leftarrow y, \ \ X_2 \leftarrow xy, \ \ X_3 \leftarrow 1, \ \ X_4 \leftarrow x, \ \ X_5 \leftarrow x^2, \ \ X_6 \leftarrow x^3 \] \normalsize \noindent makes the quadrics vanish, while the cubics become respectively $\overline{f}, x \overline{f}, x^2 \overline{f}$ with \scriptsize \[ \overline{f} = x^7 + 6x^2y^2 + 4x^2 + 21xy^3 + 19xy + y^2 + 16. \] \normalsize \noindent For this polynomial Baker's bound is attained, so a naive lift to $f \in \mathcal{O}_K[x,y]$ satisfies (i), (ii), (iii). After making $f$ monic using \eqref{mademonic} it can be fed to the algorithm from \cite{tuitman1,tuitman2} to find the numerator \vspace{-0.13cm} \scriptsize \[ 23^6 T^{12} - 23^5 \cdot 5 T^{11} + 23^4 \cdot 33 T^{10} - 23^3 \cdot 62 T^9 + 23^2 \cdot 481 T^8 + 23 \cdot 1323 T^7 - 185 T^6 + 1323 T^5 + 481 T^4 - 62 T^3 + 33 T^2 - 5 T + 1 \] \normalsize \noindent of the zeta function of $\overline{C}$ in about $3.25$ seconds. \end{comment} \paragraph*{Point counting timings} Despite the lack of a well-working function \texttt{ConvertScroll}, we can tell how the point counting algorithm from~\cite{tuitman1,tuitman2} should perform in composition with the above method, by simply assuming that $\overline{C}$ is \emph{given} as the genus $g$ curve defined by a suitably generic polynomial $\overline{f} \in \FF_q[x,y]$ supported on $\text{conv} \{ (0,0), (2b + 2 - a,0), (2a + 2 - b, 3), (0,3) \}$. Then we can immediately lift to $\mathcal{O}_K[x,y]$. The tables below give point counting timings and memory usage for randomly chosen such polynomials in genera $g = 6,7$, where for the sake of conciseness we restrict to the generic Maroni invariants $a = \lfloor (g - 2)/2 \rfloor$ and $b = \lceil (g-2)/2 \rceil$; the other Maroni invariants give rise to faster point counts.\\ \vspace{-0.2cm} \noindent \textbf{$\mathbf{g=6}$}\\ \noindent \scriptsize \tabcolsep=0.11cm \begin{tabular}{r||r|r} & time & space \\ $p$ & pcc(s) & (Mb) \\ \hline \hline $11$ & $0.9$ & $32$ \\ $67$ & $6.0$ & $32$ \\ $521$ & $70$ & $118$ \\ $4099$ & $769$ & $824$ \\ $32771$ & $8863$ &$6829$ \end{tabular} \quad \begin{tabular}{r||r|r} & time & space \\ $q$ & pcc(s) & (Mb) \\ \hline \hline $3^5$ & $33$ & $76$ \\ $7^5$ & $64$ & $80$ \\ $17^5$ & $176$ &$197$ \\ $37^5$ & $415$ &$371$ \\ $79^5$ & $1035$ &$791$ \end{tabular} \quad \begin{tabular}{r||r|r} &time & space \\ $q$ &pcc(s) & (Mb) \\ \hline \hline $3^{10}$ & $183$ & $188$ \\ $7^{10}$ &$503$ &$320$ \\ $17^{10}$ &$1490$ &$749$ \\ $37^{10}$ &$3970$ &$1663$ \\ $79^{10}$ &$10945$ &$3716$ \end{tabular}\\ \normalsize \vspace{0.3cm} \noindent \textbf{$\mathbf{g=7}$}\\ \noindent \scriptsize \tabcolsep=0.11cm \begin{tabular}{r||r|r|r} & time & space & \\ $p$ & pcc(s) & (Mb) & \\ \hline \hline $11$ & $1.5$ & $32$ & \\ $67$ & $6.5$ & $32$ & \\ $521$ & $88$ & $118$ & \\ $4099$ & $955$ & $857$ & \\ $32771$ &$13279$ & $6983$ & \end{tabular} \quad \begin{tabular}{r||r|r|r} & time & space & \\ $q$ & pcc(s) & (Mb) & \\ \hline \hline $3^5$ & $43$ & $76$ & \\ $7^5$ & $91$ & $118$ & \\ $17^5$ & $257$ &$241$ & \\ $37^5$ & $602$ &$460$ & \\ $79^5$ & $1561$ &$983$ & \end{tabular} \quad \begin{tabular}{r||r|r|r} &time & space & \\ $q$ &pcc(s) & (Mb) & \\ \hline \hline $3^{10}$ & $283$ &$197$ & \\ $7^{10}$ &$777$ &$371$ & \\ $17^{10}$ &$2384$ &$919$ & \\ $37^{10}$ &$6706$ &$2212$ & \\ $79^{10}$ &$18321$ &$4682$ & \end{tabular} \normalsize \paragraph*{Smooth plane quintics} We end this section with a brief discussion of the genus $6$ case where our canonical curve $\overline{C} \subset \PPq^5$ is $\FF_q$-isomorphic to a smooth plane quintic. Such curves are never trigonal: using a variant of Lemma~\ref{genus3gonality} one verifies that the $\FF_q$-gonality is $4$ if and only if $\# \overline{C}(\FF_q) > 0$, which is guaranteed if $q > 137$ by the Serre-Weil bound. In the other cases it is $5$. Nevertheless from the point of view of the canonical embedding, smooth plane quintics behave `as if they were trigonal', which is why we include them here. (The appropriate unifying statement reads that trigonal curves and smooth plane quintics are exactly the curves having Clifford index $1$.) Here our main task towards tackling Problem~\ref{liftingproblem} is to find a linear change of variables transforming the space $\mathcal{I}_2(\overline{C})$ into \[ \langle X_2^2 - X_1X_4, X_2X_3 - X_1X_5, X_3^2 - X_1X_6, X_3X_4 - X_2X_5, X_3X_5 - X_2X_6, X_5^2 - X_4X_6 \rangle_{\FF_q} \] whose zero locus is the Veronese surface in `standard form', i.e.\ the closure of the image of \[ \TTq^2 \hookrightarrow \PPq^5 : (x, y) \mapsto (x^2 : xy : x : y^2 : y : 1). \] In order to achieve this, we simply assume that we have a function \texttt{ConvertVeronese} at our disposal. One could again try to use Schicho's function \texttt{ParametrizeScroll} for this, but here too we expect problems because of the characteristic being finite (although we did not carry out the experiment). Once this standard form is attained, an easy substitution \[ X_1 \leftarrow x^2, \ X_2 \leftarrow xy, \ X_3 \leftarrow x, \ X_4 \leftarrow y^2, \ X_5 \leftarrow y, \ X_6 \leftarrow 1 \] makes the quadrics vanish identically, while the cubics have a gcd whose homogenization defines the desired smooth plane quintic. From here one proceeds as in the smooth plane quartic case described in Section~\ref{basicsolutiontogenus3}. \subsection{Tetragonal curves} \label{section_tetragonal} We conclude this article with some thoughts on how the foregoing material can be adapted to the tetragonal case. A full elaboration of the steps below (or even a rigorous verification of some corresponding claims) lies beyond our current scope. In particular we have not implemented anything of what follows. The main aim of this section is twofold: to illustrate how our treatment of non-trigonal curves of genus five from Section~\ref{section_genus5lifting} naturally fits within a larger framework, and to propose a track for future research, involving mathematics that was developed mainly by Schreyer in~\cite[\S6]{schreyer}${}^\dagger$ and Schicho, Schreyer and Weimann in~\cite[\S5]{weimann}. Let \[ \overline{C} \subset \PPq^{g-1} = \proj \overline{R}, \qquad \overline{R} = \FF_q[X_1,X_2,\dots,X_g] \] be the canonical model of a genus $g \geq 5$ curve that is non-hyperelliptic, non-trigonal, and not isomorphic to a smooth plane quintic, so that a minimal set of generators of $\mathcal{I}(\overline{C}) \subset \overline{R}$ consists of $\beta_{12} := (g-2)(g-3)/2$ quadrics \[ \overline{S}_{2,1}, \overline{S}_{2,2}, \dots, \overline{S}_{2,\beta_{12}}. \] The notation $\beta_{12}$ refers to the corresponding entry in the graded Betti table of the homogeneous coordinate ring of $\overline{C}$, to which we will make a brief reference at the end of this section. Assume that the $\FF_q$-gonality of $\overline{C}$ is four, and consider a corresponding $\FF_q$-rational \begin{wrapfigure}{r}{6.1cm} \hfill \includegraphics[width=6cm]{tetrscroll.pdf} \end{wrapfigure} map $\pi : \overline{C} \rightarrow \PPq^1$. We note that unlike the trigonal case this map may not be uniquely determined modulo automorphisms of $\PPq^1$, even for $g$ arbitrarily large. The linear spans of the fibers of $\pi$ form a one-dimensional family of planes in $\PPq^{g-1}$ that cut out a rational normal \emph{threefold} scroll $\overline{S}$. Similar to before, up to a linear change of variables, such a scroll is obtained by simultaneously parameterizing \begin{itemize} \item a rational normal curve of degree $a$ in the $\PPq^a$ corresponding to $X_1, X_2, \dots, X_{a+1}$, \item a rational normal curve of degree $b$ in the $\PPq^b$ corresponding to $X'_1, X'_2, \dots, X'_{b+1}$, where $X'_i$ denotes the variable $X_{a+1+i}$, and \item a rational normal curve of degree $c$ in the $\PPq^c$ corresponding to $X''_1, X''_2, \dots, X''_{c+1}$, where $X''_i$ denotes the variable $X_{a+b+2+i}$, \end{itemize} each time taking the plane connecting the points under consideration (each of these planes intersects our trigonal curve in four points, counting multiplicities). Again this concerns a determinantal variety, defined by the $2 \times 2$ minors of \begin{equation} \label{tetragonalscrolleqs} \left( \begin{array}{cccc} X_1 & X_2 & \dots & X_a \\ X_2 & X_3 & \dots & X_{a + 1} \\ \end{array} \right| \hspace{-0.1cm} \left. \begin{array}{cccc} X'_1 & X'_2 & \dots & X'_b \\ X'_2 & X'_3 & \dots & X'_{b+1} \\ \end{array} \right. \hspace{-0.1cm} \left| \begin{array}{cccc} X''_1 & X''_2 & \dots & X''_c \\ X''_2 & X''_3 & \dots & X''_{c + 1} \\ \end{array} \right). \end{equation} Alternatively our scroll can be thought of as the Zariski closure of the image of \[ \TTq^3 \hookrightarrow \PPq^{g-1} : (x,y,z) \mapsto (z:xz: \dots : x^az : y : xy : \dots : x^by : 1 : x : \dots : x^c ), \] or if one prefers, as the toric threefold associated to the polytope \begin{center} \begin{minipage}[b]{6.4cm} \begin{center} \includegraphics[height=2.4cm]{threefoldscroll.pdf} \end{center} \end{minipage} \qquad \begin{minipage}[b]{2cm} $(\Delta_{(a,b,c)}).$ \vspace{0.9cm} \end{minipage} \end{center} Let us denote this `standard' scroll in $\PPq^{g-1}$ by $\overline{S}(a,b,c)$. The non-negative integers $(a,b,c)$ are called the scrollar invariants of $\overline{C}$ with respect to $\pi$ and can be chosen to satisfy \begin{equation} \label{scrollarconditions} a \leq b \leq c, \qquad a + b + c = g - 3, \qquad c \leq (2g-2)/4, \end{equation} where the last inequality follows from Riemann-Roch. Inside the scroll $\overline{S}$ our curve $\overline{C}$ arises as a complete intersection of two hypersurfaces $\overline{Y}$ and $\overline{Z}$ that are `quadratic'. More precisely the Picard group of $\overline{S}$ is generated by the class $[ \overline{H} ]$ of a hyperplane section and the class $[\overline{\Pi}]$ of a ruling (i.e.\ of the linear span of a fiber of $\pi$), and $\overline{Y}$ and $\overline{Z}$ can be chosen such that \[ \overline{Y} \in 2[ \overline{H} ] - b_1 [\overline{\Pi}] , \qquad \overline{Z} \in 2[ \overline{H} ] - b_2 [\overline{\Pi}] \] for non-negative integers $b_1 \geq b_2$ satisfying $b_1 + b_2 = g-5$. These integers are invariants of the curve, that is, they do not depend on the choice of $\pi$. If $b_2 < b_1$ then also the surface $\overline{Y}$ is uniquely determined by $\overline{C}$. This is automatic when $g$ is even. Let us now assume that $\overline{S}$ is given in the standard form $\overline{S}(a,b,c)$, which we consider along with the embedded torus $\TTq^3$. Then for $\overline{Y}$ to be in the class $2[ \overline{H} ] - b_1 [\overline{\Pi}]$ it means that $\overline{Y} \cap \TTq^3$ is defined by an irreducible polynomial $\overline{f}_{\overline{Y}} \in \FF_q[x,y,z]$ whose support is contained in \begin{center} \begin{minipage}[b]{5.8cm} \begin{center} \includegraphics[height=2.4cm]{threefoldscroll_chopped.pdf} \end{center} \end{minipage} \qquad \begin{minipage}[b]{2cm} $(\Delta_{(a,b,c),b_1}).$ \vspace{0.9cm} \end{minipage} \end{center} or more precisely\footnote{Indeed, the coordinate $2a - b_1$ might be negative; an example of such behaviour can be found in an \texttt{arXiv} version of this paper (\texttt{1605.02162v2}).} in \[ \text{conv} \{ (0,0,0), (2c-b_1,0,0), (0,2,0), (2b-b_1,2,0), (0,0,2), (2a-b_1,0,2) \} \cap \RR_{\geq 0}^3. \] In other words this is the polytope obtained from $2 \Delta_{(a,b,c)}$ by shifting its right-most face leftwards over a distance $b_1$. Moreover $b_1$ is the maximal integer for which this containment holds. The same applies to $\overline{Z}$, leading to a polynomial $\overline{f}_{\overline{Z}} \in \FF_q[x,y,z]$ whose support is contained in $\Delta_{(a,b,c),b_2}$, which is the polytope obtained from $2 \Delta_{a,b,c}$ by shifting the right-most face inwards over a distance $b_2$. The main observation of this section is that $\overline{f}_{\overline{Y}}, \overline{f}_{\overline{Z}} \in \FF_q[x,y,z]$ is a pair of polynomials meeting a version of Baker's bound for complete intersections, again due to Khovanskii~\cite{khovanskiicomplete}${}^\dagger$. In the case of two trivariate polynomials supported on polytopes $\Delta_1$ and $\Delta_2$ the bound reads \[ g \leq \# \left( \text{interior points of $\Delta_1 + \Delta_2$} \right) - \# \left( \text{interior points of $\Delta_1$} \right) - \# \left( \text{interior points of $\Delta_2$} \right). \] In our case where $\Delta_1 = \Delta_{(a,b,c),b_1}$ and $\Delta_2 = \Delta_{(a,b,c),b_2}$, this indeed evaluates to $g - 0 - 0 = g$. Thus the strategy would be similar: lift these polynomials in a Newton polytope preserving way to polynomials $f_Y, f_Z \in \mathcal{O}_K[x,y,z]$. These then again cut out a genus $g$ curve in $\TTK^3$, and a polynomial $f \in \mathcal{O}_K[x,y]$ satisfying (i)-(iii) can be found by taking the resultant of $f_Y$ and $f_Z$ with respect to $z$ (or with respect to $y$). \paragraph*{Genus $5$ curves revisited} Let us revisit our treatment of tetragonal curves of genus five $\overline{C} \subset \PPq^4 = \proj \FF_q[X,Y,Z,W,V]$ from Section~\ref{section_genus5lifting}. \begin{enumerate} \item Our first step was to look for a point $P \in \mathfrak{D}(\overline{C})(\FF_q)$ for which $\chi(P) = 0$ or $\chi(P) = 1$. The corresponding quadrics were described as cones over $\PPq(1,2,1)$ and $\PPq^1 \times \PPq^1$, respectively. But in the current language these are just rational normal threefold scrolls of type $(0,0,2)$ resp.\ $(0,1,1)$. Note that this shows that the scroll $\overline{S}$ may indeed depend on the choice of $\pi$. \item For ease of exposition let us restrict to the case $\chi(P) = 1$. Then the second step was to transform the quadric into $XY - ZW$, whose zero locus is the Zariski closure of \[ \TTq^3 \hookrightarrow \PPq^4 : (x,y,z) \mapsto (1 : xy : x : y : z), \] i.e.\ the transformation takes the scroll $\overline{S}(0,1,1)$ into `standard form'. \item The other quadrics $\overline{S}_2, \overline{S}_2'$ are instances of the surfaces $\overline{Y}$ and $\overline{Z}$. They are both in the class $2[\overline{H}]$, i.e.\ $b_1 = b_2 = 0$. Viewing $\overline{Y}$ and $\overline{Z}$ inside the torus $\TTq^3$ amounts to evaluating them at $(1,xy,x,y,z)$, resulting in polynomials that are supported on \begin{center} \begin{minipage}[b]{4cm} \begin{center} \includegraphics[height=2.2cm]{genus5_threefoldscroll.pdf} \end{center} \end{minipage} \end{center} as predicted. With the present approach we naively lift these polynomials to $f_Y, f_Z \in \mathcal{O}_K[x,y,z]$. In Section~\ref{section_genus5lifting} we applied this naive lift directly to $\overline{S}_2, \overline{S}_2'$, which was fine there, but in higher genus it is more convenient to work in $\TTq^3$, since $\overline{Y}, \overline{Z} \subset \overline{S}$ will no longer be cut out by a single quadratic hypersurface of $\PPq^{g-1}$. \item The last step was to project this lifted curve from $(0:0:0:0:1)$, which in our case amounts to taking the resultant of $f_Y, f_Z$ with respect to $z$. \end{enumerate} \paragraph*{General recipe} If we want to turn the above into a rigorous recipe for lifting tetragonal curves, three questions show up naturally. We share some brief first thoughts, but further research is needed regarding each of these. \begin{enumerate} \item How do we decide whether the input curve has $\FF_q$-gonality $4$ or not, and how do we extract from $\mathcal{I}_2(\overline{C})$ the equations of a corresponding rational normal threefold scroll $\overline{S}$? In genus five we used the discriminant curve for this, but in general the desired information should be traceable from (the first few steps of) a minimal free resolution \[ \overline{R}(-4)^{\beta_{34}} \oplus \overline{R}(-5)^{ \beta_{35} } \rightarrow \overline{R}(-3)^{\beta_{23}} \oplus \overline{R}(-4)^{\beta_{24}} \rightarrow \overline{R}(-2)^{\beta_{12}} \rightarrow \overline{R} \rightarrow \faktor{\overline{R}}{(\overline{S}_{2,1}, \dots, \overline{S}_{2, \beta_{12}})} \] of the homogeneous coordinate ring of $\overline{C}$ as a graded $\overline{R}$-module, thanks to a proven part of Green's canonical syzygy conjecture~\cite[Thm.\,2.5]{weimann}, namely that $\beta_{24} \neq 0$ if and only if $\overline{C}$ is $\overline{\FF}_q$-tetragonal or $\FF_q$-isomorphic to a smooth plane sextic, which in turn holds if and only if $\overline{C}$ has Clifford index $2$. (The dimensions $\beta_{ij}$ are usually gathered in the so-called graded Betti table of $\overline{C}$, and in general Green's conjecture predicts that the Clifford index equals the number of leading zeroes on the cubic strand, i.e.\ the minimal $i$ for which $\beta_{i,i+2} \neq 0$.) If $g \geq 7$ then a sufficiently generic geometrically tetragonal curve satisfies $\beta_{24} = g-4$. This is what Schicho, Schreyer and Weimann~\cite[Ex.\,4.2]{weimann} refer to as the \emph{goneric} case; see also~\cite[Thm.\,0.3]{farkaskemeny}${}^\dagger$. It implies that our curve admits a unique $g^1_4$, hence it is $\FF_q$-tetragonal, and that the ideal of the corresponding scroll $\overline{S}$ can be computed as the annihilator of the cokernel of the map \[ \overline{R}(-5)^{ \beta_{35}} \rightarrow \overline{R}(-4)^{ \beta_{24}}. \] See~\cite[Prop.\,4.11]{weimann}. In the non-goneric cases one has $\beta_{24} = (g-1)(g-4)/2$ and a finer analysis is needed. Some further useful statements can be found in~\cite{weimann} and~\cite{harrison}${}^\dagger$. \item How do we find the type $(a,b,c)$ of the scroll $\overline{S}$, along with a linear change of variables taking it into the standard form $\overline{S}(a,b,c)$ cut out by the minors of \eqref{tetragonalscrolleqs}? We encountered an analogous hurdle in the trigonal case. Here too it would be natural to try the Lie algebra method from~\cite{GHPS}, but as mentioned this was designed to work over fields of characteristic zero, and it is not clear to us how easily the method carries over to small finite characteristic. \item How do we find the invariants $b_1, b_2$ along with hypersurfaces $\overline{Y} \in 2[\overline{H}] - b_1[\overline{\Pi}]$ and $\overline{Z} \in 2[\overline{H}] - b_2[\overline{\Pi}]$ that inside $\overline{S}(a,b,c)$ cut out our curve $\overline{C}$? By evaluating the generators of $\mathcal{I}(\overline{C})$ in $(z,xz,\dots,x^az,y,xy,\dots,x^by,1,x, \dots, x^c)$ one easily finds a set of generators for the ideal of $\overline{C} \cap \TTq^3$. The challenge is now to replace this set by two polynomials that are supported on polytopes of the form \[ \Delta_{(a,b,c),b_1} \quad \text{and} \quad \Delta_{(a,b,c),b_2} , \] with $b_1,b_2$ satisfying $b_1 + b_2 = g-5$. Here our approach would be to use a Euclidean type of algorithm to find generators whose Newton polytopes are as small as possible. \end{enumerate} \paragraph*{Point counting timings} We have not implemented anything of the foregoing recipe, but we can predict how its output should perform in composition with the point counting algorithm from~\cite{tuitman1,tuitman2}, by simply starting from a sufficiently generic pair of polynomials $\overline{f}_{\overline{Y}}, \overline{f}_{\overline{Z}} \in \FF_q[x,y,z]$ that are supported on $\Delta_{(a,b,c),b_1}$ and $\Delta_{(a,b,c),b_2}$ for non-negative integers $a,b,c$ satisfying \eqref{scrollarconditions} and $b_1 + b_2 = g-5$. Then one can naively lift to $\mathcal{O}_K[x,y,z]$, take the resultant with respect to $z$, make the outcome monic using \eqref{mademonic}, and feed the result to the point counting algorithm. The tables below contain point counting timings and memory usage for randomly chosen such pairs in genera $g = 6,7$. For the sake of conciseness it makes sense to restrict to the case where the scrollar invariants $a,b,c$ and the tetragonal invariants $b_1, b_2$ are as balanced as possible, meaning that $c-a \leq 1$ and $b_1 - b_2 \leq 1$, because this is the generic case~\cite{ballico,bopphoff}${}^\dagger$. We expect the other cases to run faster.\\ \vspace{0.1cm} \noindent \begin{minipage}[b]{8cm} \noindent \textbf{$\mathbf{g=6}$}\\ \noindent \scriptsize \tabcolsep=0.11cm \begin{tabular}{r||r|r} & time & space \\ $p$ & pcc(s) & (Mb) \\ \hline \hline $11$ & $8.5$ & $32$ \\ $67$ & $34.7$ & $64$ \\ $521$ & $445$ & $379$ \\ $4099$ & $4748$ & $2504$ \end{tabular} \quad \begin{tabular}{r||r|r} & time & space \\ $q$ & pcc(s) & (Mb) \\ \hline \hline $3^5$ & $266$ & $214$ \\ $7^5$ & $549$ & $325$ \\ $3^{10}$ & $2750$ &$6072$ \\ $7^{10}$ & $6407$ &$9814$ \end{tabular} \end{minipage} \hfill \begin{minipage}[b]{8cm} \noindent \textbf{$\mathbf{g=7}$}\\ \noindent \scriptsize \tabcolsep=0.11cm \begin{tabular}{r||r|r} & time & space \\ $p$ & pcc(s) & (Mb) \\ \hline \hline $11$ & $11$ & $32$ \\ $67$ & $46$ & $80$ \\ $521$ & $445$ & $347$ \\ $4099$ & $4350$ & $2441$ \end{tabular} \quad \begin{tabular}{r||r|r} & time & space \\ $q$ & pcc(s) & (Mb) \\ \hline \hline $3^5$ & $254$ & $156$ \\ $7^5$ & $550$ & $241$ \\ $3^{10}$ & $2347$ &$3606$ \\ $7^{10}$ & $5819$ &$5724$ \end{tabular} \end{minipage} \begin{comment} By Petri's theorem, a minimal set of generators for the canonical ideal of a general curve $\overline{C} / \FF_q$ of genus $g \geq 5$ consists of \[ { g - 2 \choose 2 } \text{ quadrics in } \PPq^{g-1}. \] For $g=5$ this specializes to the well-known statement that a canonical non-trigonal curve of genus five arises as a smooth complete intersection of three quadrics in $\PPq^4$. As we already noted, any naive degree-preserving lift of these quadrics to $\mathcal{O}_K$ is again a smooth complete intersection, in turn defining a non-trigonal canonical curve of genus five in $\PPK^4$. So addressing (i) and (ii) is easy, and our story focused entirely on (iii), i.e.\ on lifting to a curve with the right gonality.\\ As soon as $g \geq 6$ our curve is not a complete intersection. On the other hand a naive lift of these quadrics to $\mathcal{O}_K$ \emph{does} most likely behave like a complete intersection, and therefore cuts out the empty subscheme of $\PPK^{g-1}$. So even getting the dimension right, let alone the genus and the gonality, is a non-trivial task using this approach.\\ Now \emph{trigonal} curves of genus five do not arise as complete intersections either. Yet recall from Section~\ref{section_genus5lifting} that we got around this by viewing $\overline{C}$ as a curve in a \emph{rational normal surface scroll}, which is an instance of a toric surface. In disguised terms, we then took a naive lift of $\overline{C}$ to characteristic zero, using the coordinates $x,y$ of the embedded torus \[ \TTq^2 = \spec \FF_q[x^{\pm 1}, y^{\pm 1} ] \] rather than the coordinates $X,Y,Z,W,V$ of $\PPq^4$ from which we started. The lift was Newton polygon preserving, which is the toric analogue of degree-preserving. So here too, we took a naive degree-preserving lift of a smooth complete intersection (namely, a single hypersurface), but the ambient world is a toric surface, rather than projective space. In fact, this is also the correct geometric framework in which to understand the examples from Section~\ref{section_bakersbound}: if an irreducible polynomial $\overline{f} \in \FF_q[x,y]$ meets Baker's bound, then its zero locus in $\TTq^2$ compactifies to a smooth curve in the toric surface over $\FF_q$ associated to $\Delta(\overline{f})$, and conversely~\cite[\S4]{linearpencils}. Our Newton polygon preserving lift to $\mathcal{O}_K[x,y]$ then just corresponds to a degree-preserving lift of this curve, which is again a smooth curve in the toric surface over $K$ associated to $\Delta(f)$.\\ More generally, we believe that a naive lifting approach to Problem~\ref{liftingproblem} should apply to all curves $\overline{C} / \FF_q$ that arise as a smooth complete intersection in a complete toric variety. As we just remarked, in the toric \emph{surface} case this covers all curves that can be defined by polynomials $\overline{f} \in \FF_q[x,y]$ attaining Baker's bound, the predominant part of whose locus (in terms of moduli) consists of the trigonal curves~\cite{CV}. As for \emph{higher-dimensional} ambient toric varieties, we do not have a good understanding of which curves admit such a complete intersection model. But at least one interesting new family comes into play: tetragonal curves. Indeed, from work of Schreyer \cite{schreyer} it follows that every canonical tetragonal curve naturally arises as a complete intersection of two surfaces in a rational normal threefold scroll. In a subsequent paper~\cite{selfref} we explain how to work out the lifting details in this case, and subsequently how to compute the Hasse-Weil zeta function. But we remark that an instance of this already appears in our treatment of non-trigonal genus $5$ curves in Section~\ref{section_genus5lifting}: the threefold $\overline{S} : XY - ZW$ (cone over $\PPq^1 \times \PPq^1$) is a rational normal scroll, in which our curve is a complete intersection of two surfaces, cut out by $\overline{S}_2, \overline{S}_2'$. \\ \end{comment} \small
\section{Introduction} Given a set $S$ of dominant rational maps on $\mathbb{P}^N$ and an infinite sequence $\gamma=(\theta_1,\theta_2, \dots)$ of elements of $S$, then we are interested in two types of iterated processes attached to $\gamma$. Namely, the \emph{left iterative sequence} of maps, \[\gamma_n^-:=\theta_n\circ\theta_{n-1}\circ\dots\circ\theta_1\;\;\text{for all $n\geq1$},\] and the \emph{right iterative sequence} of maps, \[\;\;\,\gamma_n^+:=\theta_1\circ\theta_{2}\circ\dots\circ\theta_n\;\;\text{for all $n\geq1$}.\] In particular, given a suitable initial point $P\in\mathbb{P}^N$ we wish to study the \emph{left and right orbits} of the pair $(\gamma,P)$ given by \[\Orb_\gamma^-(P):=\big\{\gamma_n^-(P):n\geq0\big\}\;\;\text{and}\;\; \Orb_\gamma^+(P):=\big\{\gamma_n^+(P):n\geq0\big\}\] respectively; here we include the identity function $\gamma_0:=\text{Id}_{\mathbb{P}^N}$ for convenience. The analytic and topological properties of these orbits have been previously studied in complex dynamics \cite{random1,random2,random3,random4,random5,random6}, and in this paper, we consider arithmetic analogs of this work. Specifically, if both $P$ and the maps in $S$ are defined over $\overline{\mathbb{Q}}$ and $h:\mathbb{P}^N(\overline{\mathbb{Q}})\rightarrow\mathbb{R}_{\geq0}$ is the absolute Weil height function \cite[\S 3.1]{SilvDyn}, then we are interested in the growth rate of $h(\gamma_n^+(P))$ and $h(\gamma_n^-(P))$ as we move within the left and right orbits of $(\gamma,P)$ respectively. For sets of morphisms, the growth rates of $h(\gamma_n^-(P))$ for left iteration were studied first in \cite{Kawaguchi} and revisited in \cite{stochastic}. In particular, one may construct canonical heights in this setting and recover several familiar facts from the standard theory of arithmetic dynamics \cite{SilvDyn}, where one iterates a single function (i.e., $\gamma$ is a constant sequence). However, there appears to be relatively little known about heights when iterating on the right. Moreover when $N=1$, the arithmetic properties of $\Orb_\gamma^+(P)$ (for certain $P$ and certain $S$) control the size of the Galois extensions generated by $\gamma_n^+(x)=0$ and $n\geq1$; see Section \ref{sec:Galois}. Therefore, the growth rate of $h(\gamma_n^+(P))$ may be of interest to those studying dynamically generated Galois groups. \begin{remark} A further application of our work on left and right orbits is to the growing field of monoid (or semigroup) arithmetic dynamics \cite{monoid1,monoid2,stochastic,IJNT,monoid3,monoid4}. Here, one is instead interested in understanding the arithmetic properties of \emph{total orbits}, \begin{equation}\label{eq:totalorbit} \Orb_S(P):=\{f(P):\,f\in M_S\}=\bigcup_\gamma \Orb_\gamma^+(P)=\bigcup_\gamma \Orb_\gamma^-(P); \end{equation} here $M_S$ is the monoid generated by $S$ (and the identity) with the operation of composition. However, in practice, if one understands left and right orbits for sufficiently many $\gamma$, then one has gained nontrivial insight into total orbits; for some examples of this heuristic, see \cite[Corollary 1.4]{stochastic}, \cite[Theorem 1.18]{Me:dyndeg}, \cite[Theorem 1.7]{IJNT}, Theorem \ref{thm:zero-one}, and Section \ref{sec:totalorbits}. \end{remark} As in the case of iterating a single map, some useful tools for analyzing heights in left and right orbits are the left and right dynamical degrees, i.e., the limiting values of $\deg(\gamma_n^-)^{1/n}$ and $\deg(\gamma_n^+)^{1/n}$ respectively. However, without much difficulty, one can construct examples for which the aforementioned limits do not exist \cite[Example 1.1]{Me:dyndeg}. Nevertheless, one expects that these limits converge for most sequences. To test this heuristic, we fix a probability measure $\nu$ on $S$, and extend to a probability measure $\bar{\nu}$ on the set of sequences of elements of $S$ via the product measure; see Section \ref{sec:notation} for more details. With this perspective, we prove that the limits of $\deg(\gamma_n^-)^{1/n}$ and $\deg(\gamma_n^+)^{1/n}$ (as we vary over sequences of $S$) are $\bar{\nu}$-almost surely constant and independent of the direction of iteration. Moreover, for finite sets $S$, we show that this constant bounds both $h(\gamma_n^-(P))^{1/n}$ and $h(\gamma_n^+(P))^{1/n}$ for large $n$; compare to \cite[Theorem 1.8]{Me:dyndeg} and \cite[Theorem 1]{KawaguchiSilverman}. However, to prove this second fact about heights we must enforce a condition on $S$, namely, that as we compose elements of $S$ we manage to avoid maps of degree one: \begin{definition} A set of dominant rational maps $S$ on $\mathbb{P}^N$ is called \emph{degree independent} if $\deg(f)\geq2$ for all $f$ in the semigroup generated by $S$; here the operation is composition. \end{definition} Likewise, since the maps in $S$ may have non-trivial indeterminacy loci, we must take care to ensure that the orbits we consider are actually well defined: \begin{definition} Let $f$ be in the compositional semigroup generated by $S$, and let $I_f\subset \mathbb{P}^N$ be the indeterminacy locus of $f$. Then we set $\mathbb{P}^N(\overline{\mathbb{Q}})_S:=\displaystyle{\mathbb{P}^N(\overline{\mathbb{Q}})\mathbin{\fgebackslash}\cup_{f} I_f}$. \end{definition} With these notions in place, we prove our most general result relating the growth rate of degrees and the growth rate of heights in orbits. The proof is an adaptation and combination of the arguments given for left iteration (only) in Theorems 1.3 and 1.8 of \cite{Me:dyndeg}. Namely, we apply Kingman's subadditive ergodic theorem, Birkhoff's ergodic theorem, and ideas from \cite{SilvermanPN}. In what follows, $\mathbb{E}_\nu[\log\deg(\phi)]=\int_S\log\deg(\phi)d\nu$ denotes the expected value of the random variable $\log\deg$ on $S$. \begin{theorem}\label{thm:rationalmaps} Let $S$ be a set of dominant rational self-maps on $\mathbb{P}^N(\overline{\mathbb{Q}})$ and let $\nu$ be a discrete probability measure on $S$. Then the following statements hold: \vspace{.1cm} \begin{enumerate} \item[\textup{(1)}] If $\mathbb{E}_\nu[\log\deg(\phi)]$ exists, then there is a constant $\delta_{S,\nu}$ such that the limits \vspace{.1cm} \[\lim_{n\rightarrow\infty}\deg(\gamma_n^{-})^{1/n}=\delta_{S,\nu}=\lim_{n\rightarrow\infty}\deg(\gamma_n^{+})^{1/n}\vspace{.1cm}\] hold (simultaneously) for $\bar{\nu}$-almost every $\gamma\in\Phi_S$. \vspace{.25cm} \item[\textup{(2)}] If $S$ is finite and degree independent, then for $\bar{\nu}$-almost every $\gamma\in\Phi_S$ the bounds \vspace{.075cm} \[\limsup_{n\rightarrow\infty} h(\gamma_n^{\pm}(P))^{1/n}\leq\delta_{S,\nu}\vspace{.075cm}\] hold (simultaneously) for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$. \end{enumerate} \end{theorem} Motivated by the existence of the constant $\delta_{S,\nu}$ we make the following definition: \begin{definition} For $(S,\nu)$ as in Theorem \ref{thm:rationalmaps}, we call $\delta_{S,\nu}$ the \emph{dynamical degree} of $(S,\nu)$.\end{definition} Although Theorem \ref{thm:rationalmaps} gives an upper bound on the growth rate of heights in orbits that is independent of the direction of iteration and the initial point, the same cannot be said in general for lower bounds. Heuristically, if $P$ has small height, then the direction of iteration can matter greatly. We illustrate this point with the following example. \begin{example}{\label{eg:left-right difference}} Let $S=\{x^2-x,3x^2\}$ with $\phi_1=x^2-x$ and $\phi_2=3x^2$, and define $\nu$ on $S$ determined by $\nu(\phi_1)=1/2=\nu(\phi_2)$. Then viewing $S$ as a set of maps on $\mathbb{P}^1$, we consider the possible left and right orbits of $P=1$ and compute that \vspace{.1cm} \begin{equation*} \begin{split} \liminf_{n\rightarrow\infty} h(\gamma_n^+(P))^{1/n}=0\;\;&\text{and}\;\;\limsup_{n\rightarrow\infty} h(\gamma_n^+(P))^{1/n}=2\qquad\text{($\bar{\nu}$-almost surely)}\\ \;\;\liminf_{n\rightarrow\infty} h(\gamma_n^-(P))^{1/n}=0\;\;&\text{and}\;\;\limsup_{n\rightarrow\infty} h(\gamma_n^-(P))^{1/n}=0\qquad\text{($\bar{\nu}$-probability $1/2$)} \\ \;\;\liminf_{n\rightarrow\infty} h(\gamma_n^-(P))^{1/n}=2\;\;&\text{and}\;\;\limsup_{n\rightarrow\infty} h(\gamma_n^-(P))^{1/n}=2\qquad\text{($\bar{\nu}$-probability $1/2$)} \end{split} \end{equation*} In particular, the direction of iteration may greatly affect the growth rate of heights in orbits. \end{example} However, for morphisms and sufficiently generic initial points, we are able to prove fairly uniform results. Namely, outside of a set of points $P$ of bounded height, we prove that the limits (not merely the limsups) of both $h(\gamma_n^-(P))^{1/n}$ and $h(\gamma_n^+(P))^{1/n}$ are equal to the dynamical degree, almost surely. Moreover, the dynamical degree is easy to compute for finite sets of morphisms; it is a weighted geometric mean of the degrees of the maps in $S$; compare to \cite[Theorem 1.5]{Me:dyndeg}. The main tools we use to prove this result are Birkhoff's Ergodic Theorem and the Law of Iterated Logarithms for simple random walks; see Section \ref{sec:notation} for statements. \begin{theorem}\label{thm:iteratedlogs} Let $S$ be a finite set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all of degree at least two, and let $\nu$ be a discrete probability measure on $S$. Then there exists a constant $B_S$ such that the following statements hold: \vspace{.25cm} \begin{enumerate} \item[\textup{(1)}] The dynamical degree is given by $\displaystyle{\delta_{S,\nu}=\prod_{\phi\in S}\deg(\phi)^{\nu(\phi)}}$. \vspace{.3cm} \item[\textup{(2)}] For $\bar{\nu}$-almost every $\gamma\in\Phi_S$, the limits \vspace{.1cm} \[\lim_{n\rightarrow\infty}h(\gamma_n^-(P))^{1/n}=\delta_{S,\nu}=\lim_{n\rightarrow\infty}h(\gamma_n^+(P))^{1/n}\vspace{.15cm}\] hold (simultaneously) for all $P$ with $h(P)>B_S$. \vspace{.4cm} \item[\textup{(3)}] If the variance $\sigma_{S,\nu}^2$ of $\log(\deg(\phi))$ is nonzero, then for $\bar{\nu}$-almost every $\gamma\in\Phi_S$, \vspace{.1cm} \[\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\delta_{S,\nu}^n}}\bigg)}{\sigma_{S,\nu}\sqrt{2n\log\log n}}=1=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma_{S,\nu}\sqrt{2n\log\log n}},\vspace{.3cm}\] hold (simultaneously) for all $P$ with $h(P)>B_S$. \vspace{.1cm} \end{enumerate} \end{theorem} We can rewrite the bounds in Theorem \ref{thm:iteratedlogs} to give improved estimates for $h(\gamma_n^-(P))$ and $h(\gamma_n^+(P))$ that work almost surely. In particular, these bounds have a main term of $\delta_{S,\nu}^n$ and are (at least in an asymptotic sense) independent of both $\gamma$ and $P$; hence, we have reduced the randomness of heights in generic left and right orbits. Specifically, suppose that $S$, $\nu$, $\delta_{S,\nu}$, $B_S$, $\sigma_{S,\nu}^2$ and $P$ satisfy the conditions of the Theorem \ref{thm:iteratedlogs}, and let $\epsilon>0$. Then for almost every $\gamma$ there exists $N_{\gamma,P,\epsilon}$ such that \vspace{.15cm} \[\delta_{S,\nu}^{\, n-(1+\epsilon)\log_{\delta_{S,\nu}}(e)\,\sigma_{S,\nu}\sqrt{2n\log\log n}}\leq h(\gamma_n^{\pm}(P))\leq \delta_{S,\nu}^{\, n+(1+\epsilon)\log_{\delta_{S,\nu}}(e)\,\sigma_{S,\nu}\sqrt{2n\log\log n}} \vspace{.15cm}\] holds for all $n\geq N_{\gamma,P,\epsilon}$. It would be interesting to know if and when similar type bounds hold for rational functions; for a conjecture along these lines in the case of iterating a single rational map, see \cite[Conjecture 2]{SilvermanPN}. As an application, we can use Theorem \ref{thm:iteratedlogs} to count the number of iterates in left and right orbits of bounded height; compare to \cite[Corollary 1.16]{Me:dyndeg} and \cite[Proposition 3]{KawaguchiSilverman}. \vspace{.1cm} \begin{corollary}\label{cor:escapeptshtbds} Let $S$, $\nu$, $\delta_{S,\nu}$, and $B_S$ be as in Theorem \ref{thm:iteratedlogs}. Then for $\bar{\nu}$-almost every $\gamma\in\Phi_S$ the limits \vspace{.15cm} \[\lim_{B\rightarrow\infty}\frac{\#\{Q\in\Orb_\gamma^-(P)\,:\,h(Q)\leq B\}}{\log(B)}=\frac{1}{\log\delta_{S,\nu}}=\lim_{B\rightarrow\infty}\frac{\#\{W\in\Orb_\gamma^+(P)\,:\,h(W)\leq B\}}{\log(B)} \vspace{.15cm}\] hold (simultaneously) for all $P$ with $h(P)>3B_S$. \end{corollary} Although Theorem \ref{thm:iteratedlogs} and Corollary \ref{cor:escapeptshtbds} give nice descriptions of the growth rate of heights in generic left and right orbits, it is natural to ask what can be said in the non-generic case. Is it possible to prove a result somewhere in-between Theorem \ref{thm:rationalmaps} and Theorem \ref{thm:iteratedlogs}? Likewise, can we prove a result for (suitable) infinite sets $S$? For left iteration of morphisms, we have canonical heights at our disposal \cite{stochastic,Kawaguchi}, but this is not the case when iterating on the right; see Remark \ref{rem:nocanht} below. Moreover, understanding heights in right orbits can be useful for understanding (generalized) dynamical Galois groups; see Section \ref{sec:Galois}. As a first step (with the case of left iteration in mind), we assume that $S$ have further properties, which we now discuss. It is well known that if $\phi:\mathbb{P}^N(\overline{\mathbb{Q}})\rightarrow\mathbb{P}^N(\overline{\mathbb{Q}})$ is a morphism defined over $\overline{\mathbb{Q}}$ of degree $d_\phi$, then \begin{equation}\label{functoriality} h(\phi(P))=d_\phi h(P)+O_{\phi}(1)\;\;\;\text{for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$;} \vspace{.1cm} \end{equation} see, for instance, \cite[Theorem 3.11]{SilvDyn}. With this in mind, we let \begin{equation}{\label{htconstant}} C(\phi):=\sup_{P \in \mathbb{P}^N(\bar{\mathbb{Q}})} \Big\vert h(\phi(P))-d_\phi h(P)\Big\vert \end{equation} be the smallest constant needed for the bound in (\ref{functoriality}). Then, in order to control height growth rates for sequences in $S$, we define the following fundamental notion; compare to \cite{stochastic,Me:dyndeg,Kawaguchi}. \begin{definition}\label{def:htcontrolled} A set $S$ of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ is called \emph{height controlled} if the following properties hold: \vspace{.1cm} \begin{enumerate} \item $d_S:=\inf\{d_\phi:\phi\in S\}$ is at least $2$. \vspace{.15cm} \item $C_S:=\sup\{C(\phi): \phi\in S\}$ is finite. \vspace{.1cm} \end{enumerate} \end{definition} \begin{remark}We note first that any finite set of morphisms of degree at least $2$ is height controlled. To construct infinite collections, let $T$ be any non-constant set of maps on $\mathbb{P}^1$ and let $S_T=\{\phi\circ x^d\,: \phi\in T,\, d\geq2\}$. Then $S_T$ is height controlled and infinite; a similar construction works for $\mathbb{P}^N$ in any dimension. For another type of example, let $\mathcal{U}$ be the set of roots of unity in $\overline{\mathbb{Q}}$. Then $S=\{x^2+u\,:\, u\in \mathcal{U}\}$ is a height controlled collection of maps on $\mathbb{P}^1$. Moreover, it is worth pointing out that $S$ has a corresponding probability measure given by embedding $\mathcal{U}$ in the unit circle (in $\mathbb{C}$) and then taking the Haar measure on the circle. \end{remark} With the notion of height control morphisms in place, we prove a result for right iteration in-between Theorem \ref{thm:rationalmaps} and Theorem \ref{thm:iteratedlogs} above; compare to stronger results for left iteration \cite[Theorem 1.2]{stochastic} and \cite[Theorem 1.15]{Me:dyndeg}. However before stating this result, we make a few more notes on the differences between left and right iteration. First, as was mentioned before, canonical heights (in the usual sense) do not exist for right-iteration. That is, in principle one must keep track of both the corresponding liminf and limsup; see statement (1) of Theorem \ref{thm:zero-one} and Remark \ref{rem:nocanht} below. This is a drawback of right-iteration. On the other hand, there are certain advantages as well. For instance, ideally one would like to determine whether or not the total orbit (\ref{eq:totalorbit}) has a certain property by sampling a right or left orbit (and testing that same property). As an example, if a right (or left) orbit of $P$ is finite with positive probability, is it true that $\Orb_S(P)$ is necessarily finite? This statement turns out to be true for right orbits and false for left; for justification, see both Theorem \ref{thm:zero-one} below and \cite[Example 1.10]{Me:dyndeg}. \begin{theorem}\label{thm:zero-one} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all defined over a fixed number field $K$ and let $\nu$ be a discrete probability measure on $S$. Then the following statements hold: \vspace{.3cm} \begin{enumerate} \item[\textup{(1)}] For all $P$ and all $\gamma$, both \[\displaystyle{\liminf_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\;\;\; \text{and}\;\;\;\displaystyle{\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\] exist and are $h(P)+O(1)$. \\[3pt] \item[\textup{(2)}] For all $P$, the total orbit $\Orb_S(P)$ of $P$ is infinite if and only if \vspace{.1cm} \[\qquad0<\displaystyle{\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\qquad\qquad\text{($\bar{\nu}$-almost surely).}\vspace{.1cm}\] Hence, $\Orb_S(P)$ is finite if and only if $\Orb_\gamma^+(P)$ is finite with positive $\bar{\nu}$-probability. \\[3pt] \item[\textup{(3)}] If $\Orb_S(P)$ is infinite and $\mathbb{E}_\nu[\log\deg(\phi)]$ exists, then \vspace{.2cm} \[\limsup_{n\rightarrow\infty} h(\gamma_n^+(P))^{1/n}=\delta_{S,\nu}\qquad\qquad\text{($\bar{\nu}$-almost surely).}\] Moreover, the dynamical degree $\delta_{S,\nu}=\exp\big(\mathbb{E}_\nu[\log\deg(\phi)]\big)$ is given explicitly. \vspace{.025cm} \end{enumerate} \end{theorem} \begin{remark}{\label{rem:nocanht}} Note that the $\liminf$ and $\limsup$ in statement (1) of Theorem \ref{thm:zero-one} can be distinct. See Example \ref{eg:left-right difference} above. \end{remark} Having obtained results for left and right orbits, we turn to height counting problems for total orbits. Intuitively, one expects that if the maps in $S$ are related in some way (for instance, if they commute with each other), then this should cut down the number of possible points in total orbits. More formally, the asymptotic growth rate of the set \[\{Q\in\Orb_S(P)\,:\,h(Q)\leq B\}\] should depend on the structure of the compositional monoid $M_S$ that $S$ generates, at least for generic initial points $P$. As an illustration, we have the following related asymptotic, \[\lim_{B\rightarrow\infty}\frac{\#\Big\{f\in M_S\,:\,h\big(f(P)\big)\leq B\Big\}}{(\log B)^s}=\frac{1}{s!\cdot\prod_{i=1}^s\log\deg(\phi_i)}, \vspace{.25cm}\] when $S$ is a free basis (of cardinality $s$) for the commutative monoid $M_S$ and $P$ has sufficiently large height. For justification of this fact, as well as a discussion of the problem of counting points of bounded height in total orbits more generally, see Section \ref{sec:totalorbits}. In particular, we discuss how this problem in dynamics relates to the (weighted) growth rate problem for semigroups and to restricted weighted compositions in combinatorics \cite{growth1, compositions, growth2, growth3}.\\[3pt] \textbf{Acknowledgements:} We are happy to thank Andrew Bridy, James Douthitt, Joseph Gunther, Vivian Olsiewski Healey, Trevor Hyde, Rafe Jones, and Joseph Silverman for discussions related to the work in this paper. \section{Notation and tools from probability}\label{sec:notation} We begin by fixing some notation. For more information on these standard constructions in probability, see \cite{Durrett, ProbabilityText}. \begin{align*} S \;\;\;& \text{a set of dominant rational self maps on $\mathbb{P}^N$, all defined over $\overline{\mathbb{Q}}$}.\\ \nu \;\;\;& \text{a probability measure on $S$}.\\ \Phi_S \;\;& \text{the infinite product $\Phi_S=\Pi_{i=1}^\infty S=S^{\mathbb{N}}$}.\\[2pt] \bar{\nu} \;\;\;& \text{the product measure $\bar{\nu}=\Pi_{i=1}^\infty \nu$ on $\Phi_S$}. \\ \gamma\;\;\;& \text{an element of $\Phi_S$, viewed as an infinite sequence.}\\ \mathbb{E}_{\bar\nu}[f]\;\,& \text{the expected value $\mathlarger{\smallint}_{\hspace{-.1cm}\mathsmaller{\Phi_s}} f\,d\bar{\nu}$ of a random variable $f:\Phi_S\rightarrow\mathbb{R}$} \end{align*} \begin{remark} It is likely that many of our results on dynamical degrees hold without assumptions on the field of definition of the maps in $S$. However, since we wish to study heights, we assume that every map in $S$ has $\overline{\mathbb{Q}}$-coefficients. In particular, the sets $S$ we consider are countable, and for this reason, we assume that $\nu$ is a discrete measure with $\nu(\phi)>0$ for all $\phi\in S$. Likewise, since there may be no natural choice of probability measure $\nu$ on $S$, we keep the measures $\nu$ and $\bar{\nu}$ in much of the notation (e.g., $\mathbb{E}_{\bar\nu}[f]$) to remind the reader of the dependence of our formulas and bounds on the choice of $\nu$. \end{remark} When $S=\{\phi\}$ is a single map, a crucial tool for establishing the convergence of the limit defining the dynamical degree is Fekete's lemma (see the proof of \cite[Proposition 7]{SilvermanPN}), which states that if $a_n$ is a subadditive sequence of non-negative real numbers, then $\lim a_n/n$ exists. In particular, the following landmark theorem due to Kingman \cite{kingman} may be viewed as a random version of Fekete's lemma. In what follows, the expected value $\mathbb{E}_{\mu}[f]$ of a random variable $f: \Omega\rightarrow \mathbb{R}$ on a probability space $(\Omega,\Sigma, \mu)$ is the integral $\int_\Omega f d\mu$. \begin{theorem}[Kingman's Subadditive Ergodic Theorem]\label{thm:kingman} Let $T$ be a measure preserving transformation on a probability space $(\Omega,\Sigma, \mu)$, and let $(g_n)_{n\geq1}$ be a sequence of $L^1$ random variables that satisfy the subadditivity relation \begin{equation}\label{subadd} g_{m+n}\leq g_n+g_m\circ T^n \end{equation} for all $n,m\geq1$. Then there exists a $T$-invariant function $g$ such that \[\lim_{n\rightarrow\infty}\frac{g_n(x)}{n}=g(x)\] for $\mu$-almost every $x\in\Omega$. Moreover, if $T$ is ergodic, then $g$ is constant and \vspace{.1cm} \[\lim_{n\rightarrow\infty}\frac{g_n(x)}{n}=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_\mu[g_n]}{n} =\inf_{n\geq1}\frac{\mathbb{E}_\mu[g_n]}{n}.\] for $\mu$-almost every $x\in\Omega$ \end{theorem} \begin{remark} A transformation $T:\Omega\rightarrow\Omega$ on a probability space $(\Omega,\Sigma,\mu)$ is called \emph{ergodic} if for all $E\in\Sigma$ such that $T^{-1}(E)=E$, either $\mu(E)=0$ or $\mu(E)=1$. \end{remark} We also need a similar (yet weaker) ergodic theorem due to Birkhoff. \begin{theorem}[Birkhoff's Ergodic Theorem]\label{birk} If $T$ is an ergodic, measure preserving transformation on a probability space $(\Omega,\Sigma, \mu)$, then for every random variable $f\in L^1(\Omega)$, \begin{equation}\label{birkhoff} \lim_{n\rightarrow\infty} \frac{1}{n}\sum_{j=0}^{n-1} f\circ T^j(x)=\mathbb{E}_\mu[f]. \end{equation} for $\mu$-almost every $x\in\Omega$. \end{theorem} To apply Kingman's Subadditive Ergodic Theorem to dynamical degrees, we use the following well known example of an ergodic, measure preserving transformation. In particular, the lemma below is a simple consequence of Kolmogorov's $0$\,-$1$ law \cite[Theorem 10.6]{ProbabilityText}; for nice further discussions, see \cite[Example 7.1.6]{Durrett} or \cite[Example 5.5]{steve2} and \cite[Exercise 5.11]{steve2}. \begin{lemma}\label{shift} Let $S$ be a set with probability measure $\nu$ and let $(\Phi_S,\bar{\nu})$ be the corresponding infinite product space. Then the shift map, \[T\big(\theta_1,\theta_2, \dots \big)=(\theta_2, \theta_3,\dots)\] is an ergodic, measure preserving transformation on $\Phi_S$. \end{lemma} \begin{remark} When $S$ is a finite set, the probability space $\Phi_S$ and the map $T$ as in Lemma \ref{shift} are often called Bernoulli schemes and Bernoulli shifts respectively. \end{remark} Finally, to obtain the improved height bounds in part (3) of Theorem \ref{thm:iteratedlogs} with a main term of $\delta^n$, we use the following result due to Hartman and Wintner known as the Law of Iterated Logarithms; see \cite[Theorem 8.11.3]{Durrett}. As with certain classical theorems in probability (e.g., the Law of Large Numbers, The Central Limit Theorem, etc.) the Law of Iterated Logarithms for simple random walks is normally stated in terms of independent and identically distributed (or \emph{i.i.d.} for short) random variables; see \cite[\S2.1]{Durrett} or \cite[\S10]{ProbabilityText} for a definition and discussing of i.i.d sequences. However, for our purposes, it suffices to know that if $f:S\rightarrow\mathbb{R}$ is any $\nu$-measurable function, then the corresponding projection maps $X_n(f):\Phi_S\rightarrow\mathbb{R}$ on the product space $(\Phi_S,\bar{\nu})$ given by $X_{n,f}(\theta_1,\theta_2, \dots)=f(\theta_n)$ form an i.i.d sequence of random variables; this is a simple consequence of the relevant definitions \cite[Corollary 10.2]{ProbabilityText}. \begin{theorem}[Law of Iterated Logarithms]\label{thm:lawiterlogs} Suppose that $X_1$, $X_2$, $\dots$ are i.i.d. random variables on $(\Omega,\Sigma, \mu)$ with $\mathbb{E}_\mu[X_i]=0$ and $\mathbb{E}_\mu[X_i^2]=1$. Then, if $S_n=X_1+\dots+X_n$ denotes the truncated sum, we have that \begin{equation}\label{brownian} \qquad\limsup_{n\rightarrow\infty} \frac{\pm S_n}{\sqrt{2n\log\log n}}=1\qquad\text{($\mu$-almost surely).} \end{equation} \end{theorem} \begin{remark} Interestingly, the Law of Iterated Logarithms (for simple random walks) stated above is proven by first establishing the analogous fact for Brownian motion and then deducing (\ref{brownian}) from that case. \end{remark} \section{Rational maps: dynamical degrees and height bounds} In this section, we prove Theorem \ref{thm:rationalmaps} on dynamical degrees and height bounds for rational maps; for strengthened results on morphisms, see Section \ref{sec:morphisms}. \begin{proof}[(Proof of Theorem \ref{thm:rationalmaps})] We begin with the proof of statement (1) on dynamical degrees. For $n\geq1$, we define the random variables $g_n^{-}:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ and $g_n^{+}:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ given by \[g_n^{-}(\gamma):=\log\deg(\gamma_n^{-})\;\;\text{and}\;\;g_n^{+}(\gamma):=\log\deg(\gamma_n^{+})\] respectively. Note that each $g_n^{\pm}$ is non-negative since $S$ is a collection of dominant maps. We will show that the sequences $(g_n^-)_{n\geq1}$ and $(g_n^+)_{n\geq1}$ satisfy the hypothesis of Kingman's Subadditive Ergodic Theorem. Note first that each $g_n^{\pm}$ factors through the finite product $S^n$ and $S^n$ (a countable set) is equipped with the discrete measure (a finite product of discrete spaces is discrete). In particular, $g_n^{\pm}$ is $\bar{\nu}$-measurable by \cite[Corollary 10.2]{ProbabilityText}. Likewise, define $f_i:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ given by $f_i(\gamma)=\log\deg(\theta_i)$ for $\gamma=(\theta_s)_{s=1}^\infty$. Then $f_i$ is also measurable by \cite[Corollary 10.2]{ProbabilityText}. Moreover, we see that $g_n^{\pm}\leq\sum_{i=1}^nf_i$, since \begin{equation}\label{degbd} \deg(F\circ G)\leq\deg(F)\deg(G)\;\;\;\;\text{for any}\; F,G\in \Dom(\mathbb{P}^N); \end{equation} here, $\Dom(\mathbb{P}^N)$ is the set of dominant self-maps on $\mathbb{P}^N$. In particular, \[\mathbb{E}_{\bar{\nu}}[g_n^{\pm}]\leq\sum_{i=1}^n\mathbb{E}_{\bar{\nu}}[f_i]=n\,\mathbb{E}[f_1]:=n\,\mathbb{E}_\nu[\log\deg(\phi)];\] here we use that $\Phi_S$ consists of i.i.d sequences. In particular, each $g_n^{\pm}$ is an $L^1$ function since $\mathbb{E}_\nu[\log\deg(\phi)]$ is bounded by assumption. Now we check the subadditivity relation in (\ref{subadd}), a simple consequence of (\ref{degbd}). Let $n,m>0$, let $\gamma=(\theta_s)_{s=1}^\infty$, and let $T$ be the shift map on $\Phi_S$. Then we compute that \vspace{.25cm} \begin{equation*} \begin{split} g_{n+m}^{\,-}(\gamma)=\log\deg(\theta_{m+n}\circ\dots\circ\theta_1)&\leq\log\deg(\theta_{m+n}\circ\dots\circ\theta_{n+1})+\log\deg(\theta_n\circ\dots\circ\theta_1)\\[3pt] &=g_m^-(T^n(\gamma))+g_n^-(\gamma)=g_n^-(\gamma)+g_m^-(T^n(\gamma)), \vspace{.25cm} \end{split} \end{equation*} by (\ref{degbd}). Likewise for right iteration, we see that \vspace{.1cm} \begin{equation*} \begin{split} g_{n+m}^{\,+}(\gamma)=\log\deg(\theta_{1}\circ\dots\circ\theta_{n+m})&\leq\log\deg(\theta_{1}\circ\dots\circ\theta_{n})+\log\deg(\theta_{n+1}\circ\dots\circ\theta_{n+m})\\[3pt] &=g_n^+(\gamma)+g_m^+(T^n(\gamma)), \vspace{.25cm} \end{split} \end{equation*} In particular, Theorem \ref{thm:kingman} and Lemma \ref{shift} together imply that \vspace{.2cm} \begin{equation}\label{kinglim} \lim_{n\rightarrow\infty}\log\deg(\gamma_n^{\pm})^{1/n}=\lim_{n\rightarrow\infty}\frac{g_n^{\pm}(\gamma)}{n}=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{\pm}]}{n}=\inf_{n\geq1}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{\pm}]}{n} \end{equation} for $\bar{\nu}$-almost every $\gamma\in\Phi_S$. However, apriori the limits \[\delta_{S,\nu}^-:=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{-}]}{n}\;\;\text{and}\;\; \delta_{S,\nu}^+:=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{+}]}{n}\] could be distinct (in fact, if we were to allow maps over $\mathbb{C}$ so that $S$ could be uncountable, then we expect that this could be the case). But $S$ is countable and discrete by assumption, and so these limits are in fact equal. To see this, we define the bijections $\tau_n:S^n\rightarrow S^n$ given by \[\tau_n(\theta_1,\dots,\theta_n)=(\theta_n,\dots,\theta_1)\] and let $\nu_n=\nu\times\dots\times\nu$ be the product probability measure on $S^n$. Then it follows from the definition of $\nu_n$ that \[\nu_n(\theta_1,\dots,\theta_n)=\nu(\theta_1)\cdots\nu(\theta_n)=\nu(\theta_n)\cdots\nu(\theta_1)=\nu_n(\tau_n(\theta_1,\dots\theta_n))\] see \cite[\S10]{ProbabilityText}. Now let $G_n^{\pm}$ be the random variables on $S^n$ given by \vspace{.1cm} \[G_n^-(\theta_1,\dots,\theta_n)=\log\deg(\theta_n\circ\dots\circ\theta_1)\;\;\;\text{and}\;\;\;G_n^+(\theta_1,\dots,\theta_n)=\log\deg(\theta_1\circ\dots\circ\theta_n)\vspace{.1cm}\] In particular, it is straightforward to check that $G_n^-=G_n^+\circ\tau_n$. Therefore, since $S^n$ is countable/discrete, $\tau_n$ is bijective, and the series below are absolutely convergent:\vspace{.1cm} \begin{equation}{\label{eq:directionswap}} \mathbb{E}_{\nu_n}[G_n^{-}]=\sum_{x\in S^n}G_n^-(x)\nu_{n}(x)=\sum_{x\in S^n}G_n^+(\tau_n(x))\nu_{n}(\tau(x))=\sum_{y\in S^n}G_n^+(y)\nu_{n}(y)=\mathbb{E}_{\nu_n}[G_n^{+}].\vspace{.1cm} \end{equation} On the other hand, $g_n^{\pm}$ factors through $G_n^{\pm}$, so that \cite[Theorem 10.4]{ProbabilityText} and (\ref{eq:directionswap}) together imply that \vspace{.1cm} \begin{equation}\label{eq:swap2} \mathbb{E}_{\bar{\nu}}[g_n^{-}]=\mathbb{E}_{\nu_n}[G_n^{-}]=\mathbb{E}_{\nu_n}[G_n^{+}]=\mathbb{E}_{\bar{\nu}}[g_n^{+}]\qquad\text{for all $n\geq1$}. \vspace{.1cm} \end{equation} Hence, it follows from (\ref{kinglim}) and (\ref{eq:swap2}) that \begin{equation}{\label{eq:swap3}} \lim_{n\rightarrow\infty}\log\deg(\gamma_n^{-})^{1/n}=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{-}]}{n}=\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{+}]}{n}=\lim_{n\rightarrow\infty}\log\deg(\gamma_n^{+})^{1/n} \end{equation} for $\bar{\nu}$-almost every $\gamma\in\Phi_S$; here we use also that the intersection of almost sure events is almost sure. Moreover, applying the exponential map to (\ref{eq:swap3}) and exchanging $\exp$ with the limit (justified, by continuity) gives \begin{equation}\label{eq:dendegdef} \lim_{n\rightarrow\infty}\deg(\gamma_n^{\pm})^{1/n}=\delta_{S,\nu}:=\exp\Big(\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{-}]}{n}\Big)=\exp\Big(\lim_{n\rightarrow\infty}\frac{\mathbb{E}_{\bar{\nu}}[g_n^{+}]}{n}\Big) \end{equation} for $\bar{\nu}$-almost every $\gamma\in\Phi_S$ as claimed. Now for the proof of statement (2) of Theorem \ref{thm:rationalmaps}. Suppose that $S$ is finite and degree independent. Let $k\geq1$ be an integer, and let \begin{equation}\label{def:strings} M_{S,k}:=\big\{f=\theta_1\circ\dots\circ\theta_k\,\big\vert\;\text{for some}\;(\theta_1,\dots,\theta_k)\in S^k\big\} \end{equation} be the set of possible functions generated by $k$-term strings of elements of $S$. Then a standard triangle inequality estimate (see the proof of \cite[Theorem 3.11]{SilvDyn}) implies that \begin{equation}\label{rat:bd1} \,h(f(Q))\leq\deg(f) \,h(Q)+C(k,S)\qquad \text{for all $f\in M_{S,k}$ and all $Q\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$}. \end{equation} To see this, note that there is such a constant for each $f$ and only finitely many $f$'s, since $S$ is a finite set. Moreover, it is important to note that the estimate above does not depend on the direction of iteration (but simply the length of the string). In particular, we see that if $P\in \mathbb{P}^N(\overline{\mathbb{Q}})_S$, if $n\geq1$, and if $F_{nk}=f_n\circ f_{n-1}\circ\dots\circ f_1$ is an arbitrary element of $M_{S,nk}$ for some choice of $f_i\in M_{S,k}$, then repeated application of the bound in (\ref{rat:bd1}) implies that \vspace{.25cm} \begin{equation}\label{eq:stringbd} \begin{split} h(F_{nk}(P))\leq&\deg(f_n)\deg(f_{n-1})\dots\deg(f_1)\Scale[.84]{\Big(h(P)+\frac{C(k,S)}{\deg(f_1)}+\frac{C(k,S)}{\deg(f_1)\deg(f_2)}+\dots+\frac{C(k,S)}{\deg(f_1)\dots\deg(f_n)}\Big)} \\[5pt] \leq&\deg(f_n)\deg(f_{n-1})\dots\deg(f_1) \Big(h(P)+C(k,S)\Big). \vspace{.15cm} \end{split} \end{equation} Here we use our assumption that $S$ is degree independent, so that $\deg(f_i)\geq2$ for all $i$. Now we apply this bound to sequences. For $\gamma=(\theta_s)_{s=1}^{\infty}\in\Phi_S$ and $i,k\geq1$, let \vspace{.15cm} \[f_{i,k}^-(\gamma)=\theta_{ik}\circ\theta_{ik-1}\circ\dots\circ\theta_{(i-1)k+1}\;\;\;\text{and}\;\;\;f_{i,k}^+(\gamma)=\theta_{(i-1)k+1}\circ \theta_{(i-1)k+1}\dots\theta_{ik}. \vspace{.15cm} \] In particular, it is straightforward to check that \vspace{.15cm} \[\gamma_{nk}^-=f_{n,k}^-(\gamma)\circ f_{n-1,k}^-(\gamma)\circ\dots \circ f_{1,k}^-(\gamma)\;\;\; \text{and}\;\;\;\gamma_{nk}^+=f_{1,k}^+(\gamma)\circ f_{2,k}^+(\gamma)\circ\dots\circ f_{n,k}^+(\gamma). \vspace{.15cm} \] Moreover, each $f_{i,k}^{\pm}(\gamma)\in M_{S,k}$ is the composition of a $k$-term string from $S$. Therefore, (\ref{eq:stringbd}) above applied separately to $F_{nk}=\gamma_{nk}^-$ and $F_{nk}=\gamma_{nk}^+$ implies that \vspace{.15cm} \begin{equation}\label{rat:bd2} \begin{split} h(\gamma_{nk}^-(P))\leq\deg(f_{1,k}^-(\gamma))\deg(f_{2,k}^-(\gamma))\dots\deg(f_{n,k}^-(\gamma)) \,C(k,S,P)\\[8pt] h(\gamma_{nk}^+(P))\leq\deg(f_{1,k}^+(\gamma))\deg(f_{2,k}^+(\gamma))\dots\deg(f_{n,k}^+(\gamma))\,C(k,S,P) \end{split} \end{equation} holds for all $n,k\geq1$, all $\gamma\in\Phi_{S}$ and all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$; here we reverse the order of the product of the degrees for left iteration,\vspace{.1cm} \[\deg(f_{n,k}^-(\gamma))\deg(f_{n-1,k}^-(\gamma))\dots\deg(f_{1,k}^-(\gamma))=\deg(f_{1,k}^-(\gamma))\deg(f_{2,k}^-(\gamma))\dots\deg(f_{n,k}^-(\gamma)),\vspace{.1cm} \] to streamline the argument to come. From here we use Birkhoff's Ergodic Theorem to control the right hand side of (\ref{rat:bd2}) above. Namely, let $T_{(k)}:\Phi_S\rightarrow\Phi_S$ denote the $k$-shift map, $T_{(k)}:=T^k=T\circ T\circ\dots\circ T$. In particular, since the shift map $T$ is ergodic and measure preserving by Lemma \ref{shift}, so is $T_{(k)}$ for all $k\geq1$. Now consider the random variables $F_{(k)}^{-}:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ and $F_{(k)}^{+}:\Phi_S\rightarrow\mathbb{R}_{\geq0}$ given by \vspace{.1cm} \[ F_{(k)}^{\pm}(\gamma)=\frac{\log\deg(\gamma_k^{\pm})}{k}=\frac{\log\deg(f_{1,k}^{\pm}(\gamma))}{k}\;\;\;\;\ \text{for $\gamma\in\Phi_S$}.\] Then, it follows from the definition of $f_{i,k}^\pm$ that $F_{(k)}^{\pm}\circ T_{(k)}^{i-1}=1/k\cdot\log\deg(f_{i,k}^{\pm})$. Hence, rewriting the bounds in (\ref{rat:bd2}) and taking $nk$-th roots, we see that \vspace{.1cm} \begin{equation}\label{rat:bd3} h(\gamma_{nk}^{\pm}(P))^{1/nk}\leq \bigg(\exp\frac{1}{n}\sum_{j=0}^{n-1}F_{(k)}^{\pm}\big(T_{(k)}^j(\gamma)\big)\bigg) \,C(k,S,P)^{1/nk}.\end{equation} In particular, (\ref{rat:bd3}) implies that \begin{equation}\label{rat:bd4} \limsup_{n\rightarrow\infty}h(\gamma_{nk}^{\pm}(P))^{1/nk}\leq\limsup_{n\rightarrow\infty}\bigg(\exp\,\frac{1}{n}\sum_{i=0}^{n-1}F_{(k)}^{\pm}\big(T_{(k)}^i(\gamma)\big)\bigg). \end{equation} However, Birkhoff's Ergodic Theorem \ref{birk} implies that \begin{equation}\label{rat:lim} \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}F_{(k)}^{\pm}\big(T_{(k)}^i(\gamma)\big)=\mathbb{E}_{\bar{\nu}}[F_{(k)}^{\pm}] \end{equation} for almost every $\gamma\in\Phi_{S}$; note that this claim is independent of the point $P$. Moreover, since a countable intersection of almost sure events is almost sure, we see that the limit in (\ref{rat:lim}) is \textbf{true for all k} (for both left and right iteration), for almost every $\gamma\in\Phi_S$. On the other hand, (\ref{eq:swap2}) above implies that \begin{equation}\label{eq:exp=} \mathbb{E}_{\bar{\nu}}[F_{(k)}^{-}]=\frac{\mathbb{E}_{\bar{\nu}}[g_k^{-}]}{k}=\frac{\mathbb{E}_{\bar{\nu}}[g_k^{+}]}{k}=\mathbb{E}_{\bar{\nu}}[F_{(k)}^{+}]. \end{equation} Hence, the limit on the righthand side of (\ref{rat:lim}) does not depend on the direction. Therefore, (\ref{rat:bd4}), (\ref{rat:lim}), and the fact that the exponential function is continuous together imply that \vspace{.1cm} \begin{equation}\label{rat:bigbd} \limsup_{n\rightarrow\infty}h(\gamma_{nk}^{\pm}(P))^{1/nk}\leq \exp\bigg(\mathbb{E}_{\bar{\nu}}\Big[\frac{\log\deg(\gamma_k^-)}{k}\Big]\bigg) \vspace{.1cm} \end{equation} holds for all $k$ (for both left and right iteration), for almost every $\gamma\in\Phi_S$. From here, we handle left and right iteration separately and begin with left iteration. In particular, we show that the overall limsup (without $k$) in part (2) of Theorem \ref{thm:rationalmaps} can be computed using the subsequence of multiples of $k$ (for any $k\geq1$). This line of reasoning does not work for right iteration in general; see Example \ref{eg:left-right difference}. To do this, define constants \begin{equation}\label{rat:degbd} d_{S,k}:=\max_{\substack{f\in M_{S,r}\\ 0\leq r<k}}\deg(f)\;\;\; \text{and}\;\;\;\ B_{S,k}:=\max_{0\leq r<k} C(r,S); \end{equation} here, we remind the reader that $C(r,S)$ is the height bound constant given by \vspace{.1cm} \begin{equation}\label{rat:degbd2} C(r,S)=\max_{f\in M_{S,r}}\sup_{Q\in\mathbb{P}^N(\overline{\mathbb{Q}})}\{h(f(Q))-\deg(f)h(Q)\}. \vspace{.1cm} \end{equation} In particular, both $d_{S,k}$ and $B_{S,k}$ are finite since $S$ is a finite set. From here we proceed as in the proof of \cite[Proposition 12]{SilvermanPN}. Namely, for any $k\geq1$ and $m\geq k$, we can write $\gamma_m^-=f\circ\gamma_{nk}^-$ for some $f\in M_{S,r}$, some $0\leq r< k$, and some $n\geq1$. With this in mind, \vspace{.15cm} \begin{equation}\label{rat:subseq} \begin{split} \limsup_{m\rightarrow\infty} h(\gamma_m^{-}(P))^{1/m}&=\limsup_{n\rightarrow\infty} \max_{0\leq r<k} h(\gamma_{r+nk}^{-}(P))^{1/(r+nk)}\\[5pt] &\leq\limsup_{n\rightarrow\infty} \Big(d_{S,k}\,h(\gamma_{nk}^{-}(P))+B_{S,k}\Big)^{1/nk}\;\;\;\;\;\ \text{by (\ref{rat:bd1}), (\ref{rat:degbd}), and (\ref{rat:degbd2})}\\[5pt] &=\limsup_{n\rightarrow\infty} h(\gamma_{nk}^{-}(P))^{1/nk} \vspace{.1cm} \end{split} \end{equation} Hence, combining the bound in (\ref{rat:bigbd}) with (\ref{rat:subseq}), we see that \vspace{.2cm} \begin{equation}{\label{rat:bd6}} \limsup_{m\rightarrow\infty} h(\gamma_m^{-}(P))^{1/m}\leq \exp\bigg(\mathbb{E}_{\bar{\nu}}\Big[\frac{\log\deg(\gamma_k^-)}{k}\Big]\bigg)=\exp\bigg(\frac{\mathbb{E}_{\bar{\nu}}[\log\deg(\gamma_k^-)]}{k}\bigg) \vspace{.2cm} \end{equation} holds for all $k\geq1$, for all $P\in\mathbb{P}^N(\bar{\mathbb{Q}})_S$, for $\bar{\nu}$-almost every $\gamma\in\Phi_S$. Now for iteration on the right. For any $k\geq1$ and $m\geq k$, write $\gamma_m^+=\gamma_{nk}^+\circ f$ for some $f\in M_{S,r}$, some $0\leq r<k$, and some $n\geq1$. Now let \[M_{S,k}(P):=\big\{Q\in\mathbb{P}^N(\overline{\mathbb{Q}})\,:\, Q=f(P)\;\text{for some $f\in M_{S,r}$ and $0\leq r<k$} \big\}\] In particular, $M_{S,k}(P)$ is a finite set of points since $S$ is finite. Therefore, \[\mathcal{C}_{S,k,P}:=\max_{Q\in M_{S,k}(P)}\{h(Q)+B_{S,k}\}\] is a finite constant. Moreover, $h(\gamma_m^+(P))=h(\gamma_{nk}^+(Q))$ for some $Q\in M_{S,k}(P)$ by construction. On the other hand, (\ref{eq:stringbd}) and (\ref{rat:bd3}) hold for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$. In particular, these bounds hold for all $Q\in M_{S,k}(P)$. Therefore, \begin{equation}{\label{rat:bd7}} h(\gamma_m^+(P))^{1/m}=h(\gamma_{nk}^+(Q))^{1/m}\leq h(\gamma_{nk}^+(Q))^{1/nk}\leq \bigg(\exp\frac{1}{n}\sum_{j=0}^{n-1}F_{(k)}^{+}\big(T_{(k)}^j(\gamma)\big)\bigg) \,\mathcal{C}_{S,k,P}^{1/nk} \end{equation} As before letting $m\rightarrow\infty$ (and therefore $n\rightarrow\infty$), Birkhoff's Ergodic Theorem implies that \begin{equation}{\label{rat:bd8}} \limsup_{m\rightarrow\infty} h(\gamma_m^{+}(P))^{1/m}\leq \exp\bigg(\mathbb{E}_{\bar{\nu}}\Big[\frac{\log\deg(\gamma_k^-)}{k}\Big]\bigg)=\exp\bigg(\frac{\mathbb{E}_{\bar{\nu}}[\log\deg(\gamma_k^-)]}{k}\bigg) \vspace{.2cm} \end{equation} holds for all $k\geq1$, for all $P\in\mathbb{P}^N(\bar{\mathbb{Q}})_S$, for $\bar{\nu}$-almost every $\gamma\in\Phi_S$; recall that the limit of the expected values of $F_{(k)}^-$ and $F_{(k)}^+$ are equal by (\ref{eq:exp=}). In particular letting $k\rightarrow\infty$, we deduce from (\ref{eq:dendegdef}) and our combined bounds in (\ref{rat:bd6}) and (\ref{rat:bd8}), that for $\bar{\nu}$-almost every $\gamma\in\Phi_S$ the bounds \vspace{.1cm} \[\limsup_{m\rightarrow\infty} h(\gamma_m^{\pm}(P))^{1/m}\leq \delta_{S,\nu}\] hold (simultaneously) for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})_S$. This completes the proof of Theorem \ref{thm:rationalmaps}. \end{proof} \section{Morphisms: dynamical degrees and height bounds}\label{sec:morphisms} Throughout this section, let $S$ be a set of height controlled set of endomorphisms on $\mathbb{P}^N$. Ideally, one would like to strengthen part (2) of Theorem \ref{thm:rationalmaps} for rational maps in two ways: to replace the limsup with a limit, and to replace the inequality with an equality; compare to \cite[Conjecture 6.d]{KawaguchiSilverman} and \cite[Conjecture 1.b]{SilvermanPN}. We succeed in proving this when $S$ is a finite set and the initial point $P$ has sufficiently large height. Moreover (perhaps surprisingly), the resulting limit is (almost surely) independent of the direction of iteration. To prove both Theorems \ref{thm:iteratedlogs} and \ref{thm:zero-one}, we need the following generalization of Tate's telescoping argument. In what follows, $M_S$ is the monoid generated by $S$ under composition, and $d_S$ and $C_S$ are the height controlled constants in Definition \ref{def:htcontrolled}. \begin{lemma}{\label{lem:tate}} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$, and let $d_S$ and $C_S$ be the corresponding height controlling constants. Then for all $\rho\in M_S$, \[\bigg|\frac{h(\rho(Q))}{\deg(\rho)}-h(Q)\bigg|\leq \frac{C_S}{d_S-1} \;\;\;\; \text{for all $Q\in \mathbb{P}^N(\overline{\mathbb{Q}})$.}\] \end{lemma} \begin{proof} Suppose that $\rho=\theta_r\circ\theta_{r-1}\dots \circ\theta_1$ for $\theta_i\in S$, and let $\theta_0$ to be the identity map on $\mathbb{P}^N$. Then define \[\rho_{i}:=\theta_i\circ\theta_{i-1}\dots \circ\theta_1\circ\theta_0 \;\;\;\; \text{for $0\leq i\leq r$}.\vspace{.05cm}\] Note, that $\rho=\rho_r$ and $\rho_0=\theta_0$ is the identity map. In particular, inspired by Tate's telescoping argument, we rewrite \vspace{.05cm} \begin{equation}{\label{Tate}} \begin{split} \bigg|\frac{h(\rho(Q))}{\deg(\rho)}-h(Q)\bigg|&=\bigg|\sum_{i=0}^{r-1}\frac{h(\rho_{r-i}(Q))}{\deg(\rho_{r-i})}- \frac{h(\rho_{r-i-1}(Q))}{\deg(\rho_{r-i-1})}\bigg|\\[5pt] &\leq \sum_{i=0}^{r-1}\bigg|\frac{h(\rho_{r-i}(Q))}{\deg(\rho_{r-i})}- \frac{h(\rho_{r-i-1}(Q))}{\deg(\rho_{r-i-1})}\bigg| \\[5pt] &=\sum_{i=0}^{r-1}\frac{\Big|h(\rho_{r-i}(Q))-\deg(\theta_{r-i})h(\rho_{r-i-1}(Q))\Big|}{\deg(\rho_{r-i})} \\[5pt] &\leq \sum_{i=1}^{r}\frac{C}{(d_S)^{i}}\leq \sum_{i=1}^{\infty}\frac{C_S}{(d_S)^i}=\frac{C_S}{d_S-1}. \end{split} \end{equation} This completes the proof of Lemma \ref{lem:tate}. \end{proof} With this height bound in place, we are nearly ready to prove our main result for sets of morphisms, Theorem \ref{thm:iteratedlogs}. In fact, we are able to prove a stronger result. Namely, both $h(\gamma_n^{\pm}(P))^{1/n}$ approach the dynamical degree (almost surely) whenever $P$ is a so called escape point for $S$; see Definition \ref{def:escapepts} below. Moreover, every point $P$ of sufficiently large height is an escape point for $S$, and we therefore recover Theorem \ref{thm:iteratedlogs}. \begin{remark} This improved version can be useful for analyzing dynamical Galois groups; see Section \ref{sec:Galois}. For instance, if $S=\{x^{d_1}+c_1, \dots, x^{d_s}+c_s\}$ is a set of unicritical polynomials, then the right orbits of $P=0$ (i.e., the critical orbits) control the ramification in the associated towers of splitting fields; see Proposition \ref{prop:discriminant} below. However, $P=0$ does not have large enough height to apply Theorem \ref{thm:iteratedlogs} directly. Nevertheless, $P=0$ is very often an escape point for $S$ (see Corollary \ref{cor:unicritescape} below), in which case the conclusions of Theorem \ref{thm:iteratedlogs} still hold. \end{remark} To define escape points, recall that $M_{S,r}$ denotes the set of functions generated by tuples of elements of $S$ of length $r$; see (\ref{def:strings}) above. Moreover by convention, $M_{S,0}$ is the singleton set containing the identity function. \begin{definition}\label{def:escapepts} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ and define $B_S:=C_S/(d_S-1)$. If there exists $r\geq0$ such that $h(g(P))> B_S$ for all $g\in M_{S,r}$, then we say that $P$ is an \emph{escape point} for $S$. Moreover, we call the minimum such value of $r$ the \emph{escape level} of $P$. \end{definition} The importance of escape points is explained by the following auxiliary result. Namely, if $P$ is an escape point for $S$, then we can bound quantities of the form $h(f(P))/\deg(f)$ from below (in a nontrivial way). This may be viewed as analogous to $P$ having positive canonical height when iterating a single function. However, this is not a perfect analogy, since canonical heights do not exist in general for right iteration; see Example \ref{eg:left-right difference} above. \begin{lemma}\label{lem:escapept} Let $S$ be a finite set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all of degree at least two and let $P$ be an escape point for $S$ with escape level $r\geq0$. Then there exist positive constants $B_{S,P,1}$ and $B_{S,P,2}$ (depending on $P$) such that \[0<B_{S,P,1}\leq\frac{h(f(P))}{\deg(f)}\leq B_{S,P,2}\] for all $f\in M_{S,n}$ with $r\leq n$. \end{lemma} \begin{proof} The upper bound on $h(f(P))/\deg(f)$ follows directly from Lemma \ref{lem:tate} applied to the map $\rho=f$ and the point $Q=P$. For the lower bound, let $r\geq0$ be the escape level of $P$ and let $f\in M_{S,n}$ for some $n\geq r$. Then we can write $f=j\circ g$ for some $j\in M_{S,n-r}$ and some $g\in M_{S,r}$. Then Lemma \ref{lem:tate} applied to the map $\rho=j$ and the point $Q=g(P)$ implies that \vspace{.1cm} \begin{equation}\label{lbdespace} \begin{split} \frac{h(f(P))}{\deg(f)}=\frac{h(j(g(P)))}{\deg(j)\deg(g)}&\geq\frac{1}{\deg(g)}\big(h(g(P))-B_S\big)\\[8pt] &\geq \frac{1}{\displaystyle{\max_{g\in M_{S,r}}\{\deg{g}\}}}\cdot\min_{g\in M_{S,r}}\big\{h(g(P))-B_S\big\} \end{split} \end{equation} However, since $S$ is a finite set and $r$ is fixed, the degree of $g\in M_{S,r}$ is absolutely bounded. Likewise, since $P$ is an escape point for $S$, the quantity $h(g(P))-B_S$ is positive for all $g\in M_{S,r}$. Therefore, the minimum on the right hand side of (\ref{lbdespace}) is positive, since it is the minimum value of a finite set of positive numbers. In particular, there is a positive constant $B_{S,P,2}$, depending only on $S$ and $P$, such that $h(f(P))/\deg(f)>B_{2,P}$ as claimed. \end{proof} With Lemma \ref{lem:escapept} in place, we are ready to prove an improved version of Theorem \ref{thm:iteratedlogs} from the Introduction for escape points. \begin{theorem}\label{thm:escapepoints} Let $S$ be a finite set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all of degree at least two, and let $\nu$ be a discrete probability measure on $S$. Then the following statements hold: \vspace{.25cm} \begin{enumerate} \item[\textup{(1)}] The dynamical degree is given by $\displaystyle{\delta_{S,\nu}=\prod_{\phi\in S}\deg(\phi)^{\nu(\phi)}}$. \vspace{.3cm} \item[\textup{(2)}] For $\bar{\nu}$-almost every $\gamma\in\Phi_S$, the limits \vspace{.1cm} \[\lim_{n\rightarrow\infty}h(\gamma_n^-(P))^{1/n}=\delta_{S,\nu}=\lim_{n\rightarrow\infty}h(\gamma_n^+(P))^{1/n}\vspace{.15cm}\] hold (simultaneously) for all escape points $P$ for $S$. \vspace{.4cm} \item[\textup{(3)}] If the variance $\sigma^2$ of $\log(\deg(\phi))$ is nonzero, then for $\bar{\nu}$-almost every $\gamma\in\Phi_S$, \vspace{.1cm} \[\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\delta_{S,\nu}^n}}\bigg)}{\sigma_{S,\nu}\sqrt{2n\log\log n}}=1=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma_{S,\nu}\sqrt{2n\log\log n}},\vspace{.3cm}\] hold (simultaneously) for all escape points $P$ for $S$. \vspace{.1cm} \end{enumerate} \end{theorem} \begin{remark} Note that if $h(P)>B_S$, then $P$ is an escape point for $S$ of level $r=0$. In particular, Theorem \ref{thm:escapepoints} implies Theorem \ref{thm:iteratedlogs} from the Introduction. \end{remark} \begin{proof}[(Proof of Theorem \ref{thm:iteratedlogs})] For statement (1), consider $f_1:\Phi_S\rightarrow\mathbb{R}$ given by: \[f_1(\gamma)=\log\deg(\theta_1)\qquad \text{for\;\,$\gamma=(\theta_i)_{i=1}^\infty\in\Phi_S$}.\] Then Birkhoff's Ergodic Theorem \ref{birk} and Lemma \ref{shift} together imply that \[\lim_{n\rightarrow\infty} \frac{1}{n}\sum_{j=0}^{n-1} f_1\circ T^j(\gamma)=\mathbb{E}_{\bar{\nu}}[f_1].\] for almost every $\gamma\in\Phi_S$; here $T:\Phi_S\rightarrow\Phi_S$ is the shift map. On the other hand, since \[\deg(F\circ G)=\deg(F)\cdot\deg(G)=\deg(G)\cdot\deg(F)=\deg(G\circ F)\] for all endomorphisms $F$ and $G$ on $\mathbb{P}^N$, we have that \[\log\deg(\gamma^{\pm}_n)^{1/n}=\frac{1}{n}\sum_{j=0}^{n-1} f_1\circ T^j(\gamma).\] In particular, $\delta_{S,\nu}=\displaystyle{\lim_{n\rightarrow\infty}\deg(\gamma_n^{\pm})^{1/n}}=\exp\big(\mathbb{E}_{\bar{\nu}}[f_1]\big)$ almost surely. However, $f_1:\Phi_S\rightarrow\mathbb{R}$ factors through $S$, so that \cite[Theorem 10.4]{ProbabilityText} implies that \[\delta_{S,\nu}=\exp\big(\mathbb{E}_{\bar{\nu}}[f_1]\big)=\exp\big(\mathbb{E}_{\nu}[\log\deg(\phi)]\big)=\exp\Big(\sum_{\phi\in S}\log\deg(\phi)\nu(\phi)\Big)=\prod_{\phi\in S}\deg(\phi)^{\nu(\phi)}\] as claimed. For statement (2), let $\gamma\in\Phi_S$ be such that $\lim\deg(\gamma_n^{\pm})^{1/n}=\delta_{S,\nu}$, true of almost every $\gamma\in \Phi_S$, and let $P$ be an escape point for $S$. Then Lemma \ref{lem:escapept} implies that there are positive constants $B_{S,P,1}$ and $B_{S,P,2}$ such that \[\qquad B_{S,P,1}\cdot\deg(\gamma_n^{\pm})<h(\gamma_n^{\pm}(P))<B_{S,P,2}\cdot\deg(\gamma_n^{\pm}),\qquad\text{for all $\gamma\in\Phi_S$ and all $n\geq r$;}\] here $r$ is the escape level of $P$. Therefore, taking $n$th roots of both sides and letting $n$ tend to infinity, we see that \[\delta_{S,\nu}=\lim_{n\rightarrow\infty}B_{S,P,1}^{1/n}\cdot\lim_{n\rightarrow\infty}\deg(\gamma_n^{\pm})^{1/n}\leq h(\gamma_n^{\pm}(P))^{1/n}\leq\lim_{n\rightarrow\infty}B_{S,P,2}^{1/n}\cdot\lim_{n\rightarrow\infty}\deg(\gamma_n^{\pm})^{1/n}=\delta_{S,\nu}.\] Hence, for almost every $\gamma\in\Phi_S$ the limits \[\lim_{n\rightarrow\infty}h(\gamma_n^{\pm}(P))^{1/n}=\delta_{S,\nu}\] hold (simultaneously) for all escape points $P$ for $S$ as claimed. For statement (3), suppose that $P$ is an escape point for $S$ and that the variance $\sigma^2$ of the random variable $\log\deg(\cdot): S\rightarrow\mathbb{R}$ is nonzero; here, $\sigma^2$ is given explicitly by \[\sigma^2=\sum_{\phi\in S} \big(\log\deg(\phi)-\log(\delta_{S,\nu})\big)^2\nu(\phi).\] Then it follows from Lemma \ref{lem:escapept} that \vspace{.1cm} \begin{equation}\label{escapept:loght} \lim_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}}=0=\lim_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma\sqrt{2n\log\log n}}\qquad \text{for all $\gamma\in\Phi_{S}$;} \vspace{.15cm} \end{equation} here we simply use that the quantities $\log\frac{h(\gamma_n^{\pm}(P))}{\deg(\gamma_n^{\pm})}$ are bounded independently of $n\geq r$ by Lemma \ref{lem:escapept}. On the other hand, consider the i.i.d random variables $Y_n:\Phi_S\rightarrow\mathbb{R}$ given by \[Y_n(\gamma)=\frac{1}{\sigma}\big(\log\deg(\theta_n)-\log\delta\big),\qquad\text{for $\gamma=(\theta_i)_{i\geq1}\in\Phi_S$;}\] In particular, each $Y_n$ has mean $0$ and unit variance. Therefore, the Hartman-Wintner Law of the Iterated Logarithms (Theorem \ref{thm:lawiterlogs}) for the simple random walk $S_n=Y_1+\dots +Y_n$ implies that \vspace{.05cm} \begin{equation}\label{escapept:lawiterlog} \limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{\delta_{S,\nu}^n}}\bigg)}{\sigma\sqrt{2n\log\log n}}=1=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}} \qquad \text{($\bar{\nu}$-almost surely).} \end{equation} Hence, the conclusions in both (\ref{escapept:loght}) and (\ref{escapept:lawiterlog}) hold for almost every $\gamma\in\Phi_S$. Therefore, \vspace{.1cm} \begin{equation*} \begin{split} 1&=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{\delta_{S,\nu}^n}}\bigg)}{\sigma\sqrt{2n\log\log n}}+\lim_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}}\\[10pt] &=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{\delta_{S,\nu}^n}}\bigg)+\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}}=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{h(\gamma_n^{\pm}(P))}{\delta_{S,\nu}^n}}\bigg)}{\sigma\sqrt{2n\log\log n}}\\[5pt] \end{split} \end{equation*} for almost every $\gamma\in\Phi_S$. Likewise, (\ref{escapept:loght}) and (\ref{escapept:lawiterlog}) imply that \vspace{.3cm} \begin{equation*} \begin{split} 1&=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{\deg(\gamma_n^{\pm})}}\bigg)}{\sigma\sqrt{2n\log\log n}}+ \lim_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma\sqrt{2n\log\log n}}\\[10pt] &=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{\deg(\gamma_n^{\pm})}}\bigg)+\log\bigg(\mathlarger{\frac{\deg(\gamma_n^{\pm})}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma\sqrt{2n\log\log n}}=\limsup_{n\rightarrow\infty}\frac{\log\bigg(\mathlarger{\frac{\delta_{S,\nu}^n}{h(\gamma_n^{\pm}(P))}}\bigg)}{\sigma\sqrt{2n\log\log n}}\\[8pt] \end{split} \end{equation*} holds for $\bar{\nu}$-almost every $\gamma\in\Phi_S$, whenever $P$ is an escape point for $S$ and $\sigma^2$ is nonzero. This complete the proof of Theorem \ref{thm:escapepoints}. \end{proof} As an application of Theorem \ref{thm:escapepoints}, we can prove an asymptotic formula for the number of points in generic left and right orbits. \begin{proof}[(Proof of Corollary \ref{cor:escapeptshtbds})] We mostly follow the proof of \cite[Proposition 3]{KawaguchiSilverman}. However, there is an added step, which allows us to pass from superscripts of $\gamma_{n}^{\pm}(P)$ to points in orbits $Q\in\Orb_\gamma^{\pm}(P)$; see Lemma \ref{lem:n'stopoints} below. Let $P$ be an escape point for $S$ and let $\gamma\in\Phi_S$ be such that $\lim h(\gamma_n^{\pm}(P))^{1/n}=\delta_{S,n}$, true of almost every $\gamma$ by Theorem \ref{thm:escapepoints}. Then for every $\epsilon>0$ there is an integer $n_0=n_0(\epsilon,\gamma)$ so that \[(1-\epsilon)\delta_{S,\nu}\leq h(\gamma_n^{\pm}(P))^{1/n}\leq(1+\epsilon)\delta_{S,\nu}\] for all $n\geq n_0$; here you choose $n_0$ to be max of the corresponding $n_0(\epsilon,\gamma,-)$ and $n_0(\epsilon,\gamma,+)$. In particular, it follows that \vspace{.1cm} \begin{equation}\label{basiccount1} \begin{split} \{n\geq n_0\,:\,(1+\epsilon)\delta_{S,\nu}\leq B^{1/n}\}&\subset\{n\geq n_0\,:\,h(\gamma_n^{\pm}(P))\leq B\} \\[2pt] &\text{and}\\[2pt] \{n\geq n_0\,:\,h(\gamma_n^{\pm}(P))\leq B\}&\subset\{n\geq n_0\,:\,(1-\epsilon)\delta_{S,\nu}\leq B^{1/n}\}. \end{split} \end{equation} Therefore, after counting the number elements in the sets in (\ref{basiccount1}), we see that \begin{equation*} \begin{split} \frac{\log(B)}{\log((1+\epsilon)\delta_{S,\nu})}-n_0&\leq\#\{n\geq0\,:\,h(\gamma_n^{\pm}(P))\leq B\}\\[2pt] &\text{and}\\[2pt] \#\{n\geq0\,:\,h(\gamma_n^{\pm}(P))\leq B\}&\leq\frac{\log(B)}{\log((1-\epsilon)\delta_{S,\nu})}+n_0+1. \\[3pt] \end{split} \end{equation*} Hence, dividing by $\log(B)$ and letting $B$ tend to infinity gives \vspace{.1cm} \begin{equation*} \begin{split} \frac{1}{\log((1+\epsilon)\delta_{S,\nu})}&\leq\liminf_{B\rightarrow\infty}\frac{\#\{n\geq0\,:\,h(\gamma_n^{\pm}(P))\leq B\}}{\log(B)}\\[3pt] &\text{and} \\[3pt] \limsup_{B\rightarrow\infty}\frac{\#\{n\geq0\,:\,h(\gamma_n^{\pm}(P))\leq B\}}{\log(B)}&\leq \frac{1}{\log((1-\epsilon)\delta_{S,\nu})}. \\[4pt] \end{split} \end{equation*} In particular, since $\epsilon$ was arbitrary, we deduce that \vspace{.25cm} \begin{equation}\label{count:n's} \lim_{B\rightarrow\infty}\frac{\#\{n\geq0\,:\,h(\gamma_n^{-}(P))\leq B\}}{\log(B)}=\frac{1}{\log(\delta_{S,\nu})}=\lim_{B\rightarrow\infty}\frac{\#\{n\geq0\,:\,h(\gamma_n^{+}(P))\leq B\}}{\log(B)} \vspace{.2cm} \end{equation} hold (simultaneously) for almost every $\gamma\in\Phi_S$. From here, we pass from superscripts $n$ to points in orbits by the following lemma; however, we must assume that the initial point $P$ has height at least $3B_S$ (instead of $B_S$). In particular, we deduce from (\ref{count:n's}) and Lemma \ref{lem:n'stopoints} below, that for almost every $\gamma\in\Phi_S$ the limits \vspace{.15cm} \[\lim_{B\rightarrow\infty}\frac{\#\{Q\in\Orb_\gamma^-(P)\,:\,h(Q)\leq B\}}{\log(B)}=\frac{1}{\log\delta_{S,\nu}}=\lim_{B\rightarrow\infty}\frac{\#\{W\in\Orb_\gamma^+(P)\,:\,h(W)\leq B\}}{\log(B)} \vspace{.15cm}\] hold (simultaneously) for all $P$ with $h(P)>3B_S$ as claimed. \end{proof} \begin{lemma}\label{lem:n'stopoints} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$. If $h(P)>3B_S$, then $\gamma_n^{-}(P)\neq\gamma_m^{-}(P)$ and $\gamma_n^{+}(P)\neq\gamma_m^{+}(P)$ for all $n\neq m$ and all $\gamma\in\Phi_S$. \end{lemma} \begin{proof} Suppose that $n>m$ and that $\gamma_n^{\pm}(P)=\gamma_m^{\pm}(P)$. In particular, $h(\gamma_n^{\pm}(P))=h(\gamma_m^{\pm}(P))$. Then Lemma \ref{lem:tate} applied separately to $f=\gamma_n^{\pm}$ and then to $f=\gamma_m^{\pm}$ implies that \[\deg(\gamma_n^{\pm})\cdot(h(P)-B_S)\leq h(\gamma_n^{\pm}(P))=h(\gamma_m^{\pm}(P))\leq \deg(\gamma_m^{\pm})\cdot(h(P)+B_S).\] Rearranging terms, we deduce that \begin{equation}\label{distinctorbits} \frac{\deg(\gamma_n^{\pm})}{\deg(\gamma_m^{\pm})}\leq \frac{(h(P)+B_S)}{(h(P)-B_S)}. \end{equation} However, $n>m$ so that $\gamma_n^-=g_1\circ\gamma_m^-$ and $\gamma_n^+=\gamma_m^+\circ g_2$ for some $g_1,g_2\in M_{S,n-m}$. Moreover, $S$ is height controlled, so that $\deg(g_i)\geq2$. Furthermore, $\deg(\gamma_n^-)=\deg(g_1)\cdot\deg(\gamma_m^-)$ and $\deg(\gamma_n^+)=\deg(g_2)\cdot\deg(\gamma_m^+)$. Combining these facts with (\ref{distinctorbits}), we deduce that \[2\leq \frac{(h(P)+B_S)}{(h(P)-B_S)}.\] However, this statement immediately implies that $h(P)\leq 3B_S$, and the result follows. \end{proof} Since we are particularly interested in arithmetic aspects of right orbits for their relation to dynamical Galois groups (see Section \ref{sec:Galois}), we give a more explicit version of the height bounds in Theorem \ref{thm:escapepoints} for finite sets of unicritical maps. \begin{remark} If one is interested in trying to generalize known primitive prime divisor results to right iteration, especially those which are useful for understanding dynamical Galois groups \cite{Tucker,AvgZig, Riccati}, then one likely needs (among other things) a fairly refined understanding of the growth rates of heights in right orbits. \end{remark} \begin{corollary}{\label{cor:unicritescape}} Let $S=\{x^{d_1}+c_1, \dots, x^{d_s}+c_s\}$ for some $d_i\geq2$ and some $c_i\in\mathbb{Z}\mathbin{\fgebackslash}\{0\}$. Furthermore, assume that $P\in\mathbb{Q}$ satisfies \[ h((P^{d_i}+c_i)^{d_j}+c_j)\geq\max_i\{\log|2c|\}\;\;\;\;\; \text{for all $i,j$.}\] Then $P$ is an escape point for $S$. Therefore, if $S$ is equipped with the uniform measure and the $d_i$ are not all identical, then for all $\epsilon>0$ and almost every $\gamma\in\Phi_S$, there exists $N_{\gamma,P,\epsilon}$ such that \vspace{.15cm} \[(d_1d_2\dots d_s)^{\frac{n-(1+\epsilon)\log_{\delta}(e)\sigma\sqrt{2n\log\log n}}{s}}\leq h(\gamma_n^+(P))\leq (d_1d_2\dots d_s)^{\frac{n+(1+\epsilon)\log_\delta(e)\sigma\sqrt{2n\log\log n}}{s}} \vspace{.15cm}\] for all $n\geq N_{\gamma,P,\epsilon}$. \end{corollary} \begin{remark}{\label{rmk:escape}} In particular, if $|c_i^{d_j}+c_j|\geq2\max\{|c_i|\}$, then $0$ is an escape point for $S$ and the height bounds in Corollary \ref{cor:unicritescape} hold for $P=0$; for an application to Galois theory, see Corollary \ref{cor:Galoisescp}. We also note that in practice, the condition on $P$ in Corollary \ref{cor:escapeptshtbds} holds for every rational point (for many sets $S$). \end{remark} \begin{proof} Let $\phi(x)=x^d+c$ for some $d\geq2$ and some $c\in\mathbb{Z}\mathbin{\fgebackslash}\{0\}$. Then it is straightforward to prove that \[|h(\phi(P))-dh(P)|\leq\log|2c|\;\;\;\;\;\;\;\text{for all $P\in\mathbb{Q}$;}\] see \cite[Lemma 12]{Ingram}. In particular, the set $S$ as in Corollary \ref{cor:unicritescape} is height controlled with height constants $C_S=\max\{\log|2c|\}$ and $d_S\geq2$. Moreover, the condition on $P$ implies that $P$ is an escape point for $S$ with escape level $r=2$; see Definition \ref{def:escapepts} above. The claim then follows from Theorem \ref{thm:iteratedlogs} part (3) and the fact that the dynamical degree $\delta_{S,\nu}=\prod d_i^{1/s}$ is a geometric mean of the degrees of the maps in $S$; see Theorem \ref{thm:iteratedlogs} part (1). \end{proof} We now move on to study right iteration more carefully for more general initial points, including some points of small height. \begin{remark} For left iteration this analysis is accomplished by using canonical heights. In particular, several of our results on arithmetic and dynamical degrees above (for left iteration) hold for so called almost surely wandering points; see \cite[Theorem 1.5]{Me:dyndeg}. \end{remark} Here, the key assumption we make on the initial point $P$ is that it have infinite total orbit, i.e., the action of the entire monoid generated by $S$ on $P$ gives an infinite set; see (\ref{eq:totalorbit}) above. In particular, this condition is weaker than the assumption that $P$ be an escape point for $S$. In this case (and among other things), we prove that $\limsup h(\gamma_n^+(P))=\delta_{S,\nu}$ almost surely; compare to Theorems \ref{thm:rationalmaps} and \ref{thm:lawiterlogs} above. Moreover, this result holds for infinite height controlled sets of endomorphisms as well. For a statement of the following result, see Theorem \ref{thm:zero-one} from the Introduction. \begin{proof}[(Proof of Theorem \ref{thm:zero-one})] For statement (1), let $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$ be any point. Note that if $P$ is fixed and the sequence $\gamma\in\Phi_S$ is allowed to vary, then Lemma \ref{lem:tate} implies that the height-degree quotient sequence $h(\gamma^+_n(P))/\deg(\gamma_n^+)$ is bounded by $h(P)\pm B_S$. Therefore, both \[\displaystyle{\liminf_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\;\;\; \text{and}\;\;\;\displaystyle{\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}}\] exist and are $h(P)+O(1)$ for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$ and all $\gamma\in\Phi_S$. For statement (2), suppose that all of the maps in $S$ are defined over a fixed number field $K$. Moreover, we assume (without loss of generality) that the initial point $P\in\mathbb{P}^N(K)$. In particular, Northcott's Theorem (over $K$) implies that if $\Orb_S(P)$ is infinite, then there exists $g\in M_S$ such that $h(g(P))>B_S$. On the other hand, the Infinite Monkey Theorem, a simple consequence of Borel-Cantelli \cite[pp. 96-100]{InfiniteMonkey}, implies that \[\gamma_n^+=f_{\gamma,n}\circ g \;\;\text{for some $f_{\gamma,n}\in M_S$ and infinitely many $n$}\] for almost every $\gamma\in\Phi_S$; that is, with probability $1$ the infinite sequence $\gamma$ contains the finite substring $g$ infinitely many times. In particular, for such $\gamma$ and $n$, the bound in Lemma \ref{lem:tate} applied to $Q=g(P)$ and $\rho=f_{\gamma,n}$ implies that \begin{equation}\label{bdas} \frac{h(\gamma^+_n(P))}{\deg(\gamma^+_n)}=\frac{h(f_{\gamma,n}(g(P)))}{\deg(f_{\gamma,n})\deg(g)}\geq\frac{1}{\deg(g)}\big(h(g(P))-B_S\big)>0. \end{equation} It follows that the limsup of the quotient $h(\gamma^+_n(P))/\deg(\gamma^+_n)$ must be strictly positive for almost every $\gamma\in\Phi_S$. Conversely, if the limsup of $h(\gamma^+_n(P))/\deg(\gamma^+_n)>0$ is positive for a single $\gamma$ (in particular, if it's true almost surely), then the right orbit $\Orb_\gamma^+(P)$ must be infinite. Therefore, the total orbit $\Orb_S(P)$ is infinite as well. Finally, statement (3). Let $P$ be any initial point and let $\gamma$ be any sequence. We first show that $\lim h(\gamma_n^+(P))^{1/n}\leq\delta_{S,\nu}$ almost surely. Note that for finite sets, this is known by Theorem \ref{thm:rationalmaps}; however, we wish to allow suitable infinite sets. To do this (and to ease notation), let \begin{equation}\label{upperht} \bar{h}_\gamma^+(P)=\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}. \end{equation} Then by definition of $\limsup$ and Theorem \ref{thm:zero-one} part (1), we know that for all $\epsilon>0$ there is an $N_{P,\gamma,\epsilon}$ such that \[\frac{h(\gamma^+_n(P))}{\deg(\gamma^+_n)}\leq(1+\epsilon)\bar{h}^+_\gamma(P)\] holds for all $n>N_{P,\gamma,\epsilon}$. In particular, \begin{equation}\label{arithdegbd1} h(\gamma^+_n(P))^{1/n}\leq(1+\epsilon)^{1/n}\,\bar{h}^+_\gamma(P)^{1/n}\,\deg(\gamma^+_n)^{1/n} \end{equation} holds for such $n$. On the other hand, if $\Orb_S(P)$ is infinite, then $\bar{h}^+_\gamma(P)$ is positive almost surely by part (2) above. Likewise, if $\mathbb{E}_\nu[\log\deg(\phi)]$ exists, then Birkhoff's Ergodic Theorem \ref{birkhoff} (and an identical argument given for Theorem \ref{thm:iteratedlogs} part (1) above) implies that \vspace{.1cm} \[\displaystyle{\lim_{n\rightarrow\infty}\deg(\gamma^+_n)^{1/n}}=\displaystyle{\lim_{n\rightarrow\infty}\deg(\gamma^-_n)^{1/n}}=\delta_{S,\nu}=\exp\big(\mathbb{E}_\nu[\log\deg(\phi)]\big)\qquad\;\;\;\;\text{(almost surely)};\vspace{.1cm}\] alternatively, we can quote \cite[Theorem 1.5]{Me:dyndeg}. Therefore, if both $\Orb_S(P)$ is infinite and the quantity $\mathbb{E}_\nu[\log\deg(\phi)]$ exists, then the bound in (\ref{arithdegbd1}) implies that \[\limsup_{n\rightarrow\infty} h(\gamma_n^+(P))^{1/n}\leq\delta_{S,\nu}\] is true for almost every $\gamma\in\Phi_S$. For the reverse inequality, suppose that $\Orb_S(P)$ is infinite and the quantity $\mathbb{E}_\nu[\log\deg(\phi)]$ exists. Then by definition of $\bar{h}_\gamma^+(P)$, for all $0<\epsilon<1$ there exists an infinite sequence $\{n_k\}\subseteq\mathbb{N}$, depending on both $\epsilon$ and $\gamma$, such that \vspace{.1cm} \[\bar{h}^+_\gamma(P)(1-\epsilon)\leq \frac{h(\gamma_{n_k}^+(P))}{\deg(\gamma_{n_k}^+)}\] for all $n_k$. In particular, we see that \vspace{.1cm} \[\bar{h}_\gamma^+(P)^{1/n_k}(1-\epsilon)^{1/n_k}\deg(\gamma_{n_k}^+)^{1/n_k}\leq h(\gamma_{n_k}^+(P))^{1/n_k}.\] Therefore, it follows that \vspace{.05cm} \begin{equation}\label{ineq:reverse1} \begin{split} \limsup_{n_k\rightarrow\infty}\Big(\bar{h}_\gamma^+(P)^{1/n_k}(1-\epsilon)^{1/n_k}\deg(\gamma_{n_k}^+)^{1/n_k}\Big) &\leq \limsup_{n_k\rightarrow\infty} h(\gamma_{n_k}^+(P))^{1/n_k} \\[3pt] &\leq \limsup_{n\rightarrow\infty} h(\gamma_{n}^+(P))^{1/n}. \end{split} \end{equation} On the other hand, $\bar{h}_\gamma^+(P)$ is almost surely positive by part (2) of Theorem \ref{thm:zero-one} above. Hence, \begin{equation}\label{ineq:reverse2} \begin{split} \lim_{n_k\rightarrow\infty} \bar{h}_\gamma^+(P)^{1/n_k}=1&=\lim_{n_k\rightarrow\infty}(1-\epsilon)^{1/n_k}\\[3pt] \;\;\;\;\;\;\;\;\;\;\text{and}&\\[3pt] \lim_{n_k\rightarrow\infty}\deg(\gamma_{n_k}^+)^{1/n_k}=& \lim_{n\rightarrow\infty}\deg(\gamma_{n}^+)^{1/n}=\delta_{S,\nu} \end{split} \end{equation} almost surely. Therefore, (\ref{ineq:reverse1}) and (\ref{ineq:reverse2}) together imply that \[\delta_{S,\nu}\leq \limsup_{n\rightarrow\infty} h(\gamma_{n}^+(P))^{1/n}\] holds for almost every $\gamma\in\Phi_S$ as claimed. \end{proof} \begin{remark} The liminf and limsup in Theorem \ref{thm:zero-one} part (1) can be distinct for initial points $P$ of small height, even if the total orbit of $P$ is infinite; see Example \ref{eg:left-right difference} above. \end{remark} We note the following consequence of Theorem \ref{thm:zero-one}, a sort of zero-one law for finite orbit points. In particular, the analogous statement fails for left iteration; see \cite[Example 1.10]{Me:dyndeg}. \begin{corollary} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all defined over a fixed number field $K$ and let $\nu$ be a discrete probability measure on $S$. Then for all $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$, the probability that $\Orb_\gamma^+(P)$ is finite is either $0$ or $1$. \end{corollary} \begin{proof} Suppose that $\Orb_S(P)$ is finite. Then $\Orb^+_\gamma(P)\subseteq\Orb_S(P)$ is finite for all $\gamma\in\Phi_S$. In particular, the probability that $\Orb^+_\gamma(P)$ is finite is $1$. On the other hand, if $\Orb_S(P)$ is infinite, then part (2) of Theorem \ref{thm:zero-one} implies that \[I_P=\{\gamma\in\Phi_S\::\; \bar{h}^+_{\gamma}(P)>0\}\] has full measure in $\Phi_S$; see \ref{upperht} for a definition of $\bar{h}^+_{\gamma}(P)$. On the other hand, it is clear that \[\{\gamma\in\Phi_S\::\; \text{$\Orb^+_\gamma(P)$ is finite}\}\subseteq \Phi_S\mathbin{\fgebackslash} I_P.\] Therefore, the probability that $\Orb^+_\gamma(P)$ is finite is $0$. Hence, the probability that $\Orb^+_\gamma(P)$ is finite is either $0$ or $1$ as claimed. \end{proof} As a further application of Theorem \ref{thm:zero-one}, we record the following result for sets of quadratic polynomials with integral coefficients; see \cite{IJNT} for related work on sets of quadratic polynomials with rational coefficients. \begin{corollary}\label{cor:quad} Let $S=\{x^2+c_1,x^2+c_2,\dots,x^2+c_s\}$ for some distinct $c_i\in\mathbb{Z}$. If $s\geq3$, then \[0<\limsup_{n\rightarrow\infty}\frac{h(\gamma_n^+(P))}{\deg(\gamma_n^+)}\qquad\text{(almost surely)}\] for all $P\in\mathbb{Q}$ (independent of the choice of $\nu$). \end{corollary} \begin{proof} Combine Theorem \ref{thm:zero-one} part (2) with \cite[Corollary 1.2]{IJNT}. \end{proof} Finally, we apply Theorem \ref{thm:zero-one} to the height counting problem in orbits; compare to similar results in \cite[Corollary 1.16]{Me:dyndeg} and Corollary \ref{cor:escapeptshtbds} above. However, without further conditions on the initial point $P$, we can only give lower bounds. \begin{corollary}{\label{cor:orbitcount}} Let $S$ be a height controlled set of endomorphisms of $\mathbb{P}^N(\overline{\mathbb{Q}})$ all defined over a fixed number field $K$ and let $\nu$ be a discrete probability measure on $S$. Moreover, suppose the following conditions hold: \vspace{.1cm} \begin{enumerate} \item $\mathbb{E}_\nu[\log\deg(\phi)]$ exists. \vspace{.1cm} \item $\Orb_S(P)$ is infinite. \vspace{.15cm} \end{enumerate} Then \[\frac{1}{\mathbb{E}_\nu[\log\deg(\phi)]}\leq\liminf_{B\rightarrow\infty}\;\frac{\#\{n\geq0\,:\, h(\gamma_n^+(P))\leq B\}}{\log B}\vspace{.15cm}\] for almost every $\gamma\in\Phi_S$. \end{corollary} We suppress the proof of Corollary \ref{cor:orbitcount} due to its similarity to Corollary \ref{cor:escapeptshtbds} above. \section{Height counting in total orbits}\label{sec:totalorbits} We now turn briefly to the height counting problem for total orbits from the Introduction. However, the reader should bear in mind that the work in this section is preliminary. Nevertheless, we include it to motivate future work; for instance, we shall see how this problem relates to growth rates in semigroups and lattice point counting in various domains. As a reminder, if $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$ is fixed, then our overall goal is to understand the asymptotic size of the set of points in the total orbit of $P$ of height at most $B$, \[\{Q\in\Orb_S(P):\, h(Q)\leq B\},\] as $B$ grows. However, at the moment this problems seems quite difficult (since distinct functions can agree on subvarieties), and we instead study the asymptotic size of the related set of functions \begin{equation}\label{monoidcount} \{f\in M_S:\, h(f(P))\leq B\}, \end{equation} in hopes that this count will shed light on the number of points in $\Orb_S(P)$ of bounded height. The basic idea, consistent with our work on orbits coming from sequences, is that the height of a point $f(P)\in \Orb_S(P)$ is roughly determined by the size of $\deg(f)$, as long as the initial point $P$ is sufficiently generic; see Lemma \ref{lem:escapept} With this in mind, to count the number of functions $f\in M_S$ with $h(f(P))\leq B$, we should in some sense simply be counting the number of $f$'s of bounded degree. Moreover, when $M_S$ is (in a nice way) generated by a set of morphisms, this problem may be tractable. To make this heuristic precise, we briefly discuss weighted lengths on monoids. Let $M$ be a monoid generated by a finite set $S=\{\phi_1,\dots,\phi_s\}$ and let $c=(c_1,\dots,c_s)\in\mathbb{R}^s$ be a vector of positive weights. Then we define the \emph{weighted length} $\mathit{l}_{S,c}(f)$ of any $f\in M$ as follows. First let $\Sigma(S)$ be the free monoid generated by $S$ (i.e., $\Sigma(S)$ is the set of all words in the alphabet $S$) and define $\mathit{l}_{S,c}(\phi_i)=c_i$. Then extend $\mathit{l}_{S,c}$ to any word $\sigma\in\Sigma(S)$ by setting $\mathit{l}_{S,c}(\sigma)=\mathit{l}_{S,c}(s_1)+\dots+\mathit{l}_{S,c}(s_k)$ whenever $\sigma=s_1\cdots s_k$ and $s_i\in S$. Finally, for $f\in M$ we define $\mathit{l}_{S,c}(f)$ to be \[\mathit{l}_{S,c}(f):=\inf\big\{\mathit{l}_{S,c}(\sigma): \sigma\in\Sigma(S)\;\text{and $\sigma$ represents $f$}\big\}.\] Moreover, given a notion of length, one can study the growth function $g_{S,c}:\mathbb{R}\rightarrow\mathbb{N}$ given by \begin{equation}\label{growth} g_{S,c}(B):=\#\{f\in M\,:\, \mathit{l}_{S,c}(f)\leq B\}. \end{equation} In particular, the growth rate of $g_{S,c}$ may be used to encode information about the Monoid $M$ and the generating set $S$. \begin{remark} Historically, most of the work on this problem has focused on the case when $M$ is a group and each $c_i=1$ (with some additional work on the case when $c_i\in\mathbb{N}$ also); see \cite{growth1,growth2}. However, the relevant definitions make sense for $c_i\in\mathbb{R}_{>0}$ and monoids, and this is the situation that arises most naturally in our work here. \end{remark} Back to dynamics. Let $S=\{\phi_1, \dots,\phi_s\}$ be a finite set of endomorphisms on $\mathbb{P}^N$ all of degree at least $2$, let $c_i=\log\deg(\phi_i)$, and define $\mathit{l}(f):=\log\deg(f)$ for all $f\in M_S$. Then it is straightforward to check that $\mathit{l}(f)=\mathit{l}_{S,c}(f)$ independent of $S$ (the degree of a composite morphism is the product of the degrees of its components, and the degree of a function is intrinsic, i.e., does not depend on how it is written as a composition of other functions). Now suppose that $P\in\mathbb{P}^N(\overline{\mathbb{Q}})$ is such that $h(P)>B_S:=C_S/(d_S-1)$; here $C_S$ and $d_S$ are the constants from Definition \ref{def:htcontrolled} above. Then, Tate's telescoping Lemma \ref{lem:tate} implies that \vspace{.1cm} \[\deg(f)(h(P)-B_S)\leq h(f(P))\leq\deg(f)(h(P)+B_S).\vspace{.1cm}\] Therefore, for all $B$ we have the subset relations: \vspace{.1cm} \begin{equation}\label{subset} \Scale[.835]{\;\,\bigg\{f\in M_S\,:\,\mathit{l}(f)\leq \log\big(\frac{B}{h(P)+B_S}\big)\bigg\}\subseteq \big\{f\in M_S:\, h(f(P))\leq B\big\}\subseteq\bigg\{f\in M_S\,:\,\mathit{l}(f)\leq \log\big(\frac{B}{h(P)-B_S}\big)\bigg\}}. \vspace{.1cm} \end{equation} In particular, (\ref{growth}) and (\ref{subset}) imply that \begin{equation}\label{ht-wt} \{f\in M_S:\, h(f(P))\leq B\}\sim g_{S,c}(\log\,B) \end{equation} as $B$ tends to infinity. As an application, we consider the case when $S$ is a free basis of the commutative monoid $M_S$ (as an example, one may take $S=\{x^{d_1}, \dots, x^{d_s}\}$ where the $d_i\in\mathbb{N}$ are multiplicatively independent). In this case, $M_S\cong \mathbb{N}^s$ with the operation of coordinate addition, and it is straightforward to check that \[g_{S,c}(B')=\#\{(e_1,\dots, e_s)\in\mathbb{N}^s\,:\,e_1c_1+e_2c_2+\dots+e_sc_s\leq B'\}.\] However, this is evidently a count of the number of lattice points in a dilate of the bounded, Jordon measurable region \[\Omega=\{(x_1,\dots, x_s)\in\mathbb{R}^s\,:\, 0\leq x_i\;\text{and}\; x_1c_1+\dots+x_sc_s\leq 1\}.\] In particular, since the volume of $\Omega$ is $(s!c_1c_2\dots c_s)^{-1}$ it follows that \[g_{S,c}(B')\sim (s!c_1c_2\dots c_s)^{-1} (B')^s\] as $B'$ tends to infinity; see, for instance, \cite[Theorem 12.2]{Pollack}. Letting $B'=\log(B)$, we deduce from (\ref{ht-wt}) that \[\lim_{B\rightarrow\infty}\frac{\#\Big\{f\in M_S\,:\,h\big(f(P)\big)\leq B\Big\}}{(\log B)^s}=\frac{1}{s!\cdot\prod_{i=1}^s\log\deg(\phi_i)}\] as claimed in the Introduction. However, it seems that generically $M_S$ is a free (non-commutative) monoid, and there appears to be little (precise) information known about the growth rate function $g_{S,c}$ in this case, limiting what we can say about the dynamics. \begin{remark} When $M_S$ is a free non-commutative monoid (with basis $S$) and $c_i\in\mathbb{N}$, then $g_{S,c}(B)$ is a sum over restricted compositions of integers $n\leq B$; see \cite[\S2]{compositions}. In particular, one may be able to use the associated generating function to obtain an asymptotic for $g_{S,c}(B)$ in this case. However, the weights coming from dynamics are never integers (they are logs of integers). Nevertheless, since we are mainly interested in asymptotics for (\ref{monoidcount}), it is possible that the integer weight case could provide sufficient information to answer the general case. \end{remark} \section{Galois groups generated by multiple unicritical polynomials}{\label{sec:Galois}} We now discuss the relation between the arithmetic of right orbits and certain dynamical Galois groups. Many of the results in this section are straightforward adaptations of analogous results for constant sequences (i.e., iterating one function); see, for instance, \cite{Jones} and \cite{Jones-Survey}. For additional work on Galois groups generated by iterating multiple maps, see \cite{Ferraguti}. We begin with some notation. Let $K$ be a field of characteristic $0$, and let $S$ be a set of polynomials over $K$. Then given an infinite sequence $\gamma\in\Phi_S$, we can form a tower of Galois extensions $K_{\gamma,n}:=K(\gamma_n^+)$ for $n\geq0$; here $K(\gamma_n^+)$ denotes the splitting field of the equation $\gamma_n^+(x)=0$ in a fixed algebraic closure $\overline{K}$. We note that the direction of iteration is crucial to create nested extensions: \[K\subseteq K_{\gamma,1}\subseteq\dots \subseteq K_{\gamma,n}\,.\] As in the case of iterating a single function (under some separability assumptions), the Galois group $G_{\gamma,n}:=\Gal(K_{\gamma,n}/K)$ acts naturally on the corresponding truncated \emph{preimage tree} with vertices \[T_{\gamma,n}:=\big\{\alpha\in\overline{K}\,:\,\gamma_m^+(\alpha)=0\; \text{for some $1\leq m\leq n$}\big\}\] and edge relation: if $\gamma_m^+(\alpha)=0$ for some $1\leq m\leq n$ and $\gamma_m^+=\theta_1\circ\dots\circ \theta_m$, then there is an edge between $\alpha$ and $\theta_m(\alpha)$. Likewise, the inverse limit of Galois groups $\displaystyle{G_\gamma:=\lim_{\leftarrow} G_{\gamma,n}}$ acts continuously on the complete preimage tree $\displaystyle{T_\gamma=\cup_{n\geq1} T_{\gamma,n}}$ and we obtain an embedding, \[G_{\gamma,K}\leq \Aut(T_\gamma),\] called the \emph{arboreal representation} of $\gamma$; see \cite[\S2]{Ferraguti} for more details. In particular, in light of our probabilistic approach in this paper and the recent finite index theorems and conjectures in \cite{Bridy-Tucker,Jones-Survey}, we pose the following question. \begin{question}\label{question:Galois} Let $\nu$ be a probability measure on $S$. Under what assumptions on the polynomials in $S$ can we conclude that \[\bar{\nu}\Big(\big\{\gamma\in\Phi_S\,:\,[\Aut(T_\gamma):G_{\gamma,K}]<\infty\big\}\Big)>0?\] That is, when are the arboreal representations above finite index subgroups with positive probability? \end{question} As a first step in understanding this problem, we simplify the setup substantially. Let $S$ be set of unicritical polynomials with a common critical point $c\in K$, that is \begin{equation}\label{unicrit} S=\big\{a(x-c)^{d}+b:\,a,b\in K, a\neq0, d\geq2\big\}. \end{equation} \begin{remark} In practice, especially given our work on heights in the previous sections, we usually restrict ourselves to finite subsets of (\ref{unicrit}). However, for completeness, we keep the Galois theory results in this section as general as possible. \end{remark} In particular, if $K$ is a global field and $S$ is a set of polynomials as in (\ref{unicrit}), then we can restrict the ramification of the extensions $K_{\gamma,n}/K$ to the primes dividing elements of the \emph{critical orbits} $\Orb_\gamma^+(c)$ and the primes dividing the leading coefficients or degrees of the polynomials $\gamma_m^+$ for some $1\leq m\leq n$; compare to \cite[Lemma 2.6]{Jones}. In what follows, we use the shorthand $\ell(f)$ and $d(f)$ for the leading term and degree respectively of a polynomial $f\in K[x]$. Moreover, because this section is entirely devoted to right iteration, we (at times) drop the superscript $+$ and simply write $\gamma_n$ for $\gamma_n^+$ when convenient. \begin{proposition}\label{prop:discriminant} Let $S$ be a set of polynomials as in (\ref{unicrit}). Moreover, given $\gamma=(\theta_i)_{i=1}^\infty\in\Phi_S$ and $n\geq0$, let $\ell_{\gamma,n}$, $d_{\gamma,n}$, and $\Delta_{\gamma,n}$ be the leading term, the degree, and the discriminant of $\gamma_n^+$ respectively. Then \[\Delta_{\gamma,n}=\pm\, d(\theta_n)^{d_{\gamma,n}}\cdot\ell_{\gamma,n-1}^{\,d(\theta_n)-1}\cdot\ell(\theta_n)^{d_{\gamma,n-1}(d_{\gamma,n}-1)}\cdot\gamma_n^+(c)\cdot\Delta_{\gamma,n-1}^{d(\theta_n)}\] for all $n\geq1$. \end{proposition} \begin{proof} We begin with a few well known facts about discriminants and resultants; see, for instance, \cite[IV \S8]{Lang}. Let $h_1, h_2, h_3 \in K[x]$ be nonconstant polynomials. Then the resultant $\Res(h_1,h_2)$ of $h_1$ and $h_2$ is given by \begin{equation}\label{resultant} \Res(h_1,h_2)=\ell(h_1)^{d(h_2)}\prod_{h_1(\alpha)=0}h_2(\alpha), \end{equation} where the product above is taken over roots $\alpha\in\overline{K}$ of $h_1$ with multiplicity. Then the discriminant $\Delta(h_1)$ of $h_1$ satisfies \vspace{.1cm} \begin{equation}\label{discriminant1} \Res(h_1,h_1')=(-1)^{d(h_1)(d(h_1)-1)/2}\ell(h_1)\Delta(h_1).\vspace{.1cm} \end{equation} In particular, it is straightforward to check that $\Res(h_1,h_2)=(-1)^{d(h_1)d(h_2)}\Res(h_2,h_1)$, that $\Res(h_1h_2,h_3)=\Res(h_1,h_3)\Res(h_2,h_3)$, and that \vspace{.1cm} \begin{equation}\label{discriminant2} \Res(h_1\circ h_2, h_1'\circ h_2)=\ell(h_2)^{(d(h_1)^2-d(h_1))d(h_2)}\Res(h_1,h_1')^{d(h_2)}. \vspace{.1cm} \end{equation} We now apply these facts to the discriminants in Proposition \ref{prop:discriminant}. Specifically, it follows from (\ref{discriminant1}) and that $\gamma_n^+=\gamma_{n-1}^+\circ\theta_n$ that \vspace{.1cm} \begin{equation}\label{discriminant3} \frac{\Delta_{\gamma,n}}{\Delta_{\gamma,n-1}^{d(\theta_n)}}=\pm\frac{\ell_{\gamma,n-1}^{\,d(\theta_n)}}{\ell_{\gamma,n}}\cdot \frac{\Res(\gamma_n,\gamma_n')}{\Res(\gamma_{n-1},\gamma_{n-1}')^{d(\theta_n)}}; \vspace{.1cm} \end{equation} here we have dropped the superscript $+$ to avoid overly cumbersome notation. On the other hand, the chain rule implies that $\gamma_n'=(\gamma_{n-1}'\circ\theta_n)\cdot\theta_n'$. In particular, the standard resultant facts above together with (\ref{discriminant2}) imply that \vspace{.1cm} \begin{equation}\label{discriminant4} \begin{split} \Res(\gamma_n,\gamma_n')=&\pm \Res(\gamma_n',\gamma_n)\\[3pt] =&\pm\Res((\gamma_{n-1}'\circ\theta_n)\cdot\theta_n', \gamma_n)\\[3pt] =&\pm \Res(\gamma_{n-1}'\circ\theta_n, \gamma_n)\,\Res(\theta_n', \gamma_n)\\[3pt] =&\pm\Res(\gamma_{n-1}'\circ\theta_n, \gamma_{n-1}\circ\theta_n)\,\Res(\theta_n', \gamma_n)\\[3pt] =&\pm\Res(\gamma_{n-1}\circ\theta_n,\gamma_{n-1}'\circ\theta_n)\,\Res(\theta_n', \gamma_n)\\[3pt] =&\pm\ell(\theta_n)^{(d_{\gamma,n-1}^{\,2}-\,d_{\gamma,n-1})d(\theta_n)}\,\Res(\gamma_{n-1},\gamma_{n-1}')^{d(\theta_n)}\,\Res(\theta_n', \gamma_n) \end{split} \end{equation} Therefore, combining the expression in (\ref{discriminant3}) with the bottom line of (\ref{discriminant4}), we see that \begin{equation}\label{discriminant5} \frac{\Delta_{\gamma,n}}{\Delta_{\gamma,n-1}^{d(\theta_n)}}=\pm\frac{\ell_{\gamma,n-1}^{\,d(\theta_n)}}{\ell_{\gamma,n}}\cdot \ell(\theta_n)^{(d_{\gamma,n-1}^{\,2}-\,d_{\gamma,n-1})d(\theta_n)}\,\Res(\theta_n', \gamma_n). \end{equation} However, using the definition of the resultant in (\ref{resultant}) and the fact that $\theta_n$ has a unique critical point $c$, we see that $\Res(\theta_n', \gamma_n)=\ell(\theta_n')^{d_{\gamma,n}}\gamma_n(c)$. Hence, (\ref{discriminant5}) may be rewritten as \begin{equation}\label{discriminant6} \frac{\Delta_{\gamma,n}}{\Delta_{\gamma,n-1}^{d(\theta_n)}}=\pm\frac{\ell_{\gamma,n-1}^{\,d(\theta_n)}}{\ell_{\gamma,n}}\cdot \ell(\theta_n)^{(d_{\gamma,n-1}^{\,2}-\,d_{\gamma,n-1})d(\theta_n)}\,\ell(\theta_n')^{d_{\gamma,n}}\gamma_n(c). \end{equation} Hence, we need only control the relevant leading terms to complete the proof. First, since $\gamma_n=\gamma_{n-1}\circ\theta_n$, we see that $\ell_{\gamma,n}=\ell(\theta_n)^{d_{\gamma,n-1}}\ell_{\gamma,n-1}$. Moreover, $\ell(\theta_n)'=d(\theta_n)\ell(\theta_n)$. Therefore, after substituting these expressions into (\ref{discriminant6}) and simplifying like terms, we obtain the formula in Proposition \ref{prop:discriminant}. \end{proof} In particular for global fields $K$ and finite subsets $S$ of (\ref{unicrit}), we expect that $\Orb_\gamma^+(c)$ controls most of the ramification in $K_{\gamma,n}$. Specifically, suppose that $a_1,\dots, a_s$ and $d_1,\dots, d_s$ are the leading terms and degrees of a subset of the polynomials in $S$ respectively. Then by inducting on the formula in Proposition \ref{prop:discriminant} we see that if $\mathfrak{p}$ is a prime in $K$ that ramifies in $K_{\gamma,n}$, then $\mathfrak{p}\big\vert (d_1d_2\dots d_sa_1a_2\dots a_s)$ or $\mathfrak{p}\big\vert\gamma_m^+(c)$ for some $1\leq m\leq n$. Hence, if the total orbit of $c$ is finite, then Proposition \ref{prop:discriminant} provides a method for constructing many examples of finitely ramified, infinite extensions. \begin{example}\label{eg:finitelyramified} Let $S=\{\pm{x^2}, \pm{(x^2-1)}, 2x^2-1\}$, a finite set of quadratic polynomials of the form in (\ref{unicrit}) over the rational numbers. Then we check that $\Orb_S(0)=\{0,\pm{1}\}$. In particular, it follows from Proposition \ref{prop:discriminant} that the extensions $K_{\gamma,n}=\mathbb{Q}(\gamma_n^+)$ are unramified outside of the prime $p=2$ for all $\gamma\in\Phi_S$ and all $n\geq1$. Moreover, if $\gamma=(2x^2-1, 2x^2-1, \theta_3, \dots)$, then $\gamma_n^+$ is irreducible for all $n\geq1$ by Proposition \ref{prop:irreducible} below; the point here is that after the second stage of iteration, one may choose any element of $S$. In particular, it would be interesting to compute the arboreal representations associated to such $\gamma$. The finite ramification precludes finite index in all of $\Aut(T_\gamma)$, but perhaps some subgroup of $\Aut(T_\gamma)$ furnishes the correct overgroup (for finite index with positive probability). \end{example} \begin{example}\label{eg:finitelyramified2} Likewise, for $a,c\in\mathbb{Z}$ and $a\neq0$, let $S_{a,c}=\big\{a(x-c)^2+\frac{ac-2}{a}, -a(x-c)^2+\frac{ac+2}{a}\big\}$. Then for all sequences $\gamma\in\Phi_{S_{a,c}}$ the extensions (over $\mathbb{Q}$) generated by $\gamma_n^+$ are unramified outside of the primes dividing $a$, $ac-2$, or $ac+2$. \end{example} We now move on to prove an irreducibility test for right iteration when $S$ is a set of quadratic polynomials; compare to \cite[Proposition 4.2]{Jones} and \cite[Lemma 1.2]{Stoll}. \begin{proposition}\label{prop:irreducible} Let $S$ be a set of quadratic polynomials of the form in (\ref{unicrit}), and let $\gamma=(\theta_i)_{i=1}^{\infty}\in\Phi_S$. If \begin{equation}\label{criticalorbit} -\ell_{\gamma,1}\,\gamma_1^+(c),\,\ell_{\gamma,1}\,\gamma_2^+(c),\, \dots,\, \ell_{\gamma,1}\,\gamma_n^+(c) \end{equation} are all non-squares in $K$, then $\gamma_n^+$ is irreducible over $K$. \end{proposition} \begin{proof} We proceed by induction. It is clear that if $-\ell_{\gamma,1}\,\gamma_1^+(c)$ is not a square in $K$, then $\gamma_1^+(x)=\ell_{\gamma,1}(x-c)^2+\gamma_1^+(c)$ is an irreducible quadratic polynomial over $K$. For $n\geq2$, assume that Proposition \ref{prop:irreducible} holds for $n-1$ and that the elements listed in (\ref{criticalorbit}) are all non-squares in $K$. Then $\gamma_{n-1}^+$ is irreducible by the induction hypothesis. Now let $\alpha\in\overline{K}$ be any root of $\gamma_{n-1}^+$ and let $\theta_n(x)=a(x-c)^2+b$. Moreover, assume (for a contradiction) that $\theta_n(x)-\alpha$ is reducible over $K(\alpha)$. Then $a(\alpha-b)$ must be a square in $K(\alpha)$. However, since $\gamma_{n-1}^+$ is irreducible over $K$, we see that $(1/\ell_{\gamma,n-1})\gamma_{n-1}^+(x+b)$ is a minimal polynomial of $\alpha-b$ over $K$. Hence, we have the following norm computation: \begin{equation*} \begin{split} N_{K(\alpha)/K}(a(\alpha-b))=a^{[K(\alpha):K]}\cdot N_{K(\alpha-b)/K}(\alpha-b)&=a^{2^{n-1}}\frac{\;(-1)^{2^{n-1}}}{\ell_{\gamma,n-1}}\,\gamma_{n-1}^+\big(0+b\big) \\[3pt] &=\frac{a^{2^{n-1}}}{\ell_{\gamma,n-1}}\gamma_{n-1}^+(\theta_n(c))=\frac{a^{2^{n-1}}}{\ell_{\gamma,n-1}}\gamma_n^+(c). \vspace{.05cm} \end{split} \end{equation*} Therefore (since norms of squares are squares) if $\theta_n(x)-\alpha$ is reducible over $K(\alpha)$, then $\ell_{\gamma,n-1}\gamma_n^+(c)$ is a square in $K$. On the other hand, it is straightforward to check that \begin{equation}\label{leadingterm} \ell_{\gamma,m}=\ell(\theta_m)^{2^{m-1}}\,\ell(\theta_{m-1})^{2^{m-2}}\dots\,\ell(\theta_1)\;\;\; \text{for all $m\geq1$}. \vspace{.05cm} \end{equation} Hence, the square class of $\ell_{\gamma,n-1}\,\gamma_n^+(c)$ in $K$ is the square class of $\ell(\theta_1)\,\gamma_n^+(c)=\ell_{\gamma,1}\,\gamma_n^+(c)$. In particular, we have contradicted our assumption that $\ell_{\gamma,1}\gamma_n^+(c)$ is a non-square in $K$. Therefore, $\theta_n(x)-\alpha$ must be an irreducible polynomial over $K(\alpha)$. Hence, Capelli's Lemma (stated directly below) applied to $g=\gamma_{n-1}^+$ and $f=\theta_n$ implies that $\gamma_n^+=\gamma_{n-1}^+\circ\theta_n$ is irreducible over $K$ as desired. \end{proof} \begin{lemma}[Capelli's Lemma] Let $K$ be a field, let $f,g\in K[x]$, and let $\alpha\in\overline{K}$ be a root of $g$. Then $g\circ f$ is irreducible over $K$ if and only if both $g$ is irreducible over $K$ and $f-\alpha$ is irreducible over $K(\alpha)$. \end{lemma} \begin{remark}\label{eg:finitelyramified+irre} Let $S=\{\pm{x^2}, \pm{(x^2-1)}, 2x^2-1\}$ be as in Example \ref{eg:finitelyramified}. Then, it is easy to check that if $\gamma$ is of the form $\gamma=(2x^2-1, 2x^2-1, \theta_3, \dots)$, then $\ell_{\gamma,1}\gamma_{n}^+(0)=2$ for all $n\geq1$. In particular, it follows from Proposition \ref{prop:irreducible} that the polynomials $\gamma_n^+$ are irreducible over the rational numbers for all $n\geq1$. Moreover, it is worth noting that the $\gamma_n^+$ (and their reciprocal polynomials for $n\geq2$) are not Eisenstein at $p=2$. \end{remark} In particular, we can use the irreducibility test in Proposition \ref{prop:irreducible} to make some progress towards Question \ref{question:Galois} for finite sets of quadratic polynomials with integral coefficients. For a reminder of the definition of escape points, see Definition \ref{def:escapepts} above. \begin{theorem}\label{thm:stability} Let $S=\{x^2+c_1, x^2+c_2,\dots, x^2+c_s\}$ for some distinct $c_i\in\mathbb{Z}$, and assume that $S$ has the following properties: \vspace{.05cm} \begin{enumerate} \item[\textup{(1)}] Some $-c_i$ is not a square in $\mathbb{Z}$. \vspace{.05cm} \item[\textup{(2)}] $0$ is an escape points for $S$. \vspace{.05cm} \end{enumerate} Then for all discrete probability measures $\nu$ on $S$, we have that \[\bar{\nu}\Big(\big\{\gamma\in\Phi_S\,:\,\gamma_n^+\,\text{is irreducible over $\mathbb{Q}$ for all $n\geq1$}\big\}\Big)>0.\] Equivalently, $G_{\gamma,\mathbb{Q}}$ acts transitively on $T_\gamma$ with positive probability. \end{theorem} \begin{proof} Without loss of generality, we may assume that $-c_1$ is not a square in $\mathbb{Z}$. Therefore, if $\phi_1=x^2+c_1$, then it follows from the proof of \cite[Corollary 1.3]{Stoll} that $\phi_1^n(0)$ is not a square in $\mathbb{Z}$ for all $n\geq2$. In particular, $\phi_1^n$ is irreducible over $\mathbb{Q}$ for all $n\geq1$ by \cite[Corollary 1.3]{Stoll} and our assumption on $c_1$. Now consider the affine equation $E: y^2=\phi_1^2(x)$. Note that $E$ is nonsingular, since $\phi_1^2(x)$ is irreducible. In particular, there are only finitely many integer solutions $(x,y)\in\mathbb{Z}^2$ to $E$ by Siegel's Theorem. Now suppose that $\gamma\in\Phi_S$ is of the form $\gamma=(\phi_1,\phi_1, \theta_3,\dots)$ and that $\gamma_n^+(0)=y_n^2$ for some $y_n\in\mathbb{Z}$ and some $n\geq\max\{r+2\}$; here $r\geq0$ is the escape level of $0$ for $S$. Then $(x,y)=(\theta_3\circ\dots\circ\theta_n(0),y_n)$ is an integral solution to $E$. Therefore, there is a positive constant $B_E$ such that $h(\theta_3\circ\dots\circ\theta_n(0))\leq B_E$. Combining this bound with the lower bound in Lemma \ref{lem:escapept} applied to the function $f=\theta_3\circ\dots\circ\theta_n$ and the point $P=0$, we see that there is a positive constant $B_1=B_{S,0,1}$ such that \[0<B_{1}<\frac{h(\theta_3\circ\dots\circ\theta_n(0))}{2^{n-2}}\leq\frac{B_E}{2^{n-2}};\] here we use that $\deg(\theta_3\circ\dots\circ\theta_n)=2^{n-2}$, since $S$ is a set of quadratic polynomials. Hence, such indices $n$ are bounded: $n\leq n_{E,0}:=\log_2(B_E/B_1)+2$. From here, define $N:=\max\{r+2,n_{E,0}\}$ and consider the sequences \[\Phi_{S,1,N}:=\big\{\gamma\in\Phi_S\,:\,\gamma=(\phi_1,\phi_1,\dots, \phi_1, \theta_{N+1}, \dots)\big\}.\] Then by definition of $N$, if $\gamma\in\Phi_{S,1,N}$, we see that $\gamma_n^+(0)$ cannot be a square in $\mathbb{Z}$ for all $n> N$. On the other hand, if $\gamma\in\Phi_{S,1,N}$, then $-\gamma_1^+(0), \gamma_2^+(0),\dots, \gamma_N^+(0)$ are all non-squares in $\mathbb{Q}$, since $\gamma_m^+(x)=\phi_1^m(x)$ for all $1\leq m\leq N$, since $\phi_1^n(0)$ is not a square in $\mathbb{Q}$ for all $n\geq2$, and since $-\phi_1(0)=-c_1$. Therefore, it follows from Proposition \ref{prop:irreducible} above, that if $\gamma\in\Phi_{S,1,N}$, then $\gamma_n^+$ is irreducible over $\mathbb{Q}$ for all $n\geq1$. However, $\bar{\nu}(\Phi_{S,1,N})=\nu(\phi_1)^N>0$ by \cite[Theorem 10.4]{ProbabilityText}, and the result follows. \end{proof} In particular, we have the following immediate consequence of Theorem \ref{thm:stability} and Corollary \ref{cor:unicritescape} above; see also Remark \ref{rmk:escape}. \begin{corollary}\label{cor:Galoisescp} Let $S=\{x^2+c_1, x^2+c_2,\dots, x^2+c_s\}$ for some distinct $c_i\in\mathbb{Z}$, and assume that $S$ has the following properties: \vspace{.05cm} \begin{enumerate} \item[\textup{(1)}] Some $-c_i$ is not a square in $\mathbb{Z}$. \vspace{.075cm} \item[\textup{(2)}] $|c_i^2+c_j|\geq2\max\{|c_i|\}$ for all $1\leq i,j\leq s$. \vspace{.075cm} \end{enumerate} Then for all discrete probability measures $\nu$ on $S$, we have that \[\bar{\nu}\Big(\big\{\gamma\in\Phi_S\,:\,\gamma_n^+\,\text{is irreducible over $\mathbb{Q}$ for all $n\geq1$}\big\}\Big)>0.\] Equivalently, $G_{\gamma,\mathbb{Q}}$ acts transitively on $T_\gamma$ with positive probability. \end{corollary} We next generalize Stoll's maximality lemma \cite[Lemma 1.6]{Stoll} to sets of quadratic polynomials; see also \cite[Lemma 3.2]{Jones}. In practice, this maximality lemma is the main tool for showing a given arboreal representation has finite index in the automorphism group of its associated preimage tree. \begin{proposition}\label{prop:maximality} Let $S$ be a set of quadratic polynomials of the form in (\ref{unicrit}), and let $\gamma=(\theta_i)_{i=1}^{\infty}\in\Phi_S$. Assume that $n\geq1$ and that $\gamma_{n-1}^+$ is irreducible over $K$. Then the following statements are equivalent: \vspace{.15cm} \begin{enumerate} \item[\textup{(1)}] $[K_{\gamma,n}:K_{\gamma,n-1}]=2^{2^{n-1}}$. \vspace{.15cm} \item[\textup{(2)}] $\ell_{\gamma,1}\,\gamma_n^+(c)$ is not a square in $K_{\gamma,n-1}$. \end{enumerate} \end{proposition} \begin{remark} Since $K_{\gamma,n}/K_{\gamma,n-1}$ is the compositum of at most $2^{n-1}$ quadratic extensions (one for each root of $\gamma_{n-1}^+$), we see that $[K_{\gamma,n}:K_{\gamma,n-1}]=2^{2^m}$ for some $0\leq m\leq n-1$. For this reason, when $m=n-1$ we say that the extension $K_{\gamma,n}/K_{\gamma,n-1}$ is maximal. \end{remark} \begin{proof} We begin with a few observations analogous to those in the proof of \cite[Lemma 1.6]{Stoll}. Let $\theta_n(x)=a(x-c)^2+b$, let $d=2^{n-1}$, and let $\alpha_1, \alpha_2, \dots, \alpha_{d}$ be the roots of $\gamma_{n-1}^+$ in $K_{\gamma,n-1}$. Then $K_{\gamma,n}=K_{\gamma,n-1}\big(\sqrt{a(\alpha_i-b)}:\,1\leq i\leq d\big)$ since $\pm1/a\sqrt{a(\alpha_i-b)}+c$ are roots of $\gamma_n^+$. Hence, $K_{\gamma,n}/K_{\gamma,n-1}$ is a $2$-Kummer extension and $[K_{\gamma,n}:K_{\gamma,n-1}]=2^{d-\dim(V)}$, where $V$ is the $\mathbb{F}_2$-vector space given by \[V:=\Big\{(e_1,\dots, e_{d})\in\mathbb{F}_2^{d}\,:\,\prod_{i=1}^d(a(\alpha_i-b))^{e_i}\in (K_{\gamma,n-1})^2\Big\};\] see \cite[VI \S8]{Lang}. On the other hand, since $G_{\gamma,n-1}:=\Gal(K_{\gamma,n-1}/K)$ permutes the roots of $\gamma_{n-1}^+$, we obtain an induced linear action of $G_{\gamma,n-1}$ on $V$. Moreover, since $G_{\gamma,n-1}$ is a $2$-group, either $\dim(V)=0$ or $V$ has a non-trivial $G_{\gamma,n-1}$-fixed vector; see \cite[I Lemma 6.3]{Lang}. However, $\gamma_{n-1}^+$ is irreducible over $K$, so that $G_{\gamma,n-1}$ acts transitively on the roots of $\gamma_{n-1}^+$. In particular, $(1,\dots,1)$ is the only possible non-trivial fixed vector. Therefore, we have deduced the following fact: either $\dim(V)=0$ or $(1,\dots,1)\in V$. However, if $(1,\dots,1)\in V$, then \[\prod_{i=1}^da(\alpha_i-b)=\frac{a^d\cdot(-1)^{d}}{\ell_{\gamma,n-1}}\cdot\Big(\ell_{\gamma,n-1}\prod_{i=1}^d(b-\alpha_i)\Big)=\frac{a^d}{\ell_{\gamma,n-1}}\cdot\gamma_{n-1}^+(b)=\frac{a^d}{\ell_{\gamma,n-1}}\cdot\gamma_{n}^+(c)\] is a square in $K_{\gamma,n-1}$; here we use that $d$ is even. Moreover, (\ref{leadingterm}) implies that $\ell_{\gamma,n-1}$ is a square in $K$ times $\ell_{\gamma,1}$. In particular, $(1,\dots,1)\in V$ if and only if $\ell_{\gamma,1}\,\gamma_n^+(c)$ is a square in $K_{\gamma,n-1}$. The result easily follows. \end{proof} We combine the discriminant formula and the maximality lemma above to obtain a sufficient criterion for ensuring that a given arboreal representation (associated to a sequence of quadratic polynomials) has finite index in the automorphism group of its preimage tree. To do this, we briefly fix some notation. Let $K$ be a global field of characteristic $0$, i.e., a number field or a finite extension $K/k(t)$ of a rational function field in one variable; here $k$ has characteristic $0$. Given a finite prime $\mathfrak{p}$ of $K$, we let $v_{\mathfrak{p}}$ denote the normalized valuation on $K$ associated to $\mathfrak{p}$. Moreover, when $K$ is a number field, we let $\mathfrak{o}_K$ denote the ring of integers of $K$. When $K$ is a function field, we choose a prime $\mathfrak{p}_0$, and let $\mathfrak{o}_K$ denote the set $\{z\in K\,:\,v_{\mathfrak{p}}(z)\geq0\,\text{for all $\mathfrak{p}\neq\mathfrak{p}_0$}\}$. With these notions in place, we have the following arithmetic finite index test. \begin{theorem}{\label{maxtest}} Let $K$ be a global field of characteristic zero and let $S$ be a set of quadratic polynomials in $\mathfrak{o}_K[x]$ with common critical point $c\in\mathfrak{o}_K$. Assume that a sequence $\gamma=(\theta_i)_{i=1}^{\infty}\in\Phi_S$ is such that $\gamma_m^+$ is irreducible for all $m\geq1$. Moreover, assume that for all $n$ sufficiently large there exists a prime $\mathfrak{p}_{\gamma,n}$ of $K$ with the following properties: \vspace{.1cm} \begin{enumerate} \item[\textup{(1)}] $v_{\mathfrak{p}_{\gamma,n}}(2)=0$. \vspace{.1cm} \item [\textup{(2)}] $\mathfrak{p}_{\gamma,n}\neq\mathfrak{p}_0$ if $K$ is a function field. \vspace{.1cm} \item[\textup{(3)}] $v_{\mathfrak{p}_{\gamma,n}}(\ell(\theta_m))=0$ for all $1\leq m\leq n$.\vspace{.1cm} \item [\textup{(4)}] $v_{\mathfrak{p}_{\gamma,n}}(\gamma_m^+(c))=0$ for all $1\leq m\leq n-1$. \vspace{.1cm} \item [\textup{(5)}] $v_{\mathfrak{p}_{\gamma,n}}(\gamma_n^+(c))\equiv1\pmod{2}$. \vspace{.1cm} \end{enumerate} Then $G_{\gamma,K}$ is a finite index subgroup of $\Aut(T_\gamma)$. \end{theorem} \begin{proof} By the discriminant formula in Proposition \ref{prop:discriminant}, if $\mathfrak{p}_{\gamma,n}$ has properties $(1)$-$(4)$ above, then $\mathfrak{p}_{\gamma,n}$ must be unramified in $K_{\gamma,n-1}$. Hence properties (3) and (5) together imply that $\ell_{\gamma,1}\gamma_n^+(c)$ cannot be a square in $K_{\gamma,n-1}$. In particular, it follows from Proposition \ref{prop:maximality} that $K_{\gamma,n}/K_{\gamma,n-1}$ is maximal for all $n$ sufficiently large. Therefore, $G_{\gamma,K}$ is a finite index subgroup of $\Aut(T_\gamma)$ as claimed. \end{proof} As a consequence of Theorem \ref{maxtest}, we construct examples over the global field $K=\mathbb{Q}(t)$ for which Question \ref{question:Galois} has an affirmative answer; here we take $\mathfrak{o}_K:=\mathbb{Q}[t]$. In what follows, $\frac{d}{dt}$ denotes the usual derivative on polynomials and $\overline{c}\in\mathbb{Z}/2\mathbb{Z}[t]$ denotes the image of $c\in\mathbb{Z}[t]$ under the ring homomorphism $\mathbb{Z}[t]\rightarrow\mathbb{Z}/2\mathbb{Z}[t]$ given by reducing coefficients. \begin{theorem}{\label{thm:functionfield}} Let $K=\mathbb{Q}(t)$ and let $S$ be a set of quadratic polynomials of the form $x^2+c$ such that each $c$ satisfies all of the following conditions: \vspace{.1cm} \begin{enumerate} \item[\textup{(1)}] $c\in\mathbb{Z}[t]$ and $\ell(c)=\pm{1}$. \vspace{.2cm} \item [\textup{(2)}] $\deg(c)=d>0$. \vspace{.1cm} \item[\textup{(3)}] $\displaystyle{\frac{d}{dt}}\,\overline{c}=1$.\vspace{.1cm} \end{enumerate} Then $G_{\gamma,K}=\Aut(T_\gamma)$ for all $\gamma\in\Phi_S$. \end{theorem} \begin{example} In particular, the set $S=\big\{x^2+(-t^2+t+3),\; x^2+(t^2-5t)\big\}$ satisfies the hypothesis of Theorem \ref{thm:functionfield} above (with $d=2$). \end{example} \begin{remark} Although the conditions in Theorem \ref{thm:functionfield} may seem strange, their utility may be summarized as follows: conditions (1) and (3) ensure that $\gamma_n^+(0)$ is square-free and condition (2) ensures that $\deg(\gamma_n^+(0))=2^{n-1}d$. In particular, putting these facts together we deduce that $\gamma_n^+(0)$ has an irreducible factor appearing to exponent $1$, which is coprime to $\gamma_m^+(0)$ for all $1\leq m\leq n-1$ (by simple degree considerations). In particular, it follows that $K_{\gamma,n}/K_{\gamma, n-1}$ is maximal for all $n\geq1$ by Proposition \ref{prop:maximality}. \end{remark} \begin{proof} Suppose that $S$ conditions (1)-(3) of Theorem \ref{thm:functionfield} hold, and let $\gamma=(\theta_n)_{n=1}^\infty\in\Phi_S$. Then it follows easily by induction, using only that $\deg(f+g)=\max\{\deg(f),\deg(g)\}$ when $\deg(f)\neq\deg(g)$ and $\deg(f^2)=2\deg(f)$, that \vspace{.1cm} \begin{equation}\label{fact1} \deg(\gamma_n^+(0))=2^{n-1}d\;\;\;\; \text{for all $n\geq1$, $\gamma\in\Phi_S$}. \end{equation} Likewise, the leading term $\ell(\gamma_n^+(0))=\pm{1}$ by property (1) above. In particular, $\gamma_n^+(0)\in\mathbb{Z}[t]$ is a primitive polynomial (the gcd of its coefficients is $1$). We next show that each polynomial $\gamma_n^+(0)\in\mathbb{Q}[t]$ (a unique factorization domain) is square-free. To see this, suppose for a contradiction, that $\gamma_n^+(0)=f_n\cdot g_n^2$ for some $f_n,g_n\in\mathbb{Q}[t]$ and some non-constant $g_n$. Note that by Gauss' Lemma, we can assume that $f_n,g_n\in\mathbb{Z}[t]$; here we use that $\gamma_n^+(0)$ is primitive. In particular, after writing $\theta_1=x^2+c$ for some $c$ satisfying (1)-(3) above, we have that \begin{equation}\label{fact2} f_n\cdot g_n^2=\gamma_n^+(0)=y_n^2+c \end{equation} for some $y_n\in\mathbb{Z}[t]$. Moreover, since the leading term of $\gamma_n^+(0)$ is $\pm{1}$, the leading term of $g_n$ must be $\pm{1}$ also. Therefore, $\deg(g)=\deg(\overline{g})>0$, and the reduction of $g$ modulo $2$ is non-constant. On the other hand, after reducing coefficients and taking derivatives of both sides of (\ref{fact2}), we see that \[\Big(\frac{d}{dt}\overline{f_n}\,\Big)\cdot{\overline{g_n}}^2=\frac{d}{dt}\overline{c}=1\] by property (3). Hence, $\bar{g_n}$ is a unit in $\mathbb{Z}/2\mathbb{Z}[t]$. However, this contradicts the fact that $\deg(\overline{g})>0$. Therefore, $\gamma_n^+(0)\in\mathbb{Q}[t]$ is square-free as claimed. We use this fact to analyze the relevant Galois groups. Note first that since $\gamma_n^+(0)$ is non-constant and square-free in $\mathbb{Q}[t]$, Proposition \ref{prop:irreducible} implies that $\gamma_n^+$ is irreducible over $K=\mathbb{Q}(t)$ for all $n\geq1$. Likewise, if no prime $\mathfrak{p}_n$ (corresponding to an irreducible polynomial) of $\mathbb{Q}[t]$ as in Theorem \ref{maxtest} exists for $n\geq2$, then each irreducible factor $q(t)$ of $\gamma_n^+(0)$ must also divide some $\gamma_{m_q}^+(0)$ for some $1\leq m_q\leq n-1$: conditions (1)-(3) of Theorem \ref{maxtest} hold trivially, and condition (5) holds since $\gamma_n^+(0)$ is square-free. In particular, it follows that the polynomial $\gamma_n^+(0)$ divides the product $\gamma_1^+(0)\gamma_2^+(0)\cdots\gamma_{n-1}^+(0)$. However, in this case we deduce from (\ref{fact1}) that \[2^{n-1}d=\deg(\gamma_n^+(0))\leq\deg(\gamma_1^+(0)\gamma_2^+(0)\cdots\gamma_{n-1}^+(0))=d+ 2d+\dots 2^{n-2}d=(2^{n-1}-1)d.\] But this inequality forces $d=0$, a contradiction. Therefore, for all $n\geq2$ a prime $\mathfrak{p}_n$ of $K=\mathbb{Q}(t)$ as in Theorem \ref{maxtest} exists. In particular, the argument in the proof Theorem \ref{maxtest} implies that the extensions $K_{\gamma,n}/K_{\gamma,n-1}$ are maximal for all $n\geq2$. Likewise, since $-\gamma_1^+(0)$ is not a square in $K$ (it's square-free), the extension $K_{\gamma,1}/K$ is also maximal. Hence, $G_{\gamma,K}=\Aut(T_\gamma)$ for all $\gamma\in\Phi_S$ as claimed. \end{proof}
\section{Confinement and DCSB} \label{secConfinement} Most hadron physicists have a notional understanding of confinement; but, in order to consider the concept in depth, it is crucial to have a concrete definition. That problem is canvassed in Sec.\,2.2 of Ref.\cite{Cloet:2013jya}, which explains that the potential between infinitely-heavy quarks measured in simulations of quenched lQCD -- the so-called static potential -- is \emph{irrelevant} to the question of confinement in our Universe, in which light quarks are ubiquitous and the pion is unnaturally light. This is because light-particle creation and annihilation effects are essentially nonperturbative and so it is impossible in principle to compute a quantum mechanical potential between two light quarks \cite{Bali:2005fuS, Prkacin:2005dc, Chang:2009ae}. There is no flux tube in a Universe with light quarks, the flux tube is not a valid paradigm for confinement, and hence it is meaningless to speak of linear potentials and Regge trajectories \cite{Tang:2000tb, Masjuan:2012gc}. DCSB is critical here. It ensures the existence of nearly-massless pseudo-Goldstone modes (pions), each constituted from a valence-quark and -antiquark whose individual current-quark masses are $<1$\% of the proton mass \cite{Qin:2014vya}. These modes ensure that no flux tube between a static colour source and sink can have a measurable existence. To see this, consider such a tube being stretched between a source and sink. The potential energy within the tube may increase only until it reaches that required to produce a particle-antiparticle pair of the theory's pseudo-Goldstone modes. Simulations of lQCD show \cite{Bali:2005fuS, Prkacin:2005dc} that the flux tube then disappears instantaneously along its entire length, leaving two isolated colour-singlet systems. The length-scale associated with this effect in QCD is $r_{\not\sigma} \simeq (1/3)\,$fm. Hence if any such string formed, it would dissolve well within a hadron's interior. An alternative perspective associates confinement with dynamically-driven changes in the analytic structure of QCD's propagators and vertices. In this realisation, \emph{confinement is a dynamical process}, whose expression cannot be understood using models that violate Poincar\'e-covariance or any other symmetry that is crucial to the observable features of hadrons. Modern theory predicts that both gluons and quarks acquire mass distributions, which are large at infrared momenta \cite{Bhagwat:2003vw, Bhagwat:2006tu, Bowman:2005vx, Binosi:2014aea}. These running masses lead to the emergence of a length-scale $\varsigma \approx 0.5\,$fm, whose existence and magnitude are evident in all studies of dressed-gluon and -quark propagators and which characterises a dramatic change in their analytic structure. In models based on such features \cite{Stingl:1994nk}, once a gluon or quark is created, it begins to propagate; but after each ``step'' of length $\varsigma$, on average, an interaction occurs so that the parton loses its identity, sharing it with others. Finally a cloud of partons is produced, which coalesces into colour-singlet final states. Such pictures of parton propagation, hadronisation and confinement can be tested in experiments at modern and planned facilities, \emph{e.g}.\ via measurements that chart parton distribution amplitudes and functions of mesons, and the nucleon and its excited states. Whilst the nature and realisation of confinement in empirical QCD is still being explored, DCSB; namely, the generation of \emph{mass} \emph{from} \emph{nothing}, is a theoretically-established feature of QCD. It is ``dynamical,'' as distinct from spontaneous, because nothing is added to QCD in order to effect this remarkable outcome and there is no simple change of variables in the QCD action that will make it apparent. Instead, through the act of quantising the classical chromodynamics of massless gluons and quarks, a large mass-scale is generated. DCSB is the most important mass generating mechanism for visible matter in the Universe, being responsible for approximately $98$\% of the proton's mass. A fundamental expression of DCSB is the behaviour of the quark mass-function, $M(p)$, which is a basic element in the dressed-quark propagator: \begin{equation} \label{SgeneralN} S(p) = 1/[i\gamma\cdot p A(p^2) + B(p^2)] = Z(p^2)/[i\gamma\cdot p + M(p^2)]\,, \end{equation} and may be obtained as a solution to QCD's most basic fermion gap equation, \emph{i.e}. the Dyson-Schwinger equation (DSE) for the dressed-quark propagator \cite{Cloet:2013jya}. The nontrivial character of the mass function in Fig.\,\ref{gluoncloud} arises primarily because a dense cloud of gluons comes to clothe a low-momentum quark. It explains how an almost-massless parton-like quark at high energies transforms, at low energies, into a constituent-like quark with an effective ``spectrum mass'' $M_D \sim 350\,$MeV. \begin{figure}[t] \begin{minipage}[t]{\textwidth} \begin{minipage}{0.5\textwidth} \centerline{\includegraphics[width=0.9\textwidth]{DressedMR.eps}} \end{minipage} \begin{minipage}{0.5\textwidth}{\small \caption{\label{gluoncloud} \small Dressed-quark mass function, $M(p)$ in Eq.\,(\ref{SgeneralN}): \emph{solid curves} -- DSE results, explained in Refs.\,\protect\cite{Bhagwat:2003vw,Bhagwat:2006tu}, \emph{points} -- numerical simulations of lattice-regularised QCD \protect\cite{Bowman:2005vx}. (\emph{N.B}.\ $m=70\,$MeV is the uppermost curve and current-quark mass decreases from top to bottom.) The current-quark of perturbative QCD evolves into a constituent-quark as its momentum becomes smaller. The constituent-quark mass arises from a cloud of low-momentum gluons attaching themselves to the current-quark. This is DCSB: an essentially nonperturbative effect that generates a quark \emph{mass} \emph{from nothing}; namely, it occurs even in the chiral limit. }} \end{minipage} \end{minipage} \end{figure} \section{Gluon Cannibalism} \label{secCannibals} The propagation of gluons, too, is described by a gap equation \cite{Aguilar:2009nf}; and its solution shows that gluons are cannibals: they are a particle species whose members become massive by eating each other! The gluon mass function, $m_g(k^2)$, is monotonically decreasing with increasing $k^2$, with $m_g(0) \approx 0.5$GeV \cite{Binosi:2014aea}. The mass term appears in the transverse part of the gluon propagator, hence gauge-invariance is not tampered with; and the mass function falls as $1/k^2$ for $k^2\gg m_g(0)$ (up to logarithmic corrections), so the gluon mass is invisible in perturbative applications of QCD. Gluon cannibalism presents a new physics frontier within the Standard Model. Asymptotic freedom means that the ultraviolet behaviour of QCD is controllable. At the other extreme, dynamically generated masses for gluons and quarks entail that QCD creates its own infrared cutoffs. Together, these effects eliminate both the infrared and ultraviolet problems that typically plague quantum field theories and thereby make reasonable the hope that QCD is nonperturbatively well defined. The dynamical generation of gluon and quark masses provides a basis for understanding the notion of a maximum wavelength for partons in QCD \cite{Brodsky:2008be}. Given the magnitudes of these mass-scales, it is apparent that field modes with wavelengths $\lambda > \varsigma \approx 2/m_g(0) \approx 0.5\,$fm decouple from the dynamics: they are screened in the sense described in Sec.\,\ref{secConfinement}. This is just one consequence of a dynamically generated gluon mass-scale. There are many more. \emph{e.g}.\ it is plausible to conjecture that dynamical generation of an infrared gluon mass-scale leads to saturation of the gluon parton distribution function at small Bjorken-$x$ within hadrons. The possible emergence of this phenomenon stirs great scientific interest and curiosity and it is a key motivation in plans to construct an EIC \cite{Accardi:2012qutS}. \begin{figure}[t] \begin{minipage}[t]{\textwidth} \begin{minipage}{0.48\textwidth} \centerline{\includegraphics[clip,width=0.9\textwidth]{RGIalphaR.eps}} \end{minipage} \begin{minipage}{0.02\textwidth} \hspace*{-0.2em}\mbox{\LARGE \textbf{$\Rightarrow$}} \end{minipage} \begin{minipage}{0.48\textwidth} \centerline{\includegraphics[clip,width=0.9\textwidth]{BinosiNew.eps}} \end{minipage} \end{minipage} \caption{\label{figInteraction} \small \emph{Left} -- RGI running interaction computed via a combination of DSE- and lattice-QCD analyses \cite{Aguilar:2009nf}. The function obtained with five different values of the renormalisation point is depicted in order to highlight that the result is RGI. The interaction is characterized by $\alpha_s(0) \approx 0.9\, \pi$ and the gluon mass-scale $m_g(0) \approx 0.5$GeV. \emph{Right} -- Comparison between top-down results for the gauge-sector interaction (derived from the left-panel) with those obtained using the bottom-up approach based on hadron physics observables. \underline{Solid curve} within \emph{grey band} -- top-down result for the RGI running interaction; and \underline{dashed curve} within \emph{pale-green band} -- advanced bottom-up result obtained using the most sophisticated truncation of the matter sector DSEs - the DSE-DB kernel. The bands denote the domain of uncertainty in the determinations of the interaction. } \end{figure} \section{Continuum-QCD and \emph{ab initio} predictions of hadron observables} \label{secAbInitio} Within hadron physics there are two methods for determining the mo\-men\-tum-dependence of the interaction between quarks: the top-down approach, which works toward an \textit{ab initio} computation of the interaction via analysis of the gauge-sector gap equations; and the bottom-up scheme, which infers the interaction by fitting data within a well-defined truncation of those equations in the matter sector that are relevant to bound-state properties. These two approaches have recently been united by a demonstration that the renormalisation-group-invariant (RGI) running-interaction predicted by contemporary analyses of QCD's gauge sector coincides with that required in order to describe ground-state hadron observables using a nonperturbative truncation of QCD's DSEs in the matter sector \cite{Binosi:2014aea}. The unification is illustrated in Fig.\,\ref{figInteraction}: the interaction derived from QCD's gauge sector is in near precise agreement with that required for a veracious description of hadron properties using the most sophisticated matter-sector gap and Bethe-Salpeter kernels available today. This is a remarkable result, given that there had previously been no serious attempt at communication between practitioners from the top-down and bottom-up hemispheres of continuum-QCD. It bridges a gap that had lain between nonperturbative continuum-QCD and the \emph{ab initio} prediction of bound-state properties. It should be noted that if the realistic interaction depicted in Fig.\,\ref{figInteraction} were employed as the seed for a RL-truncation study, it would fail completely because, \emph{inter alia}, DCSB would be absent. We now know that a veracious description of DCSB and hence hadron properties in QCD requires a dressed-quark-gluon vertex. Constraining its form is a topic of great contemporary interest; and in this connection it cannot be emphasised too strongly that little of value today will be produced by any attempt at a term-by-term diagrammatic construction of this vertex. \section{Enigma of mass} \label{secEnigma} As noted in Sec.\,\ref{secConfinement}, the pion is Nature's lightest hadron. In fact, it is peculiarly light, with a mass just one-fifth of that which quantum mechanics would lead one to expect. This remarkable feature has its origin in DCSB. The pion's structure is described by a Bethe-Salpeter amplitude: $\Gamma_{\pi}(k;P) = \gamma_5 [ i E_{\pi}(k;P) + \gamma\cdot P F_{\pi}(k;P) + \gamma\cdot k \, G_{\pi}(k;P) - \sigma_{\mu\nu} k_\mu P_\nu H_{\pi}(k;P)],$ (here $k$ is the relative momentum between the valence-quark and -antiquark constituents, and $P$ is their total momentum). In QCD if, and only if, chiral symmetry is dynamically broken, then in the chiral limit \cite{Qin:2014vya}: \begin{equation} \label{gtrE} f_\pi E_\pi(k;0) = B(k^2)\,, \end{equation} where $f_\pi$ is the pion's leptonic decay constant, a directly measurable quantity that connects the strong and weak interactions, and the rhs is a scalar function in the dressed-quark propagator, Eq.\,\eqref{SgeneralN}. This identity is miraculous. It means that the two-body problem is solved, almost completely once the solution to the one body problem is known. Eq.\,\eqref{gtrE} is a quark-level Goldberger-Treiman relation. It is also the most basic expression of Goldstone's theorem in QCD, \emph{viz}.\\[-2ex] \centerline{\parbox{0.90\textwidth}{\flushleft \emph{Goldstone's theorem is fundamentally an expression of equivalence between the one-body problem and the two-body problem in QCD's colour-singlet pseudoscalar channel.}}} \medskip \hspace*{-\parindent}Eq.\,\eqref{gtrE} emphasises that Goldstone's theorem has a pointwise expression in QCD; and, furthermore, that pion properties are an almost direct measure of the mass function depicted in Fig.\,\ref{gluoncloud}. Thus, enigmatically, properties of the (nearly-)massless pion are the cleanest expression of the mechanism that is responsible for almost all the visible mass in the Universe. This provides strong motivation for pion form factor and distribution function measurements at JLab\,12 \cite{E1206101, E1207105, Keppel:2015}. \section{Structure of Baryons} It is crucial to address the three valence-quark bound-state problem in QCD with the same level of sophistication that is now available for mesons, with the goal being to correlate the properties of meson and baryon ground- and excited-states within a single, symmetry-preserving framework. Here, symmetry-preserving means that the analysis respects Poincar\'e covariance and satisfies the relevant Ward-Green-Takahashi identities. Constituent-quark models have hitherto been the most widely applied spectroscopic tools; and whilst their weaknesses are emphasised by critics and acknowledged by proponents, they are of continuing value because there is nothing better that is yet providing a bigger picture. Nevertheless, they possess no connection with quantum field theory and therefore no connection with QCD; and they are not symmetry-preserving and therefore cannot veraciously connect meson and baryon properties. A comprehensive approach to QCD will provide a unified explanation of both mesons and baryons. We have seen that DCSB is a keystone of the Standard Model, evident in the momentum-dependence of the dressed-quark mass function -- Fig.\,\ref{gluoncloud}: it is just as important to baryons as it is to mesons. The DSEs furnish the only extant framework that can simultaneously and transparently connect meson and baryon observables with this basic feature of QCD, having provided, \emph{e.g}.\ a direct correlation of meson and baryon properties via a single interaction kernel, which preserves QCD's one-loop renormalisation group behaviour and can systematically be improved. This is evident in Refs.\,\cite{Eichmann:2008ae, Eichmann:2008ef, Eichmann:2009qa, Eichmann:2011ej, Chang:2012cc}. \begin{figure}[t] \begin{minipage}[t]{\textwidth} \begin{minipage}{0.49\textwidth} \centerline{\includegraphics[clip,width=0.9\textwidth]{FaddeevF.eps}} \end{minipage} \begin{minipage}{0.49\textwidth} \caption{\small \label{fig:Faddeev} Poincar\'e covariant Faddeev equation. $\Psi$ is the Faddeev amplitude for a baryon of total momentum $P= p_q + p_d$, where $p_{q,d}$ are, respectively, the momenta of the quark and diquark within the bound-state. The shaded area is the Faddeev equation kernel: \emph{single line}, dressed-quark propagator; $\Gamma$, diquark correlation amplitude; and \emph{double line}, diquark propagator.} \end{minipage} \end{minipage} \end{figure} Let us focus initially on the proton, which is a composite object, whose properties and interactions are determined by its valence-quark content: $u$ + $u$ + $d$, \emph{i.e}.\ two up ($u$) quarks and one down ($d$) quark. So far as is now known, bound-states seeded by two valence-quarks do not exist; and the only two-body composites are those associated with a valence-quark and -antiquark, \emph{i.e}.\ mesons. These features are supposed to derive from colour confinement, whose complexities are discussed in Sec.\,\ref{secConfinement}. Such observations lead one to a position from which the proton may be viewed as a Borromean bound-state \cite{Segovia:2015ufa}, \emph{viz}.\ a system constituted from three bodies, no two of which can combine to produce an independent, asymptotic two-body bound-state. In QCD the complete picture of the proton is more complicated, owing, in large part, to the loss of particle number conservation in quantum field theory and the concomitant frame- and scale-dependence of any Fock space expansion of the proton's wave function. Notwithstanding that, the Borromean analogy provides an instructive perspective from which to consider both quantum mechanical models and continuum treatments of the nucleon bound-state problem in QCD. It poses a crucial question: \emph{Whence binding between the valence quarks in the proton, \mbox{\rm i.e}.\ what holds the proton together}? In numerical simulations of lQCD that use static sources to represent the proton's valence-quarks, a ``Y-junction'' flux-tube picture of nucleon structure is produced \cite{Bissey:2006bz, Bissey:2009gw}. This might be viewed as originating in the three-gluon vertex, which signals the non-Abelian character of QCD and is the source of asymptotic freedom. Such results and notions would suggest a key role for the three-gluon vertex in nucleon structure \emph{if} they were equally valid in real-world QCD wherein light dynamical quarks are ubiquitous. However, as we saw in Sec.\,\ref{secConfinement}, they are not; and so a different explanation of binding within the nucleon must be found. DCSB has numerous corollaries that are crucial in determining the observable features of the Standard Model, some of which are detailed above. Another particularly important consequence is less well known. Namely, any interaction capable of creating pseudo-Goldstone modes as bound-states of a light dressed-quark and -antiquark, and reproducing the measured value of their leptonic decay constants, will necessarily also generate strong colour-antitriplet correlations between any two dressed quarks contained within a baryon. This assertion is based upon evidence gathered in two decades of studying two- and three-body bound-state problems in hadron physics. No counter examples are known; and the existence of such diquark correlations is also supported by lQCD \cite{Alexandrou:2006cq, Babich:2007ahS}. The properties of diquark correlations have been charted. Most importantly, diquarks are confined. Additionally, owing to properties of charge-conjugation, a diquark with spin-parity $J^P$ may be viewed as a partner to the analogous $J^{-P}$ meson \cite{Cahill:1987qr}. It follows that scalar, isospin-zero and pseudovector, isospin-one diquark correlations are the strongest in ground-state baryons; and whilst no pole-mass exists, the following mass-scales, which express the strength and range of the correlation and are each bounded below by the partnered meson's mass, may be associated with these diquarks \cite{Cahill:1987qr, Maris:2002yu, Alexandrou:2006cq, Babich:2007ahS}: $m_{[ud]_{0^+}} \approx 0.7-0.8\,$GeV, $m_{\{uu\}_{1^+}} \approx 0.9-1.1\,$GeV, with $m_{\{dd\}_{1^+}}=m_{\{ud\}_{1^+}} = m_{\{uu\}_{1^+}}$ in the isospin symmetric limit. Realistic diquark correlations are also soft. They possess an electromagnetic size that is bounded below by that of the analogous mesonic system, \emph{viz}.\ \cite{Maris:2004bp, Roberts:2011wyS}: $r_{[ud]_{0^+}} \gtrsim r_\pi$, $r_{\{uu\}_{1^+}} \gtrsim r_\rho$, with $r_{\{uu\}_{1^+}} > r_{[ud]_{0^+}}$. As with mesons, these scales are set by that associated with DCSB. The interaction depicted in Fig.\,\ref{figInteraction} characterises a realistic class that generates strong attraction between two quarks and thereby produces tight diquark correlations in analyses of the three valence-quark scattering problem. The existence of such correlations considerably simplifies analyses of baryon bound states because it reduces that task to solving a Poincar\'e covariant Faddeev equation \cite{Cahill:1988dx}, depicted in Fig.\,\ref{fig:Faddeev}. The three gluon vertex is not explicitly part of the bound-state kernel in this picture of the nucleon. Instead, one capitalises on the fact that phase-space factors materially enhance two-body interactions over $n\geq 3$-body interactions and exploits the dominant role played by diquark correlations in the two-body subsystems. Then, whilst an explicit three-body term might affect fine details of baryon structure, the dominant effect of non-Abelian multi-gluon vertices is expressed in the formation of diquark correlations. Such a nucleon is then a compound system whose properties and interactions are primarily determined by the quark$+$diquark structure evident in Fig.\,\ref{fig:Faddeev}. The quark$+$diquark structure of the nucleon is elucidated in Fig.\,\ref{figS1}, which provides a representation of the leading component of the nucleon's Faddeev amplitude: with the notation of Ref.\,\cite{Segovia:2014aza}, $s_1(|p|,\cos\theta)$, computed using the Faddeev kernel described therein. This function describes a piece of the quark$+$scalar-diquark relative momentum correlation. Notably, in this solution of a realistic Faddeev equation there is strong variation with respect to both arguments. Support is concentrated in the forward direction, $\cos\theta >0$, so that alignment of $p$ and $P$ is favoured; and the amplitude peaks at $(|p|\simeq M_N/6,\cos\theta=1)$, whereat $p_q \approx P/2 \approx p_d$ and hence the \emph{natural} relative momentum is zero. In the antiparallel direction, $\cos\theta<0$, support is concentrated at $|p|=0$, \emph{i.e}.\ $p_q \approx P/3$, $p_d \approx 2P/3$. \begin{figure}[t] \begin{minipage}[t]{\textwidth} \begin{minipage}{0.49\textwidth} \centerline{\includegraphics[width=0.9\textwidth]{F2S1.eps}} \end{minipage} \begin{minipage}{0.49\textwidth} \caption{\small \label{figS1} Representation of the dominant piece in the nucleon's eight-component Poincar\'e-covariant Faddeev amplitude: $s_1(|p|,\cos\theta)$. In the nucleon rest frame, this term describes that piece of the quark-diquark relative momentum correlation which possesses zero \emph{intrinsic} quark-diquark orbital angular momentum, \emph{i.e}.\ $L=0$ before the propagator lines are reattached to form the Faddeev wave function. Referring to Fig.\,\ref{fig:Faddeev}, $p= P/3-p_q$ and $\cos\theta = p\cdot P/\sqrt{p^2 P^2}$. (The amplitude is normalised such that its $U_0$ Chebyshev moment is unity at $|p|=0$.)} \end{minipage} \end{minipage} \end{figure} A nucleon (and kindred baryons) described by Fig.\,\ref{fig:Faddeev} is a Borromean bound-state, the binding within which has two contributions. One part is expressed in the formation of tight diquark correlations; but that is augmented by attraction generated by the quark exchange depicted in the shaded area of Fig.\,\ref{fig:Faddeev}. This exchange ensures that diquark correlations within the nucleon are fully dynamical: no quark holds a special place because each one participates in all diquarks to the fullest extent allowed by its quantum numbers. The continual rearrangement of the quarks guarantees, \emph{inter} \emph{alia}, that the nucleon's dressed-quark wave function complies with Pauli statistics. One cannot overstate the importance of appreciating that these fully dynamical diquark correlations are vastly different from the static, pointlike ``diquarks'' which featured in early attempts \cite{Lichtenberg:1967zz} to understand the baryon spectrum and to explain the so-called missing resonance problem \cite{Ripani:2002ss, Burkert:2012ee, Kamano:2013iva}. Modern diquarks are soft and enforce certain distinct interaction patterns for the singly- and doubly-represented valence-quarks within the proton. On the other hand, the number of states in the spectrum of baryons obtained from the Faddeev equation in Fig.\,\ref{fig:Faddeev} \cite{Chen:2012qrS} is similar to that found in the three-constituent quark model, just as it is in today's lQCD calculations of this spectrum \cite{Edwards:2011jj}. \section{Roper Resonance} The Roper has long resisted understanding. JLab experiments \cite{Dugger:2009pn, Aznauryan:2009mx, Aznauryan:2011qj, Mokeev:2015lda} have yielded precise nucleon-Roper ($N\to R$) transition form factors and thereby exposed the first zero seen in any hadron form factor or transition amplitude. It has also attracted much theoretical attention; but Ref.\,\cite{Segovia:2015hraS} provides the first continuum treatment of this problem using the power of relativistic quantum field theory. That study begins with a computation of the mass and wave function of the proton and its first radial excitation, using precisely the same framework that was used in a successful unification of nucleon and $\Delta$ properties \cite{Segovia:2014aza}. The masses are (in GeV): \begin{equation} \label{eqMasses} \mbox{nucleon\,(N)} = 1.18\,,\; \mbox{nucleon-excited\,(R)} = 1.73\,. \end{equation} These values correspond to the locations of the two lowest-magnitude $J^P=1/2^+$ poles in the three-quark scattering problem. The associated residues are the Faddeev wave functions, which depend upon $(p^2,p\cdot P)$, where $p$ is the quark-diquark relative momentum. Fig.\,\ref{figFA} depicts the zeroth Chebyshev moment of all $S$-wave components in that wave function. The appearance of a single zero in $S$-wave components of the Faddeev wave function associated with the first excited state in the three dressed-quark scattering problem indicates that this state is a radial excitation. \begin{figure}[t] \centerline{\includegraphics[width=0.8\linewidth]{F5.eps}} \caption{\label{figFA} \emph{Left}. Zeroth Chebyshev moment of all $S$-wave components in the nucleon's Faddeev wave function, which is obtained from $\Psi$ in Fig.\,\ref{fig:Faddeev}, by reattaching the dressed-quark and -diquark legs. \emph{Right}. Kindred functions for the first excited state. Legend: $S_1$ is associated with the baryon's scalar diquark; the other two curves are associated with the axial-vector diquark; and the normalisation is chosen such that $S_1(0)=1$.} \end{figure} It is worth dwelling on the masses in Eq.\,\eqref{eqMasses}. The empirical values of the pole locations for the first two states in the nucleon channel are \cite{Suzuki:2009njS}: $0.939\,$GeV and $1.36 - i \, 0.091\,$GeV, respectively. At first glance, these values appear unrelated to those in Eq.\,\eqref{eqMasses}. However, deeper consideration reveals \cite{Eichmann:2008ae,Eichmann:2008ef} that the kernel in Fig.\,\ref{fig:Faddeev} omits all those resonant contributions which may be associated with the meson-baryon final-state interactions that are resummed in dynamical coupled channels models in order to transform a bare-baryon into the observed state \cite{Suzuki:2009njS, Kamano:2013iva, Doring:2014qaa}. This Faddeev equation should therefore be understood as producing the dressed-quark core of the bound-state, not the completely-dressed and hence observable object. Clothing the nucleon's dressed-quark core by including resonant contributions to the kernel produces a physical nucleon whose mass is $\approx 0.2$\,GeV lower than that of the core \cite{Ishii:1998tw, Hecht:2002ej}. Similarly, clothing the $\Delta$-baryon's core lowers its mass by $\approx 0.16\,$GeV \cite{Suzuki:2009njS}. It is therefore no coincidence that (in GeV) $1.18-0.2 = 0.98\approx 0.94$, \emph{i.e}.\ the nucleon mass in Eq.\,\eqref{eqMasses} is 0.2\,GeV greater than the empirical value. A successful body of work on the baryon spectrum \cite{Chen:2012qr} and nucleon and $\Delta$ elastic and transition form factors \cite{Segovia:2014aza, Roberts:2015deaS} has been built upon precisely this knowledge of the impact of omitting resonant contributions and the magnitude of their effects. Crucial, therefore, is a comparison between the quark-core mass and the value determined for the mass of the meson-undressed bare-Roper in Ref.\cite{Suzuki:2009njS}, \emph{viz}. (in GeV) \begin{equation} \label{eqMassesA} \begin{array}{l|cc|c} & \mbox{R}_{{\rm core}}^{\mbox{\footnotesize \cite{Segovia:2015hraS}}} & \mbox{R}_{{\rm core}}^{\mbox{\footnotesize \cite{Wilson:2011aa}}} & \mbox{R}_{\rm bare}^{\mbox{\footnotesize \cite{Suzuki:2009njS}}} \\\hline \mbox{mass} & 1.73 & 1.72 & 1.76 \end{array}\,. \end{equation} The bare Roper mass in Ref.\,\cite{Suzuki:2009njS} agrees with both the quark-core result in Eq.\,\eqref{eqMasses} and that obtained using a refined treatment of a vector$\,\otimes\,$vector contact-interaction \cite{Wilson:2011aa}. This is notable because all these calculations are independent, with just one common feature; namely, an appreciation that measured hadrons can realistically be built from a dressed-quark core plus a meson-cloud. The agreement in Eq.\,\eqref{eqMassesA} is suggestive but not conclusive. As noted above, precise empirical information is available on the nucleon-Roper transition form factors. Thus, if the picture described herein is valid, then combining the solutions of the Faddeev equation in Fig.\ref{fig:Faddeev} for both the ground-state nucleon and its radial excitation should produce transition form factors that possess an understandable connection with available data and, indeed, match in accuracy the predictions for the nucleon and $\Delta$-baryon elastic and transition form factors obtained using the same approach \cite{Segovia:2014aza, Roberts:2015deaS}. The QCD-based Faddeev equation predicts the existence of diquark correlations within baryons; and it is interesting to compare the diquark content of the nucleon and its radial excitation. That information is contained in the zero-momentum value of the elastic Dirac form factor \cite{Wilson:2011aa, Segovia:2014aza}: \begin{equation} \label{Pdiquark} \begin{array}{l|cc} & N & R \\\hline P_{J=0} & 62\% & 62\% \\ P_{J=1} & 38\% & 38\%\\ \end{array}\,; \end{equation} namely, the relative strength of scalar and axial-vector diquark correlations in the nucleon and its radial excitation is the same. \begin{figure}[t] \begin{minipage}[t]{\textwidth} \begin{minipage}{0.49\textwidth} \centerline{\includegraphics[clip,width=0.85\linewidth]{F3F1sR.eps}} \end{minipage} \begin{minipage}{0.49\textwidth} \centerline{\includegraphics[clip,width=0.85\linewidth]{F3F2sR.eps}} \end{minipage} \end{minipage} \caption{\label{figFT} \emph{Left} -- Dirac transition form factor, $F_{1}^{\ast}(x)$, $x=Q^2/m_N^2$. Solid (black) curve, our prediction; dot-dashed (red) curve, contact-interaction result \cite{Wilson:2011aa}; dotted (green) curve, inferred meson-cloud contribution; and dashed (blue) curve, anticipated complete result. \emph{Right} -- Pauli transition form factor, $F_{2}^{\ast}(x)$, with same legend. Data in both panels: circles (blue) \cite{Aznauryan:2009mx}; triangle (gold) \cite{Dugger:2009pn}; squares (purple) \cite{Mokeev:2012vsa}; and star (green) \cite{Agashe:2014kda}.} \end{figure} The transition form factors are displayed in Fig.\,\ref{figFT}. The DSE predictions agree quantitatively in magnitude and qualitatively in trend with the data on $x\gtrsim 2$. Nothing was tuned to achieve those results. Instead, the nature of the DSE prediction owes fundamentally to the QCD-derived momentum-dependence of the propagators and vertices employed in formulating the problems. This point is further highlighted by the contact-interaction result: momentum-independent propagators and vertices yield predictions that disagree both quantitatively and qualitatively with the data. Experiment is evidently a sensitive tool with which to chart the nature of the quark-quark interaction and hence discriminate between competing theoretical hypotheses; and it is plainly settling upon an interaction that produces the momentum-dependent dressed-quark mass which characterises QCD \cite{Bowman:2005vx, Bhagwat:2006tu, Roberts:2007ji}. The mismatch between the DSE predictions and data on $x\lesssim 2$ is also revealing. Meson-cloud contributions are expected to be important on this domain \cite{Segovia:2014aza, Roberts:2015deaS}. An inferred form of that contribution is provided by the dotted (green) curves in Fig.\,\ref{figFT}. These curves have fallen to just 20\% of their maximum value by $x=2$ and vanish rapidly thereafter so that the DSE predictions alone remain as the explanation of the data. Importantly, the existence of a zero in $F_{2}^{\ast}$ is not influenced by meson-cloud effects, although its precise location is. (The same is true of the $p\to\Delta^+$ electric transition form factor.) Thus any realistic approach to the $p\to R$ transition must describe a zero in $F_{2}^{\ast}$. Numerous properties of the dressed-quark core of the proton's radial excitation were computed in Ref.\cite{Segovia:2015hraS}. In all cases they provide an excellent understanding of data on the proton-Roper transition and related quantities derived using coupled channels models. The DSE analysis is based on a sophisticated framework for the three-quark bound-state problem; all elements employed possess an unambiguous link with analogous quantities in QCD; and no parameters were varied in order to achieve success. One may thus conclude that the Roper resonance is at heart the nucleon's first radial excitation, consisting of a dressed-quark core augmented by a meson cloud that reduces its (Breit-Wigner) mass by approximately 20\%. The analysis shows that a meson-cloud obscures the quark core from long-wavelength probes; but that it is revealed to probes with $Q^2 \gtrsim 3 m_N^2$. This feature is typical of nucleon-resonance transitions; and hence measurements of resonance electroproduction on this domain can serve as an incisive probe of quark-gluon dynamics within the Standard Model, assisting greatly in mapping the evolution between the nonperturbative and perturbative domains of QCD. \section{Summary} It is worth reiterating a few significant points. Owing to the conformal anomaly, both gluons and quarks acquire mass dynamically in QCD. Those masses are momentum dependent, with large values at infrared momenta. The appearance of these nonperturbative running masses is intimately connected with confinement and DCSB; and the relationship between those phenomena entails that in a Universe with light-quarks, confinement is a dynamical phenomenon. Consequently, flux tubes are not the correct paradigm for confinement and it is meaningless to speak of linear potentials and Regge trajectories. In exploring the connection between QCD's gauge and matter sectors, top-down and bottom-up DSE analyses have converged on the form of the renormalisation-group-invariant interaction in QCD. This outcome paves the way to parameter-free predictions of hadron properties. Decades of studying the three valence-body problem in QCD have provided the evidence necessary to conclude that diquark correlations are a reality; but diquarks are complex objects so that their existence does not restrict the number of baryon states in any obvious way. This effort has led us to a sophisticated understanding of the nucleon, $\Delta$-baryon and Roper resonance: all may be viewed as Borromean bound-states, and the Roper is at heart the nucleon's first radial excitation. The progress summarised herein highlights the capacity of DSEs in QCD to connect the quark-quark interaction, expressed, for instance, in the dressed-quark mass function, $M(p^2)$, with predictions for a wide range of hadron observables; and therefore serves as strong motivation for new experimental studies of nucleon elastic and transition form factors, which exploit the full capacity of JLab\,12 in order to chart $M(p^2)$ and thereby explain the origin of more than 98\% of the visible mass in the Universe. \medskip \hspace*{-\parindent}\textbf{Acknowlegments}. The material described in this contribution is drawn from work completed in collaboration with numerous excellent people, to all of whom I am greatly indebted. I would also like to thank R.~Gothe, T.-S.\,H.~Lee, V.~Mokeev and T.~Sato for constructive input; and to express my gratitude to the sponsors of \emph{The 10th International Workshop on the Physics of Excited Nucleons (NSTAR2015)}, whose support helped enable my participation; and to the organisers and my hosts, who ensured both that the meeting was a success and my participation was enjoyable and fruitful. Work supported by U.S.\ Department of Energy, Office of Science, Office of Nuclear Physics, under contract no.~DE-AC02-06CH11357. \providecommand{\newblock}{}
\section*{Supplementary materials} \setcounter{figure}{0} \makeatletter \renewcommand{\thefigure}{S\@arabic\c@figure} \makeatother \setcounter{table}{0} \makeatletter \renewcommand{\thetable}{S\@arabic\c@table} \makeatother \setcounter{equation}{0} \makeatletter \renewcommand{\thetable}{S\@arabic\c@table} \makeatother \begin{itemize} \item Model equations for SCM4OPT v2.0 \item \cref*{tbl_hist} to \cref*{tbl_unc} \item \cref*{fig:ghg} to \cref*{fig:gwp_sensi} \end{itemize} \clearpage \newpage \section*{Model equations for SCM4OPT v2.0} \subsection*{Carbon cycle} The perturbation of the atmospheric carbon pool includes the following five components, as defined in \cref*{eq:cflux}. \begin{enumerate}[label=\protect\circled{\arabic*}] \item CO\textsubscript{2} emissions from fossil fuels and industrial sources; \item Anthropogenic CO\textsubscript{2} emissions into or removal from the terrestrial biosphere; \item CH\textsubscript{4} oxidation of fossil fuels; \item Carbon fluxes to or from the terrestrial biosphere due to CO\textsubscript{2} fertilization and climate feedback; \item Carbon uptake by oceans. \end{enumerate} \begin{equation} \label{eq:cflux} \Delta C_{atm}(t)=E^{ind}_{CO_{2}}(t) +E^{lnd}_{CO_{2}}(t)+E_{fCH_{4}}(t)-F_{bio}(t)-F_{ocn}(t) \end{equation} \nomenclature[C]{$\Delta C_{atm}(t)$}{Atmospheric carbon pool at time t} \nomenclature[C]{$E^{ind}_{CO_{2}}(t)$}{CO\textsubscript{2} emissions from fossil fuels and industrial sources} \nomenclature[C]{$E^{lnd}_{CO_{2}}(t)$}{Anthropogenic CO\textsubscript{2} emissions into or removal from the terrestrial biosphere} \nomenclature[C]{$E_{fCH_{4}}(t)$}{CH\textsubscript{4} oxidation of fossil fuels} \nomenclature[C]{$F_{bio}(t)$}{Carbon fluxes to or from the terrestrial biosphere due to CO\textsubscript{2} fertilization and climate feedback} \nomenclature[C]{$F_{ocn}(t)$}{Carbon uptake by oceans at time t} \subsubsection*{Terrestrial carbon cycle} Both the logarithmic and rectangular hyperbolic forms are adopted to simulate the CO\textsubscript{2} fertilization effects. First, the logarithmic description is defined as: \begin{equation} \label{eq:ferbeta} \beta_{log}(t)=1+\beta ln\left(\frac{C_{CO_{2}}(t)}{C_{CO_{2}}^{0}}\right) \end{equation} \nomenclature[C]{$\beta$}{CO\textsubscript{2} fertilization factor} \nomenclature[C]{$C_{CO_{2}}(t)$}{Atmospheric CO\textsubscript{2} concentration at time t} \nomenclature[C]{$C_{CO_{2}}^{0}$}{Preindustrial CO\textsubscript{2} concentration (278 ppm)} \nomenclature[C]{$\beta_{log}$}{Fertilization coefficient} Second, the rectangular hyperbolic description is given by Eq. (\ref{eq:fersigr}-\ref{eq:fersig}): \begin{equation} \label{eq:fersigr} \beta_{sig-r}=\frac{1+\beta \log\left(680 / C_{CO_{2}}^{0}\right)}{1+\beta \log\left(340 / C_{CO_{2}}^{0}\right)} \end{equation} \begin{equation} \label{eq:fersigb} \beta_{sig-b}=\frac{\left(680-C_{b}\right)-\beta_{sig-r} \left(340-C_{b}\right)}{\left(\beta_{sig-r}-1 \right)\left(680-C_{b}\right) \left(340-C_{b}\right)} \end{equation} \begin{equation} \label{eq:fersig} \beta_{sig}(t)=\frac{1/\left(C_{CO_{2}}^{0}-C_{b}\right)+\beta_{sig-b}}{1/\left(C_{CO_{2}}(t)-C_{b}\right)+\beta_{sig-b}} \end{equation} \nomenclature[C]{$\beta_{sig}(t)$}{Effective CO\textsubscript{2} fertilization factor at time t} \nomenclature[C]{$C_{b}$}{Concentration at which the net primary productivity (NPP) is zero, which is set to 31 ppm \cite{Gifford1993}} The CO\textsubscript{2} fertilization coefficient ($\beta_{fert}$) is given by: \begin{equation} \label{eq:betafert} \beta_{fert}(t)=(2-\beta_{m})\beta_{log}(t)+(\beta_{m}-1)\beta_{sig}(t) \end{equation} \nomenclature[C]{$\beta_{fert}$}{CO\textsubscript{2} fertilization coefficient} \nomenclature[C]{$\beta_{m}$}{Allocation coefficient between the two descriptions of the CO\textsubscript{2} fertilization effects} The NPP ($F_{NPP}(t)$) and heterotrophic respiration ($F_{rsp}(t)$) are defined as products of the initial carbon flux and a certain fertilization coefficient, considering an exponential temperature feedback effect (Eq. \ref{eq:fnpp} and \ref{eq:fresp}). \begin{equation} \label{eq:fnpp} F_{NPP}(t) = F^{0}_{NPP} \beta_{fert}(t) \exp\left(\sigma_{NPP} \Delta T(t)\right) \end{equation} \begin{equation} \label{eq:fresp} F_{rsp}(t)=F^{0}_{rsp} \beta_{fert}(t) \exp\left(\sigma_{rsp} \Delta T(t)\right) \end{equation} \nomenclature[C]{$F_{NPP}(t)$}{Net primary productivity (NPP) at time t} \nomenclature[C]{$F_{rsp}(t)$}{Heterotrophic respiration at time t} \nomenclature[C]{$F^{0}_{rsp}$}{Preindustrial heterotrophic respiration} \nomenclature[C]{$\sigma_{rsp}$}{Sensitivity to changes in temperature} The gross land-use emission levels are defined as the sums of the net land-use emissions and the corresponding regrowth, as shown in Eq. (\ref{eq:dpgross}-\ref{eq:dsgross}): \begin{equation} \label{eq:dpgross} D^{gross}_{P}(t)=E^{lnd}_{P}(t) + G_{P}(t) \end{equation} \begin{equation} \label{eq:dhgross} D^{gross}_{H}(t)=E^{lnd}_{H}(t) + G_{H}(t) \end{equation} \begin{equation} \label{eq:dsgross} D^{gross}_{S}(t)=E^{lnd}_{S}(t) + G_{S}(t) \end{equation} \nomenclature[C]{$D^{gross}_{i}(t)$}{Gross land-use emission level, $i \in \{P, H, S\} $ denote the living plant pool, the detritus pool and the soil pool, respectively} \nomenclature[C]{$E^{lnd}_{i}(t)$}{Net land-use emission level, $i \in \{P, H, S\} $ denote the living plant pool, the detritus pool and the soil pool, respectively} \nomenclature[C]{$G_{i}(t)$}{Carbon flux originating from regrowth, $i \in \{P, H, S\} $ denote the living plant pool, the detritus pool and the soil pool, respectively} Proportions of the net land-use emission levels are allocated as: \begin{enumerate}[label=\protect\circled{\arabic*}] \item Living plant pool; \item Detritus pool; \item Soil pool. \end{enumerate} Please refer to Eq. (\ref{eq:elndp}-\ref{eq:elnds}). \begin{equation} \label{eq:elndp} E^{lnd}_{P}(t)=\delta_{P} E^{lnd}_{CO_{2}}(t) \end{equation} \begin{equation} \label{eq:elndh} E^{lnd}_{H}(t)=\delta_{H} E^{lnd}_{CO_{2}}(t) \end{equation} \begin{equation} \label{eq:elnds} E^{lnd}_{S}(t)=\delta_{S} E^{lnd}_{CO_{2}}(t) \end{equation} \nomenclature[C]{$E^{lnd}_{P}(t)$}{Living plant pool} \nomenclature[C]{$E^{lnd}_{H}(t)$}{Detritus pool} \nomenclature[C]{$E^{lnd}_{S}(t)$}{Soil pool} \nomenclature[C]{$\delta_{i}$}{Land-use emission distribution factors} The regrowth here is defined to be linearly related to the relaxation time. \begin{equation} \label{eq:gp} G_{P}(t)=a_{P}+b_{P} \tau_{P}(t) \end{equation} \begin{equation} \label{eq:gh} G_{H}(t)=a_{H}+b_{H} \tau_{H}(t) \end{equation} \begin{equation} \label{eq:gs} G_{S}(t)=a_{S}+b_{S} \tau_{S}(t) \end{equation} \nomenclature[C]{$G_{i}(t)$}{Land-use regrowth, $a_{i}$ and $b_{i}$ are parameters that are estimated based on CMIP5 outputs} \nomenclature[C]{$\tau_{i}(t)$}{Regrowth relaxation time, $a_{i}$ and $b_{i}$ are parameters that are estimated based on CMIP5 outputs} The relaxation times are defined as follows: \begin{equation} \label{eq:taup} \tau_{P}(t)=\frac{P_{0}-\psi \int_{0}^{t} E^{lnd}_{P}(t')dt'}{dP_{0}} \end{equation} \begin{equation} \label{eq:tauh} \tau_{H}(t)=\frac{H_{0}-\psi \int_{0}^{t} E^{lnd}_{H}(t')dt'}{dH_{0}} \end{equation} \begin{equation} \label{eq:taus} \tau_{S}(t)=\frac{S_{0}-\psi \int_{0}^{t} E^{lnd}_{S}(t')dt'}{dS_{0}} \end{equation} \nomenclature[C]{$P_{0}$, $H_{0}$ and $S_{0}$}{Initial states of the living plant pool, the detritus pool and the soil pool, respectively} \nomenclature[C]{$\psi$}{Fraction of the gross deforestation without regrowth} \nomenclature[C]{$dP_{0}$, $dH_{0}$ and $dS_{0}$}{Initial decay rates} Therefore, the annual decay rates for the living plant pool, detritus pool and soil pool are defined as shown in Eq. (\ref{eq:dp}-\ref{eq:ds}): \begin{equation} \label{eq:dp} dP(t)=C_{P}(t) \frac{1}{\tau_{P}(t)} \end{equation} \begin{equation} \label{eq:dH} dH(t)=C_{H}(t) \frac{1}{\tau_{H}(t)} \exp\left(\sigma_{H} \Delta T(t)\right) \end{equation} \begin{equation} \label{eq:ds} dS(t)=C_{S}(t) \frac{1}{\tau_{S}(t)} \exp\left(\sigma_{S} \Delta T(t)\right) \end{equation} \nomenclature[C]{$C_{P}(t)$, $C_{H}(t)$ and $C_{S}(t)$}{Amounts of carbon remaining in the living plant pool, detritus pool and soil pool, respectively} \nomenclature[C]{$\sigma_{H}$ and $\sigma_{S}$}{Temperature feedback coefficients for the detritus pool and soil pool, respectively} The perturbations of carbon in the living plant pool, detritus pool and soil pool at time t are defined as shown in Eq. (\ref{eq:deltacp}-\ref{eq:deltacs}): \begin{equation} \label{eq:deltacp} \Delta P(t)=F_{NPP}(t) \nu_{P} - dP(t) - D^{gross}_{P}(t) - F_{rsp}(t) \end{equation} \begin{equation} \label{eq:deltacd} \Delta H(t)=F_{NPP}(t) \nu_{H} - dH(t) - D^{gross}_{H}(t) + dP(t) \rho_{p2d} \end{equation} \begin{equation} \label{eq:deltacs} \Delta S(t)=F_{NPP}(t) \nu_{S} - dS(t) - D^{gross}_{S}(t) + dP(t) \rho_{p2s} + dH(t) \delta_{d2s} \end{equation} \nomenclature[C]{$\Delta P(t)$, $\Delta H(t)$ and $\Delta S(t)$}{Total changes in the carbon levels for the living plant pool, the detritus pool and the soil pool, respectively} \nomenclature[C]{$\nu_{P}$, $\nu_{H}$ and $\nu_{S}$=1-$\nu_{P}$-$\nu_{H}$}{NPP partition factors for the living plant pool, the detritus pool and the soil pool, respectively} \nomenclature[C]{$\rho_{p2d}$ and $\rho_{p2s}$=1-$\rho_{p2d}$}{Fractions of $dP(t)$ that are distributed to the detritus and soil pools, respectively} \nomenclature[C]{$\delta_{d2s}$}{Fraction of $dH(t)$ going to the soil pool} Therefore, the carbon flux to or from the terrestrial biosphere can be calculated as follows: \begin{equation} \label{eq:fbio} F_{bio}(t)=\Delta P(t)+\Delta H(t)+\Delta S(t) \end{equation} \nomenclature[C]{$F_{bio}(t)$}{Carbon flux to or from the terrestrial biosphere} Here, we fitted the land net primary productivity (NPP), land surface net downward carbon flux (NBP), ocean surface downward carbon flux (fgco2) and CO\textsubscript{2} concentration of SCM4OPT v2.0 to the outputs of three CMIP5 experiments, namely, the historical, RCP26 and RCP85 experiments. The calibration procedures were performed in several steps, thereby minimizing the sum of squared errors (SSEs) with the associated variables. \subsubsection*{Oceanic carbon cycle} We apply the method proposed by \cite{Hartin2015, Hartin2016} to construct the oceanic carbon cycle (Eq.\ref{eq:dic}-\ref{eq:focn}). \begin{align} \label{eq:dic} &DIC(obx,t)\cdot\left(\frac{K_{1}(obx,t)}{H(obx,t)} + 2\frac{K_{1}(obx,t)K_{2}(obx,t)}{H(obx,t)^{2}}\right) = \nonumber\\ &\left(ALK(obx,t) - \frac{K_{B}(obx,t)BOR(obx)}{K_{B}(obx,t) + H(obx,t)} - \frac{K_{W}(obx,t)}{H(obx,t)} + H(obx,t) \right)\cdot \nonumber \\ &\left(1 + \frac{K_{1}(obx,t)}{H(obx,t)} + \frac{K_{1}(obx,t)K_{2}(obx,t)}{H(obx,t)^{2}} \right) \end{align} \nomenclature[C]{$DIC(obx,t)$}{Dissolved inorganic fraction for ocean box $obx$ and time $t$} \nomenclature[C]{$K_{1}(obx,t)$}{First acidity constant of carbonic acid for ocean box $obx$ and time $t$} \nomenclature[C]{$K_{2}(obx,t)$}{Second acidity constant of carbonic acid for ocean box $obx$ and time $t$} \nomenclature[C]{$H(obx,t)$}{Concentration of [H\textsuperscript{+}] for ocean box $obx$ and time $t$} \nomenclature[C]{$ALK(obx,t)$}{Total alkalinity for ocean box $obx$ and time $t$} \nomenclature[C]{$K_{B}(obx,t)$}{Dissociation constant of boric acid for ocean box $obx$ and time $t$} \nomenclature[C]{$BOR(obx)$}{Total boron level for ocean box $obx$} \nomenclature[C]{$K_{W}(obx,t)$}{Dissociation constant of water for ocean box $obx$ and time $t$} \begin{equation} \label{eq:co2sys} CO^{sys}_{2}(obx,t) = \frac{DIC(obx,t)}{1 + \frac{K_{1}(obx,t)}{H(obx,t)} + \frac{K_{1}(obx,t)K_{2}(obx,t)}{H(obx,t)^{2}}} \end{equation} \nomenclature[C]{$CO^{sys}_{2}(obx,t)$}{Dissolved inorganic (DIC) fraction of the system for ocean box $obx$ and time $t$} \begin{equation} \label{eq:pco2} pCO_{2}(obx,t) = \frac{CO^{sys}_{2}(obx,t)}{K_{H}(obx,t)} \end{equation} \nomenclature[C]{$pCO_{2}(obx,t)$}{Sea surface partial pressure for ocean box $obx$ and time $t$} \nomenclature[C]{$K_{H}(obx,t)$}{Henry’s constant for ocean box $obx$ and time $t$} \begin{equation} \label{eq:hco3} HCO_{3}(obx,t) = \frac{DIC(obx,t)}{1 + \frac{H(obx,t)}{K_{1}(obx,t)} + \frac{K_{2}(obx,t)}{H(obx,t)}} \end{equation} \nomenclature[C]{$HCO_{3}(obx,t)$}{Concentration of ocean bicarbonate $HCO_{3}^{-}$ for ocean box $obx$ and time $t$} \begin{equation} \label{eq:co3} CO_{3}(obx,t) = \frac{DIC(obx,t)}{1+\frac{H(obx,t)}{K_{2}(obx,t)} + \frac{H(obx,t)^{2}}{K_{1}(obx,t)K_{2}(obx,t)}} \end{equation} \nomenclature[C]{$CO_{3}(obx,t)$}{Concentration of ocean carbonate $CO_{3}^{2-}$ for ocean box $obx$ and time $t$} \begin{equation} \label{eq:k1} K_{1}(obx,t) = \frac{H(obx,t)HCO_{3}(obx,t)}{CO^{sys}_{2}(obx,t)} \end{equation} \begin{equation} \label{eq:k2} K_{2}(obx,t) = \frac{H(obx,t)CO_{3}(obx,t)}{HCO_{3}(obx,t)} \end{equation} \begin{equation} \label{eq:kb} K_{B}(obx,t) = \frac{H(obx,t)BOH_{4}(obx)}{BOH_{3}(obx)} \end{equation} \begin{equation} \label{eq:bor} BOR(obx) = 416.0 \cdot \frac{S}{35.0} = BOH_{4}(obx) + BOH_{3}(obx) \end{equation} \nomenclature[C]{$BOH_{4}(obx)$}{Ocean borate level} \nomenclature[C]{$BOH_{3}(obx)$}{Ocean boric acid level} \begin{equation} \label{eq:kw} K_{W}(obx,t) = \frac{H(obx,t)}{OH(obx,t)} \end{equation} \nomenclature[C]{$OH(obx,t)$}{Concentration of $OH^{-1}$} \begin{equation} \label{eq:fas} F_{as}(obx,t) = \kappa_{s}\alpha_{s}\cdot\left(C_{CO_{2}}(t)-pCO_{2}(obx,t)\right) \end{equation} \nomenclature[C]{$F_{as}(obx,t)$}{Carbon fluxes between the atmosphere and surface ocean box for ocean box $obx$ and time $t$, if applicable} \nomenclature[C]{$\kappa_{s}$}{CO\textsubscript{2} transfer velocity} \nomenclature[C]{$\alpha_{s}$}{Solubility of CO\textsubscript{2} in seawater} \begin{equation} \label{eq:focn} F_{ocn}(t) = \sum_{obx} F_{as}(obx,t) \end{equation} The carbon in the atmospheric pool is converted into the CO\textsubscript{2} concentration by: \begin{equation} \label{eq:cco2} C_{CO_{2}}(t)=\frac{C_{atm}(t)}{\alpha_{ppm2gtc}} \end{equation} \nomenclature[C]{$C_{CO_{2}}(t)$}{CO\textsubscript{2} concentration} \nomenclature[C]{$\alpha_{ppm2gtc}$}{Unit conversion factor from ppm into Gt C, = 2.123 Gt C ppm\textsuperscript{-1}} The radiative forcing from CO\textsubscript{2} can be obtained as: \begin{equation} \label{eq:fco2} f_{CO_{2}}(t)=\alpha_{CO_{2}} \log\frac{C_{CO_{2}}(t)}{C^{0}_{CO_{2}}} \end{equation} \nomenclature[C]{$f_{CO_{2}}(t)$}{CO\textsubscript{2} radiative forcing at time $t$} \nomenclature[C]{$\alpha_{CO_{2}}$}{Forcing scaling parameter, = $\frac{3.71}{\log\left(2\right)}$=5.35 Wm\textsuperscript{-2} \cite{Myhre1998}} \subsection*{CH\textsubscript{4}} The change in the CH\textsubscript{4} concentration is directly calculated from the CH\textsubscript{4} emissions from natural, industrial and land-use sources and from the CH\textsubscript{4} sinks in the troposphere (based on the lifetime of OH), stratosphere, and soil. \begin{equation} \label{eq:deltach4} \Delta C_{CH_{4}}(t) = \frac{E^{nat}_{CH_{4}} + E^{ind}_{CH_{4}}(t) + E^{lnd}_{CH_{4}}(t)}{\theta_{CH_{4}}}- \frac{C_{CH_{4}}(t-1)}{\tau^{tot}_{CH_{4}}(t-1)} \end{equation} \nomenclature[G]{$\Delta C_{CH_{4}}(t)$}{Change in the CH\textsubscript{4} concentration at time $t$} \nomenclature[G]{$E^{nat}_{CH_{4}}$}{Natural CH\textsubscript{4} emissions, =274.5 Mt CH\textsubscript{4} yr\textsuperscript{-1}} \nomenclature[G]{$E^{ind}_{CH_{4}}(t)$}{Industrial CH\textsubscript{4} emissions at time $t$} \nomenclature[G]{$E^{lnd}_{CH_{4}}(t)$}{Land-use source CH\textsubscript{4} emissions at time $t$} \nomenclature[G]{$\theta_{CH_{4}}$}{CH\textsubscript{4} conversion factor, 2.78 Tg ppb\textsuperscript{-1}} \nomenclature[G]{$\tau^{tot}_{CH_{4}}(t)$}{CH\textsubscript{4} lifetime at time $t$} \begin{equation} \label{eq:tautot} \frac{1}{\tau^{tot}_{CH_{4}}(t)} = \frac{1}{\tau^{init}_{CH_{4}}/\tau^{rel}_{OH}(t)} + \frac{1}{\tau^{soil}_{CH_{4}}} + \frac{1}{\tau^{oth}_{CH_{4}}} \end{equation} \nomenclature[G]{$\tau^{init}_{CH_{4}}$}{Initial lifetime of OH, =9.6 years} \nomenclature[G]{$\tau^{init}_{CH_{4}}/\tau^{rel}_{OH}(t)$}{CH\textsubscript{4} lifetime in the troposphere} \nomenclature[G]{$\tau^{soil}_{CH_{4}}$}{CH\textsubscript{4} lifetime in soil, =160 years} \nomenclature[G]{$\tau^{oth}_{CH_{4}}$}{CH\textsubscript{4} lifetime in the stratosphere, =120 years} The change in the tropospheric OH abundance relative to the level in 2000 is thus modeled as: \begin{align} \label{eq:reloh} \tau^{rel}_{OH}(t)=&\quad S_{\tau_{CH_{4}}} \Delta T_{2k}(t) +\left(\frac{C_{CH_{4}}(t)}{C^{2k}_{CH_{4}}}\right)^{S^{OH}_{CH_{4}}} \nonumber\\ &\cdot\exp\left(S^{OH}_{NO_{x}} \Delta E_{NO_{x}}(t)+S^{OH}_{CO} \Delta E_{CO}(t)+S^{OH}_{VOC} \Delta E_{VOC}(t)\right) \end{align} \nomenclature[G]{$S^{OH}_{x}$}{Sensitivities of the tropospheric OH to CH\textsubscript{4}, NO\textsubscript{x}, CO and VOC, with values of -0.32, +0.0042, -1.05E-4 and -3.15E-4, respectively} \nomenclature[G]{$C^{2k}_{CH_{4}}$}{CH\textsubscript{4} concentration in 2000} \nomenclature[G]{$S_{\tau_{CH_{4}}}$}{CH\textsubscript{4} temperature sensitivity coefficient of the tropospheric chemical reactions, =0.0316 \degree C\textsuperscript{-1} \cite{Meinshausen2011}} \nomenclature[G]{$\Delta T_{2k}(t)$}{Temperature change above the 2000 level} \subsection*{N\textsubscript{2}O} The feedback effect of the atmospheric N\textsubscript{2}O concentration on its own lifetime is approximated as: \begin{equation} \label{eq:taun2o} \tau_{N_{2}O}(t) = \tau^{init}_{N_{2}O} \left(\frac{C_{N_{2}O}(t)}{C^{2k}_{N_{2}O}}\right)^{S_{\tau_{N_{2}O}}} \end{equation} \nomenclature[G]{$C_{N_{2}O}(t)$}{N\textsubscript{2}O concentration} \nomenclature[G]{$\tau_{N_{2}O}(t)$}{N\textsubscript{2}O lifetime at time $t$} \nomenclature[G]{$\tau^{init}_{N_{2}O}$}{Initial N\textsubscript{2}O lifetime, =120 years} \nomenclature[G]{$C^{2k}_{N_{2}O}$}{N\textsubscript{2}O concentration in 2000} \nomenclature[G]{$S_{\tau_{N_{2}O}}$}{N\textsubscript{2}O sensitivity coefficient, =-0.05} The change in the atmospheric N\textsubscript{2}O concentration is calculated as: \begin{equation} \label{eq:deltan2o} \Delta C_{N_{2}O}(t) = \frac{E^{nat}_{N_{2}O} + E^{ind}_{N_{2}O}(t) + E^{lnd}_{N_{2}O}(t)}{\theta_{N_{2}O}}- \frac{C_{N_{2}O}(t-1)}{\tau_{N_{2}O}(t-1)} \end{equation} \nomenclature[G]{$\Delta C_{N_{2}O}(t)$}{N\textsubscript{2}O concentration change at time $t$} \nomenclature[G]{$E^{nat}_{N_{2}O}$}{Natural N\textsubscript{2}O emissions, =8.4 Mt N\textsubscript{2}O-N yr\textsuperscript{-1}} \nomenclature[G]{$E^{ind}_{N_{2}O}(t)$}{Industrial N\textsubscript{2}O emissions at time $t$} \nomenclature[G]{$E^{lnd}_{N_{2}O}(t)$}{Land-use source N\textsubscript{2}O emissions at time $t$} \nomenclature[G]{$\theta_{N_{2}O}$}{N\textsubscript{2}O conversion factor, =4.81 Tg ppb\textsuperscript{-1}} The radiative forcings of CH\textsubscript{4} ($f_{CH_{4}}(t)$) and N\textsubscript{2}O ($f_{N_{2}O}(t)$) are calculated following the standard IPCC (2001) methods \cite{ipcc2001ch6}, as shown in Eq. (\ref{eq:fch4}-\ref{eq:fmn}): \begin{align} \label{eq:fch4} f_{CH_{4}}(t)=&\alpha_{CH_{4}}\left(\sqrt{C_{CH_{4}}(t)}-\sqrt{C^{0}_{CH_{4}}}\right) - \nonumber\\ &\left(f_{mn}\left(C_{CH_{4}}(t),C^{0}_{N_{2}O}\right)-f_{mn}\left(C^{0}_{CH_{4}},C^{0}_{N_{2}O}\right)\right) \end{align} \begin{align} \label{eq:fn2o} f_{N_{2}O}(t)=&\alpha_{N_{2}O}\left(\sqrt{C_{N_{2}O}(t)}-\sqrt{C^{0}_{N_{2}O}}\right)- \nonumber\\ &\left(f_{mn}\left(C^{0}_{CH_{4}},C_{N_{2}O}(t)\right)-f_{mn}\left(C^{0}_{CH_{4}},C^{0}_{N_{2}O}\right)\right) \end{align} \nomenclature[G]{$\alpha_{CH_{4}}$}{CH\textsubscript{4} scaling factor, =0.036} \nomenclature[G]{$\alpha_{N_{2}O}$}{N\textsubscript{2}O scaling factor, =0.12} \nomenclature[G]{$C^{0}_{CH_{4}}$}{CH\textsubscript{4} preindustrial concentration, =721.9 ppb} \nomenclature[G]{$C^{0}_{N_{2}O}$}{N\textsubscript{2}O preindustrial concentration, =273.0 ppb} The function $f_{mn}(M,N)$ defining the overlap between CH\textsubscript{4} and N\textsubscript{2}O is: \begin{align} \label{eq:fmn} f_{mn}(M,N) =& \nonumber \\ & 0.47 \log\left(1+0.6356\left(\frac{MN}{10^{6}}\right)^{0.75}+0.007 \frac{M}{10^{3}} \left(\frac{MN}{10^{6}}\right)^{1.52}\right) \end{align} \nomenclature[G]{$M$ and $N$}{CH\textsubscript{4} and N\textsubscript{2}O concentration inputs} \subsection*{Halogenated gases} All the available halogenated gases are treated separately with regard to their concentrations \cite{Meinshausen2011, Hartin2015}: \begin{align} \label{eq:chc} C_{hc}(t+1,hc) =&\tau_{hc}\frac{E(t,hc)}{\mu_{hc}} \frac{\rho_{atm}}{m_{atm}}\cdot \nonumber \\ & \left(1-\exp(-\frac{1}{\tau_{hc}})\right)+C_{hc}(t,hc)\left(1-\exp(-\frac{1}{\tau_{hc}})\right) \end{align} \nomenclature[G]{$C_{hc}(t+1,hc)$}{Concentration (in ppt) of halogenated gas $hc$ in year $t+1$} \nomenclature[G]{$E(t,hc)$}{Halogenated gas emission level of $hc$ in kt yr\textsuperscript{-1}} \nomenclature[G]{$\mu_{hc}$}{Molar mass of halogenated gas $hc$} \nomenclature[G]{$\tau_{hc}$}{Lifetime of halogenated gas $hc$} \nomenclature[G]{$\rho_{atm}$}{Average density of air} \nomenclature[G]{$m_{atm}$}{Total mass of the atmosphere} The radiative forcing from each halogenated gas is given by: \begin{equation} \label{eq:fhc} f_{hc}(t,hc)=\alpha_{hc}\left(C_{hc}(t,hc)-C^{0}_{hc}\right) \end{equation} \nomenclature[G]{$f_{hc}(t,hc)$}{Halogenated gas radiative forcing} \nomenclature[G]{$\alpha_{hc}$}{Halogenated gas radiative efficiency} \nomenclature[G]{$C^{0}_{hc}$}{Halogenated gas preindustrial atmospheric concentration} \subsection*{Direct effect of aerosols} We update the estimation of the direct effects from aerosols based on \cite{Gasser2016}. The change in the sulfate burden is assessed to capture the radiative forcing impacts resulting from sulfate aerosols. \begin{align} \label{eq:so4} C_{SO_{4}}(t) = &C_{SO_{4}}^{0} + \alpha_{SO_{4}} \tau_{SO_{2}} \left(E_{SO_{2}}^{ind}(t) + E_{SO_{2}}^{lnd}(t) \right) + \nonumber \\ &\alpha_{SO_{4}} \tau_{dms} E_{dms}(t) + \Gamma_{SO_{4}} \Delta T_{as}(t) \end{align} \nomenclature[A]{$C_{SO_{4}}(t)$}{Sulfate concentration at time $t$} \nomenclature[A]{$C_{SO_{4}}^{0}$}{Initial sulfate concentration} \nomenclature[A]{$\alpha_{SO_{4}}$}{Conversion of $SO_{4}$ from Tg S into Tg (SO4)} \nomenclature[A]{$\tau_{SO_{2}}$}{Lifetime of $SO_{2}$} \nomenclature[A]{$E_{SO_{2}}^{ind}(t)$}{Industrial $SO_{2}$ emissions at time $t$} \nomenclature[A]{$E_{SO_{2}}^{lnd}(t)$}{Land-use $SO_{2}$ emissions at time $t$} \nomenclature[A]{$\tau_{dms}$}{Lifetime of dimethyl sulfide} \nomenclature[A]{$E_{dms}(t)$}{Dimethyl sulfide emissions} \nomenclature[A]{$\Gamma_{SO_{4}}$}{Sulfate sensitivity to the global mean temperature} \nomenclature[A]{$\Delta T_{as}(t)$}{Global mean temperature relative to 1850 at time $t$} Similarly, the concentration of primary organic aerosols (POAs) is defined as: \begin{align} \label{eq:poa} C_{POA}(t) = &C_{POA}^{0} + \tau_{OM}^{ind} \alpha_{POM} E_{OC}^{ind}(t) + \tau_{OM}^{lnd} \alpha_{POM} E_{OC}^{lnd}(t) + \nonumber \\ & \Gamma_{POA} \Delta T_{as}(t) \end{align} \nomenclature[A]{$C_{POA}(t)$}{Concentration of primary organic aerosols at time $t$} \nomenclature[A]{$C_{POA}^{0}$}{Initial concentration of primary organic aerosols} \nomenclature[A]{$\tau_{OM}^{ind}$}{Lifetime of industrial primary organic aerosols} \nomenclature[A]{$\alpha_{POM}$}{Conversion of POM from Tg (OC) into Tg (OM)} \nomenclature[A]{$E_{OC}^{ind}(t)$}{Industrial OC emissions at time $t$} \nomenclature[A]{$\tau_{OM}^{lnd}$}{Lifetime of land-use primary organic aerosols} \nomenclature[A]{$E_{OC}^{lnd}(t)$}{Land-use OC emissions at time $t$} \nomenclature[A]{$\Gamma_{POA}$}{Primary organic aerosol sensitivity to the global mean temperature} The black carbon (BC) concentration is: \begin{align} \label{eq:bc} C_{BC}(t) = &C_{BC}^{0} + \tau_{BC}^{ind} E_{BC}^{ind}(t) + \tau_{BC}^{lnd} E_{BC}^{lnd}(t) + \nonumber \\ & \Gamma_{BC} \Delta T_{as}(t) \end{align} \nomenclature[A]{$C_{BC}(t)$}{Concentration of BC at time $t$} \nomenclature[A]{$C_{BC}^{0}$}{Initial concentration of BC} \nomenclature[A]{$\tau_{BC}^{ind}$}{Lifetime of industrial BC} \nomenclature[A]{$E_{BC}^{ind}(t)$}{Industrial BC emissions at time $t$} \nomenclature[A]{$\tau_{BC}^{lnd}$}{Lifetime of land-use BC} \nomenclature[A]{$E_{BC}^{lnd}(t)$}{Land-use BC emissions in time $t$} \nomenclature[A]{$\Gamma_{BC}$}{BC sensitivity to the global mean temperature} The concentration of nitrate aerosols is: \begin{align} \label{eq:no3} C_{NO_{3}}(t) = &C_{NO_{3}}^{0} + \tau_{NO_{x}} \left(E_{NO_{x}}^{ind}(t) + E_{NO_{x}}^{lnd}(t) \right) + \nonumber \\ &\tau_{NH_{3}} \left(E_{NH_{3}}^{ind}(t) + E_{NH_{3}}^{lnd}(t) \right) + \Gamma_{NO_{3}} \Delta T_{as}(t) \end{align} \nomenclature[A]{$C_{NO_{3}}(t)$}{Concentration of nitrate aerosols at time $t$} \nomenclature[A]{$C_{NO_{3}}^{0}$}{Initial concentration of nitrate aerosols} \nomenclature[A]{$\tau_{NO_{x}}$}{Lifetime of $NO_{x}$} \nomenclature[A]{$E_{NO_{x}}^{ind}(t)$}{Industrial $NO_{x}$ emissions at time $t$} \nomenclature[A]{$E_{NO_{x}}^{lnd}(t)$}{Land-use $NO_{x}$ emissions at time $t$} \nomenclature[A]{$\tau_{NH_{3}}$}{Lifetime of $NH_{3}$} \nomenclature[A]{$E_{NH_{3}}^{ind}(t)$}{Industrial $NH_{3}$ emissions at time $t$} \nomenclature[A]{$E_{NH_{3}}^{lnd}(t)$}{Land-use $NH_{3}$ emissions at time $t$} \nomenclature[A]{$\Gamma_{NO_{3}}$}{$NO_{x}$ sensitivity to the global mean temperature} The concentration of secondary organic aerosols (SOAs) is: \begin{align} \label{eq:soa} C_{SOA}(t) = &C_{SOA}^{0} + \tau_{VOC}\left(E_{VOC}^{ind}(t) + E_{VOC}^{lnd}(t) \right) + \tau_{BVOC} E_{BVOC}(t) + \nonumber \\ & \Gamma_{SOA} \Delta T_{as}(t) \end{align} \nomenclature[A]{$C_{SOA}(t)$}{Concentration of SOAs at time $t$} \nomenclature[A]{$C_{SOA}^{0}$}{Initial concentration of SOAs} \nomenclature[A]{$\tau_{VOC}$}{Lifetime of nonmethane volatile organic compounds (NMVOCs)} \nomenclature[A]{$E_{VOC}^{ind}(t)$}{Industrial NMVOC emissions at time $t$} \nomenclature[A]{$E_{VOC}^{lnd}(t)$}{land-use NMVOC emissions at time $t$} \nomenclature[A]{$\tau_{BVOC}$}{Lifetime of biogenic NMVOCs} \nomenclature[A]{$E_{BVOC}(t)$}{Biogenic NMVOC emissions at time $t$} \nomenclature[A]{$\Gamma_{SOA}$}{NMVOCs sensitivity to the global mean temperature} Thus, the direct radiative forcing caused by aerosols and pollutants is: \begin{equation} \label{eq:aero} f_{aero}(t)= \alpha_{aero}^{rf} \delta C_{aero}(t) \end{equation} \nomenclature[A]{$f_{aero}(t)$}{Direct radiative forcing of aerosol $aero$ at time $t$} \nomenclature[A]{$\alpha_{aero}^{rf}$}{Radiative efficiency of aerosol $aero$} \nomenclature[A]{$\delta C_{aero}(t)$}{Aerosol $aero$ concentration at time $t$} \subsection*{Mineral dust aerosols} The historical radiative forcing from mineral dust aerosols is obtained from MAGICC 6.0 \cite{Meinshausen2011}. The future forcing level is assumed to remain at a constant value of -0.1 Wm\textsuperscript{-2} after 2005. \begin{equation} \label{eq:fmd} f_{mindust}(t)=-0.1 \end{equation} \nomenclature[A]{$f_{mindust}(t)$}{Radiative forcing from mineral dust} \subsection*{Cloud effects} The tropospheric burden of soluble aerosols can be obtained by: \begin{equation} \label{eq:csolaero} C_{solu}(t) = C_{solu}^{0} + \sum_{aero \in SO_{4}, POA,BC,NO_{3},SOA} \alpha_{solu}^{aero} \left(C_{aero}(t) - C_{aero}^{0}\right) \end{equation} \nomenclature[A]{$C_{solu}(t)$}{Number concentrations of soluble aerosols at time $t$} \nomenclature[A]{$C_{solu}^{0}$}{Initial number concentrations of soluble aerosols} \nomenclature[A]{$\alpha_{solu}^{aero}$}{Soluble fraction for aerosol $aero$} \nomenclature[A]{$C_{aero}(t)$}{Aerosol concentration at time $t$} \nomenclature[A]{$C_{aero}^{0}$}{Initial aerosol concentration} The cloud forcing effects are estimated by: \begin{equation} \label{eq:fcloud} f_{cloud}(t) = f_{BC}(t) \kappa_{adj}^{BC} + \phi_{solu} ln \left(1 + \frac{\Delta C_{solu}(t)}{C_{solu}^{0}} \right) \end{equation} \nomenclature[A]{$f_{cloud}(t)$}{Cloud forcing effects at time $t$} \nomenclature[A]{$f_{BC}(t)$}{BC radiative forcing at time $t$} \nomenclature[A]{$\kappa_{adj}^{BC}$}{Adjustment coefficient of the BC radiative forcing to the cloud forcing effect} \nomenclature[A]{$\phi_{solu}$}{Intensity effect coefficient for soluble aerosols} \subsection*{Stratospheric ozone} The equivalent effective stratospheric chlorine (EESC) concentration is calculated as: \begin{equation} \label{eq:eesc} C_{EESC}(t)=a_{EESC}\left(\sum_{Cl}n_{Cl}f_{Cl}C_{hc}(t,Cl)+\alpha_{br}\sum_{Br}n_{Br}f_{Br}C_{hc}(t,Br)\right) \end{equation} \nomenclature[O]{$C_{EESC}(t)$}{EESC concentration at time $t$} \nomenclature[O]{$n_{Cl}$ and $n_{Br}$}{Numbers of chlorine and bromine atoms, respectively} \nomenclature[O]{$f_{Cl}$ and $f_{Br}$}{Release efficiencies of stratospheric halogens for chlorine and bromine, respectively} \nomenclature[O]{$C_{hc}(t,Cl)$ and $C_{hc}(t,Br)$}{Gas mixing rates in the stratosphere for chlorine and bromine, respectively} \nomenclature[O]{$\alpha_{br}$}{Ratio of effectiveness in ozone depletion between bromine and chlorine} \nomenclature[O]{$a_{EESC}$}{Fractional release factor of EESC} The concentration of stratospheric ozone is: \begin{align} \label{eq:co3s} C_{O3s}(t) = & C_{O3s}^{0} + \xi_{EESC}^{O3s}\left(C_{EESC}(t) - C_{EESC}^{0} \right) + \nonumber \\ &\xi_{N_{2}O}^{O3s}\left(1 - \frac{C_{EESC}(t) - C_{EESC}^{0}}{C_{EESC}^{X}} \right) \Delta C_{N_{2}O}^{lag}(t) + \Gamma_{O3s} \Delta T_{as}(t) \end{align} \nomenclature[O]{$C_{O3s}(t)$}{Stratospheric ozone concentration at time $t$} \nomenclature[O]{$C_{O3s}^{0}$}{Initial stratospheric ozone concentration} \nomenclature[O]{$\xi_{EESC}^{O3s}$}{Stratospheric ozone sensitivity to EESC} \nomenclature[O]{$C_{EESC}^{0}$}{Initial EESC concentration} \nomenclature[O]{$\xi_{N_{2}O}^{O3s}$}{Stratospheric ozone sensitivity to $N_{2}O$} \nomenclature[O]{$C_{EESC}^{X}$}{Nonlinear interaction parameter between the chlorine and nitrogen chemistries} \nomenclature[O]{$\Delta C_{N_{2}O}^{lag}(t)$}{$N_{2}O$ concentration with time lag at time $t$} \nomenclature[O]{$\Gamma_{O3s}$}{Stratospheric ozone sensitivity to the global mean temperature} Thus, the forcing effect of the stratospheric ozone burden can be obtained by: \begin{equation} \label{eq:fo3s} f_{O3s}(t)=\alpha_{O3s}^{rf}\left(C_{O3s}(t)-C_{O3s}^{0}\right) \end{equation} \nomenclature[O]{$f_{O3s}(t)$}{Forcing effect of the stratospheric ozone burden at time $t$} \nomenclature[O]{$\alpha_{O3s}$}{Stratospheric ozone radiative efficiency} \subsection*{Tropospheric ozone} The tropospheric ozone concentration is estimated to be: \begin{align} \label{eq:co3t} C_{O3t}(t) = &C_{O3t}^{0} + \xi_{CH_{4}}^{O3t} ln \left(1+\frac{\Delta C_{CH_{4}}(t)}{C_{CH_{4}}^{0}} \right) + \Gamma_{O3t} \Delta T_{as}(t) + \nonumber \\ &\sum_{aero \in NO_{x},CO,VOC} \xi_{aero}^{O3t} \left(E_{aero}^{ind}(t) + E_{aero}^{lnd}(t)\right) \end{align} \nomenclature[O]{$C_{O3t}(t)$}{Tropospheric ozone concentration at time $t$} \nomenclature[O]{$C_{O3t}^{0}$}{Initial tropospheric ozone concentration} \nomenclature[O]{$\xi_{CH_{4}}^{O3t}$}{Tropospheric ozone sensitivity of the $CH_{4}$ effect} \nomenclature[O]{$\Gamma_{O3t}$}{Tropospheric ozone sensitivity to the global mean temperature} \nomenclature[O]{$\xi_{aero}^{O3t}$}{Tropospheric ozone sensitivity of aerosol $aero$} The radiative forcing from the tropospheric ozone is then calculated as: \begin{equation} \label{eq:fo3t} f_{O3t}(t)=\alpha_{O3t}^{rf}\left(C_{O3t}(t)-C_{O3t}^{0}\right) \end{equation} \nomenclature[O]{$f_{O3t}(t)$}{Radiative forcing of the tropospheric ozone at time $t$} \nomenclature[O]{$\alpha_{O3t}$}{Tropospheric ozone radiative efficiency} \subsection*{Stratospheric water vapor from CH\textsubscript{4} oxidation} The forcing effect of the stratospheric water vapor from CH\textsubscript{4} oxidation $f_{H_{2}O}(t)$ is calculated by: \begin{equation} \label{eq:fh2o} f_{H_{2}O}(t)=\alpha_{H_{2}O}^{rf} \sqrt{C_{CH_{4}}^{0}} \left(\sqrt{1 + \frac{\Delta C_{CH_{4}}^{lag}(t)}{C_{CH_{4}}^{0}} } - 1 \right) \end{equation} \nomenclature[T]{$f_{H_{2}O}(t)$}{Forcing effect of the stratospheric water vapor from CH\textsubscript{4} oxidation at time $t$} \nomenclature[T]{$\alpha_{H_{2}O}^{rf}$}{Stratospheric water vapor radiative efficiency} \nomenclature[T]{$\Delta C_{CH_{4}}^{lag}(t)$}{$CH_{4}$ concentration with time lag at time $t$} \subsection*{Land-use albedo} The forcing effect from the land-use albedo is estimated according to the annual mean albedo at the biome and regional scales, using the changes in regional land cover as input following the methods described in ref \cite{Gasser2016}. \begin{equation} \label{eq:flcc} f_{LCC}(t)=-\pi_{trans} \phi_{rsds} \sum_{bio} \alpha_{LCC}^{bio} \frac{\Delta A_{LCC}^{bio}(t)}{\Delta A_{Earth}} \end{equation} \nomenclature[S]{$f_{LCC}(t)$}{Land-use albedo forcing at time $t$} \nomenclature[S]{$\pi_{trans}$}{Global shortwave and upward transmittance} \nomenclature[S]{$\phi_{rsds}$}{Radiative shortwave and downward flux at the surface} \nomenclature[S]{$\alpha_{LCC}^{bio}$}{Yearly averaged albedo at the biome scale} \nomenclature[S]{$\Delta A_{LCC}^{bio}(t)$}{Surface area change in the biome at time $t$} \nomenclature[S]{$\Delta A_{Earth}$}{Surface area of Earth} \subsection*{BC on snow} The forcing effect of BC on snow is determined as a linear function of the BC emission level: \begin{equation} \label{eq:fbcsnow} f_{BCSnow}(t)=a_{BC}+ b_{BC} \left(E_{BC}^{ind}(t) + E_{BC}^{lnd}(t)\right) \end{equation} \nomenclature[T]{$f_{BCSnow}(t)$}{Forcing effect of BC on snow} \nomenclature[T]{$a_{BC}$ and $b_{BC}$}{Forcing scaling parameters of BC on snow} \subsection*{Natural sources} Regarding the various natural sources, the volcanic and solar forcings are assumed to be the natural forcing inputs for CMIP6. \begin{equation} \label{eq:fvolc} f_{volc}(t)=f_{volc}^{CMIP6}(t) \end{equation} \begin{equation} \label{eq:fsolar} f_{solar}(t)=f_{solar}^{CMIP6}(t) \end{equation} \nomenclature[T]{$f_{volc}(t)$}{Volcanic forcing effects at time $t$} \nomenclature[T]{$f_{volc}^{CMIP6}(t)$}{Volcanic forcing effects for CMIP6 at time $t$} \nomenclature[T]{$f_{solar}(t)$}{Solar irradiance forcing effects at time $t$} \nomenclature[T]{$f_{solar}^{CMIP6}(t)$}{Solar irradiance forcing effects for CMIP6 at time $t$} \subsection*{Global mean temperature} The estimation of the global mean temperature is based on the Diffusion Ocean Energy balance CLIMate (DOECLIM) model by using the total radiative forcing as input \cite{Tanaka2007,Wong2017}. Here, we reestimated the climate sensitivity, vertical ocean diffusivity and radiative forcing coefficient for CO\textsubscript{2} doubling based on the CMIP5 outputs related to each available GCM. The detailed descriptions and equations are contained in the references \cite{Tanaka2007,Wong2017}. For the simple climate module, the time step was calibrated to be 1/6 year for SCM4OPT v2.0 to avoid possible convergence problems when calculating the ocean carbon cycle \cite{Hartin2015}. The calibrated results are shown in \cref*{fig:rfc_ghg,fig:rfc_aero,fig:rfc_other,fig:rfc_natr,fig:rfc_tot,fig:rfc_LCC,fig:scm_tatm}. We also included the results produced by other models or associated statistical records for comparison purposes. \printnomenclature \newpage \begin{table} \small \centering \caption{Datasets of historical emissions} \label{tbl_hist} \begin{tabular}{ l l l l l } \hline Source & Period & Emission & Format & Reference \\ \hline CEDS & 1750-2014 & {\makecell{CO\textsubscript{2}, CH\textsubscript{4}, BC, CO, NH\textsubscript{3}, \\ NMVOC, NO\textsubscript{x}, OC, SO\textsubscript{2}}} & Spatial (sectoral) & {Ref \cite{Hoesly2018}} \\ EDGAR v4.3.2 & 1970-2012 & {\makecell{CO\textsubscript{2}, CH\textsubscript{4}, N\textsubscript{2}O, BC, CO, NH\textsubscript{3}, \\ NMVOC, NO\textsubscript{x}, OC, SO\textsubscript{2}}} & {\makecell{Regional and sectoral \\ /Spatial (sectoral)}} & {Ref \cite{Aardenne2018}} \\ EDGAR v4.2 (*) & 1970-2008 & {\makecell{CO\textsubscript{2}, CH\textsubscript{4}, N\textsubscript{2}O, CO, NH\textsubscript{3}, F-gases,\\ NF3, SF6, NMVOC, NO\textsubscript{x}, SO\textsubscript{2}}} & {\makecell{Regional and sectoral \\ /Spatial (sectoral)}} & {Ref \cite{JRCPBL2011}} \\ PRIMAP v2.0 (**) & 1850-2016 & {\makecell{CO\textsubscript{2}, CH\textsubscript{4}, N\textsubscript{2}O, F-gases, HFCs, \\ PFCs, NF3, SF6}} & Spatial (sectoral) & {Ref \cite{Gutschow2016}} \\ RCP historical & 1850-2000 & {\makecell{CH\textsubscript{4}, BC, CO, NH\textsubscript{3}, NO\textsubscript{x}, OC, \\ SO\textsubscript{2}, VOC}} & Spatial (sectoral) & {Ref \cite{Lamarque2009}} \\ \hline \end{tabular} \begin{tablenotes} \item (*) Halogenated gas emissions are used in EDGAR v4.3.2 since these emissions are not included in EDGAR v4.3.2. \item (**) N\textsubscript{2}O is employed in the other datasets when not included. \end{tablenotes} \end{table} \newpage \normalsize \begin{table} \small \centering \begin{threeparttable} \caption{Datasets of the future scenarios at the various forcing levels} \label{tbl_sce} \begin{tabular}{ l l l l l } \hline \makecell{Forcing levels\\ (Wm\textsuperscript{-2})} & Source & Scenario & Reference \\ \hline 1.9 & AIM/CGE & {SSP1-1.9, SSP2-1.9} & {Ref \cite{Fujimori2018}} \\ 1.9 & IAMC & SSP1-1.9 & {Ref \cite{Gidden2019}} \\ 2.6 & AIM/CGE & {SSP1-2.6, SSP2-2.6, SSP3-2.6(*), SSP4-2.6, SSP5-2.6} & {Ref \cite{Fujimori2018}} \\ 2.6 & IAMC & SSP1-2.6,{Ref \cite{Gidden2019}} \\ 3.4 & AIM/CGE & {SSP1-3.4, SSP2-3.4, SSP3-3.4, SSP4-3.4, SSP5-3.4} & {Ref \cite{Fujimori2018}} \\ 3.4 & IAMC & {SSP4-3.4, SSP5-3.4-OS} & {Ref \cite{Gidden2019}} \\ 4.5 & AIM/CGE & {SSP1-4.5, SSP2-4.5, SSP3-4.5, SSP4-4.5, SSP5-4.5} & {Ref \cite{Fujimori2018}} \\ 4.5 & IAMC & SSP2-4.5 & {Ref \cite{Gidden2019}} \\ 6.0 & AIM/CGE & {SSP1-Baseline, SSP2-6.0, SSP3-6.0, SSP4-Baseline, SSP5-6.0} & {Ref \cite{Fujimori2018}} \\ 6.0 & IAMC & {SSP3-LowNTCF(**), SSP4-6.0} & {Ref \cite{Gidden2019}} \\ 7.0 & AIM/CGE & {SSP2-Baseline, SSP3-Baseline} & {Ref \cite{Fujimori2018}} \\ 7.0 & IAMC & SSP3-7.0 & {Ref \cite{Gidden2019}} \\ 8.5 & AIM/CGE & SSP5-Baseline & {Ref \cite{Fujimori2018}} \\ 8.5 & IAMC & SSP5-8.5 & {Ref \cite{Gidden2019}} \\ \hline \end{tabular} \begin{tablenotes} \small \item (*) The SSP3-2.6 scenario was not available in Table 2 in ref \cite{Fujimori2018}, however, the dataset was provided in https://doi.org/10.7910/DVN/4NVGWA. We retained SSP3-2.6 in our analysis. \item (**) The target forcing level of SSP3-LowNTCF was 6.3 Wm\textsuperscript{-2} (Table 1 in ref\cite{Gidden2019}). We classified it to the closest forcing level of 6.0 Wm\textsuperscript{-2}. \end{tablenotes} \end{threeparttable} \end{table} \newpage \normalsize \begin{table} \centering \begin{threeparttable} \caption{Datasets of CO\textsubscript{2} emissions from land-use change} \label{tbl_luc} \begin{tabular}{l l l l} \hline Source & Period & Format & Reference \\ \hline {Houghton et al. (2012) (*)} & 1960-2010 & Regional & {Ref \cite{Houghton2012,Hansis2015}} \\ MPIMET& 1850-2005& Spatial grid& {Ref \cite{Raddatz2010}}\\ PRIMAP v1.2& 1850-2015& Regional& {Ref \cite{Gutschow2016}}\\ {Smith and Rothwell (2013)}& 1850-2010& Regional& {Ref\cite{Smith2013}} \\ \hline \end{tabular} \begin{tablenotes} \small \item (*) An updated version \cite{Hansis2015} was used, downloaded from http://www.globalcarbonatlas.org/en/CO2-emissions. \end{tablenotes} \end{threeparttable} \end{table} \newpage {\linespread{1.3} \begin{table} \caption{Please refer to the spreadsheet in the supplementary tables. Atmospheric drivers and radiative forcings. Note: This table is compiled based on Figure SPM.5 in IPCC (2013) and references Gasser et al. (2016) and Su et al. (2017). All emissions from international shipping activities are regional nonattributable.} \label{tbl_region} \end{table} } {\linespread{1.3} \begin{table} \caption{Please refer to the spreadsheet in the supplementary tables, Mapping of the eleven regions. Note: The spatial mapping is based on Natural Earth data (https://www.naturalearthdata.com), and 1:10 m cultural vectors are applied. Columns 2 and 3 are extracted from Natural Earth maps. ADM0\_A3 are the alpha-3 codes defined for each country or region.} \label{tbl_sector} \end{table} } {\linespread{1.3} \begin{table} \centering \begin{threeparttable} \footnotesize \caption{Equilibrium climate sensitivity (ECS) used in this study compared to other references} \label{tbl_clims} \begin{tabular}{l r r r r r r r r} \hline Model & This study & Ref\cite{Andrews2012} & Ref\cite{Forster2013} & Ref\cite{IPCC2014} & Ref\cite{Sherwood2014} & Ref\cite{Gregory20140417} & Ref\cite{Tsutsui2017} & Ref\cite{Mauritzen2017}\\ \hline ACCESS1-0&3.88& -&3.83&3.8&3.79&3.45&3.76&3.8\\ ACCESS1-3&3.59& -& -& -&3.45&2.8&3.22& -\\ bcc-csm1-1&2.80& -&2.82&2.8&2.88& -&2.73&2.8\\ bcc-csm1-1-m&2.79& -&2.87&2.9& -& -&3.1& -\\ BNU-ESM&4.11(*)& -& -&4.1&4.11& -&4.08&4.1\\ CanESM2&3.66&3.69&3.69&3.7&3.68&3.6&3.63&3.7\\ CCSM4&2.90& -&2.89&2.9&2.92& -&2.8&2.9\\ CNRM-CM5&3.27&3.25&3.25&3.3&3.25&3.16&3.07&3.3\\ CNRM-CM5-2&3.46& -& -& -& -& -& -& -\\ CSIRO-Mk3-6-0&4.24&4.08&4.08&4.1&3.99&2.96&3.55&4.1\\ FGOALS-g2&3.45(*)& -& -& -&3.45& -&2.46&3.45\\ FGOALS-s2&4.16(*)& -&4.17& -&4.16& -&4.14&4.16\\ GFDL-CM3&3.97&3.97&3.97&4&3.96&3.2&3.85&4\\ GFDL-ESM2G&2.57&2.39&2.39&2.4&2.38& -&1.81& -\\ GFDL-ESM2M&2.71&2.44&2.44&2.4&2.41& -&2.23&2.4\\ HadGEM2-ES&4.58&4.59&4.59&4.6&4.55&4.32&4.6&4.6\\ IPSL-CM5A-LR&4.05&4.13&4.13&4.1&4.1&3.46&3.92&4.1\\ IPSL-CM5A-MR&4.11& -& -& -& -&3.4& -& -\\ IPSL-CM5B-LR&2.64& -&2.61&2.6&2.59& -&2.43&2.6\\ MIROC5&2.70&2.72&2.72&2.7&2.71&2.12&2.22&2.7\\ MIROC-ESM&4.67&4.67&4.67&4.7&4.65&3.47&3.88&4.7\\ MPI-ESM-LR&3.64&3.63&3.63&3.6&3.6&3.08&3.27& -\\ MPI-ESM-MR&3.48& -& -& -&3.44&2.94&3.14&3.4\\ MPI-ESM-P&3.47&3.45&3.45& -&3.42& -&3.07& -\\ MRI-CGCM3&2.60&2.6&2.6&2.6&2.59&2.19&2.52&2.6\\ NorESM1-M&2.82&2.8&2.8&2.8&2.83&2.11&2.48&2.8 \\ \hline \end{tabular} \begin{tablenotes} \small \item (*) The ECS values for BNU-ESM, FGOALS-g2 and FGOALS-s2 are retrieved from ref\cite{Sherwood2014}. All other values in this study are estimated by using the standard regression method \cite{Gregory2004,Forster2013} based on the available CMIP5 experiments of the preindustrial control (piControl) and abrupt 4xCO\textsubscript{2} scenario (abrupt4xCO2). (**) Based on Table 9.5 in IPCC AR5-WG1 \cite{IPCC2014}. \end{tablenotes} \end{threeparttable} \end{table} } {\linespread{1.3} \begin{table} \caption{Please refer to the spreadsheet in the supplementary tables, Sector mapping. Note: (*) The AIM/CGE negative CO2 and land-use CO2 emissions are extracted from the regional dataset rather than from the spatial dataset. (**) Forest burning and grassland burning levels are adjusted based on the percentage share in 2012 in EDGAR v4.3.2.} \label{tbl_driver} \end{table} } \begin{table} \small \centering \caption{An overview of the iterations regarding the climate system, scenarios, regions, sectors and emissions} \label{tbl_unc} \begin{tabular}{ l l l } \hline Sources & Quantity & Composition \\ \hline Climate system & 63 & {\makecell[l]{Terrestrial carbon cycle, ocean carbon cycle, aerosols and \\ pollutants, climate influences, cloud effects, climate system}} \\ Scenarios & 7 & {\makecell[l]{1.9 Wm\textsuperscript{-2}, 2.6 Wm\textsuperscript{-2}, 3.4 Wm\textsuperscript{-2}, 4.5 Wm\textsuperscript{-2}, 6.0 Wm\textsuperscript{-2}, 7.0 Wm\textsuperscript{-2} \\ and 8.5 Wm\textsuperscript{-2}}} \\ Regions & 11 & {\makecell[l]{CHN, IND, JPN, RUS, USA, AFR, EUR, LAM, MEA, OAS, ROW}} \\ Sectors & 12 & {\makecell[l]{Agriculture, agricultural waste burning, domestic housing \\ and commercial, energy, industry, industrial solvents, surface \\ transportation, waste treatment, open forest burning, open grassland \\ burning, aviation and international shipping}}\\ Emissions & 48 & {\makecell[l]{Industrial CO\textsubscript{2}, land-use CO\textsubscript{2}, CH\textsubscript{4}, N\textsubscript{2}O, BC, CO, NH\textsubscript{3}, NO\textsubscript{x}, OC,\\ SO\textsubscript{2}, VOCs and halogenated gases (a total of 37 gases including \\HFC-23, HFC-32, HFC-125, HFC-134a, HFC-143a, HFC-152a, \\HFC-227ea, HFC-236fa, HFC-245fa, HFC-365mfc, HFC-43-10mee,\\ CF\textsubscript{4}, C\textsubscript{2}F\textsubscript{6}, C\textsubscript{3}F\textsubscript{8}, c-C\textsubscript{4}F\textsubscript{8}, C\textsubscript{4}F\textsubscript{10}, C\textsubscript{5}F\textsubscript{12}, C\textsubscript{6}F\textsubscript{14}, C\textsubscript{7}F\textsubscript{16}; SF\textsubscript{6}, NF\textsubscript{3}, \\CFC-11, CFC-12, CFC-113, CFC-114, CFC-115, CCl\textsubscript{4}, CH\textsubscript{3}CCl\textsubscript{3}, \\HCFC-22, HCFC-141b, HCFC-142b, Halon-1211, Halon-1202, \\Halon-1301, Halon-2402, CH\textsubscript{3}Br and CH\textsubscript{3}Cl)}} \\ \hline \end{tabular} \end{table} \clearpage \newpage {\linespread{1.3} \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{ghg_WORLD.pdf} \centering \caption{\small{\textbf{Historical and future GHG emissions.} The future projections include seven forcing levels, namely, 1.9 Wm\textsuperscript{-2}, 2.6 Wm\textsuperscript{-2}, 3.4 Wm\textsuperscript{-2}, 4.5 Wm\textsuperscript{-2}, 6.0 Wm\textsuperscript{-2}, 7.0 Wm\textsuperscript{-2} and 8.5 Wm\textsuperscript{-2}. The uncertainty ranges denote the upper and lower trends. The error bars to the right show the upper and lower trends in 2100 at each forcing level. Open burning includes the emissions from agricultural waste burning, forest fires and grassland fires. Sources: the historical emissions stem from ref \cite{Lamarque2009,Gutschow2016,Aardenne2018,Hoesly2018}; the future trends stem come ref \cite{Fujimori2018,Gidden2019}; land-use CO\textsubscript{2} originates from ref \cite{Raddatz2010,Smith2013,Gutschow2016,gcp2018}; open burning is from ref \cite{Marle2017}.}} \label{fig:ghg} \end{figure} } \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{aero1_WORLD.pdf} \centering \caption{\small{\textbf{Historical and future aerosol and pollutant emissions (a-h).} The future projections include seven forcing levels, namely, 1.9 Wm\textsuperscript{-2}, 2.6 Wm\textsuperscript{-2}, 3.4 Wm\textsuperscript{-2}, 4.5 Wm\textsuperscript{-2}, 6.0 Wm\textsuperscript{-2}, 7.0 Wm\textsuperscript{-2} and 8.5 Wm\textsuperscript{-2}. The uncertainty ranges denote the upper and lower trends. The error bars to the right show the upper and lower trends in 2100 at each forcing level. Open burning includes the emissions from agricultural waste burning, forest fires and grassland fires. Sources: the historical emissions stem from ref \cite{Lamarque2009,Gutschow2016,Aardenne2018,Hoesly2018}; future trends come from ref \cite{Fujimori2018,Gidden2019}; open burning originates from ref \cite{Marle2017}.}} \label{fig:aero1} \end{figure} \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{aero2_WORLD.pdf} \centering \caption{\small{\textbf{Historical and future aerosol and pollutant emissions (i-n).} The future projections include seven forcing levels, namely, 1.9 Wm\textsuperscript{-2}, 2.6 Wm\textsuperscript{-2}, 3.4 Wm\textsuperscript{-2}, 4.5 Wm\textsuperscript{-2}, 6.0 Wm\textsuperscript{-2}, 7.0 Wm\textsuperscript{-2} and 8.5 Wm\textsuperscript{-2}. The uncertainty ranges denote the upper and lower trends. The error bars to the right show the upper and lower trends in 2100 at each forcing level. Open burning includes the emissions from agricultural waste burning, forest fires and grassland fires. Sources: the historical emissions are from ref \cite{Lamarque2009,Gutschow2016,Aardenne2018,Hoesly2018}; future trends come from ref \cite{Fujimori2018,Gidden2019}; open burning stems from ref \cite{Marle2017}.}} \label{fig:aero2} \end{figure} \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{lc_WORLD.pdf} \centering \caption{\small{\textbf{Historical and future land cover changes, compared to the values in 1700.} The future projections include seven forcing levels, namely, 1.9 Wm\textsuperscript{-2}, 2.6 Wm\textsuperscript{-2}, 3.4 Wm\textsuperscript{-2}, 4.5 Wm\textsuperscript{-2}, 6.0 Wm\textsuperscript{-2}, 7.0 Wm\textsuperscript{-2} and 8.5 Wm\textsuperscript{-2}. The uncertainty ranges denote the upper and lower trends. The error bars to the right show the upper and lower trends in 2100 at each forcing level. Sources: LUH2 v2h\cite{Hurtt2016a}; LUH2 v2f\cite{Hurtt2016b}; AIM-SSP/RCP gridded emission and land-use data\cite{Fujimori2018}.}} \label{fig:lc} \end{figure} \newpage {\linespread{1.3} \begin{figure}[ht] \includegraphics[width=0.8\textwidth]{ghg.pdf} \centering \caption{\small{\textbf{Simulation of the radiative forcings induced by greenhouse gases (GHGs) compared to existing studies (IPCC AR5\cite{ar5ch8}, MAGICC6\cite{Meinshausen2011} and OSCAR v2.2\cite{Gasser2016}).} The uncertainties in SCM4OPT v2.0 indicate the 17th and 83rd percentiles. The MAGICC6 time series are extracted from RCP calculations \cite{Meinshausen2011b}. The OSCAR v2.2 uncertainties are produced by 500 runs, accounting for the 17th and 83rd percentiles, downloaded from https://github.com/tgasser/OSCARv2. The error bars in 2011 denote the forcing values over the period of 1750-2011 in IPCC AR5 (Table 8.2).}} \label{fig:rfc_ghg} \end{figure} } {\linespread{1.3} \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{aero.pdf} \caption{\small{\textbf{Simulation of the radiative forcings induced by aerosols and pollutants, compared to existing studies (IPCC AR5\cite{ar5ch8,ar5wg1spm}, MAGICC6\cite{Meinshausen2011} and OSCAR v2.2\cite{Gasser2016}).} The uncertainties in SCM4OPT v2.0 indicate the 17th and 83rd percentiles. The MAGICC6 time series are extracted from RCP calculations \cite{Meinshausen2011b}. The OSCAR v2.2 uncertainties are produced by 500 runs, accounting for the 17th and 83rd percentiles. The error bars in 2011 denote the forcing values over the period of 1750-2011 in IPCC AR5 (Table 8.4 and Figure SPM.5).}} \label{fig:rfc_aero} \end{figure} } {\linespread{1.3} \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{other.pdf} \caption{\small{\textbf{Simulation of the radiative forcings induced by human activities, other than the GHGs and aerosols and pollutants above, compared to existing studies (IPCC AR5\cite{ar5ch8}, MAGICC6\cite{Meinshausen2011} and OSCAR v2.2\cite{Gasser2016}).} The uncertainties in SCM4OPT v2.0 indicate the 17th and 83rd percentiles. The MAGICC6 time series are extracted from RCP calculations\cite{Meinshausen2011b}. The OSCAR v2.2 uncertainties are produced by 500 runs, accounting for the 17th and 83rd percentiles. The error bars in 2011 denote the forcing values over the period of 1750-2011 in IPCC AR5 (Table 8.6).}} \label{fig:rfc_other} \end{figure} } \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{natr.pdf} \caption{\textbf{Assumptions of the radiative forcings induced by the natural sources of volcanic activity and solar irradiance compared to existing studies.} The volcanic and solar irradiance forcings used in SCM4OPT v2.0 are assumed in accordance with volcanic activity \cite{Zanchettin2016} and solar irradiance \cite{Matthes2017} forcing inputs for CMIP6, and the volcanic forcing is normalized to zero in 1850.} \label{fig:rfc_natr} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{rfc_LCC.pdf} \caption{\textbf{Land-use albedo forcings estimated by SCM4OPT v2.0 compared to existing studies.} REMIND 1.7 uses default exogenous values as the future outlook (extracted from the source code, https://www.pik-potsdam.de/research/transformation-pathways/models/remind). The RCP scenarios are produced by MAGICC6\cite{Meinshausen2011b}. The uncertainties in SCM4OPT v2.0 indicate the 17th and 83rd percentiles. The boxplot to the right shows the distributions in 2100, with the upper and lower hinges corresponding to the 25th and 75th percentiles, respectively, where the upper whisker denotes 1.5 times the interquartile range above the 75th percentile, and the lower whisker denotes 1.5 times the interquartile range below the 25th percentile.} \label{fig:rfc_LCC} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{rfc_tot.pdf} \caption{\textbf{Total radiative forcing simulated by SCM4OPT v2.0 compared to existing studies (IPCC AR5\cite{ar5ch8}, MAGICC6\cite{Meinshausen2011} and OSCAR v2.2\cite{Gasser2016}).} The uncertainties in SCM4OPT v2.0 indicate the 17th and 83rd percentiles. The MAGICC6 time series are extracted from RCP calculations\cite{Meinshausen2011b}. The OSCAR v2.2 uncertainties are produced by 500 runs, accounting for the 17th and 83rd percentiles. The error bars in 2011 denote the total anthropogenic radiative forcing relative to 1750 (Figure SPM.5).} \label{fig:rfc_tot} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{scm_tatm.pdf} \caption{\textbf{Historical global mean temperature increase above the preindustrial level, generated by SCM4OPT v2.0 and compared to existing statistical records.} The anomalies deviate from the average over 1890-1910. The SCM4OPT v2.0 uncertainties result from the emission source- (CEDS\cite{Hoesly2018}, EDGAR v4.3.2\cite{Aardenne2018} and RCP historical\cite{Meinshausen2011b}) and climate uncertainties described in this paper. The uncertainties in HadCRUT 4.6 indicate the 95\% confidence interval of the combined effects of all the uncertainties described in the HadCRUT4 error model. GISTEMP v4 from ref \cite{GISTEMPv4}; HadCRUT 4.6 from ref \cite{Morice2012}; Japan Meteorological Agency (JMA) from ref \cite{JMA2019}.} \label{fig:scm_tatm} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{densi.pdf} \caption{\textbf{Probability distributions of the total radiative forcing and global mean temperature at forcing levels of 1.9 Wm\textsuperscript{-2}, 2.6 Wm\textsuperscript{-2}, 3.4 Wm\textsuperscript{-2}, 4.5 Wm\textsuperscript{-2}, 6.0 Wm\textsuperscript{-2}, 7.0 Wm\textsuperscript{-2} and 8.5 Wm\textsuperscript{-2}, estimated by SCM4OPT v2.0.} a, Total radiative forcing in 2100. b, Global mean temperature increase relative to 1850 in 2100. The color values indicate the mean value at each forcing level.} \label{fig:densi} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{marginal.pdf} \caption{\textbf{The normalized marginal method for the attributions of radiative forcings.} The figure is plotted based on Figure 5 in ref \cite{IPCC2002}.} \label{fig:margin} \end{figure} {\linespread{1} \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{reg_grp.pdf} \centering \caption{\small{\textbf{Regional forcings are decomposed into CO\textsubscript{2}-induced forcings and those not directly related to CO\textsubscript{2}.} a, Historical period (1850-2016); b, 2 \textdegree C (1850-2100); c, 1.5 \textdegree C (1850-2100). The direct CO\textsubscript{2} emissions are separated into fossil-fuel CO\textsubscript{2} (FF CO\textsubscript{2}), land-use CO\textsubscript{2} (LUC CO\textsubscript{2}), and negative CO\textsubscript{2} emissions, if applicable. The value on top of the bar indicates the mean value summing all components of the left CO\textsubscript{2} bar. The value at the bottom of the bar indicates the mean value summing all components of the right bar. All uncertainties are represented as one standard deviation.}} \label{fig:reg_grp} \end{figure} } \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{gwp_sensi.pdf} \centering \caption{\small{\textbf{Cumulative CO\textsubscript{2} emissions projected by AIM/CGE and IAMC.} The uncertainties are represented as one standard deviation. All the AIM/CGE projections are slightly higher than those by the IAMC. This figure shows only the cumulative CO\textsubscript{2} emissions, representing part of the systematic deviations, and variations regarding aerosols and pollutants also occur.}} \label{fig:gwp_sensi} \end{figure} \clearpage \bibliographystyle{iopart-num} \section{Introduction} The Paris Agreement has set goals to limit the global average temperature increase to well below 2 \textdegree C and to pursue efforts to limit the temperature increase to 1.5 \textdegree C above the preindustrial level. The Paris Agreement goals can be translated into the required levels of greenhouse gas (GHG) emission reductions \cite{Rogelj2018, Tanaka2018, sr15, Tong2019, Tachiiri2019, Kawamiya2019} for practical implementation purposes, through calculations of the radiative forcings resulting from various emission sources. Attaining the radiative forcing of 2.6 Wm\textsuperscript{-2} and 1.9 Wm\textsuperscript{-2} are known to be largely consistent with achieving the 2 \textdegree C and 1.5 \textdegree C climate goals, respectively, at an approximately 66\% probability \cite{Meinshausen2009,ar5ch8,Rogelj2018,VanVuuren2018,Seneviratne2018,sr15}. Here we use the radiative forcing as a benchmark and assess how individual regions, sectors, and climate forcers can contribute to achieving the Paris Agreement temperature targets based on a latest set of historical and future emission data. Such source attribution cannot be done from emissions data alone because a variety of climate forcers affect the climate system on different temporal and spatial scales in nonlinear ways, requiring dedicated methodologies like the one presented here. Attributing forcing at the levels of regions, sectors, or climate forcers provides a basis for considering the principle of common but differentiated responsibilities contained in the 1992 United Nations Framework Convention on Climate Change (UNFCCC). We identified two issues associated with previous attribution methods. First, comprehensive regional and sectoral assessments should in principle consider a full suite of anthropogenic sources at the regional and sectoral levels, including GHGs, aerosols and pollutants, as well as land-use albedo. However, not all sources have been considered in previous studies. For example, aerosols and pollutants were not always examined\cite{Rive2008, denElzen2013}, or only a subset of aerosol species was considered (such as sulfate aerosols\cite{Matthews2014}). Also, land-use albedo were sometimes not included\cite{Rive2008, denElzen2013, Matthews2014}. This may be due to the lack of datasets or difficulties in representing regional or sectoral forcings, but the latest datasets provide opportunities to consider a more comprehensive set of GHGs and related agents. Second, among various methods proposed, only the marginal and time-sliced methods are considered to be useful based on a satisfaction test of eight essential criteria\cite{IPCC2002, Trudinger2005}. In principle, an attribution method needs to ensure additivity regarding regions and time. Implementing a non-additive method like the residual method used in \cite{Skeie2017} may introduce bias to the outcome for regions with high and low emissions. Some of the methods yield non-zero radiative forcing when a region's concentration has become zero. On the other hand, the two recommended methods are computationally expensive, especially when various sources of uncertainties are considered. We apply the most up-to-date emissions and land-use datasets (\cref*{tbl_hist,tbl_sce,tbl_luc,fig:ghg,fig:aero1,fig:aero2,fig:lc}), which resolve regions and sectors and contain all pertinent forcing sources, including GHGs, aerosols and pollutants, and land-use albedo, as well as their associated uncertainties. Particularly we consider a range of future emissions trajectories with various socioeconomic backgrounds and climate mitigation levels\cite{Fujimori2018,Gidden2019}. We combine the normalized marginal method, which is computationally less expensive than the time-sliced method\cite{IPCC2002}, with a simple climate model - the Simple Climate Model for Optimization version 2 (SCM4OPT v2.0) \cite{Su2017, Su2018}. SCM4OPT v2.0 is designed to be lightweight and suitable for performing a large number of simulations required for our study exploring uncertainties, while resolving diverse characteristics of forcing agents considered. Furthermore, based on the premise of previous studies\cite{Rive2008, Hohne2011, Li2016}, we make the attribution analysis more comprehensive by considering historical and future emissions and providing perspectives from regions, sectors, and climate forcers. We consider two types of uncertainties, including those due to our lack of knowledge regarding historical emissions and future projections (emission uncertainties), and those due to low confidence in understanding the climate system (climate uncertainties). With these methodological advances, we quantify the forcing contributions of regions, sectors, and climate forcers toward the Paris Agreement temperature goals. \section{Methods} We compile the emissions and land cover datasets at regional and sectoral levels and implement them to SCM4OPT v2.0 to calculate the marginal forcing effects of forcing agents at regional and sectoral levels. The relative forcing contribution of the forcing agents at regional and sectoral levels can be distinguished based on the fraction accounting for their marginal effects in total marginal effects caused by the forcing agent. The forcing contributions of the forcing agents at regional and sectoral levels can be therefore attributed. We sum up the associated individual forcings to obtain the radiative forcings resulting from regional and sectoral sources. \subsection{Emission data and uncertainties} We used historical and future emissions and land cover datasets with both regional and sectoral details. The available datasets are shown in \cref*{tbl_hist,tbl_sce,tbl_luc,fig:ghg,fig:aero1,fig:aero2,fig:lc} \cite{Lamarque2009,Gutschow2016,Aardenne2018,Hoesly2018, Fujimori2018,Gidden2019}. We designated historical sources as those originating from 1850 to 2016, and future projections by 2100 are grouped by the forcing level at 1.9 Wm\textsuperscript{-2} and 2.6 Wm\textsuperscript{-2}, regardless of the underlying socioeconomic development or technological assumptions. We also included other scenarios with relatively lower possibilities to achieve the 2 \textdegree C and 1.5 \textdegree C targets, namely, forcing levels of 3.4 Wm\textsuperscript{-2}, 4.5 Wm\textsuperscript{-2}, 6.0 Wm\textsuperscript{-2}, 7.0 Wm\textsuperscript{-2} and 8.5 Wm\textsuperscript{-2} (\cref*{tbl_sce}). Thus, a broad range of forcings can be examined for future climate change. For the future emission datasets, 25 of them are obtained from the Asia-Pacific Integrated Model/Computable General Equilibrium (AIM/CGE) model\cite{Fujimori2018}, and the remaining nine are from the Integrated Assessment Modeling Consortium (IAMC)\cite{Gidden2019} (\cref*{tbl_sce}). We divided the world into eleven regions, including 1) China (CHN), 2) India (IND), 3) Japan (JPN), 4) Russia (RUS), 5) the United States of America (USA), 6) sub-Saharan Africa (AFR), 7) Europe (EUR), 8) Latin America and the Caribbean (LAM), 9) the Middle East and North Africa (MEA), 10) other areas in Asia (OAS) and 11) the rest of the world (ROW) (\cref*{tbl_region}). For each region, twelve emitting sectors \cite{Lamarque2009, Fujimori2018, Gidden2019} were assessed, namely, 1) agriculture, 2) agricultural waste burning, 3) domestic and commercial housing, 4) energy, 5) industry, 6) industrial solvents, 7) surface transportation, 8) waste treatment, 9) open forest burning, 10) open grassland burning, 11) aviation and 12) international shipping (\cref*{tbl_sector}). In addition to the twelve sectors above, 13) land-use CO\textsubscript{2} emissions and 14) negative CO\textsubscript{2} emissions, namely, through carbon capture and storage (CCS) and bioenergy with CCS (BECCS), were separately considered. We compiled the emissions from the available datasets into scenario-, region- and sector-specific emissions and used $E_{n,r,s,e}(t)$ to denote scenario ($n$)-, region ($r$)- and sector ($s$)-specific emissions ($e$, refer to the emission species) over time $t$. The emissions of the same forcing target originating from different Shared Socioeconomic Pathways (SSPs) and integrated assessment models (IAMs) were treated as emission uncertainties. For example, we iteratively simulated the 1.9 Wm\textsuperscript{-2} forcing scenario by using dataset from the AIM/CGE (SSP1-1.9 and SSP2-1.9) and the IAMC (SSP1-1.9), as reported in \cref*{tbl_sce}. \subsection{Climate model and uncertainties} We used the simple climate model SCM4OPT v2.0 to generate the outputs for our analysis. The current model has been updated from the precedent in the following four respects: First, we adopted the ocean carbon cycle of Hector v1.0\cite{Hartin2015} and applied the Diffusion Ocean Energy balance CLIMate (DOECLIM) model \cite{Kriegler2005,Tanaka2007,Wong2017} to calculate the temperature change. We calibrated the carbon cycle and temperature modules based on 26 coupled atmosphere-ocean general circulation models (AOGCMs) with outputs for the carbon cycle in the Coupled Model Intercomparison Project, Phase 5 (CMIP5) (\cref*{tbl_clims}). Second, parameters associated with CH\textsubscript{4}, N\textsubscript{2}O and halogenated gases (a total of 37 gases, see \cref*{tbl_driver}) were tuned against the atmospheric lifetimes and radiative efficiencies in the IPCC Fifth Assessment Report (AR5) \cite{ar5annexii}. Third, we employed the simple global parameterizations described in OSCAR v2.2 \cite{Gasser2016} to estimate the radiative forcings resulting from aerosols and pollutants. The radiative forcing of short-lived climate forcers depends on the geographical location of emissions. The spatial distribution of the radiative forcing of short-lived species is different from that of long-lived species\cite{Sand2016,Tanaka2019}. However, these two effects are not considered in our analysis. Fourth, we adopted a simple parameterization scheme \cite{Gasser2016} to calculate the land-use albedo (see \cref*{eq:flcc} in the supplementary materials). The equations for the climate model are listed in the supplementary materials. We performed a robustness test over the historical period by using historical emission datasets as input and considering the climate uncertainties that were applied in this analysis. The outputs from our model are consistent with those of other models or statistical records (\cref*{fig:rfc_ghg,fig:rfc_aero,fig:rfc_other,fig:rfc_natr,fig:rfc_tot,fig:scm_tatm}). Furthermore, the likelihoods of meeting the 2\textdegree C and 1.5 \textdegree C targets of each of the forcing scenarios obtained from our model agree largely with the corresponding IPCC ranges (\cref*{fig:densi}) to limit global warming to 2\textdegree C with at least 66\% probability and 1.5 \textdegree C with 50\%\cite{sr15}. \subsection{Calculation of the regional and sectoral forcings} We utilized and expanded the normalized marginal method presented in ref \cite{IPCC2002, Trudinger2005, Li2016} to conduct our analysis. The relative forcing contribution of emission $E_{n,r,s,e}$ (Column 2 in \cref*{tbl_driver}) to the associated radiative forcing $f$ (Column 4 in \cref*{tbl_driver}), which is defined as $\alpha^{f}_{n,r,s,e}$, is proportional to the marginal effect of $E_{n,r,s,e}$ causing the radiative forcing $f$ (see \cref*{fig:margin}). To calculate $\alpha^{f}_{n,r,s,e}$, for each $E_{n,r,s,e}$, we performed two simulations, i.e., 1) one simulation with all emissions included in the simulation as input, to calculate the associated radiative forcing termed $F^{all,f}_{n,r,s,e}$, and 2) another simulation with the emission $e$ reduced by $E_{n,r,s,e}\cdot\epsilon$ ($\epsilon = 0.001$) over the evaluation period, that is 1850-2100, to obtain the corresponding radiative forcing named $F^{\epsilon,f}_{n,r,s,e}$. The relative contribution $\alpha^{f}_{n,r,s,e}$ is obtained by: \begin{equation} \label{eq:marginal1} \alpha^{f}_{n,r,s,e} = \frac{F^{all,f}_{n,r,s,e}-F^{\epsilon,f}_{n,r,s,e}}{\sum_{r,s,e}{\left(F^{all,f}_{n,r,s,e}-F^{\epsilon,f}_{n,r,s,e}\right)}} \end{equation} Therefore, the radiative forcing $F^{f}_{n,r,s,e}$, which is resulting from $E_{n,r,s,e}$, is isolated by: \begin{equation} \label{eq:marginal2} F^{f}_{n,r,s,e} = {F^{all,f}_{n,r,s,e}} \cdot \alpha^{f}_{n,r,s,e} \end{equation} To consider the relevant emission and climate uncertainties, we carried out 200 similar pairs of runs for each forcing-level-specific source $E_{n,r,s,e}$, with randomized scenarios at the same forcing level and randomized parameter sets for the climate system, and we call them as one experiment for $E_{n,r,s,e}$ (see the randomization sources of the scenarios (within each forcing level) and the climate system in \cref*{tbl_unc}). Here, the value of 200 has been tested to ensure that two decimal places of the precision level could be achieved for the mean forcing value of $F^{f}_{n,r,s,e}$ under different experiments. We obtain the individual forcing agents by summing the forcings induced by all available emissions sources. Therefore, a certain forcing agent is probably a mixed effect resulting from various emissions sources. On the other hand, a particular emission may result in different kinds of radiative forcings, as indicated in \cref*{tbl_driver}. For example, black carbon (BC) can cause BC forcing, BC on snow and indirect cloud effects. A total of $5.3\times 10^{6}$ runs ($7.6\times 10^{5}$ for each forcing level) were performed considering all forcing levels, regions, sectors and emissions. To derive the regional forcings, we applied the Monte Carlo approach (n = 20,000) to sum all $F^{f}_{n,r,s,e}$ values belonging to a given region. Here, the value of 20,000 for n was also tested to ensure the necessary precision for our analysis. The sectoral forcings are similarly obtained. An overview of all the iterations are contained in \cref*{tbl_unc}. \subsection{The probability of exceeding 2\textdegree C or 1.5\textdegree C} For each experiment, 200 sample results were acquired. Here, we assumed that the obtained temperature increase $T$ over time $t$ followed a normal distribution, and the cumulative distribution function was defined as: \begin{equation} \label{eq:cdf} F^{t}_{T}\left(\tau\right) = P^{t}\left(T\leq\tau\right) \end{equation} We used the exceedance of \cref*{eq:cdf} to obtain the probability of exceeding a specified climate target $\tau$: \begin{equation} \label{eq:exceed} \overline{F}^{t}_{T}\left(\tau\right) = P^{t}\left(T > \tau\right) = 1 - F^{t}_{T}\left(\tau\right) \end{equation} Therefore, $\overline{F}^{t}_{T}\left(2 \right)$ indicates the probability of exceeding 2\textdegree C, and $\overline{F}^{t}_{T}\left(1.5 \right)$ gives the probability of exceeding 1.5\textdegree C. \section{Results} \subsection{Regional attributions} We performed our analysis based on the available existing scenarios, and the 2 \textdegree C or 1.5 \textdegree C results herein thus reflect the diagnosed compatible scenarios in terms of the 2 \textdegree C or 1.5 \textdegree C targets. The results reveal that the USA, China and the European Union (EU) are three major emitters, accounting for approximately 45\% of all the forcings under the historical, 2 \textdegree C and 1.5 \textdegree C scenarios (see \cref*{fig:rfc_reg}a). China's share increased from $12\pm 4$\% ($0.25\pm 0.09$ Wm\textsuperscript{-2}) by 2016 (cf. $10 \pm 4$\% for Chinese data (1750-2010) in ref \cite{Li2016} with similar methods) to 2 \textdegree C's $16\pm 3$\% ($0.41\pm 0.08$ Wm\textsuperscript{-2}) and 1.5 \textdegree C's $17\pm 4$\% ($0.29\pm 0.07$ Wm\textsuperscript{-2}), while the share of the EU declined, from the historical $15\pm 2$\% ($0.32\pm 0.04$ Wm\textsuperscript{-2}) level to the 2 \textdegree C level of $12\pm 2$\% ($0.31\pm 0.06$ Wm\textsuperscript{-2}) and the 1.5 \textdegree C level of $13\pm 3$\% ($0.23\pm 0.06$ Wm\textsuperscript{-2}) (for the forcing values, see \cref*{fig:sec_all}a\&\cref*{fig:reg_grp}). In contrast, the share of the USA exhibited no major changes, contributing to approximately 17\% of the total forcings under all three scenarios. However, the absolute values of the forcings varied, i.e., the 2 \textdegree C forcing attributed to the USA increased to $0.42\pm 0.07$ Wm\textsuperscript{-2} from the current forcing value of $0.36\pm 0.05$ Wm\textsuperscript{-2}, while the 1.5 \textdegree C forcing value declined to $0.29\pm 0.06$ Wm\textsuperscript{-2}. Latin America and the Caribbean (as one region) also exhibited a relatively high historical share with $12\pm 3$\%; however, the value substantially declined under both target scenarios. CO\textsubscript{2}, including fuel CO\textsubscript{2}, land-use CO\textsubscript{2}, and negative CO\textsubscript{2}, if applicable, is the main contributor and varies across regions. Among them, China, the USA, and the Middle East and North Africa (as one region) exhibited the highest net growth forcings under the 2 \textdegree C scenario, with values of $0.08\pm 0.14$ Wm\textsuperscript{-2}, $0.07\pm 0.13$ Wm\textsuperscript{-2} and $0.07\pm 0.06$ Wm\textsuperscript{-2}, respectively. Under the 1.5 \textdegree C scenario, the CO\textsubscript{2} forcings in all regions decreased. The largest decline occurred in Latin America and the Caribbean, from the historical value of $0.20\pm 0.04$ Wm\textsuperscript{-2} to the 1.5 \textdegree C scenario value of $0.08\pm 0.05$ Wm\textsuperscript{-2}, which occurred due to the negative CO\textsubscript{2} emissions and the great decrease in land-use CO\textsubscript{2} emissions. The non-CO\textsubscript{2} forcings described here refer to the forcings induced by sources other than CO\textsubscript{2}, and these forcings also play an important role in the historical period, although they are almost adequately controlled under the 2 \textdegree C and 1.5 \textdegree C scenarios (also shown in \cref*{fig:reg_grp}). Basically, most of the regions reveal net positive non-CO\textsubscript{2} forcings in the historical period. Particularly in regard to Latin America and the Caribbean, the relatively high net positive non-CO\textsubscript{2} forcing, combined with the relatively high land-use CO\textsubscript{2} forcing, contributes to a comparatively large forcing share in the historical period, although the fossil-fuel forcing is relatively smaller. It is worth noting that the regions with nearly zero-sum non-CO\textsubscript{2} forcings contribute considerable amounts of both positive and negative forcings, such as China and the rest of the world, in the historical period. For future non-CO\textsubscript{2} forcings, however, sub-Saharan Africa is found to exhibit a reasonable increase in net forcing due to its continuous development and industrialization and population growth, which requires more biomass for cooking and heating purposes, as well as changes in land cover \cite{Fujimori2018,Gidden2019}. The total forcing, including the forcings that cannot be assigned to any region, increases to $2.6\pm 0.4$ Wm\textsuperscript{-2} under the 2 \textdegree C scenario but declines to $1.8\pm 0.4$ Wm\textsuperscript{-2} under the 1.5 \textdegree C scenario, which is lower than the current level of $2.2\pm 0.4$ Wm\textsuperscript{-2} (\cref*{fig:rfc_reg}b). All regional forcings indicate net warming effects, with positive forcing values. Forcing increases are encountered in most of the regions except in Russia, the EU, Latin America and the Caribbean, and the rest of the world under the 2 \textdegree C scenario, while the main increases still occur in two developing regions, namely, China and the Middle East and North Africa under the 1.5 \textdegree C scenario (\cref*{fig:sec_all}a\&\cref*{fig:reg_grp}). Here, the forcing increases in the Middle East and North Africa can mostly be attributed to fossil-fuel CO\textsubscript{2}, sulfate, cloud effects, and land-use albedo, probably due to industry and energy supply expansions as well as due to the expected reforestation in this area \cite{Fujimori2018}. The regional nonattributable forcings in the historical period also reveal warming effects. These forcings are later notably suppressed under both the 2 \textdegree C and 1.5 \textdegree C scenarios (\cref*{fig:rfc_reg}b), and they are mainly attributed to the control of ozone-depleting substances (ODSs) under the various scenario assumptions (\cref*{fig:ghg})\cite{Fujimori2018,Gidden2019}. {\linespread{1} \begin{figure}[ht] \includegraphics[width=0.75\textwidth]{rfc_reg.pdf} \centering \caption{\small{\textbf{Reginal contributions to climate change.} a, Regional relative contributions to climate change. The regional relative contributions are derived from the elementwise ratio of the regional forcings to the total forcings via the methods described in ref \protect\cite{Li2016}. Note that the sum of the mean percentages of all regions is not equal to 100\% since regional nonattributable forcings occur (see \cref*{fig:rfc_reg}b). b, The world total forcings are divided into regional forcings. The value on top of the bar indicates the mean value of the total radiative forcing, and the error bar indicates the associated uncertainty resulting from the world ensemble. The other forcings in Fig. b are the regional nonattributable climate forcers, including the international shipments of $0.01\pm 0.03$, $0.06\pm 0.02$, and $0.05\pm 0.03$ Wm\textsuperscript{-2} and part of the ozone-depleting substances (ODSs) of $0.23\pm 0.05$, $0.07\pm 0.03$, and $0.07\pm 0.03$ Wm\textsuperscript{-2} (for the regional nonattributable forcings, see \cref*{tbl_driver}), under the historical, 2 \textdegree C and 1.5 \textdegree C scenarios, as well as mineral dust (\cref*{fig:rfc_aero}) and the effects of solar irradiance and volcanic activity (\cref*{fig:rfc_natr}). The probabilities of reaching the 2 \textdegree C and 1.5 \textdegree C targets here are 56\% and 61\%, respectively (see \cref*{fig:rfc_prob}). All uncertainties are represented as one standard deviation. CHN, China; IND, India; JPN, Japan; RUS, Russia; USA, United States of America; AFR, sub-Saharan Africa; EUR, Europe; LAM, Latin America and the Caribbean; MEA, Middle East and North Africa; OAS, other Asian countries; ROW, the rest of the world; Other, regional nonattributable forcings.}} \label{fig:rfc_reg} \end{figure} } \subsection{Sectoral constituents} The regional effects are further separated into their sectoral constituents to assess how future changes occur (\cref*{fig:sec_all}a). First, for the developed regions, relatively large increases are observed in both industrial and housing sectors under the 2 \textdegree C scenario. In regard to energy, the gross forcings related to the USA and EU are considerably high under the 2 \textdegree C scenario. However, if combined with the negative CO\textsubscript{2} emissions, the energy forcings decrease to $0.12\pm 0.11$ Wm\textsuperscript{-2} and $0.07\pm 0.08$ Wm\textsuperscript{-2} for the USA and EU, respectively, which are lower than the current levels. Second, among the developing regions, China's industry exhibits the most significant forcing increase, with a value of $0.14\pm 0.07$ Wm\textsuperscript{-2} under the 2 \textdegree C scenario. In addition to the industrial sector, prominent increases are found in the agricultural sector, such as in sub-Saharan Africa, and the forcing induced by the agricultural sector increases by $0.09\pm 0.03$ Wm\textsuperscript{-2} under the 2 \textdegree C scenario. In addition, the land-use CO\textsubscript{2} forcings are alleviated to varying degrees in all regions under the 2 \textdegree C scenario. Under the 1.5 \textdegree C scenario, most of the regions still demonstrate increased forcings in the industrial sector, while in the developed regions, the forcings in the industrial sector decrease. Furthermore, both the negative and land-use CO\textsubscript{2} emissions could result in extensive forcing abatement from the current levels in the developing regions under the 1.5 \textdegree C scenario. Globally, in certain sectors, as shown in \cref*{fig:sec_all}b, the forcings still increase to certain levels under the 2 \textdegree C scenario, except for the land-use CO\textsubscript{2} and other sources that are responsible for the main emissions of aerosols and pollutants, such as waste treatment, agricultural waste burning, forest burning and grass burning. However, under the 1.5 \textdegree C scenario, except for the energy sector and land-use albedo, only a small amount of the forcings is found to increase in the major emitting sectors, such as domestic and commercial housing, industrial sector, aviation and international shipping. In addition, as also indicated in the analysis of the individual forcing agents below, the negative CO\textsubscript{2} emissions remove a considerable amount of forcings from the energy sector under the 1.5 \textdegree C scenario, and the net forcing level in the energy sector is lower than the current energy sector level. This result implies that to attain the 1.5 \textdegree C target, efforts need to be implemented to maintain the sectoral forcings below or equal to the current levels. {\linespread{1} \begin{figure}[ht] \includegraphics[width=0.9\textwidth]{sec_all.pdf} \centering \caption{\small{\textbf{Sectoral contributions to climate change.} a, Sectoral contributions of the eleven regions worldwide. b, The world total forcings are decomposed into sectoral forcings. The value on top of the bar indicates the mean value of the total radiative forcing, and the error bar indicates the associated uncertainty resulting from the world ensemble. The other forcings are the forcings induced by mineral dust (\cref*{fig:rfc_aero}), solar irradiance and volcanic activity (\cref*{fig:rfc_natr}). The probabilities of reaching the 2 \textdegree C and 1.5 \textdegree C targets here are 56\% and 61\%, respectively (see \cref*{fig:rfc_prob}). All uncertainties are represented as one standard deviation.}} \label{fig:sec_all} \end{figure} } \subsection{Individual forcing agents} \cref*{fig:rfc_grp}a shows the individual forcing agents for each sectoral source. Fossil-fuel CO\textsubscript{2} dominates the forcings in the housing, energy, industrial and transport sectors, particularly under the 2\textdegree C and 1.5 \textdegree C scenarios when the other GHGs and aerosols and pollutants are substantially removed (\cref*{fig:ghg,fig:aero1,fig:aero2}) and the resulting impacts are therefore greatly reduced. Among them, first, the negative CO\textsubscript{2} emissions can eliminate considerable amounts of forcings. For example, -$0.75\pm 0.44$ Wm\textsuperscript{-2} and -$0.42\pm 0.27$ Wm\textsuperscript{-2} are attributed to the negative CO\textsubscript{2} emissions under the 2 \textdegree C and 1.5 \textdegree C scenarios, respectively, (\cref*{fig:rfc_grp}a). It is interesting to note that the reduced amount of the absolute forcing under the 2 \textdegree C scenario is even larger than that under the 1.5 \textdegree C scenario. This finding explains the feasibility of the relatively weaker climate policies adopted under the 2 \textdegree C scenario, leading to higher gross fossil CO\textsubscript{2} emissions. However, a fair amount of fossil CO\textsubscript{2} emissions is removed in the later period when the costs related to the negative CO\textsubscript{2} emissions are more reasonable. Under the 1.5 \textdegree C scenario, stronger strategies are implemented after the early period. Thus, the gross fossil CO\textsubscript{2} emissions are relatively lower, and the required negative CO\textsubscript{2} emissions do not need to be as high \cite{Fujimori2018}. Here, the general trend can be simply interpreted as that of emit more but reduce more. Moreover, if considering the negative CO\textsubscript{2} emissions, the net forcings are actually lower than the current levels in the energy sector under both the 2 \textdegree C (0.42 Wm\textsuperscript{-2}) and the 1.5 \textdegree C (0.31 Wm\textsuperscript{-2}) scenarios, although gross increases are prominent (\cref*{fig:rfc_grp}a). Second, in regard to agriculture, the major sources are CH\textsubscript{4} and N\textsubscript{2}O. A considerable amount of the forcings induced by CH\textsubscript{4} and N\textsubscript{2}O still remains under the 2 \textdegree C and 1.5 \textdegree C scenarios (\cref*{fig:rfc_grp}a), due to difficulties in reducing the CH\textsubscript{4} and N\textsubscript{2}O emissions from agriculture \cite{Fujimori2018,Gidden2019} and their relatively long lifetimes, namely, 12.4 years for CH\textsubscript{4} and 121 years for N\textsubscript{2}O (see Table 8.A.1 in ref \cite{ar5ch8}). Third, the land-use albedo currently exhibits a cooling effect of -$0.16\pm 0.03$ Wm\textsuperscript{-2}. However, the land-use albedo may reveal warming effects in the future, at $0.03\pm 0.08$ Wm\textsuperscript{-2} under the 2 \textdegree C scenario and $0.05\pm 0.13$ Wm\textsuperscript{-2} under the 1.5 \textdegree C scenario (\cref*{fig:rfc_grp}b). The forest cover is expected to increase under the 2 \textdegree C and 1.5 \textdegree C scenarios (\cref*{fig:lc}), which will lower the surface albedo and reflect less of the incoming solar radiation, which in turn will generate lower negative forcings, or more positive forcings, while the current deforestation causes a negative forcing \cite{Myhre2003}. Therefore, to achieve the set climate goals, more forcings need to be reduced from other sources to compensate for this effect. Here, the land-use albedo is estimated by a simple parameterization scheme \cite{Bright2013, Gasser2016} constrained by future land cover changes (see Methods), and the results reveal lower negative forcings, or more positive forcings, than those by the other estimations (\cref*{fig:rfc_LCC}). {\linespread{1} \begin{figure}[ht] \includegraphics[width=0.8\textwidth]{rfc_grp.pdf} \centering \caption{\small{\textbf{Contributions of the individual climate forcers.} a, Sectoral contributions decomposed into individual climate forcers from (a-1) the historical period to 2016, (a-2) the 2 \textdegree C climate target by 2100 and (a-3) the 1.5 \textdegree C climate target by 2100. Here, open burning sums the agricultural waste burning, forest burning and grass burning levels. Industry includes industry and solvents, similar to in \cref*{fig:sec_all}. Transport totals the surface transport, aviation and international shipping values. The annotation values under the energy sector in (a-2) and (a-3) denote the forcing values accounting for the negative CO\textsubscript{2} emissions. b, The world total forcings are decomposed into individual climate forcers. The value on top of the bar indicates the mean value of the total radiative forcing, and the error bar indicates the associated uncertainty resulting from the world ensemble. The direct CO\textsubscript{2} emissions are divided into fossil-fuel CO\textsubscript{2} (FF CO\textsubscript{2}), land-use CO\textsubscript{2} (LUC CO\textsubscript{2}), and negative CO\textsubscript{2} emissions, if applicable. The natural forcings include solar irradiance and volcanic activity (\cref*{fig:rfc_natr}). The land-use albedo, mineral dust (Dust) and natural forcings are applied to Fig. b only. The probabilities of reaching the 2 \textdegree C and 1.5 \textdegree C targets here are 56\% and 61\%, respectively (see \cref*{fig:rfc_prob}). All uncertainties are represented as one standard deviation.}} \label{fig:rfc_grp} \end{figure} } \subsection{Attributions under the high-emission scenarios} All potential projected scenarios, including those forcings higher than the 2\textdegree C and 1.5 \textdegree C forcings, are shown in \cref*{fig:rfc_prob}a (for the regional contributions) and \cref*{fig:rfc_prob}b (for the sectoral forcings). We translate the forcing levels into probabilities of exceeding 2\textdegree C or 1.5\textdegree C to demonstrate the likelihood of achieving the climate goals under such conditions. Basically, China, the USA and the EU are still the three major contributors to climate change when high forcings are applied. For example, under the high-forcing scenarios, China may account for approximately 1.1 Wm\textsuperscript{-2}, albeit with greater uncertainty, the USA accounts for approximately 0.9 Wm\textsuperscript{-2}, and the EU accounts for approximately 0.6 Wm\textsuperscript{-2}. All other regions exhibit relatively lower but still significant radiative forcings under the same circumstances, except Japan, where the forcing levels do not greatly change even under the high-forcing scenarios. Sectorally, the energy sector may contribute the highest forcings given its high emissions, up to $3.8$ Wm\textsuperscript{-2}, followed by the industry (up to $1.4$ Wm\textsuperscript{-2}) and transport (up to $1.1$ Wm\textsuperscript{-2}), as well as land-use CO\textsubscript{2} (up to $0.8$ Wm\textsuperscript{-2}). Lower or no negative CO\textsubscript{2} emissions (\cref*{fig:ghg}), relatively fewer nuclear and renewable energy sources (for example, solar and wind), and more fossil-fueled energy use could bring about extremely high climate-related emissions in the energy sector \cite{ONeill2014, Kriegler2017,Fujimori2018, Gidden2019}; hence, high forcings are produced. All sectors reveal high radiative forcing values except for open burning, which remains relatively stable, under the assumed high-forcing scenarios (\cref*{fig:rfc_prob}b). {\linespread{1} \begin{figure}[ht] \includegraphics[width=\textwidth]{rfc_prob.pdf} \centering \caption{\small{\textbf{Regional and sectoral contributions under the different future projections.} a, The relationship between the exceedance probability of 2\textdegree C or 1.5\textdegree C and the regional forcing contributions. The color points are the regional relative contributions. The trends are shown by the colored lines obtained via linear regression. b, The relationship between the exceedance probability of 2\textdegree C or 1.5\textdegree C and the sectoral forcing contributions. The negative CO\textsubscript{2} level is summed into the energy sector to simplify the analysis. The color points are the sectoral forcings. The trends are shown by colored lines obtained through linear regression. The points in Fig. a\&b are sampled (from 2020 to 2100, every 10 years) at seven forcing levels, namely, 1.9 Wm\textsuperscript{-2}, 2.6 Wm\textsuperscript{-2}, 3.4 Wm\textsuperscript{-2}, 4.5 Wm\textsuperscript{-2}, 6.0 Wm\textsuperscript{-2}, 7.0 Wm\textsuperscript{-2} and 8.5 Wm\textsuperscript{-2}. The forcing levels are translated into exceedance probabilities within each sample year. The 1.5\textdegree C and 2\textdegree C results marked with the vertical lines represent the scenarios with forcing levels of 1.9 Wm\textsuperscript{-2} and 2.6 Wm\textsuperscript{-2}, respectively, in 2100.}} \label{fig:rfc_prob} \end{figure} } \section{Discussion} In this study, we applied a simple climate model, SCM4OPT v2.0, to determine the forcing contributions of regions, sectors and climate forcers based on available historical and future projection emissions and land-use datasets. This study provides an IAM-based assessment from a forcing perspective at the sectoral and regional levels. The radiative forcings, including the forcings resulting from the various sources of GHGs, aerosols and pollutants, and land-use albedo, are distinguished among the different regions and sectors. The outputs here can be used to inform policy-makers of the relative importance of the forcing levels resulting from different regional or sectoral sources. The results are interpreted with certain caveats and limitations. First, we analyze emission datasets that contain regional and sectoral information, while global-scale datasets are not included. Therefore, our analysis only reflects limited uncertainties that are derived from the available emission estimates. Thus, the analysis here is merely considered as an IAM-based evaluation of the potential future climate, especially under the 2\textdegree C and 1.5 \textdegree C scenarios. Second, the outcome of this study is contingent on the set of selected scenarios used, which are treated equally likely. However, the distribution of total range of future emissions does not necessarily present equal probabilities \cite{Ho2019} (\cref*{fig:gwp_sensi}). We consider a further update for this analysis when the probabilistic emission scenarios are available. The results showed that the 1.5 \textdegree C target requires most regions and sectors to maintain their forcings not higher than the current levels, while slightly higher future forcing levels than present levels are allowed for the 2 \textdegree C target. The results here can be used to assess the gap between the current and targeted climate levels for both regions and sectors in terms of the radiative forcing. Furthermore, we found that the negative CO\textsubscript{2} forcing is projected to contribute -$0.75\pm 0.44$ Wm\textsuperscript{-2} and -$0.42\pm 0.27$ Wm\textsuperscript{-2} under the 2 \textdegree C and 1.5 \textdegree C scenarios, respectively. Our analysis illustrates the importance of the negative CO\textsubscript{2} emissions in achieving the climate targets from the perspective of the radiative forcing. By using a new land-use forcing parameterization, we further found that less negative forcings, or more positive forcings, for the land-use albedo for the 2 \textdegree C and 1.5 \textdegree C scenarios than those from existing studies. A comprehensive consideration of the available forcing sources is important for the climate change assessment. \clearpage \newpage \section*{Acknowledgments} This work was supported by the Integrated Research Program for Advancing Climate Models (TOUGOU), Grant Number JPMXD0717935457, from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. The computing resources were provided by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). We thank M. Abe for providing the data used for model calibration and T. Gasser for sharing the OSCAR v2.2 source code. \section*{Author contributions} X.S., K. Tachiiri and K. Tanaka designed the study. X.S. processed the emissions and land cover source data. X.S. developed the model with the help of K. Tanaka and M.W. X.S. performed the calculations and generated the figures. All coauthors contributed to analyzing the results and writing the paper. \section*{Competing financial interests} The authors declare that they have no competing financial interests. \section*{Data availability} The data used to support the analysis are available from the corresponding author upon reasonable request. \clearpage \newpage \bibliographystyle{iopart-num}
\section{Introduction} Development of analytic theory of freak (or rogue) waves is one of the most interesting problems of hydrodynamics. In spite of recent progress in this area \cite{PK} many important questions are not answered yet. Apparently, freak waves are the structures well localized in space, see Fig.\ref{SAT}. But behavior of freak waves in time in co-moving coordinate frames is not still explored. From the experimental viewpoint this is a hard question. It cannot be answered by a resting observer, for whom the freak wave is just a single event localized in time, see Fig.\ref{Draupner}. From the other hand, satellites move too fast to record the full "live story" of a freak wave. The standard model for description of freak waves in deep water is Nonlinear Schr\"{o}dinger Equation ($NLSE$). This equation has a plethora of exact solutions which often are associated with the freak waves on deep water. Some of these solutions are presented in \cite{AA}, more recent developments can be found in articles \cite{TW}-\cite{A2}. These solutions, however, presume existence of background monochromatic wave (condensate) and are connected to the subject of our paper only indirectly. For this reason, we do not pursue a purpose to present here the detailed description of all solitonic solutions on the condensate background, as well as completed and controversial history of their discovery. In this article we study the solitons on almost zero background. \begin{figure}[ht] \includegraphics[scale=1.]{./FreakWaveSatelliteImage.eps} \caption{Giant wave detected during a global census using three weeks of raw ERS-2 SAR imagette data, carried out by the German Aerospace Centre (DLR). This SAR data set was inverted to individual wave heights and investigated for individual wave height and steepness. The wave shown here has a height of 29.8 m. Adopted from $ http://www.esa.int/esaCP/SEMOKQL26WD \_ index \_ 1.html\sharp subhead4$ }\label{SAT} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.35]{./Draupner.eps} \caption{Freak wave event detected from the Draupner oil platform on Jan.1, 1995. Adopted from http://www.math.uio.no/~karstent/seminarV05/Haver2004.pdf}\label{Draupner} \end{figure} However, we should mention the remarkable $NLSE$ solution found by Peregrine \cite{P}. This solution in the co-moving coordinate frame is an instanton, describing the single event -- appearance and disappearance of the freak waves group. Today we can speak about two alternative versions of the freak wave theory. The "instantonic" version assumes that the freak wave is a single event, localized in time. The "solitonic" version proposes that the freak wave are described by persistent solitons, probably oscillating in time. So far experimental data are too scarce to make a conclusion in favor of one of these theories. One should remember that $NLSE$ is derived in the assumption that the wave train size, containing the freak waves, is much larger than characteristic wave length. Most of collected experimental data, however, show that in the real ocean this condition is not satisfied (see Fig.\ref{SAT}, \ref{Draupner}) and that the $NLSE$ is hardly applicable. A level of nonlinearity of quasi-monochromatic wave group is measured by the characteristic steepness $ \mu \simeq ka $ ($k$ is the wavenumber and $a$ is the amplitude). Our numerical experiments \cite{PDZ} show that $NLSE$ is applicable if $\mu \lessapprox 0.07$. According to our calculations, $NLSE$ is not applicable if $\mu \simeq 0.1$. Recent numerical experiments \cite{S} show that this limit might be extended to $\mu\simeq 0.15$. However, for freak waves in the real sea $\mu \simeq 0.3 \div 0.5$ (see Appendix II). The $NLSE$ is absolutely not applicable for description of that steep freak waves. It is also known from observations \cite{PK} that a typical configuration of a freak wave group consists of three sequent waves -- "Three Sisters". This group is too short to be described by $NLSE$. What are the alternatives to $NLSE$ model? The most consistent approach is the use of the exact Euler equations for description of the potential flow of the ideal fluid with free surface. Some advances in this direction are already achieved \cite{PDZ}-\cite{DZ}. However, the study of more simple and less accurate models also could be very useful. In this article we present our result on numerical solution of well-known $MMT$ \cite{MMT} equation with the special choice of parameter $\alpha=1/2, \beta=3, \lambda=+1$, making this model well adjusted for description of surface gravity waves. Our results mostly support the "solitonic" theory of freak waves. We started with initial data, corresponding to $NLSE$ solitons and discovered formation of persistent quasisolitons existing for more than two thousands of wave periods. These quasisolitons slowly radiate energy in backward direction. As was shown recently \cite{RNZ} in the "model case" ($\alpha=1/2, \beta=0, \lambda=1$), this effect plays the key role in formation of the wave turbulent spectrum, but in our case its influence is negligibly small. However, we discovered completely new effect. While quasisolitons of small steepness ($\mu \lessapprox 0.1$) behave similar to $NLSE$ solitons on zero background, the quasisolitons of higher steepness demonstrate almost periodic oscillations of amplitude and spectral shape, periodically forming power-like tails in spectra. This effect can be explained by modulational instability inside the quasisoliton. Development of this instability leads to formation of "weak" one-dimensional collapses, which deform the spectrum, but absorb negligibly small amount of energy. Thereafter we call oscillating quasisolitons "quasibreathers". \section{Basic model} The Majda-McLaughlin-Tabak ($MMT$) equation (see \cite{MMT}, \cite{ZDP} and \cite{ZGPD}) \begin{eqnarray} \label{MMT} i \frac{\partial \psi}{\partial t} &=& \left| \frac{\partial }{\partial x} \right|^\alpha \psi + \lambda \left| \frac{\partial}{\partial x} \right|^{\beta/4} \left( \left|\left| \frac{\partial }{\partial x} \right|^{\beta/4} \psi \right|^2 \left| \frac{\partial }{\partial x} \right|^{\beta/4} \psi \right), \\ \lambda &=& \pm 1 ,\,\,\,-\infty<x<\infty, \,\,\,\, 0<t<\infty \nonumber \end{eqnarray} where $\psi(x,t)$ is the complex function and the fractional derivative is defined by \begin{eqnarray} \label{FT} \left| \frac{\partial }{\partial x} \right|^\alpha \psi = \int |k|^\alpha \psi_k e^{ikx} dk \end{eqnarray} has been attracting lately fare attention of nonlinear wave scientists. The reason is $MMT$ equation incorporates several already known important cases, and also can be used as a ``test-bed'' for verification of the concepts like weak-turbulent waves spectra, localized structures and their co-existence \cite{ZDP}, \cite{ZGPD}. For $\alpha=0$ and $\beta=0, 2$ Eq.\ (\ref{MMT}) is completely integrable. If $\alpha=2$ and $\beta=0$, it is the classical $NLSE$ for focusing ($\lambda=-1$) and defocusing ($\lambda=+1$) cases: \begin{eqnarray} \label{NLS} i \frac{\partial \psi}{\partial t} = -\frac{\partial^2 \psi}{\partial x^2} + \lambda |\psi|^2 \psi \end{eqnarray} If $\alpha=2$ and $\beta=2$, transformation $\phi=|\frac{\partial}{\partial x}|^{\frac{1}{2}}\psi$ turns Eq.(\ref{MMT}) into the derivative $NLSE$ \cite{Kundu}: \begin{eqnarray} \label{NLES} i \frac{\partial \phi}{\partial t} = -\frac{\partial^2 \phi}{\partial x^2} + \lambda \frac{\partial}{\partial x} |\phi|^2 \phi \nonumber \end{eqnarray} Through Fourier transform \begin{equation} \psi_k = \frac{1}{2\pi}\int\psi(x) e^{-ikx}dx \nonumber \end{equation} Eq.(\ref{MMT}) can be rewritten in the form \begin{eqnarray} \label{MMT_FFT} i\frac{\partial \psi_k}{\partial t} = |k|^\alpha \psi_k+\int T_{k k_1 k_2 k_3} \psi_{k_1}^\star \psi_{k_2}\psi_{k_3} \delta_{k+k_1+k_2+k_3} dk_1 dk_2 dk_3 \end{eqnarray} where \begin{eqnarray} \label{MMT_ME} T_{k k_1 k_2 k_3} = \lambda |k|^{\beta/4} |k_1|^{\beta/4} |k_2|^{\beta/4} |k_3|^{\beta/4} \end{eqnarray} Suppose that in Eq.\ (\ref{MMT_FFT}) $T_{k k_1 k_2 k_3}$ is a generic function satisfying the symmetry conditions \begin{eqnarray} \label{cond} T_{k k_1, k_2 k_3} = T_{k_1 k, k_2 k_3} = T_{k k_1, k_3 k_2} = T_{k_2 k_3, k k_1} \end{eqnarray} For matrix coefficient (\ref{MMT_ME}) conditions (\ref{cond}) are satisfied and Eq.(\ref{MMT}) is a Hamiltonian system \begin{eqnarray} \label{H} i\frac{\partial \psi_k}{\partial t} &=& \frac{\delta H}{\delta \psi_k^*}, \nonumber \\ H&=&\int |k|^\alpha |\psi_k|^2 dk + \frac{1}{2} \int T_{k k_1 k_2 k_3} \psi_k^\star \psi_{k_1}\psi_{k_2} \psi_{k_3} \delta_{k+k_1-k_2-k_3} dk dk_1 dk_2 dk_3 \nonumber \end{eqnarray} Obviously, the Hamiltonian $H$ is a constant of motion. Other motion constants are wave action \begin{eqnarray} \label{N} N = \int \left| \psi_k \right|^2 dk \nonumber \end{eqnarray} and wave momentum \begin{eqnarray} \label{M} P = \frac{i}{2} \int\left( \psi \frac{\partial \psi^\star}{\partial x} - \frac{\partial \psi}{\partial x} \psi^\star \right) dx \nonumber \end{eqnarray} Another model of type (\ref{MMT_FFT}), describing surface waves on deep water, is so-called ``Zakharov equation'' \cite{ZE}. This equation is not heuristic like $MMT$, it was systematically derived from Euler equations and therefore is supposed to be more accurate in corresponding context. In this equation $T_{\epsilon k,\epsilon k_1,\epsilon k_2,\epsilon k_3} = \epsilon^3 T_{k k_1 k_2 k_3}$ is cumbersome homogeneous function of the third order. One should note that if Eq.\ (\ref{MMT}) is applied for description of gravity waves, the surface shape can be reconstructed by the formula (see {\bf Appendix I}) \begin{eqnarray} \label{SurfaceElevations} \eta(x,t) = \frac{1}{\sqrt{2}} \int e^{ikx} |k|^{1/4} (\psi_k+\psi_k^*)dk \end{eqnarray} \section{Solitons and quasisolitons} Let us look for a solution of Eq.(\ref{MMT_FFT}) in a form \begin{eqnarray} \label{GenSol} \psi_k(t) = e^{i(\Omega-kV)t} \phi_k \end{eqnarray} where $\Omega$ and $V$ are the constants. The function $\phi_k$ should satisfy the nonlinear integral equation \begin{eqnarray} \label{Sol} \phi_k= \lambda \frac{\int T_{1234} \phi_1^\star \phi_2 \phi_3 \delta(k+k_1-k_2-k_3)dk_1dk_2dk_3}{-\Omega+kV-|k|^\alpha} \end{eqnarray} This equation has solutions if $\Omega$ and $V$ can be chosen such that the denominator in Eq. (\ref{Sol}) cannot be zero for real $k$. This might happen only if $\alpha > 1$. Let's suppose now $\alpha<1$. One can see that in this case the denominator in Eq.\ (\ref{Sol}) always has zero, which is clear from Fig.\ref{DisRel}. Let $\Omega<0$, $V>0$. \begin{figure}[ht] \includegraphics[scale=0.15]{./Pic.eps} \caption{Example of the situation when defocusing quasisolitons are possible. The dispersion relation is $\omega=|k|^\alpha$ for $\alpha<1$, $\Omega$ is negative and $V$ is positive. The straight line always crosses the dispersion relation $\omega=\omega(k)$ and, therefore, the denominator $\Omega-kV+\omega(k)$ in Eq.\ (\ref{Sol}) has zero. Quasisoliton takes place only in the defocusing case $\lambda=+1$.} \label{DisRel} \end{figure} \noindent Thus, any solution of type (\ref{GenSol}) has singularity at negative $k$. It means that strict soliton solution of type (\ref{GenSol}) does not exist. However, one can construct approximate solutions, such that $\phi_k$ in (\ref{GenSol}) is slow function of time. These approximate moving solutions, radiating energy in the backward direction are called quasisolitons after paper \cite{ZK}. As it was recently shown in \cite{ZDP}, \cite{ZGPD}, quasisolitons play the central role in wave turbulence in the frame of $MMT$ model if $\alpha=\frac{1}{2}$, $\beta=0$ and $\lambda=+1$. It was shown that in this case the backward radiation plays the central role in dynamics of quasisolitons. But we study only the case $\alpha=\frac{1}{2}$, $\beta=3$ and $\lambda=+1$. In this case , which intentionally models the gravity waves on deep water, the backward radiation is not that strong, due to essential nonlinearity suppression in the area of small wave numbers. Nevertheless, we definitely detect this phenomenon in our numerical experiments. Consider the structure of the denominator in Eq.(\ref{Sol}). One can expect existence of the quasisoliton in the case when the straight line $\omega = k V-\Omega$ is tangential to the curve $\omega = k^\alpha$. The conditions of equal derivatives and existence of the common point of these two curves at $k=k_m$ are: \begin{eqnarray} \label{Tangent} V &=& \alpha k_m^{\alpha-1} \\ \label{CommonPoint} \Omega &=& (\alpha-1) k_m^\alpha \end{eqnarray} We are now returning back to non-stationary Eq.(\ref{MMT_FFT}) and make the change of variables $k = k_m +\kappa$, $\kappa<<k$. Dispersion relation expansion into Taylor series \begin{eqnarray} (k_m+\kappa)^\alpha= k_m^\alpha+\alpha k_m^{\alpha-1}\kappa+\frac{1}{2}\alpha(\alpha-1)k_m^{\alpha-2}\kappa^2 \nonumber \end{eqnarray} and change of variables \begin{eqnarray} \psi_k (t) = e^{-i (k_m^\alpha + \alpha k_m^{\alpha-1}\kappa)t} \phi_{\kappa}(t) \end{eqnarray} gives \begin{eqnarray} &&i\frac{\partial \phi_\kappa}{\partial t}= \frac{1}{2}\alpha(\alpha-1) k_m^{\alpha-2} \kappa^2 \phi_\kappa +\\ &+& k_m^\beta \int \phi_{\kappa_1}^\star \phi_{\kappa_2}\phi_{\kappa_3} \delta(\kappa+\kappa_1-\kappa_2-\kappa_3) d\kappa_1 d\kappa_2 d\kappa_3 = 0 \end{eqnarray} Another change of variables \begin{eqnarray} \phi_\kappa = e^{i \Delta t} \chi_\kappa, \,\,\,\,\,\,\, \Delta = \frac{1}{2}\alpha(\alpha-1) k_m^{\alpha-2}q^2 \nonumber \end{eqnarray} gives \begin{eqnarray} &&i \frac{\partial \chi_\kappa}{\partial t} = \frac{1}{2}\alpha(\alpha-1) k_m^{\alpha-2}(q^2 + \kappa^2) \chi_\kappa +\\ &+& i k_m^\beta \int \chi_{\kappa_1}^\star \chi_{\kappa_2}\chi_{\kappa_3} \delta(\kappa+\kappa_1-\kappa_2-\kappa_3) d\kappa_1 d\kappa_2 d\kappa_3 \nonumber \end{eqnarray} Applying inverse Fourier transform $\chi(x,t) = \int \chi_\kappa(t) e^{i \kappa x} d \kappa$ to the last equation, we get $NLSE$ in real space: \begin{eqnarray} \label{NLS-chi-Real} i \frac{\partial \chi}{\partial t} + \frac{1}{2}\alpha (1-\alpha) k_m^{\alpha-2}(q^2 \chi - \frac{\partial^2 \chi}{\partial x^2}) - k_m^\beta |\chi|^2 \chi = 0 \end{eqnarray} Eq.(\ref{NLS-chi-Real}) has partial stationary solution \begin{eqnarray} \label{NLS-stationary-soliton} \chi(x) = \sqrt{\frac{\alpha (\alpha-1)}{k_m^{\beta-\alpha+2}}} \frac{q}{\cosh{ q x }} \end{eqnarray} which produces approximate quasisoliton solution of the Eq.(\ref{MMT}) with $\lambda = 1$: \begin{eqnarray} \label{MMT-quasi-soliton} \psi(x,t) &=& \chi(x-vt) e^{i (\Omega + \Delta) t} e^{i k_m (x-vt)} \\ \Omega &=& - (1-\alpha) k_m^\alpha \nonumber \\ \Delta &=& - \frac{1}{2} \alpha (1-\alpha) k_m^{\alpha-2} q^2 \nonumber \\ V &=& \alpha k_m^{\alpha-1} \nonumber \end{eqnarray} The characteristic wave-number $k_0=-c k_m$ of backward radiation associated with the quasisoliton (see Fig.\ref{DisRel}) can be found from the equation \begin{eqnarray} \label{Zeroes} k_0 V - \Omega = |k_0|^\alpha \end{eqnarray} together with Eq.(\ref{Tangent})-(\ref{CommonPoint}). For $\alpha=1/2$ \begin{eqnarray} \label{C} c = 3-\sqrt{8} \simeq 0.172 \end{eqnarray} Therefore, due to the smallness of the ratio $\frac{T(k_0,k_0,k_m,k_m)}{T(k_m,k_m,k_m,k_m)} \simeq c^3=5\cdot 10^{-3}$, the backward radiation process in framework of the $MMT$ model for $\beta=3$ is suppressed with respect to the case $\beta=0$. To obtain the surface shape we replace in Eq.(\ref{SurfaceElevations}) $k^{1/4}$ with $k_m^{1/4}$ and get \begin{eqnarray} \eta = \frac{q}{k_m} \frac{1}{cosh{q (x-vt)}} \cos(\omega t -k_m x) \nonumber \end{eqnarray} Thus $q$ is the standard steepness. \section{Self-similar collapses} Eq.(\ref{MMT}) has self-similar solution: \begin{eqnarray} \label{PsiSSS} \psi(x,t) = (t_0-t)^{5/2} F\left(\frac{x}{(t_0-t)^2}\right) \end{eqnarray} For the shape of the surface it gives \begin{eqnarray} \label{EtaSSS} \eta(x,t) = (t_0-t)^2 F\left(\frac{x}{(t_0-t)^2}\right) \end{eqnarray} At $t \rightarrow t_0$ time must vanish from Eq.\ (\ref{MMT}), which means that \begin{eqnarray} \eta \rightarrow \alpha^+ x\,\,\,for \, x>0 \nonumber \\ \eta \rightarrow \alpha^- x\,\,\,for \, x<0 \nonumber \end{eqnarray} where $\alpha^+>0$ and $\alpha^-<0$ are the constants. In other words, solution Eq.\ (\ref{PsiSSS}) describes formation of a wedge, in general (if $\alpha^+ \neq \alpha^-$), tilted with respect to vertical line. In $k$-space we get \begin{eqnarray} \label{PsiSSS_Kspace} \psi(k,t) = (t_0-t)^{9/2} F\left(k {(t_0-t)^2}\right) \end{eqnarray} According to (\ref{PsiSSS_Kspace}) $F(\xi) \rightarrow \xi^{-9/4}$ at $\xi \rightarrow 0$. Hence, asymptotically \begin{eqnarray} \label{SSS-1} \psi(k,t) \simeq k^{-9/4} \\ \label{SSS-2} |\psi(k,t)|^2 \simeq k^{-9/2} \end{eqnarray} Formation of collapses like Eq.\ (\ref{PsiSSS})-(\ref{EtaSSS}) means growth of power-like tails in $k$-space. The spectrum Eq.\ (\ref{SSS-1})-(\ref{SSS-2}) appears only at the moment of collapse $t \rightarrow 0$. The singularity has the form of appearing and vanishing wedge, absorbing some amount of energy. However, time-averaged spectrum can have a slope different from $|\psi_k|^2 \simeq k^{-9/2}$. If the collapse events are rare, the slope must be higher that $k^{-9/2}$. Let us suppose that the collapse is ``weak'' and that only a very small part if energy is dissipated in an individual event. It means that the collapse is ``almost'' invertible process, symmetric in time with respect to the sign change to $-t$. In other words, the collapsing solution is \begin{eqnarray} \psi(k,t) = |t_0-t|^{9/2}F(k(t_0-t)^2) \nonumber \end{eqnarray} Now we can perform the Fourier transform in time and get \begin{eqnarray} \psi(k,\omega) = \int_{t_0}^{\infty} |t_0-t|^{9/2} F(k(t_0-t)^2) e^{-i\omega t} dt = e^{i\omega t_0} \frac{1}{k^{11/4}} f\left(\frac{\omega}{k^{1/2}} \right) \nonumber \end{eqnarray} The spatial spectrum is given by the integral \begin{eqnarray} \label{Spectrum} I_k = |\psi(k)|^2 \simeq \int|\psi(k,\omega)|^2 d\omega \simeq k^{-5} \end{eqnarray} For the surface elevations spectrum we obtain Phillips spectrum \begin{eqnarray} |\eta_k|^2\simeq \frac{1}{k^4} \nonumber \end{eqnarray} In our numerical experiments we observed the spectra both more steep for $k \rightarrow +\infty$, and more shallow for $k \rightarrow -\infty$ than Eq.\ (\ref{Spectrum}). So far, we have no proper explanation of this fact. \section{Turbulent quasibreathers in $MMT$ model} The Eqs.\ (\ref{MMT_FFT})-(\ref{MMT_ME}) have been solved numerically in periodic boundary conditions real space domain $[0,2\pi]$ for deep gravity surface waves case $\alpha=\frac{1}{2}$, $\beta=3$ and $\lambda = 1$. Numerical integration has been performed through iterations of the implicit second order scheme in time and calculation of nonlinear term by Fast Fourier Transform technique. This numerical scheme preserves constants of motion of the approximated equation. To avoid high-frequency instabilities, the low-pass filtering has been applied on every time-step through multiplication of the Fourier transform of the wave field by hyper-gaussian function, leaving about $90\%$ of Fourier modes intact, while effectively suppressing the rest of potentially unstable high-frequency modes. Results were verified against the wave modes number change from $8192$ to $16384$ and $32768$ for the same Cauchy problem. The calculations were continued typically up to thousands of the initial wave periods without loss of the accuracy. The initial condition was taken in the form of $NLSE$ soliton \begin{eqnarray} \label{InCondSoliton} \psi(x,0) = \frac{q}{2 k_m^{9/4}} \frac{e^{ik_m x}}{\cosh qx} \end{eqnarray} for $k_m=50$, see Fig. \ref{InCond}. \begin{figure}[ht] \includegraphics[scale=0.3]{./p000000.ps} \caption{Real and Fourier space distributions of wave field. Top graph: $|\psi(x,t)|^2$ as a function of $x$ for $t=0$. Bottom graph: Fourier spectrum $\log_{10} {|\psi(k,t)|^2}$ as a function of signed logarithm of waves number $\operatorname{sign}(k)\log_{10}{|k|}$ for time $t=0$.} \label{InCond} \end{figure} It is known \cite{ZGPD} that simulation results essentially depend on the value of the nonlinearity parameter $q/k_m$. For $q/k_m \lesssim 0.1$ the initial condition moves with the constant speed $V$ without any noticeable shape change over characteristic length of at least dozens of simulation domain size $2\pi$. For $q/k_m > 0.1$, the initial shape Eq.(\ref{InCondSoliton}) starts to change in time and for $q/k_m=0.3$ forms moving wedge-like growing structure with narrowing width. This behavior was interpreted in \cite{ZGPD} as possible collapse of the initial condition over finite time, but further numerical simulation was not continued because of high-wavenumbers instability development in Fourier space, causing blow-up of the numerical scheme. In current research utilizing more sophisticated numerical approach, it was possible to follow the evolution of the same collapsing initial condition for practically unlimited time. We observed that, in fact, this collapsing initial condition evolves into localized non-stationary solution, periodically recurring to its initial shape. By analogy with cubical $NLSE$, it was interpreted as a breather-like structure. The observed phenomenon is quite interesting: at $q/k_m \sim 0.3$ the initial condition Eq.\ (\ref{InCondSoliton}) evolves into localized object, but with ``inner life''. The shape of this object and the form of its spectra demonstrate irregular, stochastic behavior, which can be interpreted as some ``intrinsic turbulence''. Time evolution of real space maximum of the solution is presented on Fig.\ \ref{OSC_MOM}. One should note that oscillations are quasi-periodic and their amplitude slowly diminishes in time, at least partially due to destruction of the breather by surrounding noise -- that's the reason why we called this localized state by quasibreather. Almost identical picture of oscillations is seen from the second curve on Fig.\ref{OSC_MOM}, which presents the behavior of the second moment as a function of time. Both curves on Fig.\ref{OSC_MOM} clearly indicate the presence of nonlinear oscillating structure in the wave system. Fig.\ref{FreqOsc} shows dependence of the frequency of these oscillations on the their mean level. The frequency has a tendency to grow with the growth of the oscillations level. This fact is in a correspondence with frequency dependence on nonlinear frequency shift. \begin{figure}[ht] \includegraphics[scale=0.5]{./OSCILLATIONS.ps} \caption{Dependence of the solution maximum $max(|\psi(x,t))^2$,taken over integration domain $[0,2 \pi]$ (solid line, left axis) and the second moment $\int (k-k_0)^2 |\psi_k|^2 dk$ (dotted line, right axis), on time $t$. The average wave-number is defined as $k_0 = \frac{\int k |\psi_k|^2 dk}{\int |\psi_k|^2 dk}$ } \label{OSC_MOM} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.5]{./OscTab.ps} \caption{Dependence of quasibreather maximum oscillations frequency on the mean level of these oscillations $<|\psi(x,t)|^2>$.} \label{FreqOsc} \end{figure} Fig.\ref{Max} presents real and Fourier space of the system at $t=38.88$, corresponding to the first maximum from Fig.\ref{OSC_MOM}. The real space picture of $|\psi(x)|^2$ shows that initial condition moved to the right with respect to initial condition, growing in the amplitude and narrowing in width. Also, small portion of the initial condition has been separated in the form of the hump of much smaller amplitude. Fourier space contains two maxima: the right major peak approximately at $k_m=50$, corresponding to the quasibreather, and the left smaller peak corresponding to the solution of the Eq.(\ref{C}): \begin{eqnarray*} k_0 = -(3-\sqrt{8})\cdot k_m \simeq -8.6 \end{eqnarray*} As shows Fig.\ref{Max}, the spectrum remains localized near initial wave number $k \simeq k_m$. This fact can be explained by conservation of both wave action and momentum. Thus, the turbulence inside the solution can be interpreted as an ``envelope turbulence''. It is interesting that the area of this turbulence is localized both in real and Fourier spaces. Comparison with initial data shows that the spectrum gains power-like tails $I_k\simeq k^{-3.3}$ for negative $k$ and $I_k\simeq k^{-6.8}$ for positive $k$. Recall that the simple collapse theory predicts $I_k\simeq k^{-5}$. Anyway, appearance of power-like tails indicates violation of smoothness of $\psi(x,t)$. The observed singularity is of weak-collapse type. It is confirmed by the fact that the amount of Hamiltonian absorbed during 11 periods of oscillations of quasibreather (see Fig.\ \ref{OSC_MOM}) is approximately equal to $0.03 \% $ of its initial value. One should note that observed picture is universal: another snapshots of the system, taken at the times corresponding to subsequent maxima from Fig.\ \ref{OSC_MOM}, reveal the pictures similar to observed on Fig.\ \ref{Max} (see, for example, Fig.\ \ref{6Max}). \begin{figure}[ht] \includegraphics[scale=0.3]{./p028000.ps} \caption{Same as Fig.\ref{InCond}, but for $t=38.88$, corresponding to the 1st maximum from Fig.\ref{OSC_MOM}. Left slope of the spectrum is approximated by function $\sim k^{-3.3}$ (dotted line), right slope is approximated by function $\sim k^{-6.8}$ (dashed line).} \label{Max} \end{figure} \noindent \begin{figure}[ht] \includegraphics[scale=0.3]{./p187200.ps} \caption{Same as Fig. \ref{InCond}, but for time $t=259.91$ corresponding to the 3rd trough from Fig.\ref{OSC_MOM}} \label{3Min} \end{figure} \noindent \begin{figure}[ht] \includegraphics[scale=0.3]{./p345000.ps} \caption{Same as Fig. \ref{InCond}, but for time $t=479.0$ corresponding to 6th maximum from Fig.\ref{OSC_MOM}. Left slope of the spectrum is approximated by function $\sim k^{-3.3}$ (dotted line), right slope is approximated by function $\sim k^{-6.8}$ (dashed line).} \label{6Max} \end{figure} Fig.\ref{3Min} presents real and Fourier space of the system at $t=259.91$, corresponding to the third trough from Fig.\ref{OSC_MOM}. The real-space picture of $|\psi(x)|^2$ shows that amplitude of quasibreather has been diminished with respect to the state corresponding to Fig.\ref{Max}. Fourier space exhibits both similarities and differences being compared to bottom of the Fig.\ref{Max}: there are the same right main peak approximately at $k_m=50$ and the left smaller peak approximately at $k_0 = -8.6$, but high-wavenumber tails decay much faster than power law. It means that $\psi(x,t)$ is smooth at the moment of minimum. For the illustration of the quasibreather temporal behavior, we present Fig.\ \ref{Breathes}, showing two states of the system taken at the moments when quasibreather reaches it's maximum and minimum amplitude in semi-log scale. It's quite obvious that spectral tails decay exponentially at the moment corresponding to the amplitude minimum of quasibreather, and decay as a power of wave number at the moment of the quasibreather amplitude maximum. This solution, therefore, periodically "breathes" between states of singularity formation and its regularization. \begin{figure}[ht] \includegraphics[scale=0.4]{./Real.ps} \caption{Comparison of two spectra $\log_{10} |\psi_k(t)|^2$ for time $t=259.91$ (solid line, corresponds to the third trough on the Fig.\ref{OSC_MOM}) and time $t=479.00$ (dashed line, corresponds to the six' peak on the Fig.\ref{OSC_MOM}), plotted as a function of wave-number $k$. This picture demonstrates that the spectral tails "breath" between exponential and power-like states.} \label{Breathes} \end{figure} Fig.\ \ref{ThreeSisters} presents surface elevation Eq.\ (\ref{SurfaceElevations}) for the same time as Fig.\ \ref{InCond}. This picture looks qualitatively similar to experimentally observed "Three Sisters" killer wave on the ocean surface \cite{PK} and the resent results on freakon simulation on the deep water surface \cite{ZD}. Fig.\ \ref{3_Sisters_Deri} shows slope elevations, corresponding to Fig.\ \ref{ThreeSisters}. These slope elevations values have the meaning of the original Euler equations for deep water surface gravity waves. \begin{figure}[ht] \includegraphics[scale=0.4]{./ThreeSisters.ps} \caption{Surface elevation $\eta(x,t)$ as a function of real space coordinate $x$ for time $t=479.00$, corresponding to Fig.\ \ref{6Max}}\label{ThreeSisters} \end{figure} \noindent \begin{figure}[ht] \includegraphics[scale=0.4]{./3_Sisters_Deri.ps} \caption{Slope of the surface elevation $\left.\frac{\partial \eta(x,t)}{\partial x}\right|_{t=479.0}$ as a function of real space coordinate $x$, corresponding to Fig.\ \ref{ThreeSisters}}\label{3_Sisters_Deri} \end{figure} \noindent One remarkable feature of the observed quasibreather is its co-existence with surrounding noise environment, associated with the radiation at the secondary spectral peak at $k_0=-8.6$. In fact, the surrounding weakly-nonlinear noise could consists not only of radiation at wavenumber $k_0=-8.6$, but also of the products of the initial condition decay into quasibreather and other waves. However, the wave action density in this noise is so small with respect to energy density in quasibreather, that this noise certainly cannot be interpreted as a kind of "condensate". To analyze this situation, we performed the following experiment. In the middle of the simulation the real-space, containing quasibreather and surrounding noise, was "cleaned-up" through zeroing the function $\psi(x)$ everywhere except the carrier domain of the quasibreather. As a result, further evolution of the system starting from such "cleaned" initial conditions didn't show any qualitative difference from previous behavior -- we observed immediate appearance of the surrounding noise at $k_0=-8.6$ of the same characteristic amplitude, as we have seen before the "cleaning" of the real space. This observation lead us to the conjecture that quasisolitons and quasibreathers exist only in quasi-equilibrium with weakly nonlinear wave noise environment. Another important observation, which distinguishes quasibreathers from oscillations of perturbed $NLSE$ solitons, is periodical singularity formation at every time quasibreather reaches its maximum. This property is illustrated by both Fig. \ref{Max} (corresponds to the first maximum from Fig.\ \ref{OSC_MOM}) and Fig. \ref{6Max} (corresponds to the maximum number six from Fig. \ref{OSC_MOM}). In a nutshell, the gravity surface waves $MMT$ model shows periodic focusing of the initial condition Eq.(\ref{InCondSoliton}) with weak-collapse singularity formation exhibiting itself in power spectral tails and weakly nonlinear radiation at secondary spectral maximum at $k_0=-8.6$, which differs observed quasibreather from previously known breather-like structures. The similarity of observed quasibreather in terms of water surface elevation with experimental "Three Sisters" wave packet and numerically observed freakon shows that even simplified model of gravity surface waves as $MMT$ catches significant properties of the original exact equations. \section{Conclusion} On the base of numerical experiments we see that quasisolitons in frame of defocusing $MMT$ model with parameters $\alpha=1/2$, $\beta=3$ and $\lambda=1$ are robust long-living objects, existing for hundreds of leading wave periods. Quasisolitons of large amplitude turn to quasibreathers. Their amplitude and spectral shape oscillate in time. These oscillations are accompanied by formation of weak collapses which can be compared with "white capping" of real ocean waves. We conclude that the "solitonic" scenario of freak waves is based on the equal foot with alternative "instantonic" scenario. We need to perform more numerical experiments in the frame of exact Euler equation to establish what scenario is closer to reality. Let us mention that oscillatory effects in solitons propagating on zero background were observed in paper \cite{A3}. However, in this paper the authors studied not single $NLSE$, but the system of coupled $NLSE$. The dynamics of this system is much more complicated. \section{Acknowledgments} This work was sponsored by ONR grant N00014-10-1-0991, NSF grant \# 1130450, Russian Government contract $11.9.34.31.0035$, RFBR grant 12-01-00943, the Program of the RAS Presidium ``Fundamental Problems of Nonlinear Dynamics in Mathematical and Physical Sciences'' and the "Leading scientific schools of Russia" grant NSh 6170.2012.2. Authors gratefully acknowledge continuous support of these foundations. \section{Appendix I} Now we address the following question -- what value of $\lambda$ has to be chosen to provide the best possible modeling of real surface gravity waves on deep water? To answer this question, we notice that weakly nonlinear gravity waves on deep water surface with gravity acceleration $g=1$ are described by so-called "Zakharov equation", which is exactly Eq.(\ref{MMT_FFT}) at $\alpha=1/2$. The "real" coupling coefficient $T_{k k_1 k_2 k_3}$ is a complicated homogeneous function of the third order: \begin{eqnarray} T_{\epsilon k \epsilon k_1 \epsilon k_2 \epsilon k_3}^R=\epsilon^3 T_{k k_1 k_2 k_3}^R \end{eqnarray} Explicit expression for "real" $T_{k k_1 k_2 k_3}$ was found, for instance, in the paper \cite{PRZ}. Functions $T_{k k_1 k_2 k_3}^R$ from \cite{PRZ} and $T_{k k_1 k_2 k_3}$, given by Eq.(\ref{MMT_ME}), are essentially different. However, we can make them coincide in one point $k=k_1=k_2=k_3$ by the proper choice of $\lambda$. According to \cite{PRZ} \begin{eqnarray} \label{MC} T_{k k_1 k_2 k_3}^R= \frac{1}{4 \pi^2} k^3 \end{eqnarray} But in the cited paper we used the "symmetric form" of the Fourier transform. If we define the Fourier transform according to Eq.(\ref{FT}), we must replace Eq.(\ref{MC}) to \begin{eqnarray} T_{k k k k}^R = k^3 \end{eqnarray} Hence, to reach the best approximation to reality, we have to put \begin{eqnarray} T_{k k k k} = k^3 \end{eqnarray} It means that we must choose $\lambda=1$. Then the shape of the surface $\eta(x,t)$ defined by Eq.(\ref{SurfaceElevations}) is a model (rather approximate, of course) of a real water surface. From Fig. \ref{ThreeSisters} one can conclude that the steepness of our breather is fairly high and hardly can be described by $NLSE$. \section{Appendix II} The vast majority of surface waves physical characteristics measurements is coming from stationary installations like oil platforms, presenting the water surface elevations as a time series. The water surface elevation itself is not a measure of the system nonlinearity degree, since underlying equations are invariant with respect to stretching transformations, therefore surface waves of height varying by the order of magnitude can be of the same degree of nonlinearity. The real physical characteristic of nonlinearity is the wave slope $\mu$, which needs to be recovered from the surface elevations time series. Here we preset such simple estimate. By definition, the slope (same as steepness) is $\mu = ka$, where $k$ and $a$ are the characteristic wave number and amplitude correspondingly. The connection between wave period $T$ and wave number is \begin{eqnarray} k=\frac{4 \pi^2}{g T^2} \end{eqnarray} For the famous "Draupner Wave" (also known as "New Year Wave", see Fig.\ref{Draupner}, \cite{DW}), $T=12\,sec$, $a=13.7\, m$ and $g=9.81 \, m/sec$, which gives $\mu \simeq 0.38$ in accordance with our experiments.
\section{Introduction} \label{sec:intro} Let $\Gamma$ be a finite set of curves in $\mathbb{R}^2$. The \emph{arrangement} $\mathcal{A}(\Gamma)$ of $\Gamma$ is the planar subdivision induced by $\Gamma$. Its vertices are the intersection points and the endpoints of the curves of $\Gamma$, its edges are the maximal (relatively open) connected subsets of curves in $\Gamma$ not containing a vertex, and its faces are maximal (open) connected subsets of $\mathbb{R}^2 \setminus \bigcup_{\gamma\in\Gamma}\gamma$. Because of their rich geometric structure and numerous applications, arrangements of curves, and especially of lines and segments, have been widely studied. See \cite{SA} for a comprehensive survey. A \emph{Jordan arc} is the homeomorphic image of the open interval $(0,1)$\footnote{sometimes in the literature a Jordan arc is defined to be the homeomorphic image of the closed interval $[0,1]$. In this paper, however, we will always use open intervals}; unless otherwise specified, all such arcs will be in $\mathbb{R}^2$. We say that a set $\Gamma$ of Jordan arcs is a set of \emph{pseudo-segments} if every pair of arcs in $\Gamma$ intersect at most once, and the arcs cross properly at the point of intersection. Note that in this paper, pseudo-segments may be unbounded. Many combinatorial results on arrangements of lines or segments extend to arrangements of pseudo-segments. Three notable examples are (i) the complexity of a single level in an arrangement, (ii) the number of incidences between points and curves in the arrangement, and (iii) the complexity of many (marked) faces in an arrangement; see, e.g., \cite{AAS,Ch,Sze}. However, when two curves are allowed to intersect more than once, the resulting complexity bounds become weaker. One strategy to address this issue is to cut each curve into several pieces so that the resulting pieces form a collection of pseudo-segments, and then apply the existing bounds for pseudo-segments to the resulting collection. If each pair of curves intersect at most $E$ times, then it is always possible to cut $n$ such curves into at most $En^2$ pieces, so that each pair of pieces intersect at most once. When one does this, however, the resulting complexity bounds for problems (i)--(iii) are generally poor. In order to obtain better bounds, one must cut the curves into fewer pieces. This strategy has been pursued successfully for the past 15 years; see~\cite{AAS,ANPPSS,ALPS,AS,lilach,Ch,Ch2,PiSm,TT}. However, previous work has almost exclusively focused on arrangements where each pair of curves can intersect at most twice (sets of curves of this type are called \emph{pseudo-circles}, or, if unbounded, \emph{pseudo-parabolas}). The current best result in this direction is the work of Agarwal et al.~\cite{ANPPSS}, supplemented by that of Marcus and Tardos \cite{MT}. They showed that when $\Gamma$ is a set of $n$ pseudo-circles, it is possible to cut the curves of $\Gamma$ into a set of $O(n^{3/2}\log n)$ pseudo-segments. There are only a few (and considerably weaker) results of this kind for more general families of curves; they include works by Chan~\cite{Ch,Ch2} and by Bien~\cite{lilach}. In the present paper we study algebraic curves (or more generally, connected subsets of algebraic curves) of constant maximum degree. Pairs of such curves might intersect many times---by B\'ezout's theorem, they might intersect as many as $D^2$ times, where $D$ is the maximum degree of the curves. Our main result is a new technique for cutting the curves in such a set into a relatively small number of Jordan arcs, each pair of which intersect at most once. Our method only applies to algebraic curves (or slightly more generally, connected subsets of algebraic curves), but it works well no matter how many times the curves intersect (in brief, the bounds in our results become weaker, but only very slowly, as the degree of the curves increases). Let $\mathcal{C}$ be a set of algebraic plane curves, no two of which share a common component. Let $\Gamma_0$ be a set of Jordan arcs, each pair of which have finite intersection. We say that $\Gamma_0$ is a \emph{cutting}\footnote{% Not to be confused with the notion of $(1/r)$-cutting, which is a decomposition of the plane induced by the given curves; see, e.g., \cite{mat-book}.} of $\mathcal{C}$ if each curve in $\mathcal{C}$ can be expressed as a finite union of arcs from $\Gamma_0$ plus finitely many points (the points at which the original curves are cut). Similarly, let $\Gamma$ be a set of Jordan arcs, each of which is contained in a plane curve, and each pair of which have finite intersection. A set $\Gamma_0$ of Jordan arcs is said to be a \emph{cutting} of $\Gamma$ if each curve in $\Gamma$ can be expressed as a finite union of arcs from $\Gamma_0$ plus finitely many points. (It is possible to write down a single definition of a cutting for a collection of curves that contains both of the previous definitions as special cases, but this definition is rather technical so we will not do so here.) We can now state our main result. \begin{theorem}[Cutting algebraic curves into pseudo-segments]\label{cuttingCurvesIntoSegments} Let $\mathcal C$ be a set of $n$ algebraic plane curves of degree at most $D$, no two of which share a common component. Then $\mathcal{C}$ can be cut into\footnote{% We use the standard notation $O_{\kappa}(\cdot)$ to refer to a constant of proportionality that depends on the parameter or parameters $\kappa$.} $O_D(n^{3/2} \log^{O_D(1)}n)$ Jordan arcs, so that each pair of arcs intersect in at most one point. \end{theorem} The above theorem uses the fact that there are at most $O_D(n^2)$ pairwise intersections amongst the curves in $\mathcal C$. While this serves as a general upper bound, the actual number of intersections might be much smaller. The following theorem provides a refined bound that depends on the actual number of intersections. It is stated in a more general setup that involves Jordan arcs contained in algebraic curves rather than the entire algebraic curves themselves. \begin{theorem}[Cutting algebraic arcs into pseudo-segments]\label{cuttingArcsIntoSegments} Let $\Gamma$ be a set of $n$ Jordan arcs, each of which is contained in an algebraic curve of degree at most $D$, and every pair of which have finite intersection. Let $X=\sum_{\substack{\gamma,\gamma^\prime\in\Gamma\\\gamma\neq\gamma^\prime}}|\gamma\cap\gamma^\prime|$ be the number of times pairs of curves from $\Gamma$ intersect. Then $\Gamma$ can be cut into $O_D(n + X^{1/2} n^{1/2} \log^{O_D(1)}n)$ Jordan arcs, so that each pair of arcs intersect in at most one point. In the worst case, the bound is $O_D(n^{3/2} \log^{O_D(1)}n)$ (as in Theorem~\ref{cuttingCurvesIntoSegments}). \end{theorem} \begin{remark}\label{JordanArcsVsCurvesRem} Since each algebraic curve of degree at most $D$ can be cut into $\leq D(D-1)$ pairwise disjoint Jordan arcs by removing all points at which the curve is singular or is tangent to a vertical line (see Lemma \ref{cuttingCurveJordanArcs} below), the requirement that the curves in $\Gamma$ be Jordan arcs is not a serious constraint, and we only impose it to simplify notation and readability. At the cost of introducing messy notation, we could instead formulate the theorem in a superficially more general fashion by requiring that the curves in $\Gamma$ be open connected subsets of degree $D$ curves, rather than Jordan arcs contained in degree $D$ curves. In particular, Theorem~\ref{cuttingArcsIntoSegments} is indeed a generalization of Theorem \ref{cuttingCurvesIntoSegments}. \end{remark} \begin{remark} Figure \ref{pseudocubics} depicts a set of $n$ ``pseudo-cubics'' (i.e., Jordan arcs, each pair of which intersect at most three times) that requires a quadratic number of cuts in order to turn it into a set of pseudo-segments. This demonstrates that in order to obtain sub-quadratic bounds on the number of such cuts, one must impose additional restrictions on the given family of curves. For example, Theorems \ref{cuttingCurvesIntoSegments} and \ref{cuttingArcsIntoSegments} do this by requiring that the curves be subsets of bounded-degree algebraic curves. \begin{figure}[hbpt] \centering \input{cut3int.pstex_t} \caption{A set of $n$ pseudo-cubics that require $\Omega(n^2)$ cuts to turn them into pseudo-segments. See Tamaki and Tokuyama~\cite[Theorem 5.3]{TT}.} \label{pseudocubics} \end{figure} \end{remark} \subsection{Point-curve incidences} Theorem \ref{cuttingArcsIntoSegments} can be applied to obtain new incidence theorems in the plane. Pach and Sharir \cite{PS} proved that a set $\mathcal{P}$ of $m$ points and a set $\Gamma$ of $n$ plane curves (either Jordan arcs or bounded degree algebraic curves) determine $O_{s,t}(m^{\frac{s}{2s-1}}n^{\frac{2s-2}{2s-1}}+m+n)$ incidences, provided that every pair of curves of $\Gamma$ intersect at most $t$ times, and that there are at most $t$ curves of $\Gamma$ passing through any $s$-tuple of points of $\mathcal{P}$ (if the curves are algebraic plane curves, then the implicit constant also depends on the degree of the curves). Sets $\mathcal{P}$, $\Gamma$ of this type are said to have $s$ \emph{degrees of freedom} (the parameter $t$ is often suppressed, since as long as it is bounded, independently of $m$ and $n$, it only affects the implicit constant in the incidence bound). In the case where $\Gamma$ consists of algebraic curves, we will obtain a slightly stronger bound under a related (though slightly different) condition. Rather than requiring $\Gamma$ and $\mathcal{P}$ to have $s$ degrees of freedom, we will assume that the curves of $\Gamma$ lie in an ``$s$-dimensional family of curves,'' a notion that we will make precise in Section~\ref{incidencesSection} (see Definition \ref{defnFamilyOfCurves}). Roughly, this means that we can represent each curve of $\Gamma$ by a point that lies in some $s$-dimensional algebraic variety in a suitable parameter space. For the vast majority of incidence problems that arise in practice, whenever an arrangement of algebraic curves has $s$ degrees of freedom, these curves belong to a family of curves of dimension at most $s$. The relationship between having $s$ degrees of freedom and being contained in a family of dimension $s$ is discussed further in Appendix \ref{dimFamilyVsDegreesOfFreedom} below. Using Theorem \ref{cuttingCurvesIntoSegments}, we can improve the Pach--Sharir bound under the assumptions made above, which hold for a large class of point-curve configurations. \begin{theorem}[Incidences between points and algebraic curves]\label{incidencesPtsCurves} Let $\mathcal{C}$ be a set of $n$ algebraic plane curves that belong to an $s$-dimensional family of curves, no two of which share a common irreducible component. Let $\mathcal{P}$ be a set of $m$ points in the plane. Then for any $\eps>0$, the number $I(\mathcal{P},\mathcal{C})$ of incidences between the points of $\mathcal{P}$ and the curves of $\mathcal{C}$ satisfies \begin{equation* I(\mathcal{P},\mathcal{C}) = O\Big(m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps}\Big) + O_{D}\Big(m^{2/3}n^{2/3} + m + n\Big). \end{equation*} The implicit constant in the first term depends on $\epsilon$, $s$, the maximum degree of the curves, and also the ``complexity'' of the family of curves from which the set $\mathcal{C}$ is selected. \end{theorem} In Section \ref{incidencesSection} we will give the precise definition of a $s$-dimensional family of curves, and in Section \ref{newIncidenceBoundsSection} we will state a more rigorous version of Theorem \ref{incidencesPtsCurves} that describes how the implicit constant in the first term depends on the family of curves. \subsection{The complexity of a single level in an arrangement}\label{complexityOfALevelSec} Given a collection $\Gamma$ of algebraic curves, or subsets of such curves, the {\em level} of a point $p=(x_0,y_0)\in\mathbb{R}^2$ with respect to $\Gamma$ is defined to be the number of intersection points between the downward vertical ray $\{(x_0,y)\in\mathbb{R}^2 \mid y<y_0\}$ and the curves of $\Gamma$, counted with multiplicity (we assume that each curve in $\Gamma$ has finite intersection with every vertical line). For each non-negative integer $k$, the {\em $k$-level} of $\mathcal{A}(\Gamma)$ is the closure of the locus of all points on the curves of $\Gamma$ whose level is exactly $k$. The $k$-level consists of subarcs of curves from $\Gamma$ that are delimited either at vertices of $\mathcal{A}(\Gamma)$ or at points that lie above a locally $x$-extremal point of some curve from $\Gamma$. The \emph{complexity} of the $k$-level is the number of subarcs that comprise the level. Combining the bound from Theorem~\ref{cuttingArcsIntoSegments} with a result of Chan~\cite[Theorem 2.1]{Ch}, we obtain the following result. \begin{theorem} \label{theo:level} Let $\Gamma$ be a set of $n$ Jordan arcs, each of which is contained in an algebraic curve of degree at most $D$, and every pair of which have finite intersection. Then each level of $\mathcal{A}(\Gamma)$ has complexity $O_D(n^{5/3}\log^{O_D(1)} n)$. \end{theorem} Theorem \ref{theo:level} is proved in Section \ref{subsec:level}. It improves earlier results of Chan~\cite{Ch,Ch2} and Bien~\cite{lilach} for the general algebraic case, and it almost matches the results in \cite{ANPPSS,MT} for the case of pseudo-circles and pseudo-parabolas. \subsection{Complexity of many marked faces in an arrangement}\label{complexityMarkedFacesIntroSec} Let $\Gamma$ be a set of $n$ Jordan arcs, each of pair of which has finite intersection. Let $\mathcal{P}$ be a set of $m$ points in the plane with the property that no point of $\mathcal{P}$ lies on any curve of $\Gamma$. We define $K(\mathcal{P},\Gamma)$ to be the sum of the complexities of the faces of $\mathcal{A}(\Gamma)$ that contain at least one point of $\mathcal{P}$, where the complexity of a face is the number of edges of $\mathcal{A}(\Gamma)$ on its boundary. Informally, this can be regarded as an ``off-curve'' incidence question, where instead of counting the number of curves each point intersects, we (more or less) count the number of curves that the point can ``reach'' (without crossing other curves). The problem has been studied in the context of lines (see~\cite{SA}), segments, and circles~\cite{AAS,ANPPSS,AS}. We will establish the following bound on the complexity of many marked faces: \begin{theorem}[Complexity of many faces]\label{manyfacesCurvesWeakBd} Let $\Gamma$ be a set of $n$ Jordan arcs, each of which is contained in an algebraic curve of degree at most $D$, and every pair of which have finite intersection. Let $\mathcal{P}$ be a set of $m$ points in the plane, so that no point of $\mathcal{P}$ lies on any curve of $\Gamma$. Then \begin{equation} \label{weakptfaces} K(\mathcal{P},\Gamma) = O_D(m^{2/3}n^{2/3}+n^{3/2}\log^{O_D(1)}n). \end{equation} \end{theorem} Theorem \ref{manyfacesCurvesWeakBd} is obtained by using Theorem \ref{cuttingArcsIntoSegments} to cut the Jordan arcs into pseudo-segments and then applying existing techniques to this collection of pseudo-segments. As discussed in Remark \ref{JordanArcsVsCurvesRem}, we could also state Theorem \ref{manyfacesCurvesWeakBd} for collections of algebraic curves (or collections of connected subsets of algebraic curves) rather than Jordan arcs contained in algebraic curves. Doing so, however, makes the notation more complex without actually making the result any more general. We prove Theorem \ref{incidencesPtsCurves} by using Theorem \ref{cuttingCurvesIntoSegments} to obtain a weak incidence bound and then amplifying the bound using further arguments. The bound in Theorem \ref{manyfacesCurvesWeakBd} is the analogue of the weak incidence bound which is the starting point for the proof of Theorem \ref{incidencesPtsCurves}. We attempted to amplify the bound in Theorem \ref{incidencesPtsCurves} as well, but we encountered several technical issues that we do not know how to overcome. In Section \ref{markedFacesDiscussionSec} we will comment on these difficulties and leave such an improvement as an open problem. \section{Cutting algebraic arcs into pseudo-segments} \label{sec:cut} In this section we prove Theorems \ref{cuttingCurvesIntoSegments} and \ref{cuttingArcsIntoSegments}. \subsection{Some real algebraic geometry}\label{realAlgGeoSec} Before we can proceed further, we will need some basic definitions from real algebraic geometry. A real (resp., complex) affine algebraic variety is the common zero locus of a finite set of polynomials over the real (resp., complex) numbers. If $Z\subset\mathbb{R}^d$ is a real algebraic variety, we define $Z^*\subset\mathbb{C}^d$ to be the smallest (complex) variety that contains $Z$. Unless otherwise noted, all varieties are assumed to be affine. Throughout the proof, we will work with both the Euclidean and Zariski topology\footnote{See \cite{Harris} for an introduction to the Zariski topology and related background.}. Unless specified explicitly, all open sets are assumed to be in the Euclidean topology. Let $Z\subset \mathbb{R}^d$ be a real algebraic variety. A crucial property of $Z$ will be its \emph{dimension}. The precise definition of the dimension of a real algebraic variety is slightly subtle, see, e.g., \cite{BCR}. Informally, however, the dimension of $Z$ is the largest integer $e$ so that $Z$ contains a subset homeomorphic to the open $e$-dimensional cube $(0,1)^e$. If $Z\subset\mathbb{R}^d$ is a non-empty algebraic set of dimension $\leq 1$, we call it an \emph{algebraic curve}. The \emph{degree} of $Z\subset\mathbb{R}^d$ is the degree of the complex variety $Z^*\subset\mathbb{C}^d$; the latter is the sum of the degrees of the irreducible components of $Z^*$. See \cite{Harris} for further background and details. Similarly, a real (resp., complex) \emph{projective} algebraic variety is the common zero locus of a finite set of homogeneous polynomials. The dimension and degree of a real projective variety are defined analogously to the definitions for the affine case. For a single polynomial $f\in\mathbb{R}[x_1,\ldots,x_d]$, its zero locus $Z(f)=\{p\in\mathbb{R}^d \mid f(p)=0\}$ is a real algebraic variety of degree at most $\deg(f)$. \subsection{Cuttings}\label{cuttingsSection} Let $\mathcal{S}$ be a collection\footnote{The terms ``collection'' and ``set'' mean the same thing; we use both merely to improve readability.} of sets in $\mathbb{R}^2$, each of which is contained in an algebraic curve and each pair of which have finite intersection. We say that $\mathcal{S}^\prime$ is a \emph{cutting} of $\mathcal{S}$ if $\mathcal{S}^\prime$ is a collection of pairwise disjoint connected sets, and for each $S\in\mathcal{S}$ there is a finite set $\mathcal{P}_S\subset S $ so that \begin{equation*} \mathcal{S}^\prime=\bigcup_{S\in\mathcal{S}}\{S^\prime \mid S^\prime\ \textrm{is a connected component of}\ S\backslash\mathcal{P}_S\}. \end{equation*} We say that $\sum_{S\in\mathcal{S}}|\mathcal{P}_S|$ is the number of cuts used in the cutting. Note that if $S^\prime\in \mathcal{S}^\prime$, then either $S^\prime$ is a point, or there is a unique $S\in\mathcal{S}$ with $S^\prime\subset S$. In practice, we will throw away all isolated points, so each $S^\prime\in\mathcal{S}$ will have (be contained in) a unique ``parent'' set $S\in\mathcal{S}$. The following three results will help us control the number of connected components that are obtained by cutting an algebraic plane curve. \begin{theorem}[Harnack~\cite{Harnack}]\label{HarnackCurveTheorem} Let $f\in\mathbb{R}[x,y]$ be a polynomial of degree $D$. Then $Z(f)$ contains at most $\frac12(D-1)(D-2) +1 \leq D^2$ connected components. \end{theorem} \begin{lemma}[Removing a point from a curve]\label{removePtFromSet} Let $f\in\mathbb{R}[x,y]$ be a polynomial of degree $D$, let $\gamma\subset Z(f)$ be a connected set, and let $p\in\gamma$. Then $\gamma\backslash\{p\}$ contains at most $D$ connected components. \end{lemma} The idea behind Lemma \ref{removePtFromSet} is that in a small neighborhood of $p$, $\gamma$ is a union of at most $D$ ``branches,'' and removing $p$ can cut these branches into separate connected components. See, e.g., Lemmas 4.5, 4.6, and 4.7 from \cite{Zahl} for details. Note that for the purposes of this paper, the exact bounds in Theorem \ref{HarnackCurveTheorem} and Lemma \ref{removePtFromSet} are not important; all that matters is that the quantities are $O_D(1).$ \begin{lemma}[Cutting a curve into Jordan arcs]\label{cuttingCurveJordanArcs} Let $f\in\mathbb{R}[x,y]$ be a square-free polynomial. Then $Z(f)\backslash Z(\partial_y f)$ is a union of disjoint Jordan arcs. \end{lemma} \begin{proof} First, note that by the implicit function theorem, $Z(f)\backslash Z(\partial_y f)$ is a one-dimensional manifold. By the classification of one-dimensional manifolds, we conclude that each connected component of $Z(f)\backslash Z(\partial_y f)$ is either a Jordan arc (i.e., homeomorphic to the interval $(0,1)$), or is homeomorphic to a circle. Suppose that a connected component $\gamma\subset Z(f)\backslash Z(\partial_y f)$ is homeomorphic to a circle. Since $\gamma\subset\mathbb{R}^2$ is compact, there exists a point $(x_0,y_0)\in\gamma$ with $x_0 = \min\{x\mid (x,y)\in\gamma\}.$ We must have $\partial_y f(x_0,y_0)=0$, which contradicts the fact that $\gamma\subset Z(f)\backslash Z(\partial_y f)$. \end{proof} Lemmas \ref{HarnackCurveTheorem} and \ref{removePtFromSet} imply that when a collection of algebraic curves is cut, the number of sets in the new collection is controlled by the number of curves in the original collection and the number of cuts made in the cutting. This is made precise in the following lemma. \begin{lemma}\label{cutsVsComponents} Let $\mathcal{C}$ be a set of algebraic curves and let $\Gamma_0$ be a cutting of $\mathcal{C}$. Suppose that $\ell$ cuts are used in the cutting. Then $|\Gamma_0|\leq D^2|\mathcal{C}|+D\ell$. \end{lemma} \subsection{Lenses} Let $\gamma$ and $\gamma^\prime$ be Jordan arcs. We say that $\gamma$ and $\gamma^\prime$ form a \emph{lens} if $\mathbb{R}^2\backslash(\gamma\cup\gamma^\prime)$ consists of at least two connected components. We say that $\gamma$ and $\gamma^\prime$ form a \emph{proper lens} if $\mathbb{R}^2\backslash(\gamma\cup\gamma^\prime)$ consists of exactly two connected components. If $\lambda$ and $\lambda^\prime$ form a lens, then we can always find two connected subarcs $\delta\subseteq\gamma$ and $\delta'\subseteq\gamma'$ with two common endpoints. If the lens is proper, then the sets $\delta$ and $\delta^\prime$ are unique, and the relative interiors of $\delta$ and $\delta'$ are disjoint. See Figure~\ref{fig:lenses}. We will abuse notation slightly and will also refer to the pair $(\delta,\delta^\prime)$ as the lens. \begin{figure}[hbpt] \centering \input{figlenses.pstex_t} \caption{(a) $\gamma$ and $\gamma'$ form a proper lens that consists of the subarcs $\delta$ and $\delta'$. (b) The lens formed by $\gamma$ and $\gamma'$ with endpoints $u,v$ is not proper.} \label{fig:lenses} \end{figure} Let $\Gamma$ be a set of Jordan arcs. We say that $\Gamma$ is \emph{lens-free} if no two curves from $\Gamma$ form a lens. $\Gamma$ is lens-free if and only if the curves in $\Gamma$ are a set of pseudo-segments. \subsection{Lifting plane curves to space curves} In this section we will describe a process adopted from Ellenberg, Solymosi and Zahl~\cite{ESZ} that transforms a plane curve into a space curve, so that the ``slope'' of the plane curve is encoded as the $z$--coordinate of the space curve. If $C$ is a plane curve and $(x,y)$ is a smooth point of $C$, we define the slope of $C$ at $(x,y)$ to be the slope of the tangent line to $C$ at $(x,y)$. If this line is vertical, then we say that the slope is infinite. \begin{lemma}\label{liftOfACurve} Let $C$ be an irreducible algebraic curve in $\mathbb{R}^2$ of degree at most $D$. Then there is an irreducible space curve $\hat C\subset\mathbb{R}^3$ with the following property: if $(x,y,z)\in \hat C$ and if $(x,y)$ is a smooth point of $C$ where the slope is finite, then $z$ is the slope of $C$ at the point $(x,y)$. Furthermore, the degree of $\hat C$ is at most $D^2$. \end{lemma} This is \cite[Proposition 1]{ESZ}. In brief, let $f$ be an irreducible polynomial satisfying $C=Z(f)$, and consider the algebraic variety \begin{equation*} \big\{(x,y,z) \mid f(x,y)=0,\; z\partial_y f(x,y) + \partial_x f(x,y) = 0 \big\} . \end{equation*} As discussed in \cite[\S3.3]{ESZ}, this variety is a union of vertical lines (one line above each singular point of $C$), plus an irreducible curve that is not a vertical line. This curve is $\hat C$. Its degree is $\le D^2$, since it is an irreducible component of the intersection of two surfaces of degree at most $D$. See \cite{ESZ} for details.\footnote{% Note that regular points of $C$ with vertical tangency are not part of the projection of $\hat C$. For example, if $C$ is the circle $x^2+y^2=1$, $\hat C$ is the space curve given by $x^2+y^2=1$ and $x+yz=0$, and its $xy$-projection does not contain the points $(1,0)$ or $(-1,0)$.} \begin{remark} Note that if $(x,y)$ is a smooth point of $C$ with finite slope, then the $z$-vertical line passing through $(x,y)$ intersects $\hat C$ in exactly one point. Thus if $\gamma\subset C$ is a Jordan arc consisting of smooth points with finite slope, then there is a unique space Jordan arc $\hat \gamma\subset \hat C$ satisfying $\pi(\hat \gamma)=\gamma$, where $\pi(x,y,z)=(x,y)$. We will exploit this observation by cutting $C=Z(f)$ at each point where $\partial_yf$ vanishes. By B\'ezout's theorem and Lemma \ref{removePtFromSet}, this process cuts $C$ into $O_D(1)$ connected pieces, so that the interior of each piece consists exclusively of smooth points with finite slope. \end{remark} \subsection{Depth cycles and lenses} Let $\gamma$ and $\gamma^\prime$ be Jordan space arcs in $\mathbb{R}^3$. We say that $\gamma$ and $\gamma^\prime$ form a \emph{depth cycle of length two} if there are points $(x_1,y_1,z_1),(x_2,y_2,z_2)\in\gamma,$ $(x_1,y_1,z_1^\prime),(x_2,y_2,z_2^\prime)\in\gamma^\prime$, so that $(x_1,y_1)\neq (x_2,y_2)$, $z_1\geq z_1^\prime$, and $z_2\leq z_2^\prime.$ This depth cycle is characterized by the tuple $(\gamma,\gamma^\prime, x_1,y_1,z_1,z_1^\prime, x_2,y_2,z_2,z_2^\prime)$. If $z_1>z_1^\prime$ and $z_2<z_2^\prime$, we call the depth cycle a \emph{proper} depth cycle (of length two). Let $\Gamma$ be a set of Jordan space arcs in $\mathbb{R}^3$. We say that $\Gamma$ has no depth cycles of length two (resp., no proper depth cycles of length two) if no pair of curves in $\Gamma$ have a depth cycle of length two (resp., a proper depth cycle of length two). \begin{lemma}\label{lensesAndDepthCycles} Let $C$ and $C^\prime$ be plane algebraic curves. Let $\gamma\subset C$ and $\gamma^\prime\subset C^\prime$ be $x$-monotone Jordan arcs consisting of smooth points with finite slope, and suppose that $\gamma$ and $\gamma^\prime$ form a lens. Then $\hat \gamma$ and $\hat \gamma^\prime$ form a depth cycle of length two. \end{lemma} \begin{proof} By shrinking $\gamma$ and $\gamma^\prime$ if necessary, we can assume that $\gamma$ and $\gamma^\prime$ intersect at exactly two points (since the arcs are $x$-monotone, this is equivalent to them forming a proper lens)---call these points $p$ and $q$. In particular, $\mathbb{R}^2\backslash (\gamma\cup\gamma^\prime)$ has exactly two connected components, exactly one of which is unbounded. Call the bounded component the ``inside'' of $\gamma\cup\gamma^\prime$. Since $\gamma$ and $\gamma^\prime$ are $x$-monotone, every $y$-vertical line intersects each of $\gamma$ and $\gamma^\prime$ at most once. In particular, every vertical line that intersects the inside of $\gamma\cup\gamma^\prime$ must intersect each of $\gamma$ and $\gamma^\prime$ at precisely one point. By renaming the indices if necessary, we can assume that if $\ell$ is a vertical line that intersects the inside of $\gamma\cup\gamma^\prime$, then the intersection of $\ell$ with $\gamma$ has larger $y$-coordinate than the intersection of $\ell$ with $\gamma^\prime$. \begin{figure}[hbpt] \centering \input{lift.pstex_t} \caption{(a) $\gamma$ and $\gamma'$ form a lens in the $xy$-plane. (b) The respective lifted images $\hat{\gamma}$, $\hat\gamma'$ of $\gamma$, $\gamma'$ form a depth cycle of length two.} \label{fig:lift} \end{figure} Suppose that $\gamma$ and $\gamma^\prime$ are tangent at $p=(x,y)$. Then $\hat\gamma$ and $\hat\gamma^\prime$ intersect at the point $(x,y,z)$, where $z$ is the slope of both curves at $p$. By interchanging the indices if necessary, we can assume that at the lifting of $q$, either $\hat\gamma$ and $\hat\gamma^\prime$ intersect, or $\hat\gamma$ has larger $z$-coordinate. In either case, $\hat\gamma$ and $\hat\gamma^\prime$ form a depth cycle of length two. An identical argument can be used if $\gamma$ and $\gamma^\prime$ are tangent at $q$. Suppose then that $\gamma$ and $\gamma^\prime$ are not tangent at $p$ or at $q$. For each $x$ that lies in the $x$-projection of the inside of $\gamma\cup\gamma^\prime$, let $f_1(x)$ (resp., $f_2(x)$) be the $y$-coordinate of the intersection of $\gamma$ (resp., $\gamma'$) with the vertical line passing through $(x,0)$. Let $f(x)=f_1(x)-f_2(x)$. Let $p=(x_1,y_1)$, $q=(x_2,y_2)$. Then $f(x_1)=f(x_2)=0$, and $f(x)>0$ for $x_1<x<x_2$. Furthermore, $f(x)$ is smooth, and $\frac{df}{dx}(x_1)\neq 0,$ $\frac{df}{dx}(x_2)\neq 0.$ We conclude that $\frac{df}{dx}(x_1)>0$ and $\frac{df}{dx}(x_2)<0$, i.e., the slope of $\gamma$ at $p$ is larger than that of $\gamma^\prime$, and the slope of $\gamma$ at $q$ is smaller than that of $\gamma^\prime$. We conclude that $\hat\gamma$ and $\hat\gamma^\prime$ form a depth cycle. \end{proof} \subsection{Cutting lenses} We are almost ready to prove Theorem~\ref{cuttingCurvesIntoSegments}. Before doing so, we will need the following key lemma. \begin{lemma}\label{cutSpaceCurves} For each $D\geq 1$, there are constants $A = A(D)$ and $\kappa=\kappa(D)$ so that the following holds. Let $\mathcal{C}$ be a set of $n$ irreducible algebraic plane curves of degree at most $D$ and let $\hat\mathcal{C}=\{\hat C \mid C\in\mathcal{C}\}$. Then by using $\leq A n^{3/2} \log^{\kappa}n$ cuts, $\hat \mathcal{C}$ can be cut into a set of Jordan space arcs that have no proper depth cycles of length two. \end{lemma} To avoid interrupting the flow of the proof, we will prove Lemma \ref{cutSpaceCurves} in Appendix \ref{proofOfLemCutSpaceCurvesSec} below. A similar statement appears in the recent work of Aronov and Sharir~\cite{ArS1}. That work deals primarily with eliminating cycles (of any length) in three-dimensional line configurations that satisfy certain genericity assumptions. It also presents an extension of this result to the case of constant-degree algebraic curves, but does so in a rather sketchy way. For the sake of exposition, and in order to make the present paper as self contained as possible, we give a detailed and rigorous proof for the specific case that we need. \begin{remark} Although we will not need it here, we can define a depth cycle of any length $\ell\geq 2$ in a similar fashion. The cutting from Lemma \ref{cutSpaceCurves} will actually eliminate depth cycles of all lengths $\ell\geq 2$. \end{remark} \begin{proof}[Proof of Theorem \ref{cuttingCurvesIntoSegments} using Lemma \ref{cutSpaceCurves}] The proof of Theorem \ref{cuttingCurvesIntoSegments} will proceed as follows. First, we will cut the curves from $\mathcal{C}$ so as to eliminate all lenses where the corresponding curves intersect transversely (such lenses correspond to proper depth cycles). Then we will cut the curves from $\mathcal{C}$ so as to eliminate all lenses where the corresponding curves intersect tangentially at one or both endpoints of the lens. Finally, we will further cut the curves from $\mathcal{C}$ so that the resulting pieces are $x$-monotone and smooth Jordan arcs. Let $\hat\mathcal{C}=\{\hat C \mid C\in\mathcal{C}\}$. Use Lemma \ref{cutSpaceCurves} to cut $\hat\mathcal{C}$ into Jordan space arcs so that all proper depth cycles of length two are eliminated; this cutting uses $O_D\left( n^{3/2} \log^{O_D(1)}n \right)$ cuts (recall that if the same point $p\in\mathbb{R}^3$ is removed from several curves, then this point is counted with multiplicity). The projection of each of these Jordan space arcs to the $xy$-plane yields a connected subset of a curve from $\mathcal{C}$. Let $\mathcal{D}_1$ denote the set of all connected components of the projections of these Jordan space arcs. Then $\mathcal{D}_1$ is a cutting of $\mathcal{C}$ using $O_D\left( n^{3/2} \log^{O_D(1)}n \right)$ cuts; the resulting segments do not form any proper lenses. Next, we further cut the sets in $\mathcal{D}_1$ as follows. For each pair of connected sets $S,S^\prime\in\mathcal{D}_1$ where neither $S$ nor $S^\prime$ is a point, let $C,C^\prime\in\mathcal{C}$ be the (uniquely defined) curves from $\mathcal{C}$ containing $S$ and $S^\prime$, respectively. Cut $S$ and $S^\prime$ at those points $p\in S\cap S^\prime$ where $p$ is a smooth point of both $C$ and $C^\prime$, and $C$ and $C^\prime$ are tangent at $p$---points of this type might be endpoints of an improper lens. After this procedure has been performed for every pair of sets $S,S^\prime\in\mathcal{D}_1$, finitely many points have been removed from each set $S\in\mathcal{D}_1$. The total number of cuts performed at this stage is at most \begin{align}\label{ptsOfTangency} 2\sum_{C\in\mathcal{C}}\Big|\{p\in C\mid& \ p\ \textrm{is a smooth point of}\ C,\ \textrm{and there exists a } \\[-15pt] & \textrm{curve}\ C^\prime\in\mathcal{C}\ \textrm{that is smooth at}\ p\ \textrm{and tangent to}\ C\ \textrm{at}\ p\}\Big|. \nonumber \end{align} By \cite[Theorem 1]{ESZ}, the sum in \eqref{ptsOfTangency} is $O_D(n^{3/2})$. Finally, we further cut the sets in $\mathcal{D}_2$ as follows. For each $S\in\mathcal{D}_2$ that is not a point, let $C=Z(f)$ be the unique curve from $\mathcal{C}$ containing $S$, for a suitable bivariate polynomial $f$ of degree at most $D$. Remove from $S$ those points satisfying $\partial_yf=0$. If $S$ is a point, remove it entirely (such points will be singular points of any algebraic curve that contains them). By B\'ezout's theorem, this process uses $O_D(n)$ cuts. Let $\Gamma_0$ be the collection of connected components of sets from $\mathcal{D}_2$ after this cutting process. Each of the sets in $\Gamma_0$ is an $x$-monotone Jordan arc, and by Lemma \ref{lensesAndDepthCycles}, each pair of arcs of $\Gamma_0$ intersect at most once. Thus $\Gamma_0$ is a cutting of $\mathcal{C}$ (in the sense of Section \ref{cuttingsSection}) that uses $O_D(n^{3/2}\log^{O_D(1)} n)$ cuts. By Lemma \ref{cutsVsComponents}, $|\Gamma_0|=O_D(n^{3/2}\log^{O_D(1)} n)$. Thus $\Gamma_0$ satisfies the conclusions of Theorem \ref{cuttingCurvesIntoSegments}. \end{proof} \begin{proof}[Proof of Theorem \ref{cuttingArcsIntoSegments} using Theorem \ref{cuttingCurvesIntoSegments}] Let $\Gamma$ be a set of $n$ Jordan arcs, each of which is contained in an algebraic curve of degree at most $D$, and every pair of which have finite intersection. Let $X=\sum_{\substack{\gamma,\gamma^\prime\in\Gamma\\\gamma\neq\gamma^\prime}}|\gamma\cap\gamma^\prime|$ be the number of times pairs of curves from $\Gamma$ intersect. We will obtain an improved bound when the number of curve-curve intersections is much smaller than $n^2$. First, the case where $X=O(n)$ is trivial: we simply cut each arc at all its intersection points with the other arcs, and get a total of $O(n+X)=O(n)$ pairwise disjoint subarcs; this certainly satisfies the bound in Theorem \ref{cuttingArcsIntoSegments}. Assume then that $X$ is superlinear in $n$. \begin{definition} Let $\Gamma$ be a collection of Jordan arcs in the plane, let $\mathcal{A}(\Gamma)$ be the arrangement determined by $\Gamma$, and let $r\geq 1$. A \emph{$(1/r)$-cutting} of $\mathcal{A}(\Gamma)$ into pseudo-trapezoids is a collection $\Xi$ of pairwise-disjoint open connected sets in $\mathbb{R}^2$ (these sets are called the cells of the cutting), so that the following properties hold \begin{enumerate} \item[(i)] Each cell is crossed by at most $n/r$ curves from $\Gamma$ (we say a curve from $\Gamma$ crosses a cell if the curve intersects the cell). \item[(ii)] The closures of the cells cover the plane. \item[(iii)] The boundary of each of these cells is the union of at most two vertical line segments and two Jordan arcs, where each arc is a subarc of an arc from $\Gamma$. \end{enumerate} \end{definition} Let $r=\lceil n^2/X\rceil$. Construct a $(1/r)$-cutting $\Xi$ of $\mathcal{A}(\Gamma)$ into pseudo-trapezoids, which consists of $O(r+r^2X/n^2) = O(r)$ cells, each of which intersect $n/r=O(X/n)$ curves of $\Gamma$. The existence of such a cutting has been established by de Berg and Schwarzkopf~\cite{BS} for the case of line segments, and has been considered as a folklore result for the general case, with an essentially identical proof (see \cite{AAS} and \cite[Proposition 2.12]{HP}). First, cut each arc of $\Gamma$ at each of its intersection points with the boundaries of the cells of $\Xi$. If an arc of $\Gamma$ occurs as the boundary of one or more cells, cut that arc at each point where it meets a vertical line segment from the boundary of the trapezoid (these points are the ``corners'' of the trapezoid). This procedure cuts $\Gamma$ into a new collection $\Gamma_1$ of Jordan arcs, and uses $O(r)\cdot (n/r) = O(n)$ cuts. Note that if $\gamma,\gamma^\prime\in\Gamma_1$ form a lens, and if $\delta\subset\gamma,\ \delta^\prime\subset\gamma^\prime$ are Jordan arcs with common endpoints, then $\delta$ and $\delta^\prime$ must be contained in a common cell from $\Xi$, for otherwise one of them would have to be cut by the procedure just mentioned. Thus, to eliminate all lenses from $\Gamma_1$, it suffices to cut each of the curves in $\Gamma_1$ into smaller Jordan arcs so that within each cell of $\Xi$, all lenses are eliminated. Each cell $\tau$ of $\Xi$ intersects $O(X/n)$ curves from $\Gamma$; call this collection of curves $\Gamma_{\tau}$. Each curve $\gamma\in\Gamma_\tau$ is contained in a unique algebraic curve $C_{\gamma}$. Let $\mathcal{C}_{\tau}=\{C_\gamma \mid \gamma\in\Gamma_{\tau}\}$. Then $|\mathcal{C}_\tau|\leq|\Gamma_\tau|=O(X/n)$. Apply Theorem \ref{cuttingCurvesIntoSegments} to $\mathcal{C}_\tau$; we obtain a cutting $\Gamma^\prime_{\tau}$ of $\mathcal{C}_\tau$ that uses $O((X/n)^{3/2}+\log^{O_D(1)}(X/n))$ cuts, so that each pair of arcs in $\Gamma^\prime_{\tau}$ intersect at most once. For each curve $C\in C_{\tau},$ let $\mathcal{P}_{C,\tau}$ be the set of points at which $C$ is cut. For each $\gamma\in\Gamma_1$, let $C_\gamma$ be the corresponding algebraic curve and define \begin{equation*} \mathcal{P}_\gamma =\gamma\cap \bigcup_{\tau \mid \gamma\in\Gamma_\tau} \mathcal{P}_{C_\gamma,\tau}. \end{equation*} $\mathcal{P}_\gamma$ is the set of points of $\gamma$ at which $C_\gamma$ is cut, when $C_\gamma$ is regarded as a curve in $\mathcal{C}_\tau$ for some cell $\tau$ containing $\gamma$. Let $\Gamma_2$ be the collection of Jordan arcs obtained by cutting each arc $\gamma\in \Gamma_1$ at each point of $\mathcal{P}_\gamma$. The total number of cuts is \begin{equation*} O(n^2/X)O((X/n)^{3/2}\log^{O_D(1)}(X/n))+O(n)=O(n^{1/2}X^{1/2}\log^{O_D(1)}n), \end{equation*} and the curves in $\Gamma_2$ satisfy the conclusions of Theorem \ref{cuttingArcsIntoSegments}. \end{proof} \section{Point-curve incidences}\label{incidencesSection} In this section we will prove (a precise version of) Theorem \ref{incidencesPtsCurves}. In order to do so, we first define rigorously the notion of a family of algebraic curves. \subsection{Families of algebraic curves} Let $f\in\mathbb{R}[x,y]$ be a polynomial of degree at most $D$. $f$ can be written as a sum of $\binom{D+2}{2}$ monomials (some of which might have zero coefficients), and thus we can identify $f$ with a vector in $\mathbb{R}^{\binom{D+2}{2}}$. If $\lambda\neq 0$, then $f$ and $\lambda f$ have the same zero-set. Thus, the set of algebraic curves of degree at most $D$ in $\mathbb{R}^2$ can be identified with the points in the projective space $\mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$. Henceforth we will abuse notation and refer to such algebraic curves as elements of $\mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$, and vice-versa. \begin{definition}\label{defnFamilyOfCurves} An \emph{$s$-dimensional family of plane curves of degree at most $D$} is an algebraic variety $F\subset \mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$ that has dimension $s$. We will call the degree of $F$ the \emph{complexity} of the family. We use the term complexity (rather than degree) to avoid confusion with $D$, which is the maximum degree of the plane curves. Since the degree of the curves and the complexity of the family are of secondary importance in the context that we consider here, we will sometimes abbreviate this as ``an $s$-dimensional family of plane curves.'' \end{definition} For example, the set of unit circles in the plane is a two-dimensional family of curves; the set of circles (of arbitrary radius) in the plane is a three-dimensional family of curves; and the set of axis-parallel ellipses or of hyperbolas are four-dimensional families. In each of these instances, $D=2$ and the complexity is $O(1)$. Informally, if $F$ is $s$-dimensional then we expect to be able to characterize each element of $F$ by $s$ real parameters. This is the case with all the aforementioned examples and in most of the applications. In this case, requiring a curve of $F$ to pass through $s$ points in the plane imposes $s$ constraints on the $s$ parameters specifying the curve, and we expect these equations to have a finite number of solutions. When all these expectations are satisfied, we indeed get a family of curves with $s$ degrees of freedom (as in Pach and Sharir~\cite{PS}). In Appendix \ref{dimFamilyVsDegreesOfFreedom} we further discuss the connection between having $s$ degrees of freedom and belonging to an $s$-dimensional family of curves. \subsection{New incidence bounds}\label{newIncidenceBoundsSection} We can now state (and prove) a precise version of Theorem \ref{incidencesPtsCurves}. \begin{incidencesPtsCurvesThm}[Incidences between points and algebraic curves] Let $\mathcal{C}$ be a set of $n$ algebraic plane curves that belong to an $s$-dimensional family of curves of complexity $K$, no two of which share a common irreducible component. Let $\mathcal{P}$ be a set of $m$ points in the plane. Then for each $\eps>0$, the number $I(\mathcal{P},\mathcal{C})$ of incidences between the points of $\mathcal{P}$ and the curves of $\mathcal{C}$ satisfies \begin{equation} \label{newIncidenceBdPrecise} I(\mathcal{P},\mathcal{C}) = O_{s,D,K,\eps}\Big(m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps}\Big) + O_D\Big(m^{2/3}n^{2/3} + m + n\Big). \end{equation} \end{incidencesPtsCurvesThm} \medskip \begin{remark} If the arrangement of points and curves also has $s\ge 3$ degrees of freedom, then Pach and Sharir's bound from \cite{PS} would say that $I(\mathcal{P},\mathcal{C})=O_{D,s}\big(m^{\frac{s}{2s-1}}n^{\frac{2s-2}{2s-1}}+m+n\big)$. The bound \eqref{newIncidenceBdPrecise} is superior for $m > n^{1/s + c}$ for any constant $c>0$, with a suitable $\eps$ that depends linearly on $c$. When $m\leq n^{1/s-c}$, both bounds become $O(n)$ (again, with a suitable choice of $\eps$), and when $m$ is close to $n^{1/s}$, our bound is larger by a factor of $n^{\eps}$ than the bound in \cite{PS}. \end{remark} \begin{remark} For $s=2$, which arises for lines and for unit circles, we almost recover the Szemer\'edi--Trotter bound (we miss by an $n^\eps$ factor). For $s=3$, which arises for arbitrary circles and for vertical parabolas, we again almost recover the bound $O(m^{6/11}n^{9/11}\log^{2/11}n+m^{2/3}n^{2/3}+m+n)$ from~\cite{ANPPSS,AS,MT} (in our bound, the $\log^{2/11}n$ is weakened to $n^\eps$). For $s=4$, which arises for example for axis-parallel ellipses or hyperbolas, we get the bound $O\left(m^{1/2}n^{7/8+\eps} + m^{2/3}n^{2/3} + m + n \right)$, the best previously known bound for this case was the Pach-Sharir bound $O(m^{4/7}n^{6/7}+m+n)$. The new bound is superior when $m\ge n^{1/4+c}$ for any constant $c>0$ (with $\eps$ depending on $c$, as above). \end{remark} The high-level approach that we use follows the earlier treatments that have appeared, for example, in Agarwal et al.~\cite{ANPPSS}. We first derive a weaker bound using Sz\'ekely's crossing lemma argument for collections of pseudo-segments~\cite{Sze}, and then strengthen this bound by passing to a parametric dual space in which the curves of $\mathcal{C}$ become points, and the points of $\mathcal{P}$ become bounded-degree algebraic hypersurfaces. We then decompose the problem into smaller subproblems, using the multilevel polynomial partitioning technique of Matou\v{s}ek and Pat\'akov\'a~\cite{MP} (see Theorem~\ref{thm:mp} below). Finally, we use induction (or rather recursion) on the subproblems produced by the partition. The terminal instances of the recursion are subproblems which are either too small, or at which we can effectively apply the weak bound. \subsection{An initial weaker bound} \begin{lemma}\label{STResultLem} Let $\mathcal{P}$ be a set of $m$ points and let $\mathcal{C}$ be a set of $n$ plane algebraic curves of degree at most $D$, no two of which share a common component. Then \begin{equation}\label{boundOnIPC} I(\mathcal{P},\mathcal{C})= O_D(m^{2/3}n^{2/3}+n^{3/2}\log^{O_D(1)}n+m). \end{equation} \end{lemma} \begin{proof} Apply Theorem \ref{cuttingCurvesIntoSegments} to $\mathcal{C}$, and let $\Gamma_0$ be the resulting set of Jordan arcs. If $(p,C)$ is an incidence from $I(\mathcal{P},\mathcal{C})$, then either (a) $p$ is a singular point of $C$, or (b) there is a curve $\gamma\in\Gamma_0$ with $\gamma\subset C$ and either (b.i) $p\in\gamma$ or (b.ii) $p$ lies at an endpoint of $\gamma$ (recall that the curves in $\Gamma_0$ are relatively open). We conclude that \begin{equation*} I(\mathcal{P},\mathcal{C})\leq I(\mathcal{P},\Gamma_0)+2|\Gamma_0|+D^2|\mathcal{C}|. \end{equation*} The bound in (\ref{boundOnIPC}) then follows by applying the Szem\'eredi-Trotter theorem for incidences with pseudo-segments, using the crossing-lemma technique of Sz\'ekely~\cite{Sze}, the bound $|\Gamma_0| = O_D(n^{3/2}\log^{O_D(1)}n)$ from Theorem \ref{cuttingCurvesIntoSegments}, and the fact that the number of crossings between the arcs of $\Gamma_0$ is still only $O(n^2)$. \end{proof} \begin{remark} In fact, the above argument actually proves a slightly stronger statement. If the set of algebraic curves $\mathcal{C}$ is replaced by a set of Jordan arcs $\Gamma$ that satisfy the hypotheses of Theorem \ref{cuttingArcsIntoSegments} (or if we still stick to full algebraic curves with a smaller number of intersections), then \eqref{boundOnIPC} can be replaced by the stronger bound \begin{equation*} I(P,\Gamma) = O_D\left( m^{2/3}X^{1/3} + m + n + n^{1/2}X^{1/2}\log^{O_D(1)} n \right), \end{equation*} where $X=\sum_{\substack{\gamma,\gamma^\prime\in\Gamma\\ \gamma\neq\gamma^\prime}}|\gamma\cap\gamma^\prime|$. The last term follows from the refined bound on $|\Gamma_0|$ given in Theorem~\ref{cuttingArcsIntoSegments}, and the first term follows from Sz\'ekely's crossing-lemma analysis~\cite{Sze}. \end{remark} \subsection{Duality and space decomposition}\label{dualityTransSec} In this section we will describe a ``duality transform'' that sends algebraic curves to points in a suitable parameter space, and sends points in the plane to algebraic varieties in this parameter space. The key property of this transform is that it preserves the incidence relation---if a curve in the plane is incident to a point, then the corresponding point and variety in the parameter space are incident too. In the statement of Theorem \ref{incidencesPtsCurves}, we refer to a family of algebraic curves of degree at most $D$, which, by definition, is a subvariety $F\subset \mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$. $F$ need not be irreducible, so let $F^\prime\subset F$ be an irreducible component of $F$; we will consider each irreducible component of $F$ separately. If $F^\prime= \mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$, then the set of curves that are incident to a point $p$ in the plane corresponds to a proper subvariety $\sigma_p$ of $F^\prime$ (in fact, $\sigma_p$ is a hyperplane). However, if $F^\prime$ is a proper subvariety of $\mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$, then it is possible that there is a point $p$ that is incident to every curve in $F^\prime$. This can occur, but if it does, then either there are $\leq D^2$ points $p\in\mathbb{R}^2$ with this property, or $F^\prime$ can contain at most one curve from $\mathcal{C}$. Thus if we throw away a small set of points and curves, we can assume that for each point $p$ in the plane, the set of curves that are incident to $p$ corresponds to a (proper) subvariety of $F^\prime$ (this set is in fact the intersection of $F'$ with some hyperplane). The number of incidences that we may have missed is at most $O_{D,\operatorname{deg}(F)}(n+m)$, and we can deal with these incidences separately. The following lemma makes this statement precise. \begin{lemma}[point-curve duality]\label{dualityTransformLem} Let $F\subset \mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$ be a family of plane curves of degree at most $D$ with $\dim F=s$. Let $\mathcal{C}\subset F$ be a finite set of curves, no pair of which share a common component, and let $\mathcal{P}\subset\mathbb{R}^2$ be a finite set of points. Then there exist a set of points $\mathcal{P}_{\operatorname{bad}}\subset\mathbb{R}^2$, a set of curves $\mathcal{C}_{\operatorname{bad}}\subset \mathcal{C}$, a set of points $W=\{w_C\}_{C\in\mathcal{C}\backslash \mathcal{C}_{\operatorname{bad}}}\subset\mathbb{R}^s$ and a set of real algebraic varieties $\Sigma=\{\sigma_p\}_{p\in\mathcal{P}\backslash \mathcal{P}_{\operatorname{bad}}}$ in $\mathbb{R}^s$ that satisfy the following properties: \begin{itemize} \item Each variety $\sigma_p$ has dimension at most $s-1$ and degree $O_{D,\deg(F)}(1)$. \item If $C\in\mathcal{C}\backslash \mathcal{C}_{\operatorname{bad}}$ and $p\in \mathcal{P}\backslash\mathcal{P}_{\operatorname{bad}}$, then $p\in C$ if and only if $w_C\in \sigma_p$. \item $|\mathcal{P}_{\operatorname{bad}}|=O_{D,\deg(F)}(1)$ and $|\mathcal{C}_{\operatorname{bad}}|=O_{D,\deg(F)}(1)$. \end{itemize} \end{lemma} \begin{proof} Decompose $F$ into its irreducible components $F_1\cup\cdots\cup F_\ell$. If an irreducible component contains at most one curve from $\mathcal{C}$, add this curve to $\mathcal{C}_{\operatorname{bad}};$ after doing so, $|\mathcal{C}_{\operatorname{bad}}|=O_{D,\deg(F)}(1)$. After re-indexing, we will assume that each of the remaining components $F_1,\ldots,F_{\ell^\prime}$ contain at least two curves from $\mathcal{C}$. Let $F^\prime=F_1\cup\cdots\cup F_{\ell^\prime}$. For each $p\in\mathbb{R}^2$, define $H_p=\{\gamma\in \mathbf{P}\mathbb{R}^{\binom{D+2}{2}} \mid p\in\gamma\}$. Then $H_p$ is a subvariety of $\mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$ (in fact, it it a hyperplane). Observe that if $F_j$ contains at least two curves from $\mathcal{C}$, then $F_j\subset H_p$ for at most $D^2$ points $p\in\mathbb{R}^2$. Indeed, suppose there exist points $p_1,\ldots,p_{D^2+1}$ with $F_j\subset H_{p_i}$ for each $i=1,\ldots,D^2+1.$ Let $C_1,C_2$ be distinct curves in $\mathcal{C}\cap F_j$. Then $C_1$ and $C_2$ intersect in $\geq D^2+1$ points, so by B\'ezout's theorem, they must share a common component, contrary to our assumptions. Define \begin{equation*} \mathcal{P}_{\operatorname{bad}}=\bigcup_{j=1}^{\ell^\prime}\{p\in\mathbb{R}^2 \mid F_j\subset H_p\}. \end{equation*} We have $|\mathcal{P}_{\operatorname{bad}}|=O_{D,\deg(F)}(1)$. Now, if $p\in\mathbb{R}^2\backslash \mathcal{P}_{\operatorname{bad}}$, then $H_p\cap F_j$ is a proper subvariety of $F_j$ for each $j=1,\ldots,\ell^\prime$. Thus $H_p\cap F^\prime$ is a proper subvariety of $F^\prime$ of degree $O_{D,\operatorname{deg}(F)}(1)$. For the next step, we need to identify (a Zariski open subset of) $\mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$ with $\mathbb{R}^{\binom{D+2}{2}-1}$. To do this, we need to choose where the ``hyperplane at infinity'' lies. We wish to do this in a way that does not affect the incidence relation between the points (representing curves) of $\mathcal{C}$ and the surfaces $\{H_p\}$. Let $H\subset\mathbf{P}\mathbb{R}^{\binom{D+2}{2}}$ be a generic hyperplane (in particular, $H$ avoids all the points of $\mathcal{C}$, and $H\neq H_p$ for any $p\in\mathcal{P}$). After a change of coordinates, we can assume that $H$ is the hyperplane $\{x_0=0\}\subset \mathbf{P}\mathbb{R}^{\binom{D+2}{2}}.$ With this choice of $H$, we obtain a function \begin{equation*} \begin{split} \operatorname{Aff} : \; &\mathbf{P}\mathbb{R}^{\binom{D+2}{2}}\backslash H\to \mathbb{R}^{\binom{D+2}{2}-1}, \quad\text{defined by}\\ &[x_0:x_1:\ldots:x_{\binom{D+2}{2}-1}]\mapsto(x_1/x_0,\ldots,x_{\binom{D+2}{2}-1}/x_0). \end{split} \end{equation*} Let $\pi :\; \mathbb{R}^{\binom{D+2}{2}-1}\to\mathbb{R}^s$ be a generic surjective linear transformation (i.e., $\pi$ is given by a generic\footnote{Over $\mathbb{R}$ one must be a bit careful with ``generic'' points, since they lack some of the favorable properties that hold over an algebraically closed field. See \cite[\S4.2]{Zahl} for further discussion of generic points in the context of combinatorial geometry.} $\left(\binom{D+2}{2}-1\right)\times s$ matrix, which will necessarily have rank $s$). For each $p\in\mathcal{P}\backslash\mathcal{P}_0$, define $\sigma_p$ to be the Zariski closure of $\pi(\operatorname{Aff}(F^\prime\cap H_p))$ (since $\pi$ and $H$ were chosen generically, $\pi(\operatorname{Aff}(H_p))$ is already a real variety, but it is easier to take the Zariski closure than to verify this fact). We have $\sigma_p\subset \mathbb{R}^s$, and $\sigma_p$ is a real algebraic variety of dimension at most $s-1$ and degree $O_{D,\deg(F)}(1)$. For each $C\in\mathcal{C}\backslash\mathcal{C}_{\operatorname{bad}}$, define $w_C = \pi(\operatorname{Aff}(C))$; this is a point in $\mathbb{R}^s$. Define $W=\{w_C\}_{C\in\mathcal{C}\backslash\mathcal{C}_{\operatorname{bad}}}$. Since $\pi$ was chosen generically, it preserves the incidence relation between the points $C\in\mathcal{C}$ and the surfaces $H_p$. Thus if $p\in\mathcal{P}\backslash\mathcal{P}_{\operatorname{bad}}$ and $C\in\mathcal{C}\backslash\mathcal{C}_{\operatorname{bad}}$, then $p\in C$ if and only if $w_C\in \sigma_p$. \end{proof} \begin{remark} The objects created by Lemma \ref{dualityTransformLem} appear rather suspicious. We know that in dimensions $\geq 3$, it is impossible to get non-trivial point-hypersurface incidence theorems unless we impose some sort of non-degeneracy condition on the points and surfaces. Otherwise, it is possible that all of the hypersurfaces intersect in a common curve (or higher dimensional variety), and all of the points lie on this curve. On the face of it, we have not ruled out this possibility, so it seems strange that we will be able to use Lemma \ref{dualityTransformLem} to obtain non-trivial incidence theorems. However, since the points and varieties produced by Lemma \ref{dualityTransformLem} come from collections of points and curves in $\mathbb{R}^2$, we will be able to exclude the sort of degenerate arrangements that prevent non-trivial incidence results. This will be made explicit in the bound \eqref{FPSSZBound} below, which exploits the fact that the incidence graph of points and curves cannot contain a large induced bipartite subgraph. \end{remark} \subsection{Multi-level polynomial partitioning} The duality transform from Lemma \ref{dualityTransformLem} allows us to recast our incidence problem involving points and curves in the plane as a new incidence problem involving points and varieties in $\mathbb{R}^s$. We will analyze this new problem using the following multilevel polynomial partitioning theorem of Matou\v{s}ek and Pat\'akov\'a~\cite{MP}, which generalizes the polynomial partitioning theorem of Guth and Katz from \cite{GK2}. \begin{theorem}[Matou\v{s}ek and Pat\'akov\'a~\protect{\cite[Theorem 1.1]{MP}}] \label{thm:mp} For every integer $s > 1$ there is a constant $K$ such that the following holds. Given a set $\mathcal{P} \subset\mathbb{R}^s$ of cardinality $n$ and a parameter $r > 1$, there are numbers $r_1, r_2,\ldots, r_s \in [r, r^K]$, positive integers $t_1, t_2, \ldots, t_s$, a partition \begin{equation*} \mathcal{P} = \mathcal{P}^* \cup \bigcup_{i=1}^s \bigcup_{j=1}^{t_i} \mathcal{P}_{ij} \end{equation*} of $\mathcal{P}$ into pairwise disjoint subsets, and for every $i, j$, a connected set $S_{ij}\subseteq\mathbb{R}^s$ containing $\mathcal{P}_{ij}$, such that $|\mathcal{P}_{ij}| \le n/r_i$ for all $i, j$; $|\mathcal{P}^*| \le r^K$; and the following holds: Let $Z\subset\mathbb{R}^s$ be a variety of degree at most $D$. Then for each $i = 1,2,\ldots,s$, the number of sets $S_{ij}$ that cross $Z$ is $O_{D,d}\left(r_i^{1-1/d}\right)$. \end{theorem} \noindent (In the theorem, $Z$ \emph{crosses} $S_{ij}$ if $Z\cap S_{ij} \ne\emptyset$ but $Z$ does not contain $S_{ij}$.) While it is not stated explicitly in \cite{MP}, we also have the bound \begin{equation}\label{boundOnNumberOfPieces} \sum_{i=1}^s t_i\leq r^K, \end{equation} provided $K$ is chosen sufficiently large (depending only on $s$). In brief, the bound \eqref{boundOnNumberOfPieces} is obtained as follows. The proof of \cite[Theorem 1.1]{MP} constructs a sequence of $s$ polynomials, each of degree at most $r^{K^\prime}$ (where $K^\prime$ depends only on $s$), and the sets $S_{ij}$ are the connected components of all realizable sign conditions of these polynomials. By \cite{BPR}, the set of connected components of sign conditions determined by $s$ polynomials in $\mathbb{R}^s$, each of degree at most $r^{K^\prime}$, has cardinality at most $O_s(1) r^{sK^\prime}$. Thus if $r>1$ (which is assumed to be the case) and $K$ is chosen sufficiently large, then \eqref{boundOnNumberOfPieces} holds. \subsection{Proof of Theorem \ref{incidencesPtsCurves}} We are now ready to prove Theorem \ref{incidencesPtsCurves}. First, note that \eqref{newIncidenceBdPrecise} immediately holds if $m \ge n^{5/4+\eps'}$, where $\eps'$ is a suitable multiple of $\eps$ (see below for a concrete choice). Indeed, we then have \begin{equation*} n^{3/2}\polylog n = O_D\left(n^{3/2+2\eps'/3}\right) = O_D\left( m^{2/3}n^{2/3} \right) , \end{equation*} so using Lemma \ref{STResultLem}, we obtain \begin{equation*} I(\mathcal{P},\Gamma) = O_D\left( m^{2/3}n^{2/3} + m \right). \end{equation*} Henceforth we will assume that $m < n^{5/4+\eps'}$. The proof for this case proceeds by induction on $m$ and $n$. Concretely, given $\eps$, $D$, $s$, and $F$, we establish the bound \begin{equation} \label{indbd} I(\mathcal{P},\mathcal{C}) \le A m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} + B\left(m + n \right) , \end{equation} for any sets $\mathcal{P}\subset\mathbb{R}^2$ with $|\mathcal{P}|=m$, $\mathcal{C}\subset F$, $|\mathcal{C}|=n$, and $m < n^{5/4+\eps'}$, where $A = O_{\eps,D,s,\deg(F)}(1)$ and $B=O_{D,s,\deg(F)}(1)$. The induction, or rather recursion, bottoms out in three cases: \begin{enumerate} \item[(i)] We reach a subproblem with fewer than $r$ points, for some suitable constant parameter $r$ whose value will be set later. \item[(ii)] We reach a subproblem with $m \ge n^{5/4+\eps'}$ \item[(iii)] We reach a subproblem with $m \le n^{1/s}$. \end{enumerate} In all three cases, \eqref{indbd} holds provided we choose $A$ sufficiently large. This is clear for case (i), and requires some justification for case (ii) (provided below). For case (iii), we use the fact that the incidence subgraph of $\mathcal{P}\times\mathcal{C}$ is a semi-algebraic graph in $\mathbb{R}^2\times\mathbb{R}^s$, and this graph does not contain a large complete bipartite subgraph. Indeed, by B\'ezout's theorem, for example, it does not contain a copy of $K_{D^2+1,2}$ as a subgraph. By Corollary 2.3 from Fox et al.~\cite{FPSSZ}, this implies that \begin{equation}\label{FPSSZBound} I(\mathcal{P},\mathcal{C}) = O_{s,D,\deg(F)}\left( mn^{1-1/s} + n \right). \end{equation} If $m\le n^{1/s}$ then this quantity is $O_{s,D,\deg(F)}(n)$. Thus if we choose $A$ sufficiently large, the bound in \eqref{indbd} holds in this case. Apply Lemma \ref{dualityTransformLem} to $\mathcal{P}$ and $\mathcal{C}$. Let $W\subset\mathbb{R}^s$ be the resulting set of points, let $\Sigma$ be the resulting set of varieties, and let $\mathcal{P}_{\operatorname{bad}},\ \mathcal{C}_{\operatorname{bad}}$ be the leftover sets of problematic points and curves. Apply Theorem~\ref{thm:mp} to $W$ with a value of $r$ that will be specified later. Let $K$, $r_1,\ldots,r_s$, $t_1,\ldots,t_s$ be the parameters given by the theorem; let \begin{equation*} W = W^* \cup \bigcup_{i=1}^s \bigcup_{j=1}^{t_i} W_{ij} \end{equation*} be the corresponding partition of $W$; and for each index $i$ and $j$, let $S_{ij}$ be the connected set that contains $W_{ij}$. We have \begin{equation}\label{notBadIncidences} I(\mathcal{P}\backslash \mathcal{P}_{\operatorname{bad}},\mathcal{C}\backslash \mathcal{C}_{\operatorname{bad}}) = I(W,\Sigma) = I(W^*,\Sigma) + \sum_{i=1}^s \sum_{j=1}^{t_i} \Biggl( I(W_{ij},\Sigma_{ij}) + I(W_{ij},\Sigma^0_{ij}) \Biggr), \end{equation} where $\Sigma_{ij}$ (resp., $\Sigma^0_{ij}$) is the set of the surfaces of $\Sigma$ that cross (resp., contain) the corresponding set $S_{ij}$. Since $|\mathcal{P}_{\operatorname{bad}}|=O_{D,\deg(F)}(1)$ and $|\mathcal{C}_{\operatorname{bad}}|=O_{D,\deg(F)}(1)$, we have \begin{equation} \label{pbadc} I(\\mathcal{P}_{\operatorname{bad}},\mathcal{C})=O_{D,\deg(F)}(n) , \end{equation} and \begin{equation} \label{pcbad} I(\mathcal{P},\mathcal{C}_{\operatorname{bad}})=O_{D,\deg(F)}(m). \end{equation} Thus it suffices to bound the contribution from \eqref{notBadIncidences}. Let $m_{ij} = |\Sigma_{ij}|$, $m^0_{ij} = |\Sigma^0_{ij}|$, and $n_{ij} = |W_{ij}|$, for each $i$, $j$. We have $n_{ij} \le n/{r_i}$ for each $i,j$, and \begin{equation*} \sum_{j=1}^{t_i} m_{ij} \le bmr_i^{1-1/s} , \end{equation*} for each $i$, where $b$ is a constant that depends on $s$ and $D$. \paragraph{Incidences with crossing surfaces.} We apply the induction hypothesis to each $I(W_{ij},\Sigma_{ij})$ for which $m_{ij} = |\Sigma_{ij}| < |W_{ij}|^{5/4+\eps'} = n_{ij}^{5/4+\eps'}$. For the remaining indices $i$, $j$, where $m_{ij} \ge n_{ij}^{5/4+\eps'}$, we use the fact that $m < n^{5/4+\eps'}$ (or else we would not have applied the partitioning to $W$ and $\Sigma$). We can verify that \begin{equation*} m^{2/3}n^{2/3} \le m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} \end{equation*} if and only if \begin{equation*} m \le n^{\frac54 + \frac{3\eps(5s-4)}{4s-8}} , \end{equation*} and the latter inequality holds if we ensure that $\eps' \le \frac{3\eps(5s-4)}{4s-8}$. On the other hand, we have \begin{equation*} n^{3/2} \le m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps_1} \end{equation*} if and only if \begin{equation*} m \ge n^{\frac54 - \frac{\eps_1(5s-4)}{2s}} . \end{equation*} The latter inequality holds for $m_{ij}$ and $n_{ij}$, for any value of $\eps_1$, by assumption. Hence, when we reach a subproblem of this kind, we use the weak bound from Lemma \ref{STResultLem}, and get \begin{align*} I(W_{ij},\Sigma_{ij}) & = O_{s,D,\deg(F)}\left( m_{ij}^{2/3}n_{ij}^{2/3} + n_{ij}^{3/2} \polylog n \right) \\ & = O_{s,D,\deg(F)}\left( m^{2/3}n^{2/3} + n^{3/2} \polylog n \right) \\ &=O_{s,D,\deg(F)}\left( m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} \right) . \end{align*} Summing this bound over all relevant $i$ and $j$ multiplies the bound by a constant factor that depends on $s$, $D$, and $r$, so the overall contribution to the incidence count by ``borderline'' subproblems of this kind is at most \begin{equation*} B_1m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} , \end{equation*} for a suitable constant $B_1$ that depends on $s$, $D$, $\deg(F)$, and $r$. For subproblems satisfying $m_{ij} < n_{ij}^{5/4+\eps'}$, we get from the induction hypothesis \begin{equation*} I(W_{ij},\Sigma_{ij}) \le A m_{ij}^{\frac{2s}{5s-4}} n_{ij}^{\frac{5s-6}{5s-4}+\eps} + B\left(m_{ij} + n_{ij} \right) . \end{equation*} Therefore, for each fixed $i$, \begin{equation*} \sum_{j=1}^{t_i} I(W_{ij},\Sigma_{ij}) \le \sum_{j=1}^{t_i} \left(A m_{ij}^{\frac{2s}{5s-4}} n_{ij}^{\frac{5s-6}{5s-4}+\eps} + B(m_{ij} + n_{ij}) \right) . \end{equation*} We have \begin{equation} \label{mandn} \sum_{j=1}^{t_i} m_{ij} \le bmr_i^{1-1/s} ,\quad\quad\text{and}\quad\quad \sum_{j=1}^{t_i} n_{ij} = |W_i| , \end{equation} where $W_i = \bigcup_{j=1}^{t_i} W_{ij}$. Using H\"older's inequality to bound the sum of the first terms, we obtain \begin{align} \label{bd:cross} \sum_{j=1}^{t_i} Am_{ij}^{\frac{2s}{5s-4}} n_{ij}^{\frac{5s-6}{5s-4}+\eps} & \le A\sum_{j=1}^{t_i} m_{ij}^{\frac{2s}{5s-4}} n_{ij}^{\frac{3s-4}{5s-4}} \left( \frac{n}{r_i} \right)^{\frac{2s-2}{5s-4}+\eps} \nonumber \\ & \le A\left( \sum_{j=1}^{t_i} m_{ij} \right)^{\frac{2s}{5s-4}} \left( \sum_{j=1}^{t_i} n_{ij} \right)^{\frac{3s-4}{5s-4}} \left( \frac{n}{r_i} \right)^{\frac{2s-2}{5s-4}+\eps} \nonumber \\ & \le A \left( bmr_i^{1-1/s} \right)^{\frac{2s}{5s-4}} |W_i|^{\frac{3s-4}{5s-4}} \left( \frac{n}{r_i} \right)^{\frac{2s-2}{5s-4}+\eps} \nonumber \\ & = A\frac{b'}{r_i^\eps} m^{\frac{2s}{5s-4}} |W_i|^{\frac{3s-4}{5s-4}} n^{\frac{2s-2}{5s-4}+\eps} \quad\quad\text{(for $b' = b^{\frac{2s}{5s-4}}$)} \nonumber \\ & \le A\frac{b'}{r^\eps} m^{\frac{2s}{5s-4}} |W_i|^{\frac{3s-4}{5s-4}} n^{\frac{2s-2}{5s-4}+\eps} , \end{align} recalling that $r_i\ge r$ for each $i$. We now sum these bounds over all $i=1,\ldots,s$, and get a total of at most \begin{equation*} \frac{Ab's}{r^\eps} m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} . \end{equation*} In total, the number of incidences involving crossing surfaces is at most \begin{equation} \label{xinc} \left( \frac{Ab's}{r^\eps} + B_1\right) m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} + \left( Bb\sum_{i=1}^s r_i^{1-1/s} \right) m + Bn . \end{equation} \paragraph{Incidences with containing surfaces.} Fix $i$ and $j$, and consider the incidence count $I(W_{ij},\Sigma^0_{ij})$. All the points of $W_{ij}$ lie in the corresponding containing set $S_{ij}$, and all the surfaces of $\Sigma^0_{ij}$ contain $S_{ij}$. Consequently, every pair in $W_{ij} \times \Sigma^0_{ij}$ is an incident pair. However, by assumption, the incidence graph between $W_{ij}$ and $\Sigma^0_{ij}$ does not contain $K_{2,D^2+1}$. This implies that \begin{equation*} I(W_{ij},\Sigma^0_{ij}) \le D^2|W_{ij}| + |\Sigma^0_{ij}| = D^2n_{ij} + m^0_{ij} \end{equation*} (the first (resp., second) term accounts for sets $W_{ij}$ of size at least two (resp., at most one). Hence, summing these bounds over all $i,j$, using the trivial bound $m^0_{ij}\le m$, for all $i$, $j$, and the bound $\sum_{i,j} n_{ij} \le n$, we get \begin{equation} \label{inc-cont} \sum_{i=1}^s \sum_{j=1}^{t_i} I(W_{ij},\Sigma^0_{ij}) \le \sum_{i=1}^s \sum_{j=1}^{t_i} \left( D^2 n_{ij} + m^0_{ij} \right) \le B_2m + D^2 n , \end{equation} where $B_2$ is another constant that depends on $r$, $s$ and $D$. (It is here that we use the remark following Theorem~\ref{thm:mp}, concerning a bound on the quantities $t_i$.) Finally, we bound $I(W^*,\Sigma)$ simply by $mr^K$. Adding all bounds collected so far, in \eqref{pbadc}, \eqref{pcbad}, \eqref{xinc}, and \eqref{inc-cont}, we get a total of at most \begin{equation*} \left( \frac{Ab's}{r^\eps} + B_1\right) m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} + B_3 m + B_4 n , \end{equation*} where $B_3$ and $B_4$ are constants that depend on ($A$, $B$, and) $s$, $D$, $\deg(F)$, and $r$. We now observe that \begin{equation*} m \le m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}} \quad\text{if and only if}\quad m \le n^{\frac{5s-6}{3s-4}} , \end{equation*} In our case we have the stronger inequality $m < n^{5/4+\eps'}$; it is indeed stronger for $\eps'<1/4$, say, as can easily be verified. We thus have \begin{equation*} B_3 m \le \frac{B_3}{n^\eps} m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps}. \end{equation*} Similarly, we have \begin{equation*} n \le m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}} \quad\text{if and only if}\quad m \ge n^{1/s} , \end{equation*} which also holds by our recursion termination rules. Hence we have \begin{equation*} B_4 n \le \frac{B_4}{n^\eps} m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} . \end{equation*} Altogether, the incidence bound is at most \begin{equation*} \left( \frac{Ab's}{r^\eps} + B_1 + \frac{B_3+B_4}{n^\eps} \right) m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} . \end{equation*} We now take $r$ to be sufficiently large, so as to have $r^\eps > 3b's$ (recalling that $b'$ does not depend on $r$), take $A$ sufficiently large so that $B_1 < A/3$, and then require $n$ to be sufficiently large so that \begin{equation*} \frac{B_3+B_4}{n^\eps} < \frac{A}{3} . \end{equation*} With these choices, this expression is upper bounded by \begin{equation*} A m^{\frac{2s}{5s-4}} n^{\frac{5s-6}{5s-4}+\eps} , \end{equation*} which establishes the induction step and thereby completes the proof of the theorem. $\Box$ \section{The complexity of a level in an arrangement of curves} \label{subsec:level} Recall the definition of a level in an arrangement of curves from Section \ref{complexityOfALevelSec}. The main tool for establishing bounds on the complexity of levels in arrangements of curves is an upper bound given by Chan~\cite{Ch} on the complexity of a level in an arrangement of extendible pseudo-segments. A collection of $x$-monotone Jordan arcs is \emph{extendible} if each arc can be contained in a $x$-monotone simple curve that divides the plane into exactly two connected components, with the property that these larger curves form a collection of pseudo-lines (a collection of curves is called a collection of pseudo-lines if the curves are unbounded and every pair of curves intersect at most once). Chan established the following bound on the complexity of a level of an arrangement of extendible pseudo-segments. \begin{theorem}[Chan~\cite{Ch}, Theorem 2.1]\label{chanThm1} Let $\Gamma$ be a collection of $n$ extendible pseudo-segments, and let $X=\sum_{\gamma,\gamma^\prime\in\Gamma}|\gamma\cap\gamma^\prime|$. The complexity of a level in $A(\Gamma$) is $O(n+n^{2/3}X^{1/3})$. \end{theorem} In general, a collection of pseudo-segments need not be extendible. However, any collection of pseudo-segments can be cut into a slightly larger collection that is extendible. \begin{theorem}[Chan~\cite{Ch}, Theorem 3.3]\label{chanThm2} Any collection of $n$ $x$-monotone pseudo-segments can be cut into a collection of $O(n\log n)$ extendible pseudo-segments. \end{theorem} Combining Theorems \ref{chanThm1} and \ref{chanThm2} with the bounds in Theorems \ref{cuttingCurvesIntoSegments} and \ref{cuttingArcsIntoSegments}, we obtain the following result. \begin{levelsInArrThm} Let $\Gamma$ be a set of $n$ Jordan arcs, each of which is contained in an algebraic curve of degree at most $D$, and every pair of which have finite intersection. Then each level of $\mathcal{A}(\Gamma)$ has complexity $O_D(n^{5/3}\log^{O_D(1)} n)$. \end{levelsInArrThm} This result improves earlier works of Chan~\cite{Ch,Ch2} and Bien~\cite{lilach} for the case of general algebraic curves, and it almost matches the earlier results in \cite{ANPPSS,MT} for the case of pseudo-circles and pseudo-parabolas. \begin{remark} It is an interesting open problem to obtain a refined bound on the complexity of the $k$-level which depends on $k$. Such a bound is known for the case of lines (and pseudo-lines)~\cite{Dey}. \end{remark} As noted in \cite{ANPPSS}, the preceding theorem implies the following result in the area of kinetic geometry. This significantly extends the earlier results in \cite{ANPPSS,TT}, which were limited to the case of constant-velocity motions. \begin{corollary} \label{median} Let $P$ be a set of $n$ points in the plane, each moving along some algebraic trajectory of degree at most $D$(the coordinates of the position of a point at time $t$ are polynomials of degree at most $D$). For each time $t$, let $p(t)$ and $q(t)$ be the pair of points of $P$ whose distance is the median distance at time $t$. The number of times in which this median pair changes is $O_D(n^{10/3}\log^{O_D(1)} n)$. The same bound applies if the median is replaced by any fixed quantile. \end{corollary} \section{The complexity of many marked faces in an arrangement}\label{sec:manyf} In this section we prove Theorem \ref{manyfacesCurvesWeakBd}. Recall the setup from Section \ref{complexityMarkedFacesIntroSec}: Let $\Gamma$ be a set of $n$ Jordan arcs, each of pair of which have finite intersection. Let $\mathcal{P}$ be a set of $m$ points in the plane with the property that no point of $\mathcal{P}$ lies on any curve of $\Gamma$. We define $K(\mathcal{P},\Gamma)$ to be the sum of the complexities of the faces of $\mathcal{A}(\Gamma)$ that contain at least one point of $\mathcal{P}$, where the complexity of a face is the number of edges of $\mathcal{A}(\Gamma)$ on its boundary. For the reader's convenience, we restate the theorem here. \begin{manyfacesCurvesWeakBdThm} Let $\mathcal{C}$ be a set of algebraic plane curves of degree at most $D$, no two of which share a common component. Let $\Gamma$ be a set of $n$ Jordan arcs, each of which is contained in some curve of $\mathcal{C}$, and each pair of which have finite intersection. Let $\mathcal{P}$ be a set of $m$ points in the plane, so that no point of $\mathcal{P}$ lies on any curve of $\mathcal{C}$. Then \begin{equation}\tag{\ref{weakptfaces}} K(\mathcal{P},\Gamma) = O_D(m^{2/3}n^{2/3}+n^{3/2}\log^{O_D(1)}n). \end{equation} \end{manyfacesCurvesWeakBdThm} \begin{proof} The bound is an immediate consequence of the results in~\cite{AAS}, combined with Theorems \ref{cuttingCurvesIntoSegments} and \ref{cuttingArcsIntoSegments}. Specifically, Theorem 3.5 in \cite{AAS} asserts that the complexity of $m$ marked faces in an arrangement of $N$ pseudo-segments with $X$ intersection points is $O(m^{2/3}X^{1/3} + N\log^2 N)$. Applying this bound to the collection of pseudo-segments produced in Theorem~\ref{cuttingCurvesIntoSegments} or Theorem~\ref{cuttingArcsIntoSegments} yields the bound stated in \eqref{weakptfaces}. We note that the bound \eqref{weakptfaces} parallels the weak incidence bound in \eqref{boundOnIPC}, except for the missing term $O(m)$ and the fact that the exponent in the polylogarithmic factor is now larger by $2$. We also note that the term $O(N\log^2N)$ reduces to $O(N\log N)$ when the pseudo-segments are extendible; the extra logarithmic factor comes from Theorem~\ref{chanThm2}. \end{proof} \subsection{Discussion}\label{markedFacesDiscussionSec} As in the case of incidences, one would like to improve Theorem \ref{manyfacesCurvesWeakBd} and obtain a refined bound, similar to that in Theorem~\ref{incidencesPtsCurves}. However, the case of many faces is considerably more difficult, and it raises several technical issues that, so far, we do not know how to overcome. We briefly discuss these difficulties, and leave this extension as an interesting open problem. The approach, as in the case of incidences, would be to pass to the dual $s$-dimensional space. In the dual space, curves become points and the marking points become algebraic varieties. We even have a slight advantage here, because we can perturb the marking points slightly to ensure that they are in general position. One would then apply a polynomial partitioning in the dual space, apply the bound of Theorem \ref{manyfacesCurvesWeakBd} within each cell, and combine the bounds into a global bound for the whole problem. However, there are several major issues that arise here. \medskip \noindent{\bf (a)} Within a cell $\tau$ of the partition, we have a subset $\mathcal{P}_\tau$ of points of $\mathcal{P}$ whose dual surfaces cross $\tau$, and a subset $\Gamma_\tau$ of curves of $\Gamma$ whose dual points lie in $\tau$. The recursive subproblem at $\tau$ would then be to bound the complexity of the faces marked by the points of $\mathcal{P}_\tau$ in the arrangement $\mathcal{A}(\Gamma_\tau)$. However, this is not enough, as the points of $\mathcal{P}\backslash\mathcal{P}_\tau$ also mark faces of $\mathcal{A}(\Gamma_\tau)$, and we have to estimate the complexity of these faces as well. Informally, this is an effect of the ``non-local'' nature of the curve-face incidence relation: in contrast to the case of point-curve incidences, the property that a curve $\gamma$ bounds the face marked by a point $p$ is a global property that depends on the whole collection of curves and not just on $p$ and $\gamma$. In general, the complexity of these (many) additional faces of $\mathcal{A}(\Gamma_\tau)$ could be too large for the recursive analysis to yield the desired improved bound. \medskip \noindent{\bf (b)} As in the case of incidences, we need to bootstrap the recursion at subproblems for which $|\mathcal{P}_\tau|$ is much smaller than $|\Gamma_\tau|$. Concretely, if we are to obtain the same bound as for incidences, the threshold would be $|\mathcal{P}_\tau| \le |\Gamma_\tau|^{1/s}$. We would then need to argue that in this case the complexity of the marked faces is linear, or at least close to linear, in $|\Gamma_\tau|$. Again, the non-local nature of the problem makes it sifficult to show this. For example, we do not know whether the machinery in Fox et al.~\cite{FPSSZ} can be applied here, as it was in the case of incidences. \medskip \noindent{\bf (c)} When combining the bounds obtained at the recursive subproblems into a global bound, there are several additional technical issues that are more challenging when bounding the complexity of marked faces rather than incidences. For example, unlike the case of incidences, we cannot just add up the recursive bounds. This is because the structure of faces in an arrangement obtained by overlaying several sub-arrangements can become quite involved. Fortunately, the techniques in Agarwal et al.~\cite{AAS} provide a solution to this particular issue.
\section*{SUPPLEMENTAL MATERIAL} \subsection{Calculation of the critical distance} \label{critical} \begin{figure}[b] \begin{center} \includegraphics[scale=0.55,clip=true]{figS1} \caption{Percolation probability $P(\delta)$ for different numbers $N$ of \emph{penetrable} spherocylinders as a function of $\delta/\langle L\rangle$ for monodisperse (dashed lines) and bi-disperse (solid lines) spherocylinders. The parameters of the bi-disperse distribution are $L_1=60$, $L_2=20$, and $p=0.21$. The monodisperse systems are generated by considering rods with identical lengths coinciding with $\langle L\rangle$. All lengths are in units of $\delta_{c0}=2/\pi\rho \langle L\rangle^2$. In the figure the number density is fixed at $\rho=7.89\times10^{-4}$ for all cases.}\label{figS1} \end{center} \end{figure} For both penetrable and impenetrable spherocylinders we follow same the method to calculate the critical distance $\delta_c$. Namely, for a given number density $\rho$ of spherocylinders which are either penetrable or impenetrable with hard-core diameter $D$, we coat each spherocylinder with a penetrable shell of thickness $\delta/2$, and we consider two spherocylinders to be connected if their penetrable shells overlap. For penetrable systems (i.e., for $D=0$) $\delta$ represents the diameter of the penetrable spherocylinder. For each realization of the system, we compute through the clustering method described in Ref.~\cite{Nigro2011supp} the minimum value of $\delta$ such that a cluster of connected spherocylinders spans the entire sample. By counting the number of instances that sample-spanning clusters appear for a given $\delta$, we construct the percolation probability curve $P(\delta)$. As in the main text, for systems of penetrable rods we adopt as unit of length the quantity $\delta_{c0}=2/\pi\rho \langle L\rangle^2$, which corresponds to the critical distance in the second virial approximation for monodisperse spherocylinders with length fixed at $\langle L\rangle$. Examples of $P(\delta)$ obtained from $500$ realizations of polydisperse (solid lines) and monodisperse (dashed lines) systems of penetrable spherocylinders are shown in Fig.~\ref{figS1} for different numbers $N$ of spherocylinders with density fixed at $\rho=7.89\times10^{-4}$. For the polydisperse cases we have considered a bi-disperse length distribution $f(L)=p\delta(L-L_1)+(1-p)\delta(L-L_2)$ with $L_1=60$, $L_2=20$, and the number fraction of long rods $p=0.21$. The monodisperse systems were generated by spherocylinders of length equal to $\langle L\rangle=\int dL Lf(L)$, which for the particular distribution considered corresponds to $\langle L\rangle=28.4$. Figure \ref{figS1} reveals that the spanning probabilities for the bi-dispersed systems are shifted to lower values of $\delta$ when compared to the $P(\delta)$ curves for the monodisperse case, indicating that the polydisperse systems percolate at smaller volume fractions. For both the polydisperse and the monodisperse cases the curves for the two highest values of $N$ intersect at approximately $P=1/2$, which we take as our criterion for identifying the critical distance $\delta_c$. For the particular case of Fig.~\ref{figS1} we find $\delta_c/\langle L\rangle\simeq 0.046$ and $0.032$ for the monodisperse and polydisperse cases, respectively. \begin{figure}[b] \begin{center} \includegraphics[scale=0.38,clip=true]{figS2} \caption{Percolation probability $P(\delta)$ for different densities $\rho$ of \emph{impenetrable} and bi-disperse spherocylinders as a function of $\delta/D$, where $D$ is the hard-core diameter. The parameters of the bi-disperse distribution considered in the figure are $L_1/D=30$, $L_2/D=10$, and $p=3/8$. The corresponding value of $\sqrt{\langle L^2\rangle}/D$ is $20$.}\label{figS2} \end{center} \end{figure} Our results for penetrable spherocylinders shown in Figs. 1, 2, and 3 of the main text have been obtained by considering simulation box sizes $\mathcal{L}$ such that $\mathcal{L}/L_1 \geq 5$, where $L_1$ is the largest rod length for any given distribution, and the number $N$ of particles exceeds $2\times 10^4$. The resulting critical distances have been obtained by adopting the criterion $P(\delta_c)=1/2$. Figure \ref{figS2} shows the spanning probability $P(\delta)$ obtained from $300$ equilibrium configurations of bi-disperse systems of impenetrable spherocylinders with $L_1/D=30$, $L_2/D=10$ and $p=3/8$ and for different values of the number density $\rho$. From the largest to the lowest densities the number $N$ of spherocylinders decreases from $N=7000$ to $N=3000$ and the box size $\mathcal{L}$ increases from $\mathcal{L}\simeq 4L_1$ to $\mathcal{L}\simeq 10 L_1$. We have used a fitting to a simple sigmoidal function to evaluate the critical distance from $P(\delta_c)=1/2$ and from the mean of the distribution function $dP(\delta)/d\delta$. The two methods give values of $\delta_c$ which differ at most by a few percent. \subsection{Bi-disperse, Weibull, and uniform distributions} \label{distributions} \begin{figure*}[t] \begin{center} \includegraphics[scale=0.5,clip=true]{figS3} \caption{Scaled variance $\sigma_s^2=\langle L^2\rangle/\langle L\rangle^2-1$ as a function of the number fraction $p$ of long rods for the bi-disperse distribution of lengths used in the calculations.}\label{figS3} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[scale=0.5,clip=true]{figS4} \caption{Discretized Weibull (left panel) and uniform (right panel) distribution functions of the spherocylinder lengths.}\label{figS4} \end{center} \end{figure*} To avoid exceedingly large computational times, we have been careful to choose rod length distribution functions with length $L_1$ of the longest rod not exceeding about $4$ times that ($L_2$) of the shortest one. Despite of this constraint, we still have been able to generate length distributions with large scaled variances $\sigma_s^2=\langle L^2\rangle/\langle L\rangle^2-1$. Among the different distributions considered, the bi-disperse one, i.e., $f(L)=p\delta(L-L_1)+(1-p)\delta(L-L_2)$ with $0\leq p\leq 1$, had the largest $\sigma_s^2$ for a given $L_1/L_2$ \cite{notesupp}. Figure~\ref{figS3} shows $\sigma_2^2=p(1-p)(n-1)^2/[p(n-1)+1]^2$ where $n=L_1/L_2=2$, $2.5$, $3$, and $4$ as a function of $p$. We see that for $p\sim 0.2$ and $L_1/L_2>3$, $\sigma_s^2$ is well above $30\%$. In Fig.~\ref{figS4} we show the discretized Weibull (left panel) and uniform (right panel) distribution functions of the rod lengths used in our study on the penetrable polydisperse spherocylinders. The discretized Weibull distribution is defined as $f(L_i)=\exp[-(L_i/\lambda)^k]-\exp[-(L_{i+1}/\lambda)/^k]$, where $L_i=i$ ($i=1,\, 2,\, 3, \ldots$) are the rod lengths \cite{weibull}. We have used $k=6$ and $\lambda=3$, $15$, $60$, and $110$. The corresponding scaled variance $\sigma_s^2=\langle L^2\rangle/\langle L\rangle^2-1$ is $\sigma_s^2\simeq 4\%$. To guarantee that the ratio of lengths between the longest and shortest rods never exceeded $\sim 4$, the distribution was truncated (and subsequently normalized) by eliminating from the sampling all spherocylinders with $f(L_i)$ smaller than $10^{-2}$ of the maximum of the distribution. Uniform distributions of rods (right panel of Fig.~\ref{figS4}) have been constructed from $f(L)=1/(L_1-L_2)$ for $L_2\leq L\leq L_1$ and $f(L)=0$ otherwise, with $L_1/L_2=4$ and $L_2=1$, $5$, $20$, and $50$. For all cases the scaled variance is $\sigma_s^2=12\%$.
\section{Introduction} \IEEEPARstart{T}{he} rapid cycling synchrotron (RCS) in the Japan Proton Accelerator Research Complex (J-PARC) \cite{TDR} acts as a high intensity proton driver, which delivers high intensity proton beams to the Material and Life Science Experimental Facility (MLF) for generation of neutrons and muons, as well as the injector for the main ring synchrotron (MR). The design output beam power of the RCS is 1~MW. The beam commissioning of the RCS started in October 2007 and the output beam power has been steadily increasing with progress of the beam tuning and hardware upgrades. During the high intensity beam study performed in January 2015, $8.3\times 10^{13}$ protons, which corresponds to the beam power of 1~MW at the repetition rate of 25~Hz, were successfully accelerated with a low beam loss below 0.2\%\cite{hotchi:prab17}. The demonstration was performed in the single shot mode, where a beam pulse is injected from the linac to the RCS on demand. As of May 2018, the output beam power for the MLF is 500~kW and the RCS delivers $6.5\times10^{13}$~ppp to the MR. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{RCS_view_RT2018.png} \caption{Schematic view of the J-PARC RCS.} \label{fig:layout} \end{figure} \begin{table}[tb] \begin{center} \caption{Parameters of the J-PARC RCS and its rf system}\label{tab:parameters} \begin{tabular}[c]{ c c} \hline parameter & \\ \hline circumference & 348.333~m \\ energy & 0.400--3 GeV \\ beam intensity & (achieved) $8.3\times10^{13}$ ppp\\ harmonic number & 2 \\ accelerating frequency & 1.227--1.671~MHz \\ maximum rf voltage & 440~kV \\ repetition rate & 25~Hz \\ No. of cavities & 12\\ Q-value of rf cavity& 2 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure*}[t] \centering \includegraphics[width=0.75\textwidth]{LLRFblockDiagram_with_setsumei_3_english.eps} \caption{Block diagram of the existing LLRF control system.} \label{fig:existing_LLRF} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{LLRF_photo_with_setsumei.pdf} \caption{Photo of the existing LLRF control system.} \label{fig:existing_LLRF_photo} \end{figure} A schematic view of the RCS is shown in Fig.~\ref{fig:layout}, and the parameters of the RCS and its rf system are listed in Table~\ref{tab:parameters}. A 400~MeV $H^{-}$ beam is injected and converted to a proton beam by a charge exchange foil. To avoid longitudinal beam losses, the injected beam has a chopped structure synchronized to the RCS rf voltage. The RCS accelerates the protons up to 3~GeV in 20~ms with the repetition rate of 25~Hz. As shown in Fig.~\ref{fig:layout}, the RCS has a three-fold symmetry. The three straight sections are dedicated for the injection devices and the collimators, the extraction devices, and the rf systems. Twelve magnetic ally (MA) cavities are installed in the RCS to generate the high accelerating voltage of 440~kV maximum for acceleration of high intensity proton beams. The cavities are driven by tetrode tube amplifiers. The MA cavity has a wideband frequency response ($Q=2$), which covers not only the wide accelerating ($h=2$) frequency sweep to follow the velocity change of the proton beam during acceleration without a tuning bias loop, but also the frequency range of the second harmonic ($h=4$). The wideband frequency response enables the dual harmonic operation, where each cavity is driven by the superposition of the fundamental and second harmonic rf voltages for bunch shaping. The bunch shaping with the dual harmonic operation is indispensable for alleviating the space charge effects of the high intensity proton beams. The beam loading in the cavity \cite{Pedersen75} is a key issue for accelerating high intensity proton beams. In case of the wideband MA cavity, the wake voltage consists of not only the accelerating harmonic, but also the higher harmonics. A multiharmonic beam loading compensation is necessary. These functions and other functions are implemented in the low level rf (LLRF) control system. The LLRF control system is a key for stable acceleration of high intensity protons. The existing LLRF control system started its operation in 2007 from the beginning of the beam commissioning and operation of the J-PARC RCS. After a decade of operation, a next generation LLRF system for the RCS is under development. In this article, we describe the configuration of the new system. \section{Existing LLRF control system} \subsection{Configuration and functions} The functional block diagram and the photograph are shown in Fig.~\ref{fig:existing_LLRF} and Fig.~\ref{fig:existing_LLRF_photo}、 respectively. Specialized 9U height VME modules were developed to realize the functions. The P1 connector is connected to the normal VME bus and the P2 and P3 connectors are specialized ones dedicated for the signal transfer between the modules. All functions are implemented as logic circuits on FPGA, Xilinx Virtex-II pro and Spartan-2. The system clock frequency is 36~MHz. To realize the frequency sweep, a frequency pattern memory is implemented in a module. The revolution frequency signal is fed to a phase accumulator to generate the revolutional phase signal from $-\pi$ to $\pi$. The phase signals of the higher harmonics are generated by multiplying the revolutional phase signal by the harmonic number $h$, therefore the synchronization between the revolutional and higher harmonics is guaranteed. The multiharmonic phase signals are distributed to all modules via the backplane. By using the phase signals, synchronization of the modules for all cavities is also guaranteed. Sinusoidal signals for the I/Q demodulation and modulation of the rf and beam signals are generated from the phase signals. Since the frequency range of the cavity and beam signals is relatively low, MHz range, no additional analog parts for down and up conversion are necessary; the beam or cavity signal is directly digitized by ADCs and cavity driving signal is generated by DACs. The main role of the system is the regulation of the cavity voltages. Each cavity has the dual harmonic auto voltage control (AVC) loop\cite{tamura:prst-ab08-1}. In the dual harmonic AVC, the I/Q signals of the accelerating ($h=2$) and the second harmonic ($h=4$) voltages are converted to the amplitudes. The amplitudes are compared to the voltage patterns and via PI controllers, the AVC outputs the amplitude control signal. Finally, from the amplitudes and the phase signals of the harmonics ($h=2,4$), the dual harmonic rf signal is generated. The longitudinal painting injection \cite{tamura:prst-ab09} is achieved by using the dual harmonic AVC. The other important function for the high intensity acceleration is the multiharmonic beam loading compensation. The rf feedforward method is employed in the existing system\cite{tamura:prst-ab11}. The feedforward system picks up the complex amplitude of the beam signal for the selected harmonic. The gain and phase is set so that the feedforward signal cancels the wake voltage in the cavity. We developed the commissioning methodology of the multiharmonic feedforward system\cite{tamura:prst-ab11}. Originally the feedforward system for the even harmonics ($h=2,4,6$) was installed and commissioned, and the additional feedforward system for the odd harmonics ($h=1,3,5$) was installed for the high intensity single bunch operation\cite{tamura:prst-ab15}. The feedback loops to stabilize the beam are also implemented. The radial loop modulates the frequency using the beam position monitor (BPM) signal so that the beam orbit is centered in the bending magnets. The radial feedback is not used, because the reproducibilities of the bending field and the rf frequency good enough. The phase feedback modulates the phases of the cavity voltages to damp the longitudinal dipole oscillation. It compares the beam phase and the phase of the vector sum of the cavity voltages. In Fig.~\ref{fig:vector_sum}, the block diagram of the vector sum function is illustrated. The detected I/Q cavity voltage of the harmonic is rotated and sent to the vector sum module. The rotation angle is set corresponding to the cavity position in the RCS ring. Optionally a gain can be applied to the I/Q signal. The summation signal is normalized by using the number of cavities and sent to the phase feedback module. The miscellaneous functions not shown in the figure, such as generation of the trigger pulse for the extraction kicker magnets and generation of the linac chopper pulse, are also implemented. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{vector_sum_block.pdf} \caption{Schematic diagram of the vector sum.} \label{fig:vector_sum} \end{figure} \subsection{Demand of the next generation system} The existing LLRF control system started its operation in 2007, and has been working well without major problems for more than ten years. The dual harmonic AVC, multiharmonic feedforward and the other LLRF functions serve the high intensity beam operation. However, the old FPGAs (Xilinx Virtex-II pro and others) used in the modules are already discontinued and not supported by the current development environment. Although we have several spare modules, it will be difficult to maintain the existing system in near future. Therefore, we decided to develop a next generation LLRF control system. Since we developed the existing modules by analogy of analog LLRF modules, the module design is different for each of the functions. Maintenance of the spare modules is a practical issue. More generic configuration, the generic FPGA board and additional I/O board for example, is preferable for the new system. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{new_LLRF_overview_RT2018_ver2.pdf} \caption{Configuration of the next generation LLRF control system.} \label{fig:new_LLRF} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{AMC-block.png} \caption{Functional block diagram of the AMC module for the next generation LLRF control system.} \label{fig:AMC} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{next_generation_LLRF_photo_with_setsumei.pdf} \caption{Photograph of the next generation LLRF control system.} \label{fig:new_llrf_photo} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{common_function_RT18.pdf} \caption{Block diagram of the common function module.} \label{fig:common_function_block} \end{figure} \section{Next generation LLRF control system} \subsection{System overview} We employ the MicroTCA.4 platform for the next generation LLRF control system. Separation of the I/Os in rear transition modules (RTM) and the FPGA logic in AMC modules gives us design flexibility. The configuration of the system is shown in Fig.~\ref{fig:new_LLRF}. The clock generator eRTM generates the 144~MHz system clock from the J-PARC master clock of 12~MHz by using a phase lock loop. The DESY-type rf backplane is utilized for system clock distribution to the modules. The general purpose AMC module developed by Mitsubishi Electric TOKKI Systems Corporation is employed. The block diagram of the AMC board is shown in Fig.~\ref{fig:AMC}. It has a modern SoC FPGA, Xilinx Zynq XC7Z045, where an EPICS IOC with Linux is embedded. Setting and monitoring of the parameters are done via EPICS channel access. The EPICS waveform records of I/Q signals are useful for commissioning of the system. The 1~GB SDRAM is used as pattern memories. It has eight high speed ADCs and two DACs, i.e. it has capability to control two cavities. Also, it has 6-bit digital I/O. The RTM are developed for the specific I/O and functions. We classify the LLRF functions into the categories, ``common function'' and ``cavity driving function'', which are implemented the common function modules and the cavity driver modules, respectively. A high speed serial communication module is located in the slot of MCH2. A photograph of the next generation LLRF control system is shown in Fig.~\ref{fig:new_llrf_photo}. In JFY 2017, one common function module, one cavity driver module, the clock generator eRTM, and the high speed serial communication module were constructed. \subsection{Common function module} A functional block diagram of the common function module is illustrated in Fig.~\ref{fig:common_function_block}. The common function module manages the revolutional frequency pattern, the phase feedback to damp the longitudinal oscillations, and other functions. The common function module receives the triggers and the information of the RCS beam destination, ``mode (1..0)'', as shown in Fig.~\ref{fig:new_LLRF}. The common function module generates the control clock used in the feedback blocks and the pattern clock for pattern sampling. Frequencies of the clocks can be set independently. They are set to 1~MHz for this application. These clocks and informations are distributed to the cavity driver modules via the AMC backplane. The 32-bit revolutional frequency signal from the pattern memory is serialized and distributed, while the existing system distributes the phase signals of the accelerating ($h=2$) and the second harmonic ($h=4$). The cavity driver module has its own phase accumulator and multiplier to generate the multiharmonic phase signals. This configuration is necessary for the multiharmonic vector rf voltage control described below. At present, the phase feedback, the beam signal analysis for rf feedforward, the kicker trigger generation, and the chopper pulse generation, are not implemented yet. The radial feedback is not to be implemented based on our experience with the existing system. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{cavity_driver_RT18.pdf} \caption{Block diagram of the cavity driver module.} \label{fig:cavity_driver_block} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{vector_voltage_control_block_simplified_RT2018_2_phase_accum.pdf} \caption{Block diagram of the multiharmonic vector rf voltage control.} \label{fig:vector_AVC} \end{figure} \subsection{Cavity driver module} A functional block diagram is shown in Fig.~\ref{fig:cavity_driver_block}. As described above, it handles two cavity voltages independently by using two ADCs and two DACs. Six cavity driver modules are necessary to control twelve cavity voltages, while one module is constructed in JFY 2017. The revolutional frequency signal from the backplane is led to the phase accumulator to generate the phase signal, which is multiplied by the harmonic numbers in the function blocks to generate the multiharmonic phase signal. The functions of the cavity driver are the multiharmonic vector rf voltage control and the feedforward driver. Available logic cells of the Zynq FPGA are much more than that of old FPGAs used in the existing system; now these functions for two cavities can be implemented in a single FPGA. The feedforward driver receives the I/Q amplitudes of the beam signal for the selected harmonics and generates the feedforward compensation signal similarly to the existing rf feedforward system. The feedforward driver is not implemented yet. The number of harmonics is to be extended from six of the existing system to eight. The multiharmonic vector rf voltage control is the key function of the next generation LLRF control system. In the existing system, the amplitudes of two harmonics ($h=2,4$) are controlled. The new system can control the I/Q complex amplitudes of eight harmonics ($h=1...8$). By controlling the complex amplitudes, the beam loading is compensated and the phase control of the higher harmonics is possible. The block consists of eight feedback blocks as shown in Fig.~\ref{fig:vector_AVC}. The I/Q complex amplitude of the cavity voltage signal is obtained by I/Q demodulation. A narrow band CIC (cascaded integrator and comb) filter is used as a low pass filter. The complex amplitude is compared to the I/Q voltage pattern. Through the PI controller and the I/Q modulator, feedback output is obtained. The revolutional frequency signal and the phase signal are multiplied by the harmonic number ($hn$) in the feedback block to obtain the frequency and the phase signal of the selected harmonic number ($hn$), respectively. The sine and cosine signals for the I/Q demodulator and modulator are generated by the CORDIC using the phase signal of the harmonic. The frequency signal is used for addressing of the phase offset LUT and the gain LUT. The phase offset LUT gives a phase offset between the I/Q demodulator and modulator to adjust the phase of the 1-turn transfer function. The gain LUT compensates the amplitude response of the cavity. These LUTs are necessary to cover the wide frequency range. Finally, eight rf signals from the feedback blocks are summed up to obtain the multiharmonic rf signal. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{high_speed_serial_communication_RT18.pdf} \caption{Block diagram of the high speed serial communication module and the signal flow around the modules.} \label{fig:high_speed_serial_block} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{IQ_data_format.png} \caption{Data format for the cavity I/Q signals.} \label{fig:IQ_data_format} \end{figure} \subsection{High speed serial communication module} Signal transfer between the modules is a key to realize the LLRF functions. Actually, the signal transfer of the existing system is not very sophisticated; parallel bus connection in the backplane is used for distribution of the multiharmonic phase signal and the cavity I/Q voltages are sent to the vector sum module by serial link via cables across the front panels of the modules, as shown in the photograph in Fig.~\ref{fig:existing_LLRF}. The signal transfers required by the LLRF functions are as follows. \begin{itemize} \item I/Q amplitudes of the cavity voltages for the all harmonics from the cavity drivers to the vector sum function \item I/Q amplitudes of the WCM beam signal from the common function module to the cavity driver modules \item phase feedback signal from the common function module to the cavity driver modules \end{itemize} One can see that all of the transfers are star topologies. A star topology can be implemented by using the port 1 connections of the AMC backplane and installing a dedicated module in the slot for MCH2, while the configuration sacrifices the redundancy. The block diagram of the high speed serial communication module and the signal flow around the module are illustrated in Fig.~\ref{fig:high_speed_serial_block}. The cavity driver module sends the I/Q amplitudes of the two cavities for eight harmonics ($h=1..8$), which are rotated according to the position in the tunnel, to the communication module. To realize a number of serial connections, Xilinx Virtex-5 is employed. The Xilinx Aurora protocol is employed for the signal transfer. The data format for the cavity I/Q signals is shown in Fig.~\ref{fig:IQ_data_format}. The data rate is set to 2.5~Gbps and a single data frame contains 40 data blocks. Therefore, the time width of the frame is 320~ns. In case of the I/Q signals for two cavities, 32 data blocks are actually used. The data frame is sent every control clock cycle, 1~$\mu$s. In the communication module, the vector sum function similar to Fig.~\ref{fig:vector_sum}. The I/Q amplitudes from the cavity drivers are summed up and normalized by the number of cavities. The vector sum of the all harmonics is sent to the common function module and it is used for the phase feedback loop. The I/Q amplitudes of the WCM beam signal for eight harmonics and the phase feedback signal are sent from the common function module to the communication module. The communication module distribute the signals to all cavity drivers. Thanks to the capability of the AMC backplane for the high speed serial communication, the signal transfer between the LLRF modules is much more sophisticated and simplified than the existing system. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{measurement_setup_RT2018_2.pdf} \caption{Test setup of the cavity driver module.} \label{fig:meas_setup} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{i_and_q_180531_02.png} \caption{Measured I/Q amplitudes of eight harmonics ($h=1..8$).} \label{fig:iq_measured} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{wfm180531_04_ch3_compare_gapvolt_01.00ms_dur0.0051ms.png} \caption{Comparison of the measured and calculated cavity gap voltage waveforms.} \label{fig:compare_waveforms} \end{figure} \section{Preliminary test results} \subsection{Multiharmonic vector rf control} The test setup of the cavity driver module is shown in Fig.~\ref{fig:meas_setup}. The rf output for the cavity 1 is led to the DUT (device under test) and the output of the DUT is fed into the cavity 1 input of the driver module. The DUT is the amplifier chain and the cavity. The phase offset LUT was set so that the feedback loop can be closed. To demonstrate the performance of the multiharmonic vector rf control, a sawtooth wave is generated. The Fourier series $f(t)$ of a sawtooth wave with a frequency $f_1$ and the amplitude 1 up to $m$-th harmonic is \begin{align} f(t) &= \frac{2}{\pi} \sum_{h=1}^m \frac{(-1)^{h+1}}{h} \sin 2\pi h f_1 t, \label{eq:fourier} \end{align} where $h$ is the harmonic number. The module can control eight harmonics ($h=1..8$). In the test, $f_1$ was set to 1~MHz and the I/Q amplitude of the revolutional harmonic ($h=1$) is set to (0,3000), which are digital values. The amplitudes of the higher harmonics are set according to (\ref{eq:fourier}). The measured I/Q signals of the eight harmonics ($h=1..8$) and the comparison of the measured and calculated cavity gap voltage waveforms are plotted in Fig.~\ref{fig:iq_measured} and Fig.~\ref{fig:compare_waveforms}, respectively. One can see that the I/Q amplitudes of the harmonics are very close to the set points. The measured and calculated waveforms nicely agree. The performance of the multiharmonic vector rf control is promising. The beam loading compensation up to $h=8$ with the vector rf control is foreseen. Also, the third ($h=6$) and fourth ($h=8$) harmonic voltages in addition to the existing dual harmonic operation may improve the performance of the bunch shaping to alleviate the space charge effects. \subsection{High speed serial communication and vector sum function} To examine the high speed serial communication and vector sum function, an I/Q signal rotated by a phase $\theta$ for the selected harmonic ($h=1$) is sent from the cavity driver module to the high speed serial communication module. The normalized vector sum I/Q signal is sent to the common function module. A 4~m long cable is used as the DUT for this test. The top plot of Fig.~\ref{fig:IQ_vector_sum_wfm} shows the measured I/Q signal at the cavity driver module. The amplitude of I is 20000 and the Q is zero. The envelope is a trapezoid. The rising and falling time is 0.2~ms and the flattop width is 2~ms. The other plots in Fig.~\ref{fig:IQ_vector_sum_wfm} are received vector sum signals at the common function module. The second top plot shows the signal normalized by 1 without rotation. It is identical to the I/Q signal measured by the cavity driver module. The middle plot shows the signal normalized by factor of 2 without rotation. The amplitude is a half of the original signal. The second bottom plot show the signal normalized by 1 with the rotation angle of 90 degrees. The amplitude of I is zero and the Q is 20000. The bottom is the signal normalized by 1 with the rotation angle of $-45$ degrees. The amplitude of I is close to $20000 \times 1/\sqrt{2}=14142$ and the Q is similar to I with negative sign. This simple test proves that the vector sum function works correctly as designed. We should note that we did not see any errors on the I/Q waveforms; the high speed serial transfer via the Xilinx Aurora is very stable. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{i_and_q_180529_03_04_05_06_08.png} \caption{I/Q waveforms. The top is the I/Q signal measured by the cavity driver. The others are received vector sum signals at the common function module. From the second top to the bottom, the signal normalized by 1 without rotation, the signal normalized by factor of 2 without rotation, the signal normalized by 1 with the rotation angle of 90 degrees, and the signal normalized by 1 with the rotation angle of $-45$ degrees.} \label{fig:IQ_vector_sum_wfm} \end{figure} \section{Summary and outlook} We summarize the article as follows. The existing LLRF control system has been working nicely without major problems for more than ten yeas. However, it will be difficult to maintain the system in near future because of the discontinued FPGAs on the system. The MicroTCA.4 based next generation LLRF control system is now under development. Similar LLRF functions are to be implemented in the new system with several new features. The key feature of the new system is the multiharmonic vector rf control, which would compensate the heavy beam loading in the wideband rf cavity and can expand the performance of the longitudinal painting injection. With the capability of the MicroTCA.4 backplane for high speed serial communication, the sophisticated signal transfer between the modules is realized. We will add five cavity driver modules to control twelve cavities and implement the remaining LLRF functions. We plan to replace the existing system with the new system during the summer maintenance period in 2019. Prior to the replacement, we will perform beam tests with the new system mainly focused on the beam loading compensation. \section*{Acknowledgments} We would like to thank Heiko Damerau and John Molendijk for fruitful discussions on LLRF topics. We also would like to thank the J-PARC writing support group, which continuously encouraged us to write up this article. Finally, we would like to thank all the members of the J-PARC. \bibliographystyle{IEEEtran}
\section{Introduction} \noindent Matrix factorizations can be traced back to Dirac's seminal description of the electron taking both quantum theory and relativity into account, cf.~\cite{Dirac} and also \cite{Murfet}. More recently, they have been studied in relation with, for example, Landau-Ginzburg models in homological mirror symmetry \cite{Orlov2}, the operation of flops in the minimal model program in birational geometry \cite{CurtoMorrison, Wemyss18}, curve-counting invariants \cite{BrownWemyss} used in enumerative geometry and the construction of new knot invariants \cite{KR}, cf. also \cite{Murfet}. Matrix factorizations of a power series $f \in P_n=\mathbb{C}\llbracket z_0, \ldots, z_n\rrbracket$ can be considered as objects of the \emph{homotopy category of matrix factorizations} $[\mathsf{MF}(P_n, f)]$, which has a triangulated structure \cite{Buchweitz, Eis80}. A fundamental result in the theory of matrix factorizations relates these categories for the power series $f \in P_n$ and $f + z_{n+1}^2 + z_{n+2}^2 \in P_{n+2}$, \cite{KnorrerCMmodules}. \begin{thm}[Kn{\"o}rrer 1987]\label{T:Knoerrer} For $n \in \mathbb{Z}_{\geq 0}$ and $0 \neq f \in P_n$ there is a triangle equivalence \begin{align}\label{E:Knoerrer} [\mathsf{MF}(P_n, f) ] \xrightarrow{\sim} [\mathsf{MF}(P_{n+2}, f + z_{n+1}^2 + z_{n+2}^2) ]. \end{align} \end{thm} \noindent In particular, this gives a bijection between matrix factorizations of $f$ and $f + z_{n+1}^2 + z_{n+2}^2$ up to sums of trivial matrix factorizations. Homotopy categories of matrix factorizations fit into the more general framework of \emph{triangulated singularity categories} $D^{sg}(R) = D^b(R)/\mathsf{Perf}(R)$, by Buchweitz \cite{Buchweitz}, Eisenbud~\cite{Eis80}. \begin{thm \label{T:BuchweitzEisenbud} For $n \in \mathbb{Z}_{\geq 0}$ and $0 \neq f \in P_n$ there is a triangle equivalence \begin{align}\label{E:BE} [\mathsf{MF}(P_n, f) ] \xrightarrow{\sim} D^{sg}(P_n/(f)). \end{align} \end{thm} \noindent The singularity category $D^{sg}(R)$ admits a canonical dg enhancement given by the dg quotient category $D^{sg}_{dg}(R)=D^b_{dg}(R)/\opname{Perf}_{dg}(R)$ \cite{Keller, Drinfeld}, where $D^b_{dg}(R)$ is the canonical dg enhancement of $D^b(R)$, which induces a dg enhancement $\opname{Perf}_{dg}(R)$ of $\opname{Perf}(R)$. There is a quasi-equivalence of dg categories lifting the triangle equivalence \eqref{E:BE}, cf. e.g. \cite[6.6.4]{Booth} \begin{align}\label{E:BElift} \mathsf{MF}(P_n, f) \xrightarrow{\sim} D^{sg}_{dg}(P_n/(f)). \end{align} Using this one can show that Kn{\"o}rrer's equivalences \eqref{E:Knoerrer} can be lifted to quasi-equivalences between dg singularity categories. These equivalences preserve the parity of the Krull dimension of the singularities, which leads to the following natural question \cite{KellerEmail}. \begin{question}[Keller \& Shinder]\label{Q:KellerShinder} Assume that $f \in P_n$ defines an isolated singularity and let $g \in P_m$ with $n \not\equiv m (\mod 2)$. Do quasi-equivalences $D^{sg}_{dg}(P_n/(f)) \cong D^{sg}_{dg}(P_{m}/(g))$ exist ? \end{question} \noindent Our main theorem gives a negative answer to this question. More generally, we give a complete classification of singularities $S$, which have dg singularity categories admitting quasi-equivalences to $D^{sg}_{dg}(P_n/(f))$. \begin{thm}\label{T:MAIN} Let $R \cong P_d/(f)$ be an isolated hypersurface singularity and let $S$ be a commutative complete local Noetherian $\mathbb{C}$-algebra of Krull dimension $e$. Then the following statements are equivalent. \begin{itemize} \item[(a)] There is a $\mathbb{C}$-linear quasi-equivalence between dg singularity categories \begin{align}\label{E:Quasi} D_{dg}^{sg}(R) \cong D_{dg}^{sg}(S). \end{align} \item[(b)] There is an $n \in \mathbb{Z}_{\geq 0}$ and an algebra isomorphism $S \cong P_e/(g)$, such that \begin{align} \lvert d-e \rvert=2n \qquad \text{and} \qquad g -f = z_1^2 + \cdots + z_{2n}^2. \end{align} In particular, $S$ and $R$ are stably equivalent singularities in the sense of Arnol'd. \end{itemize} \end{thm} \begin{rem} Theorem \ref{T:MAIN} shows in particular that there are only countably many isomorphism classes of commutative complete Noetherian $\mathbb{C}$-algebras with dg singularity categories quasi-equivalent to dg singularity categories $D_{dg}^{sg}(P_d/(f))$ for isolated hypersurface singularities $f$. This is in stark contrast to the non-commutative case: there is an uncountable family of pairwise non-Morita equivalent complete Noetherian $\mathbb{C}$-algebras with singularity categories triangle equivalent to $D^{sg}(P_{2d}/(f))$ for all ADE-singularities $f \in P_{2d}$ except $E_8$, see \cite{KIWY15}. For $A_1$-singularities $f=x_0^2 + \cdots + x_{2d}^2$, it is known (e.g.~by \cite{KW}) that these triangle equivalences lift to quasi-equivalences between dg singularity categories. \end{rem} For singularities of different Krull dimensions this result can be improved. \begin{cor}\label{C:MAIN} Let $R=P_n/I$ be a complete local isolated Gorenstein singularity and let $S$ be a commutative complete local Noetherian $\mathbb{C}$-algebra such that \begin{align} \opname{kr.dim}\nolimits S \neq \opname{kr.dim}\nolimits R. \end{align} If there is a $\mathbb{C}$-linear \emph{triangle} equivalence \begin{align}\label{E:Tria} D^{sg}(R) \cong D^{sg}(S). \end{align} then $R\cong P_d/(f)$ and $S \cong P_e/(g)$ are hypersurfaces. In particular, the statements (a) and (b) of Theorem \ref{T:MAIN} are also equivalent for Gorenstein algebras $R=P_n/I$, provided that $\opname{kr.dim}\nolimits S \neq \opname{kr.dim}\nolimits R$. \end{cor} \begin{rem} Question \ref{Q:KellerShinder} has a positive answer for certain \emph{group graded} singularity categories of some hypersurface singularities \cite{HIMO} and for certain \emph{non}-Gorenstein cyclic quotient singularities in dimension $2$ and $3$, see \eqref{E:K21}. \end{rem} \noindent We give a list of all known (cf. \cite{Matsui}) non-trivial triangulated singular equivalences between commutative complete local Noetherian $\mathbb{C}$-algebras. We use the following notation: for a primitive $n$th root of unity $\epsilon_n \in \mathbb{C}$, we define cyclic subgroups of order $n$ in $\opname{GL}(\mm{m}, \mathbb{C})$ \begin{align} {\frac{1}{n}(a_1, \ldots, a_m)}=\left\langle \mathsf{diag}\left(\epsilon_n^{a_1}, \ldots \epsilon_n^{a_m}\right) \right\rangle \subset \opname{GL}(\mm{m}, \mathbb{C}), \text{ where } a_i \in \mathbb{Z}_{>0}. \end{align} The invariant rings under the diagonal action on $\mathbb{C}\llbracket z_1, \ldots, z_m \rrbracket$ are denoted by \begin{align} \mathbb{C}\llbracket z_1, \ldots, z_m \rrbracket^{\frac{1}{n}(a_1, \ldots, a_m)}. \end{align} \begin{itemize} \item[(a)] Iterating Kn{\"o}rrer's equivalences \eqref{E:Knoerrer} for $0 \neq f \in P_n$ yields \begin{align} D^{sg}(P_{n}/(f)) \cong D^{sg}(P_{n+2m}/(f+z_1^2 + \ldots + z_{2m}^2)). \end{align} These are the only known non-trivial equivalences involving Gorenstein singularities -- which in view of Theorem \ref{T:MAIN} and Corollary \ref{C:MAIN} is maybe not so surprising. \item[(b)] Relative singularity category techniques \cite{KY16, KY18}, yield singular equivalences \cite{YangPrivate} \begin{align}\label{E:Yang} D^{sg}\left(\mathbb{C}\llbracket y_1, y_2\rrbracket^{\frac{1}{n}(1, 1)}\right) \cong D^{sg}\left(\frac{\mathbb{C}[z_1, \ldots, z_{n-1}]}{(z_1, \ldots, z_{n-1})^2}\right). \end{align} These equivalences can also be deduced from \cite{Kawamata15}. In \cite{KK17}, we give another proof of \eqref{E:Yang} \mm{and also construct (noncommutative) finite dimensional algebras $K_{n, a}$} that generalize \mm{\eqref{E:Yang}} to all cyclic quotient surface singularities \mm{$\mathbb{C}\llbracket y_1, y_2\rrbracket^{\frac{1}{n}(1, a)}$}. \item[(c)] The following are the only known singular equivalences that do not preserve the parity of the Krull dimension, see \cite{K21}. \begin{align}\label{E:K21} D^{sg}\left(\mathbb{C}\llbracket x_1, x_2, x_3\rrbracket^{\frac{1}{2}(1, 1, 1)}\right) \cong D^{sg}\left(\mathbb{C}\llbracket y_1, y_2\rrbracket^{\frac{1}{4}(1, 1)}\right) \cong D^{sg}\left(\frac{\mathbb{C}[z_1, \ldots, z_{3}]}{(z_1, \ldots, z_{3})^2}\right). \end{align} \end{itemize} \begin{rem} The triangle equivalences listed above can be lifted to quasi-equivalences between dg singularity categories, cf. Theorem \ref{T:MAIN} for (a) and \cite{KW} for (b) and (c). We do not know any examples of singularity categories, which are triangle equivalent and \emph{not} quasi-equivalent as dg categories. \end{rem} \noindent \emph{Acknowledgement.} I am very grateful to Bernhard Keller and Evgeny Shinder for asking me the question that started this work. Moreover, discussions with Evgeny Shinder were of great help for proving the main theorem. I would like to thank the anonymous referee of \cite{K21} for pointing out the graded singular equivalences in \cite{HIMO}, which led me to Proposition \ref{P:Knoerrer}. I am grateful to Nils Carqueville and Zhengfang Wang for feedback on a preliminary version of this article. \section{Consequences of triangle equivalences between singularity categories} \noindent In this section, we collect results about singularity categories and \emph{triangle} equivalences between them. These statements are either known or follow from combinations of well-known results. Our main references for singularity categories is \cite{Buchweitz}, see also \cite{Yoshino} for Gorenstein singularities. The triangulated singularity category detects Gorenstein isolated singularities. \begin{thm}\label{T:Auslander} Let $(R, \mathfrak{m}, k)$ be a complete local commutative Noetherian $k$-algebra with $k=R/\mathfrak{m}$. Then the following statements are equivalent. \begin{enumerate} \item[(a)] $R$ is Gorenstein and has an isolated singularity. \item[(b)] $D^{sg}(R)$ is Hom-finite over $k$, i.e. $\opname{Hom}_{D^{sg}(R)}(M, N)$ is a finite dimensional $k$-vector space for all $M, N \in D^{sg}(R)$. \end{enumerate} Moreover, if these equivalent conditions are satisfied, then the category $D^{sg}(R)$ has a Serre functor, which is given by the shift functor $[d-1]$. \end{thm} \begin{proof} If $R$ is Gorenstein, Buchweitz shows a triangle equivalence \cite{Buchweitz} \begin{align}\label{E:Buchweitz} D^{sg}(R) \cong \ul{\opname{MCM}\nolimits}(R). \end{align} By work of Auslander \cite{Aus84}, the latter category is Hom-finite if and only if $R$ has an isolated singularity. This shows that (a) implies (b) and in view of \eqref{E:Buchweitz} it also shows that for the implication $(b) \Rightarrow (a)$ it is enough to prove that $R$ is Gorenstein. Let $\opname{\widehat{Ext}}^0_R(k, k)$ be the stable cohomology (in the sense of \cite{AV}) of the $R$-module $k=R/\mathfrak{m}$. There is an isomorphism of $k$-vector spaces \begin{align} \opname{\widehat{Ext}}^0_R(k, k) \cong \opname{Hom}_{D^{sg}(R)}(k, k), \end{align} cf.~\cite[1.4.2]{AV}\footnote{More precisely, for finitely generated $R$-modules $M, N$ there are isomorphisms \begin{align} \opname{Hom}_{D^{sg}(R)}(M, N) \cong \lim_n \ul{\opname{Hom}}_R(\Omega^n(M), \Omega^n(N)) \cong \lim_n \opname{Ext}^n_R(M, \Omega^n(N)) \cong \opname{\widehat{Ext}}^0_R(M, N), \end{align} where the first isomorphism follows from \cite[Corollary 3.9(1)]{Bel00} and the last isomorphism is \cite[Lemma 5.1]{YoshinoHalf}, which uses the notation $\opname{\check{E}xt}^i_R(M, N)$ for the stable cohomology $\widehat{\opname{Ext}}^i_R(M, N)$ and $\Omega_n$ for the $n$th syzygy. }. Therefore, if $D^{sg}(R)$ is Hom-finite, then $\opname{\widehat{Ext}}^0_R(k, k)$ is a finite dimensional $k$-vector space, which shows that $R$ is Gorenstein by \cite[6.4]{AV}. The statement about the Serre functor follows from \eqref{E:Buchweitz} and \cite{Aus78}. \end{proof} \begin{lem}\label{L:Cohen} Let $(R, \mathfrak{m})$ be a complete local commutative Noetherian $\mathbb{C}$-algebra. Consider the following statements. \begin{itemize} \item[(a)] There is a $\mathbb{C}$-linear equivalence $D^{sg}(R) \cong D^{sg}(S)$, where $S \cong P_n/I$ defines an isolated Gorenstein singularity. \item[(b)] $D^{sg}(R)$ is Hom-finite over $\mathbb{C}$. \item[(c)] $R/\mathfrak{m} \cong \mathbb{C}$. \item[(d)] $R \cong P_m/J$. \end{itemize} The following implications hold: $(a) \Rightarrow (b)$ and $(c) \Rightarrow (d)$. Moreover, if $\opname{gldim}\nolimits R = \infty$, then $(b) \Rightarrow (c)$. \end{lem} \begin{proof} Our assumption $S \cong P_n/I$ implies $S/\mathfrak{m} \cong \mathbb{C}$. In combination with Theorem \ref{T:Auslander} this shows that $(a) \Rightarrow (b)$. We show that $(b)$ \& $\opname{gldim}\nolimits R = \infty$ imply $(c)$. If $\opname{gldim}\nolimits R = \infty$, then $\opname{Hom}_{D^{sg}(R)}(R/\mathfrak{m}, R/\mathfrak{m}) \neq 0$. By assumption $(b)$, this is a finite dimensional $\mathbb{C}$-vector space. It is also a $R/\mathfrak{m}$-vector space. This shows that the field extension $\mathbb{C} \subseteq R/\mathfrak{m}$ is finite, which gives $\mathbb{C} \cong R/\mathfrak{m}$ as $\mathbb{C}$ is algebraically closed. The implication $(c) \Rightarrow (d)$ is part of the Cohen structure theorem (cf.~e.g.~\cite[\href{https://stacks.math.columbia.edu/tag/032A}{Theorem 032A}]{stacks-project}). \end{proof} The singularity category also detects hypersurface singularities. \begin{thm}\label{T:EisenbudGulliksen} Let $(R, \mathfrak{m})$ be a complete local Noetherian Gorenstein $\mathbb{C}$-algebra, s.th. $R/\mathfrak{m} \cong \mathbb{C}$. Then the following statements are equivalent. \begin{itemize} \item[(a)] $R \cong P_n/(f)$ is a hypersurface singularity. \item[(b)] The shift functor of $D^{sg}(R)$ satisfies $[n] \cong \mathrm{id}$ for some $n \in \mathbb{Z} \setminus \{0\}$. \item[(c)] The shift functor of $D^{sg}(R)$ satisfies $[2] \cong \mathrm{id}$. \end{itemize} \end{thm} \begin{proof} If $R$ is a hypersurface singularity, then $D^{sg}(R)$ is equivalent to a homotopy category of matrix factorizations by Theorem \ref{T:BuchweitzEisenbud}. By definition, the latter category has a shift functor satisfying $[2] \cong \mathrm{id}$. This shows that (a) implies (c). The implication (c) to (b) is clear. If the shift functor satisfies $[n] \cong \mathrm{id}$ for some $n \in \mathbb{Z} \setminus \{0\}$, then $R$ is a hypersurface by \cite[Remark 6.5.12]{Booth}, which builds on work of Gulliksen \cite{Gulliksen}. Since $R/\mathfrak{m} \cong \mathbb{C}$, it follows from Lemma \ref{L:Cohen} that $R \cong P_n/(f)$. \end{proof} \begin{thm}\label{T:Hyper} Let $(R, \mathfrak{m})$ be a complete local commutative Noetherian Gorenstein $\mathbb{C}$-algebra of Krull dimension $d$ with an isolated singularity. Let ${\mathcal F} \neq 0$ be a Hom-finite $\mathbb{C}$-linear triangulated category with Serre functor $\mathbb{S}_{\mathcal F}$ satisfying \begin{align}\label{E:fractCY} \mathbb{S}_{\mathcal F}^n \cong [m] \quad \text{ for } \ (n, m) \in \mathbb{Z}_{>0} \times \mathbb{Z} \setminus \{(n , n(d-1))\}. \end{align} If there is a $\mathbb{C}$-linear equivalence of triangulated categories \begin{align}\label{E:fractional} D^{sg}(R) \cong {\mathcal F}, \end{align} then $R \cong P_d/(f)$ is a hypersurface. \end{thm} \begin{proof} By \cite{BondalKapranov}, Serre functors are unique up to isomorphism. Combining the equivalence \eqref{E:fractional} with the fact that $D^{sg}(R)$ has Serre functor $[d-1]$ (Theorem \ref{T:Auslander}) shows that $\mathbb{S}_{\mathcal F} \cong [d-1]$. Now \eqref{E:fractCY} yields natural isomorphisms in $D^{sg}(R)$ \begin{align} [m] \cong \mathbb{S}_{\mathcal F}^n \cong [n(d-1)] \quad \Rightarrow \quad [m - n(d-1)] \cong \mathrm{id}, \end{align} where $m - n(d-1) \neq 0$ by assumption. Therefore, the statement follows from Theorem \ref{T:EisenbudGulliksen} -- note that $R/\mathfrak{m} \cong \mathbb{C}$ by the implication (b) $\Rightarrow$ (c) in Lemma \ref{L:Cohen}, where we use that ${\mathcal F} \neq 0$ implies $\opname{gldim}\nolimits R = \infty$. \end{proof} Using the results above we can now collect known implications of \emph{triangle} equivalences between singularity categories of commutative $\mathbb{C}$-algebras. \begin{cor}\label{C:Hyper} Let $R=P_n/I$ and let $S$ be a commutative complete local Noetherian $\mathbb{C}$-algebra. If there is a $\mathbb{C}$-linear triangle equivalence \begin{align} D^{sg}(R) \cong D^{sg}(S), \end{align} then the following statements hold \begin{itemize} \item[(a)] $R$ is Gorenstein and has an isolated singularity if and only if $S$ has these properties. In particular, if these equivalent conditions are satisfied, then $S \cong P_m/J$. \item[(b)] If $R$ is a Gorenstein isolated singularity and $\opname{kr.dim}\nolimits R \neq \opname{kr.dim}\nolimits S$, then $R$ and $S$ are hypersurfaces. \item[(c)] $R$ is a isolated hypersurface singularity if and only if $S$ is a isolated hypersurface singularity. If these equivalent conditions are satisfied, then $S \cong P_m/(g)$. \end{itemize} \end{cor} \begin{proof} Part (a) follows from the categorical characterization of Gorenstein isolated singularities in Theorem \ref{T:Auslander} together with the implications (a) $\Rightarrow$ (c) \& (d) in Lemma \ref{L:Cohen}. Part (b) is a consequence of part (a) in combination with Theorem \ref{T:Hyper} applied to ${\mathcal F}=D^{sg}(S)$ with $(n, m)=(1, e-1)$. Since hypersurface singularities are Gorenstein, part (c) follows from part (a) together with the categorical characterization of hypersurface singularities in Theorem \ref{T:EisenbudGulliksen}. \end{proof} \begin{rem} More generally, if a complete local Gorenstein algebra $S$ is (triangle) singular equivalent to a complete intersection $R=P_n/(f_1, \ldots, f_c)$ of codimension $c$, then $S$ is isomorphic to a complete intersection $P_n/(g_1, \ldots, g_c)$ of codimension $c$, cf. \cite{Puthenpurakal}. \end{rem} \noindent Theorem \ref{T:Hyper} and Corollary \ref{C:Hyper} also have consequences for the existence of singular equivalences involving finite dimensional associative $\mathbb{C}$-algebras. \begin{cor}\label{C:NC} Let $R$ be a commutative complete local Noetherian Gorenstein $\mathbb{C}$-algebra of Krull dimension $d > 0$ and let $A$ be a finite dimensional connected associative $\mathbb{C}$-algebra. If there is a $\mathbb{C}$-linear triangle equivalence \begin{align} D^{sg}(R) \cong D^{sg}(A), \end{align} then the following statements hold \begin{itemize} \item[(a)] If $A$ is commutative, then $R$ is a hypersurface and $A \cong \mathbb{C}[x]/(x^n)$. \item[(b)] If $A$ is symmetric, i.e. $A \cong \opname{Hom}_\mathbb{C}(A, \mathbb{C})$ as $A$-$A$-bimodules, then $R$ is a hypersurface. \end{itemize} \end{cor} \begin{proof} Part (a) is a special case of Corollary \ref{C:Hyper} (b). Part (b) follows from Theorem \ref{T:Hyper} (b) applied to ${\mathcal F}=D^{sg}(A)$, which has Serre functor $[-1]$, see e.g. \cite{KrauseIyengar20}. \end{proof} \noindent The following result follows from work of Kn{\"o}rrer \cite{KnorrerCMmodules}, cf. \cite[Appendix]{K21} for (a) $\Rightarrow$ (b). \begin{prop}\label{P:Knoerrer} Let $n \in \mathbb{Z}_{\geq 0}$ and $m \in \mathbb{Z}_{> 0}$. Assume that $0 \neq f \in P_n$ has an isolated singularity. Then the following statements are equivalent. \begin{itemize} \item[(a)] There is a $\mathbb{C}$-linear triangle equivalence \begin{align} D^{sg}(P_{n}/(f)) \cong D^{sg}(P_{n+m}/(f+z_{n+1}^2 + \ldots + z_{n+m}^2)). \end{align} \item[(b)] $m$ is even. \end{itemize} \end{prop} \begin{proof} The implication (b) $\Rightarrow$ (a) follows by iterating Theorem \ref{T:Knoerrer}. To see the other direction, we assume that $m$ is odd. We first note that by Theorem \ref{T:Knoerrer} it is enough to consider the case $m=1$. This case is shown in \cite[Appendix]{K21}. For the convenience of the reader we repeat the argument. The Serre functors $[n-1]$ and $[n]$ of the equivalent categories $D^{sg}(P_{n}/(f))$ and $D^{sg}(P_{n+1}/(f+z_{n+1}^2 ))$ are isomorphic \cite{BondalKapranov}. This yields the following natural isomorphism in both categories \begin{align} [1] \cong \mathrm{id}. \end{align} In particular, $X[1]\cong X$ for every indecomposable object $X$ in $D^{sg}(P_{n}/(f))$. It follows from \cite[Prop. 2.7. i)]{KnorrerCMmodules} that there is an indecomposable matrix factorization $Y$ of $f+z_{n+1}^2$ such that $Y[1]\ncong Y$. Since $X$ corresponds to a non-trivial matrix factorization, $Y$ is non-trivial by \cite[Lemma 2.5. ii)]{KnorrerCMmodules} and therefore $Y \not\cong 0$ in $D^{sg}(P_{n+1}/(f+z_{n+1}^2 ))$. This contradicts $[1] \cong \mathrm{id}$ in $D^{sg}(P_{n+1}/(f+z_{n+1}^2 ))$ and shows that (a) is impossible. \end{proof} \section{Proof of Theorem \ref{T:MAIN} and Corollary \ref{C:MAIN}} \begin{defn} Let $R=P_n/(f)$ be a hypersurface singularity. The algebra \begin{align} T_R = P_n/(f, \partial_0 f, \ldots, \partial_n f) \end{align} is called the \emph{Tyurina algebra} of $R$. \end{defn} By definition, the Tyurina algebra is invariant under changing a hypersurface by adding squares in additional variables. \begin{lem}\label{L:Tyurina} Let $R=P_n/(f)$ and $S=P_m/(f+z_{n+1}^2 + \ldots + z_m^2)$ be hypersurface singularities. Then there is an isomorphism of algebras \begin{align} T_R \cong T_S. \end{align} \end{lem} We are ready to prove Theorem \ref{T:MAIN}. \begin{proof} We start with showing that (a) implies (b). The quasi-equivalence between dg singularity categories \eqref{E:Quasi} yields a triangle equivalence between singularity categories \begin{align}\label{E:SingTriang} D^{sg}(R) \cong D^{sg}(S). \end{align} Since $R$ is an isolated hypersurface singularity the same is true for $S$ by Corollary \ref{C:Hyper} (c). Now, the quasi-equivalence \eqref{E:Quasi} between dg singularity categories of hypersurfaces $R$ and $S \cong P_e/(g')$, yields an isomorphism between their Tyurina algebras \begin{align}\label{E:Tyurina} T_R \cong T_{S}, \end{align} see \cite{HuaKeller} cf. also \cite[Theorem 6.6.11]{Booth}. Without loss of generality, assume that $e\geq d$ and define $R'=P_e/(f + z_{d+1}^2 + \cdots z_{e}^2)$. By Lemma \ref{L:Tyurina}, we have \begin{align}\label{E:Tyurina2} T_R \cong T_{R'}. \end{align} Now $R'$ and $S$ are complete local $\mathbb{C}$-algebras of the same Krull dimension $e$, with isomorphic Tyurina algebras (by \eqref{E:Tyurina} and \eqref{E:Tyurina2}). Therefore, the formal version \cite[Prop. 2.1.]{GreuelPham} of the Mather--Yau Theorem \cite{MY}, yields an isomorphism \begin{align} P_e/(f + z_{d+1}^2 + \cdots z_{e}^2) = R' \cong S \end{align} So we can set $g=f + z_{d+1}^2 + \cdots z_{e}^2$. It remains to show that $e-d$ is even. Assume that $e-d>0$ is odd. Using that $R' \cong S$, \eqref{E:SingTriang} and Kn{\"o}rrer periodicity \eqref{E:Knoerrer}, we know that \begin{align*} D^{sg}\left(\frac{P_e}{(f + z_{d+1}^2 + \cdots z_{e}^2)}\right) \cong D^{sg}(R') \cong D^{sg}(S) \cong D^{sg}(R) \cong D^{sg}\left(\frac{P_{e-1}}{(f + z_{d+1}^2 + \cdots z_{e-1}^2)}\right) \end{align*} But Proposition \ref{P:Knoerrer} shows that a triangle equivalence between singularity categories of hypersurface singularities $P_{n}/(h +z_n^2)$ and $P_{n-1}/(h)$ cannot exist. This contradiction shows that $e-d$ is even and completes the proof of the implication (a) $\Rightarrow$ (b). The other implication (b) $\Rightarrow$ (a) follows from Kn{\"o}rrer's periodicity result \cite{KnorrerCMmodules}, which is induced by a quasi-equivalence between the corresponding dg singularity categories by work of Orlov \cite{Orlov2, OrlovIdempotent}, cf.~e.g.~\cite[Theorems 1.3 and 1.5]{PavicShinder}. \end{proof} Corollary \ref{C:MAIN} follows from Corollary \ref{C:Hyper} (b) in combination with Theorem \ref{T:MAIN}.
\section{Introduction} MRIs provide us with the technology to detect tumors or neoplasms at an early stage and provides essential information for early disease detection, i.e., identify abnormal or diseased tissue. A neoplasm is this uncoordinated abnormal and excessive growth of a tissue occurring inside the body \cite{b1}. This growth is referred to as a tumor when it forms a mass. However, it should be noted that neoplasms do not always form a mass \cite{b2} and some do not form a tumor such as leukemia and forms of carcinoma in situ. Furthermore, the growth of a neoplasm is independent of its surrounding tissues. Regardless of removing the original growth trigger \cite{b3}, the neoplasm or tumor persists to grow at an abnormal rate \cite{b4}\cite{b5}. Thus, presenting a threat to the human anatomy. \subsection{Neoplasms or tumors} Neoplasm can be grouped into five categories as according ICD-O behavior codes \cite{b6} and ICD-10 \cite{b7}. These include: \begin{itemize} \item{Benign neoplasms – noncancerous,} \item{Neoplasms of uncertain and unknown behavior,} \item{Carcinoma in situ – These will not spread and grow in situ. These could potentially be cancer \cite{b8}.} \item{Malignant neoplasms stated or presumed to be primary, of lymphoid, hematopoietic and related tissue and} \item{Malignant neoplasms of ill-defined, secondary and unspecified sites.} \end{itemize} A malignant neoplasm or malignant tumor is also known as cancer. These cells divide and grow excessively to form lumps that are cancerous \cite{b9}. Hence, spreading to other parts of the body and invading healthy tissues. Treatments include chemotherapy and radiation therapy that are used to kill cancer cells throughout and specific parts of the body, respectively. \subsection{Magnetic Resonance Imaging} Rather than using X-rays or ionizing radiation like CAT or PET scans do, MRI scanners use radio waves and strong magnetic fields to produce cross-sectional images of the internal anatomy of the body. An MRI system works on the principle of nuclear magnetic resonance (NMR) and consists of the following components as depicted in figure \ref{fig1} \begin{itemize} \item{The main magnet used to generate a strong uniform static field or the $B_0$ field. This partially polarizes the nuclear spins and causes the hydrogen atom to line up in the direction of the field. The strength of the magnetic field produced by this magnet is typically between 0.5 tesla to 2.0 tesla \cite{b10}} \item{The magnetic field gradient system consisting of a gradient controller and a gradient coil.} \item{The radio frequency (RF) system consists of an RF coil, an RF amplifier, and an RF controller. The RF transmitter coil generates a rotating magnetic field, $B_1$, for exciting a spin system in the unmatched protons. This specific resonance frequency is based on the tissue being image and is termed as the Larmour frequency.} \item{The receiver coil is connected to the computer system via a Digital to Analog converter (DAC). This coil converts the magnetization into an electric signal for imaging. } \end{itemize} \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{imgs/fig1.png}} \caption{A Magnetic Resonance Imaging System (MRI), simplified. \cite{b11}} \label{fig1} \end{figure} \subsection{Fourier Transforms} The magnetic signal received is decomposed as the sum of a series of simple waves with varying amplitudes and frequencies using Fourier transforms (FTs) \cite{b12}. Figure \ref{fig2} illustrates this decomposition from a complicated signal to simple waves. \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{imgs/fig2.png}} \caption{Generating a complicated signal by superimposing three simpler waves. \cite{b12}} \label{fig2} \end{figure} FT isolates the critical components of an image such as by expressing the signal (i.e., a function of time) into its underlying frequencies. FT are classified as orthogonal sinusoidal basis function, and this is known as the frequency domain representation of the original signal. Equation \eqref{eq1} defines the frequency domain representation or Fourier transform of a continuous function of time, $f(t)$ \cite{b13}. While, equation \eqref{eq2} denotes the same equation using Euler’s formula, $e^{j\theta}=\cos\theta+j\sin\theta$. Note that since $t$ is integrated out, we can rewrite $\mathfrak{I}\{f(t)\} $ as a function of $\mu$. We can represent this as $\mathfrak{I}\{f(t)\} = F(\mu)$. \begin{equation} \mathfrak{I}\{f(t)\} = \int_{-\infty}^{\infty} f(t)e^{-2\pi j\mu t}dt\label{eq1} \end{equation} \begin{equation} F(\mu) = \int_{-\infty}^{\infty} f(t) [\cos(2\pi \mu t) - j \sin(2\pi \mu t)]dt\label{eq2} \end{equation} where, $t$ and $\mu$ are continuous variables. \subsection{Image Acquisition in MRI} Since all the spin systems in the protons process at the same frequency and phase dictated by the magnetic field, $B_0$, a dynamically changing gradient field is applied for the separation of spin systems \cite{b14}. This is followed by applying the FT on the digitized signal and converting it into its Fourier k-space. A k-space is where the signal is organized into its spatial frequencies and amplitude information. Figure \ref{fig3} depicts this process. An inverse Fourier transform (IFT) is then applied to transform the image to the image space as shown in figure \ref{fig3}. This entire step by step process can be illustrated in a simplified manner by Figure \ref{fig4}. The figure gives us an overview of the MR imaging process from a signal processing perspective. \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{imgs/fig3.png}} \caption{A small part of a coronal slice of a brain interrogated for all its spatial frequencies and amplitude information in Fourier k-spaces. The summation of relative frequencies and the IFT of all other points in k-space contributes to give the image space. \cite{b12}} \label{fig3} \end{figure} \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{imgs/fig4.png}} \caption{The MR imaging process used in image acquisition. A simplified overview. \cite{b11}} \label{fig4} \end{figure} \section{Image Processing Techniques} The previous section discussed the definition of a tumor and the process of MRI. Furthermore, it introduced the Fourier Transform and discussed its purpose in imaging the digitized signal received from the MRI system. The following section will discuss the step by step techniques in detecting tumors in the images received from MRI scans. Following image acquisition, this section focuses on segmentation techniques such as MTs and RGTs in order to identify tumors at an early stage. Patil and Bhalchandra present a MATLAB step by step implementation of brain tumor extraction in \cite{b15}. This method incorporates filters for noise removal, filters for enhancement, segmentation and morphological operations to detect the tumor. \subsection{Preprocessing} \subsubsection{Acquiring a grayscale MRI scan} This is the first step of any image processing. The object of interest is captured by a sensor (e.g., camera) and then digitized using an analog to digital converter. The acquired magnetic resonance images are represented in grayscale. The intensity or amplitude of a grayscale image is represented as a function $f(x,y)$ where $x$ and $y$ are the spatial coordinates of the image. It should be noted that $f,x$ and $y$ are finite and discrete quantities. Since a grayscale image is represented as an 8-bit image, the value ranges from 0 to 255. With 0 being the weakest intensity and represented as the color black. This is due to the absence of light. While, 1 being the strongest intensity and represented as the color white. This is caused by the ``total transmission of light at all visible wavelengths" \cite{b16}. \subsubsection{High Pass filter for Image Sharpening} High pass filters or a sharpening filter is used for preserving all the high-frequency information in an image while reducing low frequencies. A fraction of the image after passing it through a high-pass filter can be added to the original image to obtain an enhanced version of the input image \cite{b17}. However, high-pass filters are very sensitive to noise as depend mainly on elevating high frequencies and attenuating lower ones. However, Russo presents a new approach in \cite{b18} for the contrast enhancement of image based on a multiple output system. The chief advantage of this technique is the superior performance in the event of corruption due to Gaussian noise. This is done by adopting fuzzy models. \subsubsection{Median filter for Image quality enhancement} Median filters are order-statistics filters. These are a form of nonlinear smoothing operators used to perform noise reduction on an image or signal. Median filters are typically used for salt and pepper noise, also known as impulse noises, that can occur due to random bit error during image transmission or conversion \cite{b19}. The median filtering algorithm works by running through a window of entries. This window slides over the entire signal. Suppose the window is of size $(2K+1)+(2L+1)$ at position $(k,l)$, then the input samples would be defined as – $u_{k-K,l-L},...,u_{k,l},...,u_{k+K,l+L}$. Figure \ref{fig5} illustrates this calculation of the median value. \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{imgs/fig5.png}} \caption{Calculation of median value using the neighborhood values\cite{b24}} \label{fig5} \end{figure} However, median filters present the issue of slight image blurring due as they also tend to smoothen the image details. To overcome this issue, Sun and Neuvo present a Detail-preserving median based filter in \cite{b20}. Their approach outperforms the weighted median filter \cite{b21}, stack filters \cite{b22} and adaptive weighted mean filer \cite{b23}. This approach removes impulses with minimal signal distortion while being detail preserving. Furthermore, unlike median filters, the detail-preserving median filter does not affect the image if impulse corruption is absent. Hence, making it an ideal prefilter for tumor extraction. \subsection{Segmentation} \subsubsection{Thresholding} This is considered to be the most trivial method of image segmentation \cite{b25}. Equation \eqref{eq3} represents the thresholding process of converting a grayscale image into a binary image \begin{equation} g(x,y) = \begin{cases} 1 & \text{if} f(x,y) > T \\ 0 & \text{if} f(x,y) \le T \end{cases} \label{eq3} \end{equation} Where, $T$ is the fixed threshold value ranging between 0 and 255 and $g(x,y)$ is a binary intensity value (since it can only be 0 or 1) of a pixel at the spatial coordinate $(x,y)$. \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{imgs/fig6.png}} \caption{The effect of thresholding (\textit{right}) on an image (\textit{left}).\cite{b26}} \label{fig6} \end{figure} Some of the common thresholding techniques are explained in \cite{b27}: \subsubsection{Global Thresholding (Single Threshold)} These are used when the differences between the foreground and background are very distinct. Have also proposed a novel global thresholding algorithm that uses boundary blocks for extracting a bimodal histogram \cite{b28}. \begin{itemize} \item{Traditional Thresholding (Otsu's method) \cite{b29} – used when the image has two distinct peaks in its histogram representation. This method calculates the optimum threshold separating the two classes such that their inter-class variance is maximum. } \item{Iterative Thresholding (A new iterative triclass thresholding technique) \cite{b30}– This method first uses Otsu’s method to obtain the threshold and the means of the two separated classes. The image is then separated into three classes using the means derived from the two classes. The first two classes will not be processed further. They are termed as the foreground and the background. The third class is referred to as the ``To Be Determined" (TBD) region and is involved in the next iteration of triclass separation using Otsu’s method. This method identifies weak objects and reveals fine structures of complex objects better than Otsu’s original approach. } \item{Multistage Thresholding (Quadratic Ratio Technique for Handwritten Character) – as the name suggests, QIR is used for retaining all the details of handwritten, hence it would not perform well for MRI images. Due to the use of fuzzy stage in the iteration, it performs better than other approaches for segmenting handwritten characters.} \end{itemize} \subsubsection{Local Thresholding} \begin{itemize} \item{Single Threshold – uses a single threshold value as described in equation \eqref{eq3}} \item{Multiple Threshold \cite{b31} – segments image into multiple levels using its mean and variance.} \end{itemize} \begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{imgs/fig7.png}} \caption{Some popular methods for image thresholding.\cite{b27}} \label{fig7} \end{figure} Global thresholding methods tend to work well for medical images when the object of interest is significantly different from the background with respect to some characteristic. Such methods such as the one proposed by Bao and Zhang \cite{b32}, can also be used for noise detection while preserving edges in MRI images. Such methods also tend to perform better than wavelet-thresholding denoising methods. Furthermore, a multilevel thresholding method suggested by Manikandan et al. in \cite{b33} segments medical images by maximizing entropy. This method uses a real coded genetic algorithm with SBX crossover and performs more consistently for medical images. \subsubsection{Watershed segmentation} The watershed transformation process treats the gray-level image as a topographic relief. The brightness or intensity of each point is treated as its altitude. Based off of a geological watershed, a drop of water falls onto the surface, seeps along a path, then reaches a local minimum. This is used in the separation of adjacent drainage basins and find watershed lines. Furthermore, as proposed by Najman and Schmitt in \cite{b35}, watershed algorithms can also be specified over a continuous domain. Some of the different watershed definitions are: \begin{itemize} \item{Watershed by flooding – This method was proposed by Buecher and Lantuejoul in \cite{b36}. Their method extends the idea of drainage basins by continuously allowing ``water" from sources to collect in the local minima until the complete relief is flooded. Furthermore, a barrier is built where the ``water" sources meet. The arrangement of these barriers marks a watershed formed via flooding. One improvement of this method is the Priority-flood method \cite{b37}. } \item{Watershed by topographic distance – This definition verifies that the catchment basin is the local minimum in the topographic relief.} \item{Watershed by the drop of water principle – This idea was formally proposed by Cousty et al. in \cite{b38}. Intuitively the watershed of relief corresponds to the distinct local minima where a ``drop of water" can flow into.} \end{itemize} \subsubsection{Inter-pixel watershed algorithm} This approach was proposed by Beucher and Meyer in \cite{b39}. The algorithm can be described as \ref{al1}. \begin{algorithm}[htbp] \SetAlgoLined \textbf{Initialize} a set \textit{S} with distinct label nodes for each minimum\; \For{\textit{S} $\neq \{\theta\}$}{ \textbf{Extract} a node \textit{x} of minimum altitude\; \textbf{Attribute} the label of \textit{x} to each non-labeled node \textit{y} neighboring \textit{x}\; \textbf{Insert} \textit{y} into set \textit{S}\; } \caption{Inter-pixel watershed algorithm} \label{al1} \end{algorithm} \subsubsection{Meyer’s flooding algorithm} Proposed by Meyer and Maragos in \cite{b40}, this multiscale segmentation scheme works on grayscale images. A gradient image is used for the flooding process. Since successive flooding leads to the formation of adjacent catchment basins, basins emerge along the images. Hence the noise would lead to over-segmentation of the image. Thus, requiring that the data be preprocessed. Another approach is to merge regions based on similarity criterion afterwards. The algorithm works as described in \ref{al2}. \begin{algorithm}[htbp] \SetAlgoLined \KwResult{non-labeled pixels as watershed lines} \textbf{Initialize} a random set of seed markers for flooding, each with a distinct label\; \For{\textit{neighboring pixels} of each marker}{ \textbf{Enqueue} pixel to priority queue \textbf{P} (Associated priority is the gradient magnitude of each pixel)\; } \For{\textbf{P} $\ne \{\theta\}$}{ \textbf{Dequeue} pixel $p_l$ with least priority\; \If{neighboring pixels of $p_l$ have the same labels}{ \textbf{label} $p_l$ with neighbors' label\; } \textbf{Enqueue} unlabeled neighboring pixels of $p_l$ } \caption{Meyer’s flooding algorithm} \label{al2} \end{algorithm} \subsubsection{Region Competition} A novel algorithm proposed by Zhu and Yuille in \cite{b41} by unifies the following approaches: \begin{itemize} \item{snakes \cite{b42} and balloon methods \cite{b43}\cite{b44}\cite{b45}\cite{b46}} \item{region growing and merging techniques \cite{b47}\cite{b48}\cite{b49}} \item{Bayesian \cite{b50}\cite{b51}and Minimum Description Length (MDL) Criteria \cite{b52}\cite{b53}} \end{itemize} This multiband image segmentation technique is derived by minimizing a generalized Bayesian and MDL criterion. Furthermore, it also combines the statistical features of region growing and geometrical features of snakes and balloon methods. An implementation by Amo et al. \cite{b54}, utilizes the region competition algorithm for road extraction from aerial images. The proposed implementation extracts roads, their centerlines, and sides. The algorithm utilizes the small changes in the curvature and radiometer of the road and its light appearance for extracting it from the aerial image. Hence, the implementation finds the region of interest, i.e., road margins, accurately and is robust. However, it requires a user to set seeds and hence is susceptible to human error \cite{b55}. \subsection{Morphological Operations} The last step may be morphological operations on the binary image formed. These are a collection of non-linear operations used to extract morphological features such as the form and structure of an image. Furthermore, morphological operations can also be used to remove imperfections in the segmented image. Morphological operations are performed using a structuring element to an input image, and the value is based on two factors. These are again illustrated in Figure \ref{fig8}. \begin{figure}[htbp] \centerline{\includegraphics[width=0.3\textwidth]{imgs/fig8.png}} \caption{\textit{A} is a fit, \textit{B} is a hit, \textit{C} is neither a fit nor a hit, hence we term it as a miss. \cite{b55}} \label{fig8} \end{figure} \begin{itemize} \item{\textbf{Fit}: All pixels on structuring element matches the pixels of the input image (\textit{A} in Fig. \ref{fig8})} \item{\textbf{Hit}: Any pixel on structuring element matches the pixels of the input image (\textit{B} in Fig. \ref{fig8})} \end{itemize} Some of the basic morphological operations along with their equations can be found below. Note that \textit{X} is the reference image and \textit{B} is the structuring element. \begin{itemize} \item{\textbf{Erosion}: Used for noise removal in the background and removal of holes in either the foreground or background. This process shrinks the foreground and enlarges the background. Given as: $X\ominus B = \{z|(B)_z \subseteq X \}$} \item{\textbf{Dilation}: Enlarges the foreground and shrinks the background. Helps in enlarging the region of interest if it resides in the foreground. Furthermore, it is used for bridging gaps in an image since \textit{B} is expanding the features of \textit{X}. $X\oplus B = \{z| [(\hat{B})_z \cap X] \subseteq X \}$} \item{\textbf{Opening}: Used to remove noise and Charged Coupled Defects (CCD) in images. This detail and simplifies images by rounding the corners from inside the object where the kernel fits. It is erosion followed by dilation. $X\circ B= (X \ominus B) \oplus B$} \item{\textbf{Closing}: Smoothens contours and maintains shapes and sizes of objects. Closing protects coarse structures, closes small gaps and rounds off concave corners. It is dilation followed by erosion. $X\bullet B = (X \oplus B) \ominus B$} \end{itemize} \subsection{Region filling method} Region filling methods utilize morphological operations and are also termed as coloring. It is defined by equation \eqref{eq4} \begin{equation} X_k = \{X_{k-1} \oplus B\} \cap A^c, k=1,2,3... \label{eq4} \end{equation} Where B denotes the structuring element, A denotes a set containing a subset whose elements are 8 connected boundary points of a region and k denotes the number of iterations. If the region is filled, then stop the iterations. A user could also predefine the number of iterations to fill the region. Deb, Dutta and Roy propose a novel method for noise removal from brain images in \cite{b34}. This method uses region filling to denoise the image. Region filling takes place by interposing the pixel values from the boundaries of the region of interest. The method suggests the use of an interpolation method based on Laplace's equation to obtain the smoothest possible fills at the boundaries. However, this method requires user intervention to determine the region of interest. Furthermore, the selection of the region must be accurate. \section*{Acknowledgment} The author, Jacob John would like to thank Dr. Prabu Sevugan for his continuous support throughout this paper. I would also like to thank Vellore Institute of Technology for their aid without which this paper wouldn't have been completed.
\section{Introduction} Structured matrices appear in various domains, such as scientific computing, signal processing, \dots They usually express, in a linearize way, a problem which depends on fewer parameters than the number of entries of the corresponding matrix. An important area of research is devoted to the development of methods for the treatment of such matrices, which depend on the parameters defining them. Among well-known structured matrices, Toeplitz and Hankel structures have been intensively studied \cite{MR782105,MR1355506}. Nearly optimal algorithms are known for their multiplication by a vector and the solution of linear systems, for such structure. Namely, if $A$ is a Toeplitz matrix of size $n$, multiplying it by a vector or solving a linear system with $A$ requires $\Osft(n)$ arithmetic operations (where $\Osft(n)=\Oc(n \log^{c}(n))$ for some $c>0$) \cite{LDRBA80,MR1871324}. Such algorithms are called superfast, in opposition with fast algorithms requiring $\Oc(n^{2})$ arithmetic operations. The fundamental ingredients in these algorithms are the so-called generators \cite{MR1355506}, encoding the minimal information stored in these matrices, and on which the matrix transformations are translated. The correlation with other types of structured matrices has also been well developed in the literature \cite{MR1843842,MR1755557}, allowing to treat efficiently other structures such as Vandermonde or Cauchy-like structures. Such problems are strongly connected to polynomial problems \cite{LDRFuh96b,MR1289412}. For instance, the product of a Toeplitz matrix by a vector can be deduced from the product of two univariate polynomials, and thus can be computed efficiently by evaluation-interpolation techniques, based on FFT. The inverse of a Hankel or Toeplitz matrix is connected to the Bezoutian of the polynomials associated to their generators. Such a construction is related to Gohberg-Semencul formula \cite{LDRGS72} (or Trench algorithm \cite{LDRTre64}), which describes the inverse of a Toeplitz matrix in terms of the solution of two specific Toeplitz systems (see also Gohberg-Prupnick formula \cite{LDRGK72}). Most of these methods involve univariate polynomials. So far, few investigations have been pursued for the treatment of multilevel structured matrices \cite{LDRTyr85,khalil}, related to multivariate problems. Such linear systems appear for instance in resultant or in residue constructions, in normal form computations, or more generally in multivariate polynomial algebra. We refer to \cite{MR1762401} for a general description of multi-structured matrices and their correlations with multivariate polynomials. Surprisingly, these multivariate structure also appear in numerical scheme and preconditionners \cite{khalil}. A main challenge here is to devise superfast algorithms of complexity $\Osft(n)$ for the solution of multi-structured systems of size $n$. In this paper, we re-investigate the solution of Toeplitz systems $T\, u =g$ from a new point of view which can be generalized to two-level Toeplitz systems. We correlate the solution of such problems with syzygies of polynomials. We show an explicit connection between the generators of a Toeplitz matrix and the generators of the corresponding module of syzygies. We show that this module is generated by two elements of degree $n$ and the solution of $T\,u=g$ can be reinterpreted as the remainder of an explicit polynomial vector depending on $g$, by these two generators. We give two algorithms, with computational complexity $\O(n\log^2n)$, to compute the generators of the module of syzygies. We give finally an algorithm, with computational complexity $\O(n\log^2n)$, for the division of the generators by the polynomial vector depending on $g$. Our new syzygy approach can be connected with Pad\'e approximation method developed in \cite{LDRBGY80} to compute efficiently particular solutions of Toeplitz linear system. But we replace the computation of generators of structured matrices by the computation of generators of a syzygy module and the solution of the linear system from particular solutions by Euclidean reduction by the generators of the syzygy module. Let $R=\kk[x]$. For $n \in \NN$, we denote by $\kk[x]_{n}$ the vector space of polynomials of degree $\le n$. Let $L=\kk[x,x^{-1}]$ be the set of Laurent polynomials in the variable $x$. For any polynomial $p=\sum_{i=-m}^{n} p_{i}\, x^{i} \in L$, we denote by $p^{+}$ the sum of terms with non-negative exponents: $p^{+}=\sum_{i=0}^{n} p_{i}\, x^{i}$ and by $p^{-}$, the sum of terms with strictly negative exponents: $p^{-}=\sum_{i=-m}^{-1} p_{i}\, x^{i}$. We have $p=p^{+} +p^{-}$. For $n\in \NN$, we denote by $\Unit{n}=\{\omega; \omega^{n}=1\}$ the set of roots of unity of order $n$. For a vector $u=(u_0,\dots,u_{k-1})^T\in\kk^k$, we denote by $u(x)$ the polynomial of degree $k-1$ given by $u(x)=\sum_{i=0}^{k-1}u_ix^i$. Conversely, if $v(x)=\sum_{i=0}^{k-1}v_ix^i$ is a polynomial of degree $k-1$, we denote by $v$ the vector of length $k$ of coefficients of $v(x)$. If no confusion arises, we may also use $v$ to denote the polynomial $v(x)$. \section{Sygygies and Toeplitz matrices} Let $T\in\kk^{n\times n}$ be an $n\times n$ Toeplitz matrix. Then $T$ is of the following form: \begin{equation} \begin{pmatrix} t_0&t_{-1}&\dots&t_{-n+1}\\ t_{1}&t_0&\ddots&\vdots\\ \vdots&\ddots&\ddots&t_{-1}\\ t_{n-1}&\dots&t_{1}&t_0 \end{pmatrix}. \end{equation} Let $g=(g_0,\dots,g_{n-1}) \in \kk^{n}$ be a vector of length $n$. We are interested in the following problem: \begin{prob}\label{pb:initial} Find $u=(u_0,\dots,u_{n-1}) \in \kk^{n}$ such that \begin{equation}\label{pb:toep} T\,u=g. \end{equation} \end{prob} \begin{defn} Let $E=\{1,\dots,x^{n-1}\}$, and $\Pi_{E}$ be the projection of $L$ on the vector space generated by $E$, along $\<x^{n},x^{n+1},\ldots\>$. \end{defn} \begin{defn} From the matrix $T$ and the vectors $g$ and $u$ we define the following polynomials: \begin{itemize} \item $T(x)=\displaystyle\sum _{i=-n+1}^{n-1}t_ix^i,$ \item $\tilde{T}(x)=\displaystyle\sum_{i=0}^{2n-1}\tilde{t}_ix^i$ with $\tilde{t}_i=\left\{ \begin{array}{ll} t_i&\textrm{ if } i< n\\ t_{i-2n}&\textrm{ if } i\ge n \end{array}\right.$, \item $u(x)=\displaystyle\sum_{i=0}^{n-1}u_ix^i,\:g(x)=\sum_{i=0}^{n-1}g_ix^i$. \end{itemize} \end{defn} \noindent{}Notice that $T(x)$ is a Laurent polynomial and that $\tilde{T}(x)$ is a polynomial of degree $2n-1$. By construction, we have the following properties: \begin{prop} $\tilde{T}= T^{+} + x^{2\,n}\, T^{-}$ and $T(w)=\tilde{T}(w)$ if $w \in \Unit{2\,n}$. \end{prop} \begin{proof} We can deduce directly, from the definition of $T(x)$ and $\Tl(x)$, that $\tilde{T}= T^{+} + x^{2n}\, T^{-}$. Moreover, since $w^{2n}=1$ and $\Tl(x)=T^{+}(x)+x^{2n}T^{-}(x)$, then $\Tl(w)=T^{+}(w)+T^{-}(w)=T(w)$. \end{proof} According to Proposition $2.1.2$ of \cite{MR1762401}, we have the following relation between the problem \ref{pb:initial} and polynomials: \begin{prop}\label{Toepolyn} We have $$ T\,u=g\Leftrightarrow\Pi_{E}(T(x)u(x))=g(x). $$ \end{prop} As $\Pi_{E}(T(x)u(x))$ is the polynomial $T(x)u(x)$ from which we remove terms of negative degree and of degree $\geq n$, then we can write $T(x)\, u(x)$ as following: \begin{prop}\label{transf} \begin{equation} T(x)\, u(x) = \Pi_{E}(T(x)u(x))+ x^{-n} A(x) + x^{n} B(x), \end{equation} where $A(x)\in\kk[x]_{n-1}$ and $B(x)\in\kk[x]_{n-2}$. \end{prop} \begin{proof} By expanding $T(x)u(x)$ we can write \begin{eqnarray*} T(x)u(x)&=&\Pi_{E}(T(x)u(x))+(\alpha_{-n+1}x^{-n+1}+\dots+\alpha_{-1}x^{-1}) + (\alpha_{n}x^n+\dots+\alpha_{2n-2}x^{2n-2})\\ &=&\Pi_{E}(T(x)u(x))+x^{-n}(\alpha_{-n+1}x+\dots+\alpha_{-1}x^{n-1}) + x^{n}(\alpha_{n}+\dots+\alpha_{2n-2}x^{n-2})\\ &=&\Pi_{E}(T(x)u(x))+ x^{-n} A(x) + x^{n} B(x) \end{eqnarray*} \end{proof} Therefore, according to Proposition \ref{Toepolyn} and Proposition \ref{transf}, if $u$ is solution of $Tu=g$ then there exist two polynomials $A(x)$ and $B(x)$ in $\kk[x]_{n-1}$ such that \begin{equation}\label{Tsyz} T(x)u(x)- x^{-n} A(x) - x^{n} B(x) = g(x). \end{equation} By evaluation at the roots $\omega \in \Unit{2n}$, and since $\omega^{-n}= \omega^{n}$ and $\Tl(\omega)=T(\omega)$ for $\omega\in \Unit{2n}$, we have $$ \Tl(\omega) u(\omega) + \omega^{n} v(\omega) = g(\omega), \forall \omega \in \Unit{2n}(\omega), $$ where $v(x)= -A(x)-B(x)$ of degree $\le n-1$. Therefore the polynomial $\Tl(x) u(x) + x^{n} v(x) - g(x)$ is multiple of $x^{2n}-1$. We deduce that there exists $w(x)\in\kk[x]$ such that \begin{equation}\label{TTsyz} \Tl(x) u(x) + x^{n} v(x) + (x^{2n}-1) w(x)= g(x). \end{equation} Notice that $w(x)$ is of degree $\le n-1$ because $(x^{2n}-1)\, w(x)$ is of degree $\le 3n-1$. \subsection{Syzygies} The solutions of Equation \eqref{TTsyz} is a particular case of the following problem, related to interesting questions in Effective Algebraic Geometry. \begin{prob}\label{pb:mvlines} Given three polynomials $a, b, c \in R$ respectively of degree $<l, <m, <n$, find three polynomials $p, q, r \in R$ of degree $< \nu-l, <\nu-m, <\nu-n$, such that \begin{equation} \label{eq:mvlines} a(x)\, p(x) + b(x)\, q(x) + c(x)\, r(x) =0. \end{equation} \end{prob} The polynomial vector $(p,q,r) \in\kk[x]^{3}$ is called a {\em syzygy} of $(a,b,c)$. We denote by $\ML(a,b,c)$ the set of syzygies $(p,q,r)\in\kk[x]^{3}$ of $(a,b,c)$, i.e. the solutions of \eqref{eq:mvlines}. It is a $\kk[x]$-module of $\kk[x]^{3}$ and it is called the module of syzygies of $(a,b,c)$. The solutions of Problem \ref{pb:mvlines} are $\ML(a,b,c) \cap\kk[x]_{\nu-l-1}\times\kk[x]_{\nu-m-1}\times\kk[x]_{\nu-n-1}$. Given a new polynomial $d(x)\in \kk[x]$, we denote by $\ML(a,b,c;d)$ the set of $(p,q,r)\in\kk[x]^{3}$ such that \begin{equation} a(x)\, p(x) + b(x)\, q(x) + c(x)\, r(x) = d(x). \end{equation} \begin{thm} For any non-zero vector of polynomials $(a,b,c)\in \kk[x]^{3}$, the $\kk[x]$-module $\ML(a,b,c)$ is free of rank $2$. \end{thm} \begin{proof} By the Hilbert's theorem, the ideal $I$ generated by $(a,b,c)$ has a free resolution of length at most $1$ (see \cite[chap. 6]{MR1639811}), that is of the form: $$ 0\rightarrow\kk[x]^p\rightarrow \kk[x]^3\rightarrow \kk[x] \rightarrow \kk[x]/I \rightarrow 0. $$ As $I\neq 0$, for dimensional reasons, we must have $p-3+1=0$, then $p=2$. \end{proof} \begin{defn} For a polynomial vector $p=(p_1,\dots,p_k)\in\,\kk[x]^k$, we define $$\deg(p_{1}, \ldots, p_{k})=\max(\deg(p_1),\dots,\deg(p_k)).$$ \end{defn} \begin{defn} Assume that $\deg (p,q,r)\leq \deg(p',q',r')$. A $\mu$-base of $\ML(a,b,c)$ is a basis $\{(p,q,r),\,(p',q',r')\}$ of $\ML(a,b,c)$, with $\deg(p,q,r)=\mu$. \end{defn} We have the following relation between the degrees of the two elements of a basis of $\ML(a,b,c)$: \begin{prop}\label{degree} Let $\{(p_{1},q_{1},r_{1}),\,(p_{2},q_{2},r_{2})\}$ be a basis of $\ML(a,b,c)$, $\mu_1=\deg(p_1,q_1,r_1)$ and $\mu_2=\deg(p_2,q_2,r_2)$. We have $\deg(a,b,c)=\mu_1+\mu_2$. \end{prop} \begin{proof} We have \begin{equation*} 0 \rightarrow \kk[x]_{\nu-d-\mu_1} \oplus \kk[x]_{\nu-d-\mu_2} \rightarrow \kk[x]_{\nu-d}^3\rightarrow \kk[x]_{\nu} \rightarrow \kk[x]_{\nu}/(a,b,c)_{\nu} \rightarrow 0, \end{equation*} for $\nu \gg 0$. As the alternate sum of the dimension of the $\kk$-vector spaces is zero and $\kk[x]_{\nu}/(a,b,c)_{\nu}$ is $0$ for $\nu \gg 0$, we have $$ 0 = 3\,(d-\nu-1) +\nu -\mu_1- d +1 + \nu -\mu_2 -d +1 + \nu +1 = d -\mu_1 -\mu_2. $$ \end{proof} \subsection{The module $\ML(\Tl(x), x^{n}, x^{2n}-1)$} Returning to the initial problem, we saw that if $u$ is solution of $Tu=g$ then there exist two polynomials $v(x)$ and $w(x)$ in $\kk[x]_{n-1}$ such that $(u(x),v(x),w(x))$ $\in\ML(\Tl(x), x^{n}, x^{2n}-1;g(x))$. By the proposition \ref{degree}, if $(p,q,r)$ and$(p',q',r')$ form a basis of $\ML(\Tl(x), x^{n}, x^{2n}-1)$ of degree $\mu_1$ and $\mu_2$ respectively then we have $\mu_1+\mu_2=2\,n$. We are going to show now that in fact $\ML(\Tl(x), x^{n}, x^{2n}-1)$ has a $n$-basis, that is a basis of two elements of degree $\mu_1=\mu_2=n$: \begin{prop}\label{prop:ML} The $\kk[x]$-module $\ML(\Tl(x), x^{n}, x^{2n}-1)$ has a $n$-basis. \end{prop} \begin{proof} Consider the linear map \vspace{-0.5cm} \begin{eqnarray}\label{syzfunction1} \kk[x]_{n-1}^3 &\rightarrow & \kk[x]_{3n-1}\\ (p(x),q(x),r(x)) & \mapsto & \Tl(x) p(x) + x^{n} q(x) +(x^{2n}-1) r(x),\nonumber \end{eqnarray} which $3n \times 3n$ matrix is of the form \begin{equation}\label{form:S} S:= \left( \begin{array}{c|c|c} T_{0} &\mathbf{0} & -\II_{n} \\ T_{1} & \II_{n} & \mathbf{0} \\ T_{2} & \mathbf{0} & \ \, \II_{n} \\ \end{array} \right), \end{equation} where $T_{0}, T_{1}, T_{2}$ are the coefficient matrices of $(\Tl(x)$, $x\, \Tl(x)$, $\ldots,$ $x^{n}\Tl(x))$, respectively for the list of monomials $(1,\ldots,x^{n-1})$, $(x^{n},\ldots,x^{2n-1})$, $(x^{2n},\ldots, x^{3n-1})$. Notice in particular that $T= T_{0}+T_{2}$. Reducing the first block $(T_{0}| \mathbf{0} | -\II_{n})$ by the last block $(T_{2}| \mathbf{0} | \II_{n})$, we replace it by the block $(T_{0}+T_{2}| \mathbf{0} | \mathbf{0})$, without changing the rank of $S$. As $T=T_{0}+T_{2}$ is invertible, this shows that the matrix $S$ is of rank $3n$. Therefore $\ker (S)=0$ and there is no syzygies in degree $n-1$. As the sum $2n=\mu_1+\mu_2$, where $\mu_1,\mu_2$ are the degrees of a pair of generators of $\ML(\Tl(x), x^{n}, x^{2n}-1)$, and as $\mu_1\geq n$ and $\mu_2\geq n$, we have $\mu_1=\mu_2=n$. Moreover, $\ML(\Tl(x), x^{n}, x^{2n}-1)$ is free of rank $2$. Thus there exist two linearly independent syzygies $(u_1,v_1,w_1)$, $(u_2,v_2,w_2)$ of degree $n$, which generate $\ML(\Tl(x), x^{n}, x^{2n}-1)$. \end{proof} A similar result can also be found in \cite{MR1871324}, but the proof much longer than this one, is based on interpolation techniques and explicit computations. Let us now describe how to construct explicitly two generators $(u_1,v_1,w_1)$, $(u_2,v_2,w_2)$ of $\ML(\Tl(x), x^{n}, x^{2n}-1)$ of degree $n$. As $\Tl(x)$ is of degree $\le 2\,n -1$ and the map \eqref{syzfunction1} is surjective, there exists $(u,v,w) \in \kk[x]_{n-1}^3$ such that \begin{equation}\label{base1} \Tl(x) u(x) + x^n v(x) + (x^{2\,n}-1)\, w = \Tl(x) x^n. \end{equation} We deduce that $(u_1,v_1,w_1)=(x^n-u, -v, -w) \in \ML(\Tl(x), x^{n}, x^{2n}-1)$. Since there exists $(u',v',w') \in \kk[x]_{n-1}^3$ such that \begin{equation}\label{base2} \Tl(x) u'(x) + x^n v'(x) + (x^{2\,n}-1)\, w' =1 = x^n\, x^n - (x^{2\,n}-1), \end{equation} we deduce that $(u_2,v_2,w_2)=(-u',x^n -v', -w' - 1) \in \ML(\Tl(x), x^{n}, x^{2n}-1)$. Now, T and are linearly independent since by construction, The coefficient vectors of $x^{n}$ in $(u_1,v_1,w_1)$ and $(u_2,v_2,w_2)$ are respectively $(1,0,0)$ and $(0,1,0)$, which shows that vectors $(u_1,v_1,w_1)$, $(u_2,v_2,w_2)\in \ML( \Tl(x),x^{n},x^{2n}-1)\cap \kk[x]_{n}$ are linearly independent. Therefore, they form a basis of $\ML(\Tl(x),x^{n},x^{2n}-1)$. Now we can prove our aim theorem: \begin{thm}\label{division} The vector $u$ is solution of \eqref{pb:toep} if and only if there exist $v(x)$ and $w(x)$ in $\kk[x]_{n-1}$ such that $$ (u(x), v(x), w(x)) \in \ML(\tilde{T}(x), x^{n}, x^{2n}-1; g(x) ) $$ \end{thm} \begin{proof} If $u$ is solution of \eqref{pb:toep}, we see that there exist $v(x)\in\kk[x]_{n-1}$ and $w(x)\in\kk[x]_{n-1}$ such that $$ \Tl(x) u(x) + x^{n} v(x) + (x^{2n}-1) w(x)= g(x). $$ Conversely, a solution $(u(x), v(x), w(x)) \in \ML(\tilde{T}(x),x^{n},x^{2n}-1; g(x) )\cap \kk[x]_{n-1}^{3}$ implies that $(u,v,w)\in \kk^{3\,n}$ is a solution of the linear system: $$ S \, \left( \begin{array}{c} u\\ v\\ w\\ \end{array} \right) = \left( \begin{array}{c} g\\ 0\\ 0\\ \end{array} \right), $$ where $S$ is has the block structure \eqref{form:S}, so that $T_{2}\, u + w =0$ and $T_{0}\, u - w = (T_{0}+T_{2}) u=g$. As we have $T_{0}+T_{2}=T$, the vector $u$ is a solution of \eqref{pb:toep}, which ends the proof of the theorem. \end{proof} Computing the inverse of a Toeplitz matrix $T$ is equivalent to computing the first and the last column of $T^{-1}$, based on Gohberg-Semencul decomposition (see \cite{LDRHR84,MR1179345,MR1491603,LDRKC89} for more details about Gohberg-Semencul decomposition). We are going to show that the solutions of Equations \eqref{base1} and \eqref{base2} which gives us the $n$-basis $\{(u_1,v_1,w_1),(u_2,v_2,w_2)\}$ is related to the solution of two specific Toeplitz linear systems. \begin{prop}\label{prop:2.15} Let $(u(x),v(x),w(x))$ and $(u'(x),v'(x),w'(x))$ be in $\kk_{n-1}[x]^3$ such that $$ \left\{\begin{array}{l} \Tl(x) u(x) + x^n v(x) + (x^{2\,n}-1)\, w(x) = \Tl(x) x^n,\\ \Tl(x) u'(x) + x^n v'(x) + (x^{2\,n}-1)\, w'(x) =1. \end{array}\right. $$ Then $Tu'=e_1$ and $Tu=ZTe_n$, with $Z$ is the lower shift matrix of size $n$. \end{prop} \begin{proof} As $u'(x),\,v'(x),\, w'(x)$ and $1$ are of degree $\leq n-1$, then, by Theorem \ref{division}, $\Tl(x) u'(x) + x^n v'(x) + (x^{2\,n}-1)\, w'(x) =1$ is equivalent to $Tu'=e_1$ ($e_1(x)=1$) and $u'$ is the first column of $T^{-1}$. We have $\Tl(x)=T_+(x)+x^{2n}T_-(x)$, then $$\Tl(x) u(x) + x^n v(x) + (x^{2n}-1)w(x) =x^nT_+(x)+x^n((x^{2n}-1)T_-(x)+T_-(x)).$$ Therefore, $$\Tl(x)u(x)+x^n(v(x)-T_+(x))+(x^{2n}-1)(w(x)-x^nT_-(x))=x^nT_-(x).$$ As $x^nT_-(x)$ is of degree $\leq n-1$ and is the polynomial associated with the vector $ZTe_n$, by Theorem \ref{division}, $u$ is such that $Tu=ZTe_n$. \end{proof} Notice that $u$ is not the last column of $T^{-1}$, but we can use $u$ and $u'$ to compute it (see \cite{LDRHR84}). Therefore, defining a $n$-basis of $\ML(\Tl(x),x^{n},x^{2n}-1)$ from the solution of Equations \eqref{base1} and \eqref{base2} is equivalent to computing the Gohberg-Semencul decomposition of $T^{-1}$. In the following section, we reduce translation of the solution of $Tu=g$ to an Euclidean division, based on our decomposition, instead of multiplying $g$ by triangular Toeplitz matrices, based on Gohberg-Semencul decomposition. The advantage of our decomposition is that we can generalized it to two-level problems, which allows us to describe a ``Gohberg-Semencul'' decomposition of Toeplitz-block-Toeplitz matrices. \section{Euclidean division} In this section, we show how to obtain the solution vector $(u(x),v(x),w(x))\in \ML(\Tl(x), x^{n}, x^{2\,n}-1;g(x))\cap \kk[x]_{n}^{3}$ from a $n$-basis of $\ML(\Tl(x),x^{n},x^{2n}-1)$ and a particular solution in $\ML(\Tl(x), x^{n}, x^{2\,n}-1;g(x))$. From Theorem \ref{division} we deduce the two following corollaries: \begin{cor} For all $g(x)\in \kk_{n-1}[x]$, the set $\ML(\tilde{T}(x), x^{n}, x^{2n}-1;g(x))\cap\kk_{n-1}^3[x]$ has exactly one element. \end{cor} \begin{proof} As $T$ is invertible, there exists a unique $u$ such that $Tu=g$. From the theorem \ref{division}, there exists $v(x),\,w(x)$ of degree $\leq n-1$, such that $(u(x),v(x),w(x))\in\ML(\tilde{T}(x), x^{n}, x^{2n}-1;g(x))\cap\kk_{n-1}^3[x]$. The uniqueness is also obvious: if $(u'(x),v'(x),w'(x))\in\ML(\tilde{T}(x), x^{n}, x^{2n}-1;g(x))\cap\kk_{n-1}^3[x]$, then $(u(x),v(x),w(x))-(u'(x),v'(x),w'(x))\in\ML(\tilde{T}(x), x^{n}, x^{2n}-1)\cap\kk_{n-1}^3[x]$ which equal $\{(0,0,0)\}$ (see the demonstration of the proposition \ref{prop:ML}). Then $(u(x)$, $v(x)$, $w(x))$ $=(u'(x),v'(x),w'(x))$. \end{proof} \begin{cor} Let $\{(u_1,v_1,w_1),(u_2,v_2,w_2)\}$ be a $n$-basis of $\ML(\Tl(x),x^{n},x^{2n}-1)$. Let $(p,q,r)$ be in $\ML(\Tl(x),x^{n},x^{2n}-1; g(x))$. There exists a unique $(u,v,w)\in\ML(\tilde{T}(x), x^{n}, x^{2n}-1;g(x)) \cap\kk_{n-1}^3[x]$ and a unique pair of polynomials $p_1$ and $p_2$ such that $$ \begin{pmatrix}p\\q\\r\end{pmatrix}= p_1\begin{pmatrix}u_1\\v_1\\w_1\end{pmatrix}+ p_2\begin{pmatrix}u_2\\v_2\\w_2\end{pmatrix}+ \begin{pmatrix}u\\v\\w\end{pmatrix}. $$ This decomposition is called the division of $(p,q,r)$ by $(u_1,v_1,w_1)$ and $(u_2,v_2,w_2)$. \end{cor} \begin{proof} From the previous corollary, there exist a unique element in $\ML(\tilde{T}(x), x^{n}, x^{2n}-1;g(x)) \cap\kk_{n-1}^3[x]$, let $(u,v,w)$ be this element. As $\{(u_1,v_1,w_1),(u_2,v_2,w_2)\}$ is a $n$-basis of $\ML(\Tl(x),x^{n},x^{2n}-1)$, and as $(p,q,r)-(u,v,w)\in\ML(\tilde{T}(x), x^{n}, x^{2n}-1)$, then there exist a unique pair of polynomials unique $p_1$ and $p_2$ such that $$ \begin{pmatrix}p\\q\\r\end{pmatrix}- \begin{pmatrix}u\\v\\w\end{pmatrix}= p_1\begin{pmatrix}u_1\\v_1\\w_1\end{pmatrix}+ p_2\begin{pmatrix}u_2\\v_2\\w_2\end{pmatrix} $$ \end{proof} As a consequence of the two corollaries, we have the following important property: \begin{thm}\label{divi} Let $\{(u_1,v_1,w_1),(u_2,v_2,w_2)\}$ be a $n$-basis of $\ML(\Tl(x),x^{n},x^{2n}-1)$, and let $g\in\kk^n$. The remainder of the division of $\begin{pmatrix}0\\x^n\,g\\g\end{pmatrix}$ by $\begin{pmatrix}u_1&u_2\\v_1&v_2\\w_1&w_2\end{pmatrix}$ is the unique element $(u,v,w)\in \ML(\tilde{T}(x), x^{n}, x^{2n}-1;g(x)) \cap\kk_{n-1}^3[x]$, and therefore $u$ is the solution of $Tu=g$. \end{thm} \begin{proof} The vector $\begin{pmatrix}0\\x^n\, g\\ -g\end{pmatrix}\in\ML(\Tl(x), x^{n}, x^{2\,n}-1;g)$ is a particular solution. We reduce it by $\begin{pmatrix}u_1&u_2\\v_1&v_2\\w_1&w_2\end{pmatrix}$ and obtain $$\begin{pmatrix}u\\v\\w\end{pmatrix}=\begin{pmatrix}0\\x^n\,g\\g\end{pmatrix}-\begin{pmatrix}u_1&u_2\\v_1&v_2\\w_1&w_2\end{pmatrix}\begin{pmatrix}p\\q\end{pmatrix}, $$ where $(u,v,w)\in\kk[x]^3_{n-1}\cap\ML(\Tl(x), x^{n}, x^{2\,n}-1;g)$ is the remainder of division. Thus $(u,v,w)$ is the unique vector $\in\kk[x]^3_{n-1}\cap\ML(\Tl(x), x^{n}, x^{2\,n}-1;g)$. \end{proof} A way to perform the division is to choose a $n$-basis $\{(u_1,v_1,w_1),$ $(u_2,v_2,w_2)\}$ of $\ML(\Tl(x),x^{n},x^{2n}-1)$ so that the $2\times2$ coefficient matrix of $x^n$ in $$\begin{pmatrix}u_1(x)&u_2(x)\\v_1(x)&v_2(x) \end{pmatrix}$$ is invertible. In this case we can reduce the polynomial $(0,x^ng(x))$ to reach to a degree $< n-1$ and we can write in a unique way $$ \begin{pmatrix}0\\x^ng(x)\end{pmatrix}= p_1\begin{pmatrix}u_1\\v_1\end{pmatrix}+ p_2\begin{pmatrix}u_2\\v_2\end{pmatrix}+ \begin{pmatrix}u\\v\end{pmatrix}. $$ By the uniqueness of the remainder in the Euclidean division, we obtain the following proposition: \begin{prop}\label{simplificationdiv} The first coordinate of the remainder in the division of $\begin{pmatrix}0\\x^ng\end{pmatrix}$ by $\begin{pmatrix}u&u_2\\v_1&v_2\end{pmatrix}$ is the polynomial $u(x)$ such that its associated vector $u$ is the solution of $T\,u=g$. \end{prop} So we set the following problem: \begin{prob}\label{pb:division} Given a matrix and a vector of polynomials $\begin{pmatrix}e(x)&e'(x)\\f(x)&f'(x)\end{pmatrix}$ of degree $n$ such that $\begin{pmatrix}e_n&e_n'\\f_n&f_n' \end{pmatrix}$ is invertible and $\begin{pmatrix}p(x)\\q(x)\end{pmatrix}$ of degree $m\geq n$, find the remainder of the division of $\begin{pmatrix}p(x)\\q(x)\end{pmatrix} $ by $\begin{pmatrix}e(x)&e'(x)\\f(x)&f'(x)\end{pmatrix}$. \end{prob} We describe here a generalized Euclidean division algorithm to solve problem \ref{pb:division}. Let $E(x)=\begin{pmatrix}p(x)\\q(x)\end{pmatrix}$ of degree $m$, $B(x)=\begin{pmatrix}e(x)&e'(x)\\f(x)&f'(x)\end{pmatrix}$ of degree $n\leq m$. $E(x)=B(x)Q(x)+R(x)$ with $\deg(R(x))<n,$ and $ \deg(Q(x))\leq m-n$. Let $z=\frac{1}{x}$. We have \begin{eqnarray}\label{div} &E(x)&=B(x)Q(x)+R(x)\nonumber\\ \Leftrightarrow& E(\displaystyle \frac{1}{z})&=B(\frac{1}{z})Q(\frac{1}{z})+R(\frac{1}{z})\nonumber\\ \Leftrightarrow& z^{m}E(\displaystyle \frac{1}{z})&=z^nB(\frac{1}{z})z^{m-n}Q(\frac{1}{z})+z^{m-n+1}z^{n-1}R(\frac{1}{z})\nonumber\\ \Leftrightarrow& \hat{E}(z)&= \hat{B}(z) \hat{Q}(z)+z^{m-n+1} \hat{R}(z) \end{eqnarray} with $ \hat{E}(z), \hat{B}(z), \hat{Q}(z), \hat{R}(z)$ are the polynomials obtained by reversing the order of coefficients of polynomials $E(z),B(z),Q(z),R(z)$. \begin{eqnarray*} \eqref{div}&\Rightarrow& { \hat{B}(z)}^{-1}{ \hat{E}(z)}= \hat{Q}(z)+z^{m+n-1} { \hat{B}(z)}^{-1} { \hat{R}(z)}\\ &\Rightarrow& \hat{Q}(z)={ \hat{B}(z)}^{-1} { \hat{E}(z)} \mod z^{m-n+1} \end{eqnarray*} The formal power series ${ \hat{B}(z)}^{-1}$ exists because the constant coefficient of $\hat{B}(z)$ is invertible. Thus $\hat{Q}(z)$ is obtained by computing the first $m-n+1$ coefficients of $\displaystyle{ \hat{B}(z)}^{-1}{ \hat{E}(z)}$, which is obtained by computing $W(x)=\displaystyle{ \hat{B}(z)}^{-1}$, then by multiplying $W(x)$ by $ \hat{E}(z)$. To find $W(x)=\displaystyle { \hat{B}(z)}^{-1}$ we use Newton's iteration. Let $f(W)=\hat{B}-W^{-1}$. We have $$f'(W_l).(W_{l+1}-W_l)=-W_l^{-1}(W_{l+1}-W_l)W_l^{-1}=f(W_l)=\hat{B}-W_l^{-1}.$$ Thus we set $$W_{l+1}=2W_l-W_l\hat{B}W_l,$$ and $W_0=\hat{B}_0^{-1}$, which exists. Moreover, we have \begin{eqnarray*} W-W_{l+1}&=&W-2W_l+W_l\hat{B}W_l\\ &=&W(\mathbb{I}_2-\hat{B}W_l)^2\\ &=&(W-W_l)\hat{B}(W-W_l) \end{eqnarray*} Thus $W_l(x)=W(x) \mod x^{2l}$ for $l=0,\dots,\lceil\log(m-n+1) \rceil$. \begin{prop} We need $\mathcal{O}(n\log(n)\log(m-n)+m\log m)$ operations to solve problem \ref{pb:division}. \end{prop} \begin{proof} We must do $\lceil\log(m-n+1) \rceil$ Newton's iteration to obtain the first $m-n+1$ coefficients of $\displaystyle { \hat{B}(z)}^{-1} =W(x)$. And each iteration requires $\mathcal{O}(n\log n)$ operations (multiplication and summation of polynomials of degree $n$). Finally, multiplication $\displaystyle { \hat{B}(z)}^{-1} \hat{E} (z)$ requires $\mathcal{O}(m\log m)$ operations. \end{proof} Notice that, for our problem $m=n$ and this algorithm requires $\O(n\log^2 n)$ arithmetic operations. In the following section, we show how to compute a $n$-basis in $\O(n\log^2 n)$ arithmetic operations. \section{Construction of the generators} The canonical basis of $\kk[x]^3$ is denoted by $\sigma_1,\sigma_2,\sigma_3$. Let $\rho_1,\,\rho_2$ be the generators of $\ML(\Tl(x),x^n,x^{2n}-1)$ of degree $n$ given by \begin{equation}\label{base3} \begin{array}{l}\rho_1=x^n\sigma_1-(u,v,w)=(u_1,v_1,w_1)\\ \rho_2=x^n\sigma_2-(u',v',w')=(u_2,v_2,w_2),\end{array} \end{equation} where $(u,v,w),\,(u',v',w')$ are the vectors given in \eqref{base1} and \eqref{base2}. We describe two methods for computing $(u_1,v_1,w_1)$ and $(u_2,v_2,w_2)$. The first one uses the Euclidean gcd algorithm, the second one is based on the method in \cite{MR1871324}. We recall firstly the algebraic and computational properties of the well known extended euclidean algorithm (see \cite{MR2001757}): Given $p(x), p'(x)$ two polynomials in degree $m$ and $m'$ respectively, let $$\begin{array}{ll} r_0=p,\qquad&r_1=p',\qquad\\s_0=1,&s_1=0,\\t_0=0,&t_1=1. \end{array}$$ and define \vspace{-0.5cm} \begin{eqnarray*} r_{i+1}&=&r_{i-1}-q_ir_i,\\ s_{i+1}&=&s_{i-1}-q_is_i,\\ t_{i+1}&=&t_{i-1}-q_it_i, \end{eqnarray*} where $q_i$ results when the division algorithm is applied to $r_{i-1}$ and $r_i$, i.e. $r_{i-1}=q_ir_i+r_{i+1}$ . \begin{prop} Let $l\in\NN$ such that $r_l=0$. Then $r_{l-1}=\gcd(p(x),p'(x))$. \end{prop} And more generally we have: \begin{prop}\label{eea} For all $i=1,\ldots,l$ we have $$s_ip+t_ip'=r_i\quad \textrm{ and }\quad(s_i,t_i)=1,$$ and $$\left\{\begin{array}{l}\vspace{2mm} \deg r_{i+1}<\deg r_i, \quad i=1,\ldots,l-1\\ \vspace{2mm} \deg s_{i+1}>\deg s_i\quad\textrm{ and }\quad \deg t_{i+1}>\deg t_i,\\\vspace{2mm} \deg s_{i+1}=\deg(q_i.s_i)=\deg v-\deg r_i,\\\vspace{2mm} \deg t_{i+1}=\deg(q_i.t_i)=\deg u-\deg r_i. \end{array}\right.$$ \end{prop} We can now present our algorithm. It can be found in the proof of the following theorem: \begin{thm} By applying the Euclidean gcd algorithm to $p(x)=x^{n-1}T$ and $p'(x)=x^{2n-1}$ stopping in degree $n-1$ and $n-2$, we obtain $\rho_1$ and $\rho_2$ respectively. \end{thm} \begin{proof} We see that $Tu=g$ if and only if there exist $a(x)$ and $b(x)$ in $\kk[x]_{n-1}$ such that $$\bar{T}(x)u(x)+x^{2n-1}b(x)=x^{n-1}g(x)+a(x),$$ where $\bar{T}(x)=x^{n-1}T(x)$ is a polynomial of degree $\leq2n-2$. In \eqref{base1} and \eqref{base2} we saw that for $g(x)=1$ $(g=e_1)$ and $g(x)=x^nT(x)$ $(g=(0,t_{-n+1},\ldots,t_{-1})^T)$ we obtain a base of $\ML(\Tl(x),x^n,x^{2n}-1)$. Notice that $Tu_1=e_1$ if and only if there exist $a_1(x)\in \kk[x]_{n-2}$, $b_1(x)\in \kk[x]_{n-1}$ such that \begin{equation}\label{eea1}\bar{T}(x)u_1(x)+x^{2n-1}b_1(x)=x^{n-1}+a_1(x),\end{equation} and $Tu_2=(0,t_{-n+1},\ldots,t_{-1})^T$ if and only if there exist $a_2(x) \in \kk[x]_{n-2}$, $b_2(x) \in \kk[x]_{n-1}$ such that \begin{equation}\label{eea2}\bar{T}(x)(u_2(x)+x^{n})+x^{2n-1}b_2(x)=a_2(x).\end{equation} As $\deg a_1(x)\leq n-2$ and $\deg a_2(x)\leq n-2$, by applying the extended Euclidean algorithm in $p(x)=x^{n-1}T$ and $p'(x)=x^{2n-1}$ until we have $\deg r_l(x)=n-1$ and $\deg r_{l+1}(x)=n-2$ we obtain $$u_1(x)=\frac{1}{c_1}s_l(x),\quad b_1(x)=\frac{1}{c_1}t_l(x),\quad x^{n-1}+a_1(x)=\frac{1}{c_1}r_l(x),$$ and $$x^n+u_2(x)=\frac{1}{c_2}s_{l+1}(x),\quad b_2(x)=\frac{1}{c_2}t_{l+1}(x),\quad a_2(x)=\frac{1}{c_2}r_{l+1}(x),$$ with $c_1$ and $c_2$ are the highest coefficients of $r_l(x)$ and $s_{l+1}(x)$ respectively. In fact, Equation \eqref{eea1} is equivalent to $$ \begin{array}{r} \overbrace{\phantom{.mmmmmmm}}^{n}\quad\overbrace{\phantom{.mmmmmm}}^{n-1}\quad\\ \begin{array}{r} \left. \begin{array}{l} {}_{\displaystyle{n-1}}\\\phantom{r} \end{array}\right\{ \\\phantom{r}\\ \left.\begin{array}{l} \phantom{r}\\n\\\phantom{r} \end{array}\right\{\\\phantom{r}\\ \left.\begin{array}{l} {}_{\displaystyle{n-1}}\\\phantom{r} \end{array}\right\{ \end{array} \left( \begin{array}{ccc|ccc} t_{-n+1}&&&&&\\ \vdots&\ddots&&&&\\ \hline t_0&\dots&t_{-n+1}&&&\\ \vdots&\ddots&\vdots&&&\\ t_{n-1}&\dots&t_0&&&\\ \hline &\ddots&\vdots&\;\;1\;\;&&\\ &&&&\;\ddots\;&\\ &&t_{n-1}&&&\;\;1\;\; \end{array} \right) \end{array} \begin{pmatrix} \phantom{r}\\u_1\\\phantom{r}\\b_1\\\phantom{r} \end{pmatrix} =\begin{pmatrix}\phantom{r}\\a_1\\\phantom{r}\\\hline 1\\0\\\vdots\\0\end{pmatrix} $$ since $T$ is invertible then the $(2n-1)\times(2n-1)$ block at the bottom is invertible and then $u_1$ and $b_1$ are unique. Therefore $a_1$ is also unique. As $\deg r_l=n-1$ then, by Proposition \ref{eea}, $\deg s_{l+1}=(2n-1)-(n-1)=n$ and $\deg t_{l+1}=(2n-2)-(n-1)=n-1$. By the same proposition, we also have $\deg s_l\leq n-1$ and $\deg t_l\leq n-2$. Therefore, $\deg u_1=\deg s_l$ and $\deg b_1=\deg t_l$. Then as $u_1 (x)$ and $\frac{1}{c_1} s_l$ are unitaries, $\frac{1}{c_1} s_l (x) =u_1 (x)$ which implies that $\frac{1}{c_1} t_l (x) =b_1 (x)$. For the same reasons, we have $x^n+u_2(x)=\frac{1}{c_2}s_{l+1}(x)$ and $b_2(x)=\frac{1}{c_2}t_{l+1}(x)$. Finally, $Tu=e_1$ if and only if there exist $v(x)$, $w(x)$ such that \begin{equation} \Tl(x)u(x)+x^nv(x)+(x^{2n}-1)w(x)=1. \end{equation} As $\Tl(x)=T^++x^{2n}T^-=T+(x^{2n}-1)T^-$, we deduce that \begin{equation}\label{syz} T(x)u(x)+x^nv(x)+(x^{2n}-1)(w(x)+T^-(x)u(x))=1. \end{equation} Moreover, we also have $T(x)u(x)-x^{-n+1}a_1(x)+x^nb_1(x)=1$ and $x^{-n+1}a_1(x)=x^n(x\,a_1)-x^{-n}(x^{2n}-1)x\,a_1$. Thus \begin{equation}\label{syz2}T(x)u(x)+x^{n}(b(x)-x\, a(x))+(x^{2n}-1)x^{-n+1}a(x)=1.\end{equation} Comparing \eqref{syz} and \eqref{syz2}, and as $1=x^nx^n-(x^{2n}-1)$ we deduce that $w(x)=x^{-n+1}a(x)-T_-(x)u(x)+1$, which is the part of positive degree of $-T_-(x)u(x)+1$. This conclude the proof of the proposition. \end{proof} \begin{rem} The usual Euclidean gcd algorithms are of computational complexity $\O(n^2)$, but superfast euclidean gcd algorithms use no more then $\Oc(n\,log^2 n)$ operations, exist. See for example \cite{MR2001757} chapter 11. \end{rem} The second method for computing $(u_1,v_1,w_1)$ and $(u_2,v_2,w_2)$ is of polynomials and interpolation points. We are interested in computing the coefficients of the canonical basis element $\sigma_1,\,\sigma_2$ in this basis. The coefficients of $\sigma_3$ can be obtained by reduction of $(\Tl(x)\,x^n)\,B(x)$ by $x^{2n}-1$ where $$ B(x) \begin{pmatrix}u(x)&u'(x)\\v(x)&v'(x)\end{pmatrix}, $$ where $(u,v),\,(u',v')$ are the two first coordinates of the solution of Equations \eqref{base1} and \eqref{base2}. A superfast algorithm for computing $B(x)$ is given in \cite{MR1871324}. Let us describe how to compute it. By evaluation of \eqref{base3} at the roots $\omega_j\in \Unit{2n}$, we deduce that $(u(x), v(x))$ and $(u'(x), v'(x))$ are the solution of the following rational interpolation problem: $$\left\{\begin{array}{l}\Tl(\omega_j)u(\omega_j)+\omega_j^nv(\omega_j)=0\\ \Tl(\omega_j)u'(\omega_j)+\omega_j^nv'(\omega_j)=0\end{array},\right. $$ with $$\left\{\begin{array}{l}u_n=1,\,v_n=0,\\u'_n=0,\,v'_n=1.\end{array}\right.$$ \begin{defn} The $\tau$-degree of a vector polynomial $w(x)=(w_1(x)\,w_2(x))^T$ is defined as $$\tau-\deg w(x):=\max\{\deg w_1(x),\,\deg w_2(x)-\tau\}$$ \end{defn} \begin{defn} A polynomial vector in $\kk[x]^{2}$ is called $\tau$-reduced if the $\tau$-highest degree coefficients are linearly independent. \end{defn} By construction, the columns of $B(x)$ form a $n$-reduced basis of the module of polynomial vectors $r(x)\in\kk[x]^2$ that satisfy the interpolation conditions $$f_j\, r(\omega_j)=0,\;\;j=0,\ldots,2n-1$$ with $f_j= (\Tl(\omega_j),\omega^n_j)\in \kk^{2}$. The columns of $B(x)$ are also called a $n$-reduced basis for the interpolation data $(\omega_j,f_j),\,j=0,\ldots,2n-1$. \begin{thm} Let $\tau=n$ and $J$ be a positive integer. Let $\lambda_1,\ldots,\lambda_J\in\kk$ and $\phi_1,\ldots,\phi_J\in\kk^2\setminus\{(0,0)\}$. Let $1\leq j\leq J$ and $\tau_J\in\mathbb{Z}$. Suppose that $B_j(x)\in\kk[x]^{2\times2}$ is a $\tau_J$-reduced basis matrix with basis vectors having $\tau_J-$degree $\delta_1$ and $\delta_2$, respectively, corresponding to the interpolation data $\{(\lambda_i,\phi_i); i=1,\ldots,j\}$. Let $\tau_{j\rightarrow J}:=\delta_1-\delta_2$. Let $B_{j\rightarrow J}(x)$ be a $\tau_{j\rightarrow J}$-reduced basis matrix corresponding to the interpolation data $\{(\lambda_i, \phi_i \,B_j(\lambda_j)); i=j+1,\ldots,J\}$. Then $B_J(x):=B_j(x)B_{j\rightarrow J}(x)$ is a $\tau_J$-reduced basis matrix corresponding to the interpolation data $\{(\lambda_i,\phi_i); i=1,\ldots,J\}$. \end{thm} \begin{proof} For the proof, see \cite{MR1871324}. \end{proof} When we apply this theorem with $\lambda_{j}=\omega_j\in\Unit{2n}$ as interpolation points, we obtain a superfast algorithm in $\mathcal{O}(n\log^2n)$ to compute $B(x)$. See \cite{MR1871324} for more details. \section{Conclusion} In this paper, we re-investigate the solution of a Toeplitz system $T\, u =g$ from a new point of view, by correlating the solution of such a problem with generators of the syzygy module $\ML(\Tl(x), x^{n}, x^{2n}-1)$ associated to the Toeplitz matrix $T$. We show that $\ML(\Tl(x), x^{n}, x^{2n}-1)$ is free of rank $2$ and that it has a $n$-basis. We show that finding a $n$-basis of $\ML(\Tl(x),x^{n},x^{2n}-1)$ is equivalent to computing the Gohberg-Semencul decomposition of $T^{-1}$, and we reduce the solution of $T\,u=g$ to an Euclidean division. We give two superfast algorithms computing a $n$-basis of $\ML(\Tl(x),x^{n},x^{2n}-1)$ and a superfast algorithm to obtain the solution from this $n$-basis. A perspective of this work is to generalize the approach to two-level Toeplitz systems or to Toeplitz-block-Toeplitz matrices and to correlate the basis computation of a multivariate syzygy module to ``Gohberg-Semencul'' decompositions for Toeplitz-block-Toeplitz matrices.
\section{Introduction} We consider the two dimensional defocusing cubic fractional nonlinear Schr\"odinger equations \begin{align}\label{fNLS} \begin{cases} {\textbf{i}} \partial_t u -(-\Delta)^{\alpha} u = |u|^2u, & \alpha \in (0 ,1] \\ u(0,x) = \phi(x) , \end{cases} \end{align} posed on the unit disk $\Theta = \{x \in {\Bbb{R}}^2 \, \Big| \, \abs{x} < 1\} $, where $u=u(t,x)$ is a complex-value function in spacetime ${\Bbb{R}} \times \Theta$. We assume radial symmetry on the initial datum $u_0$ and Dirichlet boundary condition: \begin{align*} u\big|_{\partial \Theta}=0 . \end{align*} Note that when $\alpha = 1$, this is the classical nonlinear Schr\"odinger equation (NLS) \begin{align} {\textbf{i}} \partial_t u + \Delta u = \abs{u}^2 u \label{NLS} , \end{align} and for $\alpha \in (0 , 1)$, this is where our main interest located -- fractional nonlinear Schr\"odinger equation \eqref{fNLS}. Similar as in NLS, the FNLS model conserves the energy and mass in the following forms \begin{align} M(u) &=\frac{1}{2}\int_{\Theta} \abs{u}^2 \, dx , \label{Mass} \\ E(u) &= \int_{\Theta} \frac{1}{2} \abs{ \abs{\nabla}^{\alpha} u}^2 +\frac{1}{4} \abs{u}^4 \, dx , \label{Energy} \end{align} Conservation laws above give the control of the $L^2$ and $H^{\alpha}$ norms of the solutions, respectively. \subsection{Motivation} In recent decades, there has been of great interest in using fractional Laplacians to model physical phenomena. Laskin \cite{laskin} introduced the fractional quantum mechanics as a generalization of the standard quantum mechanics. Moreover, the equation \eqref{fNLS} and its discrete versions are very much relevant in molecular biology as they have been proposed to describe the charge transport between base pairs in the DNA molecule where typical long range interactions occur \cite{dnadiscrete}. The continuum limit for discrete FNLS was studied rigorously first in \cite{KLS} and see also \cite{Gr1, Gr2, HY} for the recent works on the continuum limits. In this paper, we are interested in the global well-posedness theory of FNLS. We recall here that with local/gloval well-posedness we mean local/global in time existence, uniqueness and continuous dependence of the data to solution map. Due to lack of strong dispersion in FNLS (comparing to NLS), the global well-posedness theory of FNLS is under development. We consider the FNLS posed on the unit disk, since the compact manifold setting allows weaker dispersion than Euclidean spaces (hence less favorable). Apart from the challenges behind weak dispersion mentioned above, we should be aware of another difficulty on the unit disk -- lack of good Fourier convolution theorem. This theorem is fundamental in analyzing nonlinearities in the equation, the absence of which is due to the different Fourier transform on the unit disk (comparing to those in Euclidean spaces) and will cause great difficulties in understanding the nonlinear term in the equation. Our goal in this paper is to prove the global well-posedness of \eqref{fNLS} with the regularity of the initial data below the energy space $H^{\alpha}$. Before we present our result, let us first view related works. \subsection{History and related works} Let us start from the related works in NLS (that is, $\alpha=1$ in \eqref{fNLS}). Recall that in Euclidean spaces ${\Bbb{R}}^d$, the scaling of \eqref{NLS} given by \begin{align*} s_c = \frac{d}{2}- 1 . \end{align*} The problem \eqref{NLS} is called {\it subcritical} when the regularity of the initial data is smoother than the scaling $s_c$ of \eqref{NLS}. We will adopt the language in the scaling context in other general manifolds. In the subcritical regime ($s> s_c$), it is well-known that the initial value problem \eqref{NLS} with $\alpha=1$ is locally well-posed \cite{Ca}. Thanks to the conservation laws of energy and mass defined in \eqref{Mass} and \eqref{Energy} (with $\alpha =1$), the $H^1$-subcritical initial value problem and the $L^2$-subcritical initial value problem are globally well-posed in the energy space $H^1$ and mass space $L^2$, respectively. In fact, these two initial value problems are also shown to scatter. In general terms, with {\it scattering} we intend that the nonlinear solution as time goes to infinity approaches a linear one. However, one does not expect the scattering phenomenon in compact manifolds. In the Euclidean space, the very first global well-posedness result in the subcritical case between the two (mass and energy) conservation laws ($0<s<1$) was given by Bourgain in \cite{BourHL}, where he developed the high-low method to prove global well-posedness for the cubic NLS in two dimensions for initial data in $H^s, \, s > \frac{3}{5}$. In \cite{CKSTT}, Colliander-Keel-Staffilani-Takaoka-Tao improved the global well-posedness index of the initial data to $s >\frac{4}{7}$ by introducing a different method, now known as I-method. Let us recall the I-method mechanism in \cite{CKSTT}. One first defines a Fourier multiplier that smooths out the initial data into the energy space and proves that the energy of the smoothed solution is almost conserved, that is, at each iteration the growth of such modified energy is uniformly small. The index $s >\frac{4}{7}$ is derived by keeping the accumulation of energy controlled. As a result, in \cite{CKSTT} the authors obtained a polynomial bound of the sub-energy Sobolev norm of the global solution. The cubic NLS in ${\Bbb{R}}^3$ was also considered in \cite{CKSTT} and the index $s> \frac{5}{6}$. Later, in \cite{CKSTT2} by combining the Morawetz estimate with the I-method and a bootstrapping argument, the same authors were able to lower the global well-posedness index to $\frac{4}{5}$ and proved, for the first time{\footnote{Actually in \cite{B2} Bourgain proved the global well-posedness for general data in $H^s , \, s> \frac{11}{13}$ and scattering for radial data in $H^s, \, s> \frac{5}{7}$.}}, that the global solution also scatters for data in $H^s, \, s >\frac{4}{5}$. The high-low method and I-method have been widely adapted into other dispersive settings and more general manifolds. For instance, \cite{KPVHL} showed the global well-posedness for nonlinear wave equations using the high-low frequency decomposition of Bourgain and \cite{Shen} applied I-method to nonlinear wave equations. \cite{Ha,Zh} studied the global well-posedness of the cubic NLS on closed manifolds without boundary using the I-method. As for the FNLS setting, using the high-low method, \cite{DET} was able to show the global well-posedness for FNLS on the one-dimensional torus. However, the higher dimensional analogue is still open and challenging. We will in fact investigate the the global behavior of FNLS in this paper in the higher dimensions. More results on the high-low method and the I-method can be found in \cite{CKSTT3, CR, DPST1, DPST2, DPST3, D1, D3, D4, D7, FG, GC, LWX, Su, Tz, SY20a, GP, Roy, Wu, CKSTT4, CKSTT5}. \subsection{Main result} \begin{thm}\label{thm GWP} The initial value problem \eqref{fNLS} with $\alpha \in (\frac{2}{3} ,1]$ is globally well-posed from radial data $u_0 \in H_{rad}^s (\Theta)$, where \begin{align*} s > s_* (\alpha) = \max \{ \frac{1}{4} \parenthese{\frac{4\alpha^2 - \alpha - 1}{2\alpha-1} + \sqrt{ \frac{5\alpha^2 -4\alpha +1}{(2\alpha -1)^2}} } , \frac{1}{4} \parenthese{ \frac{\alpha^2 + \alpha -1}{2\alpha -1} + \sqrt{\frac{\alpha^4 + 10 \alpha^3 -5\alpha^2 - 2\alpha +1}{(2\alpha-1)^2}}} \} . \end{align*} Moreover, we establish the polynomial growth of the solution \begin{align*} \norm{u(T)}_{H^s (\Theta)} \lesssim T^{\frac{1}{\alpha}(\alpha -s)p} , \end{align*} where the power $p$ above is given by \begin{align*} N^p : = \min \{ N^{(\alpha -s)(\frac{2}{\alpha} -4 - \frac{2\alpha+1}{2s-1}) + \alpha - \frac{1}{2}-} , N^{(\alpha -s) (\frac{2}{\alpha} -4) + 3\alpha -2+} \} . \end{align*} \end{thm} \begin{rmq} Note that $s_* (\alpha) < \alpha$. Actually $s_* (\alpha)$ looks very complicated and it is hard to see the behavior from its expression. Here is a quick plot. \begin{center} \begin{tikzpicture} \begin{axis}[ axis lines = left, xlabel = $\alpha$, ylabel = {$s_*(\alpha)$}, ] \addplot [ domain=2/3:1, samples=100, color=blue, ] {x}; \addlegendentry{$\alpha$} \addplot [ domain=2/3:1, samples=100, color=red, ] {max( 0.25* ((-4 *x^2 + x + 1)/(1 - 2* x) + sqrt((5 *x^2 - 4 *x + 1)/(1 - 2* x)^2)) , (0.25 *(x^2 + x - 1))/(2 *x - 1) + 0.25 *sqrt((x^4 + 10 *x^3 - 5 *x^2 - 2 *x + 1)/(2 *x - 1)^2) )}; \addlegendentry{$s_*(\alpha)$} \end{axis} \end{tikzpicture} \end{center} \end{rmq} \subsubsection{Discussion on the setting and the difficulties} \paragraph{$\bullet$ \it Compact manifold} We are interested in the bounded manifold in this paper, since compact domains usually allow weaker dispersion than Euclidean spaces do. Mathematically we can observe this phenomenon (`loss of regularity') in the Strichartz estimates on the bounded manifolds. For example in \cite{bss} the loss of $\frac{1}{p}$ derivatives was established for the classical NLS posed on the compact Riemannian manifold $\Omega$ with boundary \begin{align}\label{eq loss of reg} \norm{e^{{\textbf{i}} t \Delta} f }_{L^P ([0,T] ; L^q (\Omega))} \leq C \norm{f}_{H^{\frac{1}{p}} (\Omega)} \end{align} for fixed finite $T$, $p > 2$, $q < \infty$ and $\frac{2}{p} + \frac{d}{q} = \frac{d}{2} $. We expect that a similar loss of regularity phenomenon happens for FNLS. To beat the weaker dispersion caused by the compact domain, we assume the radial symmetry on the initial data. Under this assumption, we can actually benefit a lot from the decay of the radial Laplace operator. More precisely, the radial eigenfunctions of the Laplace operator $-\Delta$ with Dirichlet boundary conditions behave like \begin{align*} e_n (r) \sim \frac{\cos((n-\frac{1}{2}) \pi r- \frac{\pi}{4})}{\sqrt{r}} . \end{align*} (where $r = \abs{x}$) and their associated eigenvalues are $z_n^2 \sim n^2$ (see Section \ref{ssec eigen} for more detailed discussion on $e_n$'s and $z_n$'s). Relying on the decay in $e_n$'s, we are able to derive a bilinear Strichartz estimate for a product of two solutions that are localized in high and low frequencies respectively. The benefit of the bilinear Strichartz estimate is that the 'loss of regularity' falls on the term with low frequency instead of on both terms (if naively splitting two functions in the bilinear estimate into two separate pieces then applying the Strichartz estimates), which is crucial to make up for the lack of dispersion. \paragraph{$\bullet$ \it Absence of Fourier convolution theorem} The Fourier convolution theorem plays an essential rule in I-method, since the convolution theorem translates the Fourier transform of a product into the convolution of Fourier transforms. Combining this fundamental fact with Littlewood-Paley decomposition, we can interpret the nonlinear term $\abs{u}^2u$ as the sum of the interaction between functions $u_1, u_2 ,u_3$ with frequencies localized at $\xi_i$ on the Fourier side. For example, let the output frequency of in the nonlinearity $\abs{u}^2 u$ to be $\xi$ and each function $u$ is frequency localized at $\xi_1, \xi_2 , \xi_3$. Hence this convolution theorem implies that $\xi_1 -\xi_2 + \xi_3 = \xi$, which means that this connection in $\xi_1, \xi_2 , \xi_3$ does not allow the existence of any extremely huge frequency. However, on the unit disk, we lose such control in the highest frequency is due to the absence of Fourier convolution theorem. This causes great difficulty in summing over the frequencies produced from Littlewood-Paley decomposition back to the original nonlinearity. Let us mention that in a recent work \cite{SY20a}, where the authors extended the high-low method of Bourgain in the hyperbolic setting. They had similar issue with the convolution theorem, but they managed to recover the smoothing estimate on the Duhamel term via the local smoothing estimate combining the radial Sobolev embedding. However, one does not expect to hold such local smoothing estimates on the compact domains. Back to our unit disk setting, in order to make up for this loss, let us take a closer look at the eigenfunctions. In the approximate expression of $e_n$, we see nothing but trigonometric functions. This makes us expect the existence of certain type of weak interaction between functions whose frequencies are far from each other. Another hope for us to expect such `convolution' type control is behind the following result. It is known that in the compact domain without boundary (for example $\mathbb{S}^2$), \cite{bgtBil} was able to show the weak interaction functions with separated frequencies. More precisely, for any $j = 1,2,3$, $ z_{n_j} \lesssim z_{n_0}$ (recall that $z_n$'s are eigenvalues corresponding to eigenfunctions $e_n$'s), then for every $p >0$ there exists $C_p > 0$ such that for every $w_j \in L^2 (\mathcal{M})$, $j =0,1,2,3$, \begin{align}\label{eq BGT1} \abs{\int_{\mathbb{S}^2} P_{n_0} w_0 P_{n_1} w_1 P_{n_2} w_2 P_{n_3} w_3 \, dx} \leq C_p z_{n_0}^{-p} \prod_{j=0}^3 \norm{w_j}_{L^2} . \end{align} Note that the factor $z_{n_0}^{-p}$ above can be understood as the weak interaction in their setting. Hence to obtain a similar weak interaction in our domain with boundary, we develop Proposition \ref{prop weak}, which essentially captures the features in \eqref{eq BGT1}. \begin{align}\label{eq BGT2} \abs{\int_{{\Bbb{R}} \times \Theta} P_{n_0} w_0 P_{n_1} w_1 P_{n_2} w_2 P_{n_3} w_3 \, dx dt } \lesssim \frac{z_{n_2} z_{n_3}}{z_{n_0}^2} \frac{ 1}{ \inner{ z_{n_0}^{\alpha +} }} \prod_{j=0}^3 \norm{P_{n_i} w_i}_{X^{0, b}} \end{align} for $z_{n_0} \geq 2 z_{n_1} \geq z_{n_2} \geq z_{n_3}$. Here the factor $\frac{z_{n_2} z_{n_3}}{z_{n_0}^2} \frac{ 1}{ \inner{ z_{n_0}^{\alpha +} }} $ serves as a similar role of $z_{n_0}^{-p}$ in \eqref{eq BGT1}. It also should be pointed out that this is the key to sum up decomposed functions with frequencies greatly separated. Now let us give the main ideas of the proofs. \subsubsection{Outline of the proofs} In this subsection we summarize the main three parts in the proof of the main Theorem \ref{thm GWP}. In the first part of the proof we present the local theory of the I-operator modified FNLS. In this local theory, as one did in the NLS case, we need a Strichartz-type estimate to run the contraction mapping argument. To this end, we adapt the proof of bilinear estimates for NLS on the unit ball in \cite{an} (see also \cite{SY20b} for the multilinear estimates for NLS on the unit ball) in Section \ref{sec bilinear}. However, it is worth pointing out that due to the fractionality of the dispersion operator, it is impossible to periodize the time in the bilinear estimates and count the integer points on its Fourier characteristic surface. Instead, we have to count the integer points near the characteristic surface, which results in the local well-posedness index not as good as the one in the NLS setting. As for the proof of the local theory, with the help of the bilinear estimtes in Section \ref{sec bilinear}, we are able to obtain an estimate on the nonlinear term, hence obtain the local well-posedness via a standard contraction argument. Let us also mention that since this counting argument does not see the difference in the fractional power $\alpha$ of Laplacian, the local well-posedness index is in fact uniform for all $\alpha \in [\frac{1}{2} ,1)$. Following the I-method mechanism in \cite{CKSTT}, the second part of the proof deals with the analysis of the energy increment of the modified equation. To this end, a typical strategy to follow is that one dyadically decomposes all the functions in the change of energy, then proceeds the analysis in different localized frequency scenarios, and at the end sums all the decomposed frequencies back to the original form. In order to sum up all the decomposed functions in frequencies, we require a good control on the highest frequency, whose range is usually governed by the Fourier convolution theorem. However, such nice control in the highest frequency does not hold on the disk due to different format of eigenfunctions of the radial Dirichlet Laplacian. Hence a different analysis is needed. Instead of the dyadic decomposition, we make a finer and delicate decomposition on frequencies, which allows us to observe a very weak interaction between functions localized in uncomparable frequencies. Fortunately, this treatment fulfills the role of convolution theorem and allows to sum the frequency localized functions in a proper way, which is presented in \eqref{eq BGT2}. In the last part, we iterate the local theory obtained in the first part, hence globalize the solution. It should be noted that in this argument, to make the iteration work, we need to guarantee that the accumulated energy increment does not surpass the size of the initial energy of the modified initial data, which ensures that the initial setup remains the same in the next iteration. As a byproduct of the method, one obtains that the global solutions satisfy polynomial-in-time bounds. \subsection{Organization of the paper} In Section \ref{sec Preliminaries}, we introduce the notations, eigenfunctions and eigenvalues of the radial Dirichlet Laplacian and the functional spaces with their properties that we will need in this paper. In Section \ref{sec bilinear}, we prove bilinear Strichartz estimates, which is an important tool in the proof of the energy increment. In Section \ref{sec LWPI}, we first define the I-operator in our setting and present a local theory based on the I-operator modified equation. In Section \ref{sec weak}, we discuss the weak interaction between functions whose frequencies are localized far away. Then in Section \ref{sec energy increment}, we compute the energy increment of the modified energy on small time intervals. Finally, In Section \ref{sec gwp}, we show the global well-posedness and the polynomial bound for the global solutions in Theorem \ref{thm GWP}. \subsection*{Acknowledgement} X.Y. is funded in part by the Jarve Seed Fund and an AMS-Simons travel grant. Both authors would like to thank Gigliola Staffilani for very insightful comments on a preliminary draft of this paper. \section{Preliminaries}\label{sec Preliminaries} In this section, we first discuss notations used in the rest of the paper and recall the behaviors of eigenfunctions and eigenvalues of the radial Dirichlet Laplacian. Then we introduce the function spaces ($H^s$ and $X^{s,b}$ spaces) that we will be working on and list some useful inequalities from harmonic analysis. \subsection{Notations} We define \begin{align*} \norm{f}_{L_t^q L_x^r (I \times \Theta)} : = \square{\int_I \parenthese{\int_{\Theta} \abs{f(t,x)}^r \, dx}^{\frac{q}{r}} dt}^{\frac{1}{q}}, \end{align*} where $I$ is a time interval. For $x\in {\Bbb{R}}$, we set $\inner{x} = (1 + \abs{x}^2)^{\frac{1}{2}}$. We adopt the usual notation that $A \lesssim B$ or $B \gtrsim A$ to denote an estimate of the form $A \leq C B$ , for some constant $0 < C < \infty$ depending only on the {\it a priori} fixed constants of the problem. We write $A \sim B$ when both $A \lesssim B $ and $B \lesssim A$. \subsection{Eigenfunctions and eigenvalues of the radial Dirichlet Laplacian}\label{ssec eigen} We denote $e_n (r)$ (where $r = \abs{x}$) to be the eigenfunctions of the radial Laplace operator $-\Delta$ with Dirichlet boundary condition $\Theta$, and the eigenvalues associated to $e_n$ are $z_n^2$. Both $e_n$'s and $z_n$'s are defined via Bessel functions. Let $J_0$ be the Bessel function of order zero \begin{align}\label{eq J_0} J_0 (x) = \sqrt{\frac{2}{\pi}} \frac{\cos(x- \frac{\pi}{4})}{\sqrt{x}} + \mathcal{O} (x^{-\frac{3}{2}}) . \end{align} and let $z_n$'s be the (simple) zeros of $J_0 (x)$ such that $0 < z_1 < z_2 < \cdots < z_n < \cdots$. It is known that $z_n$ satisfies \begin{align}\label{eq z_n} z_n = \pi (n-\frac{1}{2}) + \mathcal{O} (\frac{1}{n}) . \end{align} We also have that $J_0 (z_n r)$ are eigenfunctions of the Dirichlet self adjoint realization of $-\Delta$, corresponding to eigenvalues $z_n^2$. Moreover any $L^2(\theta)$ radial function can be expanded with respect to $J_0 (z_n r)$. Let us set \begin{align}\label{eq e_n} e_n : = e_n (r) = \norm{J_0 (z_n \cdot)}_{L^2(\Theta)}^{-1} J_0 (z_n r) . \end{align} A direct computation gives $ \norm{J_0 (z_n \cdot)}_{L^2(\theta)} \sim (n-\frac{1}{2})^{-\frac{1}{2}}$, then combining with \eqref{eq J_0}, \eqref{eq z_n} and \eqref{eq e_n} we have \begin{align}\label{eq e_n approx} e_n (r) \sim \frac{\cos((n-\frac{1}{2}) \pi r- \frac{\pi}{4})}{\sqrt{r}} . \end{align} In Lemma 2.5 in \cite{AT}, one also has \begin{align}\label{eq e_n bdd} \norm{e_n}_{L_x^p(\Theta)} & \lesssim \begin{cases} 1 , & \text{ if } 2 \leq p < 4,\\ \ln (1+n)^{\frac{1}{4}} & \text{ if } p= 4,\\ n^{\frac{1}{2}-\frac{2}{p}} , & \text{ if } p> 4 \end{cases} \end{align} \subsection{$H_{rad}^s$ spaces} Recall that $(e_n)_{n=1}^{\infty}$ form an orthonormal bases of the Hilbert space of $L^2$ radial functions on $\Theta$. That is, \begin{align*} \int e_n^2 \, dL = 1 , \end{align*} where $dL = \frac{1}{4\pi} r \, d\theta dr$ is the normalized Lebesgue measure on $\Theta$. Therefore, we have the expansion formula for a function $u \in L^2 (\Theta)$, \begin{align*} u=\sum_{n=1}^{\infty} \inner{u , e_n} e_n . \end{align*} For $s \in {\Bbb{R}}$, we define the Sobolev space $H^{s} (\Theta)$ on the closed unit ball $\Theta$ as \begin{align*} H_{rad}^{s} (\Theta) : = \bracket{ u = \sum_{n=1}^{\infty} c_n e_n, \, c_n \in {\Bbb{C}} : \norm{u}_{H^{s} (\Theta)}^2 = \sum_{n=1}^{\infty} z_n^{2s} \abs{c_n}^2 < \infty } . \end{align*} We can equip $H_{rad}^{s} (\Theta)$ with the natural complex Hilbert space structure. In particular, if $s =0$, we denote $H_{rad}^{0} (\Theta)$ by $L_{rad}^2 (\Theta)$. For $\gamma \in {\Bbb{R}}$, we define the map $\sqrt{-\Delta}^{\gamma}$ acting as isometry from $H_{rad}^{s} (\Theta)$ and $H_{rad}^{s - \gamma} (\Theta)$ by \begin{align*} \sqrt{-\Delta}^{\gamma} (\sum_{n=1}^{\infty} c_n e_n) = \sum_{n=1}^{\infty} z_n^{\gamma} c_n e_n . \end{align*} We denote $S_{\alpha}(t) = e^{- {\textbf{i}} t (-\Delta)^{\alpha}}$ the flow of the linear Schr\"odinger equation with Dirichlet boundary conditions on the unit ball $\Theta$, and it can be written into \begin{align*} S_{\alpha}(t) (\sum_{n=1}^{\infty} c_n e_n) = \sum_{n=1}^{\infty} e^{-{\textbf{i}} t z_n^{2 \alpha} } c_n e_n. \end{align*} \subsection{$X_{rad}^{s,b}$ spaces} Using again the $L^2$ orthonormal basis of eigenfunctions $\{ e_n\}_{n=1}^{\infty}$ with their eigenvalues $z_n^2$ on $\Theta$, we define the $X^{s,b}$ spaces of functions on ${\Bbb{R}} \times \Theta$ which are radial with respect to the second argument. \begin{defi}[$X_{rad}^{s,b}$ spaces]\label{defn Xsb} For $s \geq 0$ and $b \in {\Bbb{R}}$, \begin{align*} X_{rad}^{s,b} ({\Bbb{R}} \times \Theta) = \{ u \in \mathcal{S}' ({\Bbb{R}} , L^2(\Theta)) : \norm{u}_{X_{rad}^{s,b} ({\Bbb{R}} \times \Theta)} < \infty \} , \end{align*} where \begin{align}\label{eq Xsb} \norm{u}_{X_{rad}^{s,b} ({\Bbb{R}} \times \Theta)}^2 = \sum_{n=1}^{\infty} \norm{\inner{\tau + z_n^{2\alpha}}^b \inner{z_n}^{s} \widehat{c_n} (\tau) }_{L^2({\Bbb{R}}_{\tau} ) }^2 , \end{align} and \begin{align*} u(t) = \sum_{n=1}^{\infty} c_n (t) e_n . \end{align*} Moreover, for $u \in X_{rad}^{0, \infty} (\Theta) = \cap_{b \in {\Bbb{R}}} X_{rad}^{0,b} (\Theta)$ we define, for $s \leq 0$ and $b \in {\Bbb{R}}$, the norm $\norm{u}_{X_{rad}^{s,b} ({\Bbb{R}} \times \Theta)}$ by \eqref{eq Xsb}. \end{defi} Equivalently, we can write the norm \eqref{eq Xsb} in the definition above into \begin{align*} \norm{u}_{X_{rad}^{s,b} ({\Bbb{R}} \times \Theta)} = \norm{S(-t) u}_{H_t^b H_x^s ({\Bbb{R}} \times \Theta)} . \end{align*} For $T > 0$, we define the restriction spaces $X_T^{s,b} (\Theta)$ equipped with the natural norm \begin{align*} \norm{u}_{X_T^{s,b} ( \Theta)} = \inf \{ \norm{\tilde{u}}_{X_{rad}^{s,b} ({\Bbb{R}} \times \Theta)} : \tilde{u}\big|_{(-T,T) \times \Theta} =u\} . \end{align*} \begin{lem}[Basic properties of $X_{rad}^{s,b}$ spaces]\label{lem X property1} \begin{enumerate} \item We have the trivial nesting \begin{align*} X_{rad}^{s,b} \subset X_{rad}^{s' , b' } \end{align*} whenever $s' \leq s$ and $b' \leq b$, and \begin{align*} X_{T}^{s,b} \subset X_{T'}^{s,b} \end{align*} whenever $T' \leq T$ . \item The $X_{rad}^{s,b}$ spaces interpolate nicely in the $s, b$ indices. \item For $b > \frac{1}{2}$, we have the following embedding \begin{align*} \norm{u}_{L_t^{\infty} H_x^{s} ({\Bbb{R}} \times \Theta) } \leq C \norm{u}_{X_{rad}^{s,b} ({\Bbb{R}} \times \Theta)}. \end{align*} \item An embedding that will be used frequently in this paper \begin{align*} X^{0, \frac{1}{4}} \hookrightarrow L_t^4 L_x^2 . \end{align*} \end{enumerate} \end{lem} Note that \begin{align*} \norm{f}_{L_t^{4} L_x^2} = \norm{S_{\alpha}(t) f}_{L_t^{4} L_x^2} \leq \norm{S_{\alpha}(t) f}_{H_t^{\frac{1}{4}} L_x^2} = \norm{f}_{X^{0, \frac{1}{4}}} . \end{align*} \begin{lem}\label{lem X property2} Let $b,s >0$ and $u_0 \in H_{rad}^s (\Theta)$. Then there exists $c >0$ such that for $0 < T \leq 1$, \begin{align*} \norm{S_{\alpha}(t) u_0}_{X_{rad}^{s,b} ((-T, T) \times \Theta)} \leq c \norm{u_0}_{H^s}. \end{align*} \end{lem} The proofs of Lemma \ref{lem X property1} and Lemma \ref{lem X property2} can be found in \cite{an}. We also recall the following lemma in \cite{BourExp, gi} \begin{lem}\label{lem Duhamel} Let $0 < b' < \frac{1}{2}$ and $0 < b < 1-b'$. Then for all $f \in X_\delta^{s, -b'} (\Theta)$, we have the Duhamel term $w(t) = \int_0^t S(t-s) f(\tau) \, ds \in X_\delta^{s,b} (\Theta)$ and moreover \begin{align*} \norm{w}_{X_T^{s,b} (\Theta)} \leq C T^{1-b-b'} \norm{f}_{X_T^{s,-b'} (\Theta)} . \end{align*} \end{lem} \subsection{Useful inequalities} \begin{lem}[Gagliardo-Nirenberg interpolation inequality]\label{lem GN} Let $1 < p < q \leq \infty$ and $s > 0$ be such that $\frac{1}{q} = \frac{1}{p} - \frac{s\theta}{d}$ for some $0 < \theta = \theta (d , p ,q ,s) < 1$. Then for any $u \in \dot{W}^{s,p} ({\Bbb{R}}^d)$, we have \begin{align*} \norm{u}_{L^q ({\Bbb{R}}^d)} \lesssim_{d, p, q, s} \norm{u}_{L^p ({\Bbb{R}}^d)}^{1-\theta} \norm{u}_{\dot{W}^{s,p} ({\Bbb{R}}^d)}^{\theta} . \end{align*} \end{lem} \begin{lem}[Sobolev embedding]\label{lem Sobolev} For any $u \in C_0^{\infty} ({\Bbb{R}}^d)$, $\frac{1}{p} - \frac{1}{q} = \frac{s}{d}$ and $s > 0$, we have \begin{align*} \dot{W}^{s,p} ({\Bbb{R}}^d) \hookrightarrow L^q ({\Bbb{R}}^d) . \end{align*} \end{lem} \section{Bilinear Strichartz estimates}\label{sec bilinear} In this section, we prove the bilinear estimates that will be heavily used in the rest of this paper. The proof is adapted from \cite{an} with two dimensional modification and a different counting lemma. \subsection{Bilinear Strichartz estimates for FNLS} \begin{lem}[Bilinear estimates]\label{lem bilinear} Consider $\alpha \in [\frac{1}{2} ,1)$. For $j =1,2$, $N_j >0$ and $u_j \in L_{rad}^2 (\Theta)$ satisfying \begin{align*} \mathbf{1}_{\sqrt{-\Delta} \in [N_j , 2 N_j]} u_j = u_j , \end{align*} we have the following bilinear estimates. \begin{enumerate} \item The bilinear estimate without derivatives.\\ Without loss of generality, we assume $N_1 \geq N_2 $, then for any $\varepsilon >0 $ \begin{align}\label{eq bilinear1} \norm{S_{\alpha}(t) u_1 \, S_{\alpha}(t) u_2}_{L_{t,x}^2 ((0,1) \times \Theta)} \lesssim N_2^{\frac{1}{2} + \varepsilon} \norm{u_1}_{L_x^2 (\Theta)} \norm{u_2}_{L_x^2 (\Theta)} . \end{align} \item The bilinear estimate with derivatives.\\ Moreover, if $u_j \in H_0^1 (\Theta)$, then for any $\varepsilon >0 $ \begin{align}\label{eq bilinear2} \norm{ \nabla S_{\alpha}(t) u_1 \, S_{\alpha}(t) u_2}_{L_{t,x}^2 ((0,1) \times \Theta)} \lesssim N_1N_2^{\frac{1}{2} +\varepsilon} \norm{u_1}_{L_x^2 (\Theta)} \norm{u_2}_{L_x^2 (\Theta)}. \end{align} \end{enumerate} \end{lem} \begin{rmq} Lemma \ref{prop bilinear} also holds for $S_{\alpha}(t) u_0 \, \overline{S_{\alpha}(t) v_0}$. In fact, \begin{align*} \norm{S_{\alpha}(t) u_0 \, S_{\alpha}(t)v_0}_{L_t^2 L_x^2}^2 = \norm{S_{\alpha}(t)u_0 \, S_{\alpha}(t) v_0 \overline{S_{\alpha}(t) u_0} \, \overline{S_{\alpha}(t)v_0}}_{L_t^1 L_x^1} = \norm{S_{\alpha}(t) u_0 \, \overline{S_{\alpha}(t) v_0}}_{L_t^2 L_x^2}^2 . \end{align*} \end{rmq} \begin{prop}[Lemma 2.3 in \cite{bgtBil}: Transfer principle]\label{prop bilinear} For any $b > \frac{1}{2}$ and for $j =1,2$, $N_j >0$ and $f_j \in X^{0,b} ({\Bbb{R}} \times \Theta)$ satisfying \begin{align*} \mathbf{1}_{\sqrt{-\Delta} \in [N_j , 2 N_j]} f_j = f_j , \end{align*} one has the following bilinear estimates. \begin{enumerate} \item The bilinear estimate without derivatives. Without loss of generality, we assume $N_1 \geq N_2$, then for any $\varepsilon >0 $ \begin{align}\label{eq bilinear1'} \norm{ f_1 f_2}_{L_{t,x}^2 ((0,1) \times \Theta)} \lesssim N_2^{\frac{1}{2} + \varepsilon} \norm{f_1}_{X^{0,b} ((0,1) \times \Theta)} \norm{f_2}_{X^{0,b} ((0,1) \times \Theta)} . \end{align} \item The bilinear estimate with derivatives. Moreover, if $f_j \in H_0^1 (\Theta)$, then for any $\varepsilon >0 $ \begin{align}\label{eq bilinear2'} \norm{ \nabla f_1 f_2}_{L_{t,x}^2 ((0,1) \times \Theta)} \lesssim N_1 N_2^{\frac{1}{2} + \varepsilon} \norm{f_1}_{X^{0,b} ((0,1) \times \Theta)} \norm{f_2}_{X^{0,b} ((0,1) \times \Theta)}. \end{align} \end{enumerate} \end{prop} \begin{rmq}[Interpolation of bilinear estimates]\label{rmk inter bilinear} In fact, using H\"older inequality, Bernstein inequality and Lemma \ref{lem X property1}, we write \begin{align*} \norm{ f_1 f_2}_{L_{t,x}^2 ((0,1) \times \Theta)} & \lesssim \norm{f_1}_{L_t^4 L_x^2 ((0,1 ) \times \Theta)} \norm{f_2}_{L_t^4 L_x^{\infty} ((0,1 ) \times \Theta)} \\ & \lesssim \norm{f_1}_{X^{0, \frac{1}{4}} (\Theta)} N_2 \norm{f_2}_{L_t^4 L_x^{2} ((0,1 ) \times \Theta)} \\ & \lesssim N_2 \norm{f_1}_{X^{0, \frac{1}{4}} ((0,1) \times \Theta)} \norm{f_2}_{X^{0, \frac{1}{4}} ((0,1) \times \Theta)} . \end{align*} Then interpolating it with \eqref{eq bilinear1'}, for $b = \frac{1}{2}+$ \begin{align*} \norm{ f_1 f_2}_{L_{t,x}^2 ((0,1) \times \Theta)} \lesssim N_2^{\frac{1}{2} + \varepsilon} \norm{f_1}_{X^{0,b} ((0,1) \times \Theta)} \norm{f_2}_{X^{0,b} ((0,1) \times \Theta)} , \end{align*} we obtain \begin{align*} \norm{ f_1 f_2}_{L_{t,x}^2 ((0,1) \times \Theta)} \lesssim N_2^{\beta} \norm{f_1}_{X^{0, b(\beta) } ((0,1) \times \Theta)} \norm{f_2}_{X^{0, b(\beta) } ((0,1) \times \Theta)} . \end{align*} where $b(\beta) =\frac{1}{4}+ (1-\beta)\frac{1}{2}+$, $\beta \in (\frac{1}{2} , 1]$. Moreover, if restricting in the time interval $[0, \delta]$, we have by H\"older inequality \begin{align}\label{eq inter bilinear} \norm{ f_1 f_2}_{L_{t,x}^2 ((0,\delta) \times \Theta)} \lesssim N_2^{\beta} \delta^{2(b- b(\beta) )} \norm{f_1}_{X_{\delta}^{0, b} } \norm{f_2}_{X_{\delta}^{0, b}} . \end{align} \end{rmq} \begin{proof}[Proof of Lemma \ref{lem bilinear}] First we write \begin{align*} u_1 = \sum_{n_1 \sim N_1} c_{n_1} e_{n_1}(r) , \quad u_2 = \sum_{n_2 \sim N_2} d_{n_2} e_{n_2}(r) , \end{align*} where $c_{n_1} = (u_1 , e_{n_1})_{L^2}$ and $d_{n_2} = (u_2 , e_{n_2})_{L^2}$. Then \begin{align*} S_{\alpha}(t) u_1 = \sum_{n_1 \sim N_1} e^{-{\textbf{i}} t z_{n_1}^{2 \alpha} } c_{n_1} e_{n_1}(r) , \quad S_{\alpha}(t) u_2= \sum_{n_2 \sim N_2} e^{-{\textbf{i}} t z_{n_2}^{2 \alpha} } d_{n_2} e_{n_2}(r) . \end{align*} Therefore, the bilinear objects that one needs to estimate are the $L_{t,x}^2$ norms of \begin{align*} E_0(N_1, N_2) & = \sum_{n_1 \sim N_1} \sum_{n_2 \sim N_2} e^{-{\textbf{i}} t (z_{n_1}^{2\alpha} +z_{n_2}^{2\alpha}) } (c_{n_1} d_{n_2}) ( e_{n_1} e_{n_2}) ,\\ E_1(N_1, N_2) & = \sum_{n_1 \sim N_1} \sum_{n_2 \sim N_2} e^{-{\textbf{i}} t (z_{n_1}^{2\alpha} +z_{n_2}^{2\alpha}) } (c_{n_1} d_{n_2} )(\nabla e_{n_1} e_{n_2} ). \end{align*} Let us focus on \eqref{eq bilinear1} first. \begin{align}\label{eq bi2} (\text{LHS of } \eqref{eq bilinear1})^2 & = \norm{E_0(N_1, N_2) }_{L^2 ((0, 1) \times \Theta)}^2 = \int_{{\Bbb{R}} \times \Theta} \abs{\sum_{n_1 \sim N_1} \sum_{n_2 \sim N_2} e^{-{\textbf{i}} t (z_{n_1}^{2\alpha} +z_{n_2}^{2\alpha}) } (c_{n_1} d_{n_2}) ( e_{n_1} e_{n_2}) }^2 \, dx dt \end{align} Here we employ a similar argument used in the proof of Lemma 2.6 in \cite{ST}. We fix $\eta \in C_0^{\infty} ((0,1))$, such that $\eta \big|_{I} \equiv 1$ where $I$ is a slight enlargement of $(0,1)$. Thus we continue from \eqref{eq bi2} \begin{align}\label{eq bi3} \eqref{eq bi2} & \leq \int_{{\Bbb{R}} \times \Theta} \eta(t) \abs{ \sum_{n_1 \sim N_1} \sum_{n_2 \sim N_2} e^{-{\textbf{i}} t (z_{n_1}^{2\alpha} +z_{n_2}^{2\alpha}) } (c_{n_1} d_{n_2}) ( e_{n_1} e_{n_2})}^2 \, dxdt \notag\\ & = \int_{{\Bbb{R}} \times \Theta} \eta(t) \abs{\sum_{\tau} \sum_{ (n_1 ,n_2) \in \Lambda_{N_1 ,N_2, \tau} } e^{-{\textbf{i}} t (z_{n_1}^{2\alpha} +z_{n_1}^{2\alpha}) } (c_{n_1} d_{n_2}) ( e_{n_1} e_{n_2})}^2 \, dxdt , \end{align} where \begin{align*} \# \Lambda_{N_1 ,N_2, \tau} = \# \{ (n_1 ,n_2) \in {\Bbb{N}}^2 : n_1 \sim N_1 , n_2 \sim N_2 , \abs{z_{n_1}^{2\alpha} +z_{n_2}^{2\alpha} - \tau}\leq \frac{1}{2} \} . \end{align*} By expanding the square above and using Plancherel in time, we have \begin{align}\label{eq bi4} \eqref{eq bi3} & = \int_{{\Bbb{R}} \times \Theta} \eta(t) \sum_{\tau , \tau'} \sum_{\substack{ (n_1 ,n_2) \in \Lambda_{N_1 ,N_2, \tau}\\ (n_1' ,n_2') \in \Lambda_{N_1' ,N_2', \tau'} }} e^{{\textbf{i}} t (z_{n_1'}^{2\alpha} +z_{n_2'}^{2\alpha} - z_{n_1}^{2\alpha} z_{n_2}^{2\alpha}) } (c_{n_1} d_{n_2}) (\overline{c_{n_1'} d_{n_2'}}) ( e_{n_1} e_{n_2})( e_{n_1'} e_{n_2'}) \, dxdt \notag\\ & = \sum_{\tau , \tau'} \sum_{\substack{ (n_1 ,n_2) \in \Lambda_{N_1 ,N_2, \tau}\\ (n_1' ,n_2') \in \Lambda_{N_1' ,N_2', \tau'} }} \widehat{\eta}((z_{n_1’}^{2\alpha} +z_{n_1'}^{2\alpha}) - (z_{n_1}^{2\alpha} +z_{n_2}^{2\alpha} )) (c_{n_1} d_{n_2}) (\overline{c_{n_1'} d_{n_2'} }) \int_{ \Theta} ( e_{n_1} e_{n_2}) ( e_{n_1'} e_{n_2'}) \, dx \notag\\ & \lesssim \sum_{\tau ,\tau'} \frac{1}{1+ \abs{\tau -\tau'}^2} \sum_{\substack{n_1 \sim N_1 , n_2 \sim N_2 \\n_1' \sim N_1' , n_2' \sim N_2'}} \mathbf{1}_{\Lambda_{N_1 ,N_2, \tau}} \mathbf{1}_{\Lambda_{N_1' ,N_2', \tau'}} (c_{n_1} d_{n_2}) (\overline{c_{n_1'} d_{n_2'}}) \norm{ e_{n_1} e_{n_2}}_{L^2 (\Theta)} \norm{ e_{n_1'} e_{n_2'}}_{L^2 (\Theta)} . \end{align} Then by Schur's test, we arrive at \begin{align}\label{eq bi5} \eqref{eq bi4} & \lesssim \sum_{\tau \in {\Bbb{N}}} \parenthese{ \sum_{(n_1 ,n_2) \in \Lambda_{N_1, N_2, \tau}} \abs{c_{n_1} d_{n_2}} \norm{ e_{n_1} e_{n_2}}_{L^2 (\Theta)} }^2 \notag\\ & \lesssim \sum_{\tau \in {\Bbb{N}}} \# \Lambda_{N_1 ,N_2, \tau} \sum_{(n_1 ,n_2) \in \Lambda_{N_1, N_2, \tau}} \abs{c_{n_1} d_{n_2}}^2 \norm{ e_{n_1} e_{n_2}}_{L^2 (\Theta)}^2 . \end{align} We claim that \begin{claim}\label{claim bilinear1} \begin{enumerate} \item $\# \Lambda_{N_1, N_2, \tau} = \mathcal{O}(N_2)$ ; \item $\norm{ e_{n_1} e_{n_2} }_{L^2(\Theta)}^2 \lesssim N_2^{\varepsilon}$ . \end{enumerate} \end{claim} Assuming Claim \ref{claim bilinear1}, we see that \begin{align*} \eqref{eq bi5} & \lesssim \sum_{\tau \in {\Bbb{N}}} N_2^{1+\varepsilon} \sum_{(n_1 ,n_2) \in \Lambda_{N_1, N_2, \tau}} \abs{c_{n_1} d_{n_2}}^2 \lesssim N_2^{1 + \varepsilon} \norm{u_1}_{L^2 (\Theta)}^2 \norm{u_2}_{L^2 (\Theta)}^2. \end{align*} Therefore, \eqref{eq bilinear1} follows. Now we are left to prove Claim \ref{claim bilinear1}. \begin{proof}[Proof of Claim \ref{claim bilinear1}] In fact, {\it (2)} is due to H\"older inequality and the logarithmic bound on the $L^p$ norm of $e_n$ in \eqref{eq e_n bdd}. For {\it (1)}, we have that for fixed $\tau \in {\Bbb{N}}$ and fixed $n_2 \sim N_2$ \begin{align*} \abs{z_{n_1}^{2\alpha} +z_{n_2}^{2\alpha} - \tau}\leq \frac{1}{2} \implies z_{n_1} \in [(\tau -\frac{1}{2} - z_{n_2}^{2\alpha})^{\frac{1}{2\alpha}} , (\tau + \frac{1}{2} - z_{n_2}^{2\alpha})^{\frac{1}{2\alpha}} ] . \end{align*} There are at most 1 integer $z_{n_1}$ in this interval by convexity \begin{align*} (\tau + \frac{1}{2} - z_{n_2}^{2\alpha})^{\frac{1}{2\alpha}} - (\tau -\frac{1}{2} - z_{n_2}^{2\alpha})^{\frac{1}{2\alpha}} \leq 1^{\frac{1}{2\alpha}} =1 . \end{align*} Let us remark that the restriction $\alpha \geq \frac{1}{2}$ on the fractional Laplacian in this section comes from the convexity that we used here. Then \begin{align*} \# \Lambda_{N_1 ,N_2, \tau} = \# \{ (n_1 ,n_2) \in {\Bbb{N}}^2 : n_1 \sim N_1 , n_2 \sim N_2 , \abs{z_{n_1}^{2\alpha} +z_{n_2}^{2\alpha} - \tau}\leq \frac{1}{2} \} \sim \mathcal{O} (N_2) . \end{align*} We finish the proof of Claim \ref{claim bilinear1}. \end{proof} The estimation of \eqref{eq bilinear2} is similar, hence omitted. The proof of Lemma \ref{lem bilinear} is complete now. \end{proof} \begin{rmq} One may guess that the bilinear Strichartz could be done via the radial Sobolev embedding, \begin{align*} \abs{\abs{x}^{\frac{1}{2}-} f} \lesssim \norm{f}_{\dot{H}^{\frac{1}{2}+}} , \end{align*} however it is not clear how to deal with the weight on the left hand side. If there were no such weight, it should be sufficient to prove the bilinear estimate using the radial Sobolev embedding. \end{rmq} \subsection{Bilinear Strichartz estimates for NLS} The computation above also holds for the classical NLS ($\alpha=1$). However, we have a slightly better local well-posedness index because of the following better the bilinear estimate. \begin{lem}[Bilinear estimates for classical NLS]\label{lem bilinear'} Under the same setup as in Lemma \ref{lem bilinear}, the bilinear estimate for classical NLS is given by \begin{align*} \norm{S_1(t) u_1 \, S_1(t) u_2}_{L_{t,x}^2 ((0,1) \times \Theta)} & \lesssim N_2^{ \varepsilon} \norm{u_1}_{L_x^2 (\Theta)} \norm{u_2}_{L_x^2 (\Theta)}, \\ \norm{ \nabla S_1(t) u_1 \, S_1(t) u_2}_{L_{t,x}^2 ((0,1) \times \Theta)} & \lesssim N_1N_2^{\varepsilon} \norm{u_1}_{L_x^2 (\Theta)} \norm{u_2}_{L_x^2 (\Theta)}. \end{align*} \end{lem} Notice that the proof of Lemma \ref{lem bilinear'} in fact can be extended from \cite{an}, hence omitted here. It is worth pointing out that in the proof of Lemma \ref{lem bilinear'}, for example in \eqref{eq bi2}, we can periodize the time and use the following stronger counting lemma to estimate the number of integer points on the characteristic surface instead of counting the integer points in a thin neighborhood of the characteristic surface \begin{lem}[Lemma 3.2 in \cite{bgtBil}]\label{lem counting} Let $M,N \in {\Bbb{N}}$, then for any $\varepsilon > 0$, there exists $C>0$ such that \begin{align*} \# \{ (k_1 , k_2) \in {\Bbb{N}}^2 : N \leq k_1 \leq 2N, k_1^2 + k_2^2 = M \} \leq C N^{\varepsilon} . \end{align*} \end{lem} However, we will not distinguish $\alpha=1$ case from other fractional ones in the following sections. This is because the dominated term in the energy increment (the term that will give the largest energy increment and then determine the global well-posedness index in Section \ref{sec gwp}) will be the almost the same even if we take this better bilinear estimate into consideration. \section{I-operator and a local theory on the modified equation}\label{sec LWPI} In this section, we first define the I-operator in our setting and then present a local well-posedness argument for the I-operator modified equation. \subsection{Definition of I-operator} \begin{defi}[I-operator]\label{defn I} For $N \gg 1$, and a function $u = \sum_{n=1}^{\infty} c_n e_n$, define a smooth operator $I_N$, such that \begin{align*} I_N u = \sum_{n=1}^{\infty} m_N(z_n)c_n e_n. \end{align*} where $m_N$ is a smooth function satisfying \begin{align*} m_N(\xi) = \begin{cases} 1, & \abs{\xi} \leq N\\ \abs{\frac{\xi}{N}}^{s-\alpha}, & \abs{\xi} \geq 2N . \end{cases} \end{align*} \end{defi} \begin{rmq}\label{rmk ID} The I-operator defined above is the analogue of the one in \cite{CKSTT} in the physical space. In the rest of this section, we will adopt the name `multiplier' of $m$ from the context in \cite{CKSTT}. It is easy to check that for $N \gg 1$ \begin{align*} \norm{u}_{H^s} & \lesssim \norm{I_N u}_{H^{\alpha}} \lesssim N^{\alpha -s} \norm{u}_{H^s} ,\\ \norm{u}_{X^{s, \frac{1}{2}+}} & \lesssim \norm{I_N u}_{X^{\alpha , \frac{1}{2}+}} \lesssim N^{\alpha -s} \norm{u}_{X^{s, \frac{1}{2}+}} . \end{align*} \end{rmq} A standard I-method argument usually comes in three major parts. \begin{enumerate}[\it \text{Part} 1] \item (Subsection \ref{ssec LWP}) a well-adapted local theory for the I-operator modified fractional NLS, \item (Section \ref{sec energy increment}) the almost conservation law argument addressing the energy increment on each iteration, \item (Section \ref{sec gwp}) an iterative globalization argument giving the global index and a polynomial growth of the $H^s$ norm of the solution. \end{enumerate} \subsection{A local theory based on $I_N$-operator}\label{ssec LWP} Consider the following $I_N$-operator modified FNLS equation with initial data also being smoothed into the energy space. \begin{align}\label{fNLSI} \begin{cases} (i \partial_t - (-\Delta)^{\alpha}) I_N u = I_N (\abs{u}^2 u),\\ I_N u(0) = I_N u_0 . \end{cases} \end{align} Note that $Iu_0 \in H^{\alpha}$ and $\norm{Iu_0}_{H^{\alpha}} \lesssim N^{\alpha -s}$. To keep our notation compact, we will write $I$ and $m$ instead of $I_N$ and $m_N$ as in Definition \ref{defn I}. The main result in this subsection is the following local well-posedness theory. \begin{prop}[Local well-posedness]\label{prop LWPI} For $s > \frac{1}{2}$ and $Iu_0 \in H^{\alpha}$, \eqref{fNLSI} is locally well posed. That is, there exists $\delta \sim \norm{Iu_0}_{H^{\alpha}}^{-\frac{2}{1 + 2b -4b(s)}} \gtrsim N^{-\frac{2(\alpha -s)}{1 + 2b -4b(s)}} $, where $b(s) = \frac{1}{4} +\frac{1}{2} (1-s)+$, such that $Iu \in C([0, \delta] , H^{\alpha}(\Theta))$ solves \eqref{fNLSI} on $[0, \delta]$ and satisfies \begin{align*} \norm{I u}_{X_{\delta}^{\alpha ,\frac{1}{2}+}} \lesssim \norm{Iu_0}_{H^{\alpha}} \lesssim N^{\alpha -s}. \end{align*} \end{prop} We will prove Proposition \ref{prop LWPI} by a standard contraction mapping argument. Note that the key step to close such argument is the following nonlinear estimate lemma. \begin{lem}[Nonlinear estimates]\label{lem nonlinear est} For $s > \frac{1}{2}$, there exist $b, b' \in {\Bbb{R}}$ satisfying \begin{align*} 0 < b' < \frac{1}{2} < b, \quad b + b' < 1 , \end{align*} such that for every triple $(u_1, u_2 , u_3)$ in $X^{s,b} ({\Bbb{R}} \times \Theta)$, \begin{align*} \norm{I (\abs{u}^2 u)}_{X^{s, -b'} ({\Bbb{R}} \times \Theta)} \lesssim \prod_{j=1}^3 \norm{Iu}_{X^{s,b} ({\Bbb{R}} \times \Theta)}^3 . \end{align*} \end{lem} Assuming Lemma \ref{lem nonlinear est}, we can easily finish the proof of Proposition \ref{prop LWPI}. \begin{proof}[Proof of Proposition \ref{prop LWPI}] Using Lemma \ref{lem Duhamel} and Proposition \ref{prop LWPI}, we have the following standard contraction mapping calculation \begin{align*} \norm{Iu}_{X_{\delta}^{\alpha,b}} & \lesssim \norm{Iu_0}_{H^{\alpha}} + \delta^{1- b - b(s)} \norm{I (\abs{u}^{2} u)}_{X_{\delta}^{\alpha, -b(s)}} \\ & \lesssim \norm{Iu_0}_{H^{\alpha}} + \delta^{1- b - b(s)} \norm{Iu}_{X_{\delta}^{\alpha, b(s)}}^{3} \\ & \lesssim \norm{Iu_0}_{H^{\alpha}} + \delta^{1- b - b(s)} \delta^{3(b-b(s))-} \norm{Iu}_{X_{\delta}^{\alpha, b}}^{3} , \end{align*} where $b = \frac{1}{1}+$. By choosing $\delta^{1+ 2b - 4b(s)} \sim \norm{Iu_0}_{H^{\alpha}}^{-2} \gtrsim N^{-2(\alpha -s)}$ as what one did in a standard contraction mapping proof and a continuity argument we have that \begin{align*} \norm{Iu}_{X_{\delta}^{\alpha,b}} \lesssim \norm{Iu_0}_{H^{\alpha}} \lesssim N^{\alpha -s} \end{align*} and \begin{align*} \delta \gtrsim N^{-\frac{2(\alpha -s)}{1 + 2b -4b(s)}} . \end{align*} \end{proof} Now we are left to prove the key Lemma \ref{lem nonlinear est} in this section. \begin{proof}[Proof of Lemma \ref{lem nonlinear est}] By the duality, it is sufficient to show the following nonlinear estimate: for $\frac{1}{2} < s < 1$, $b(s) = \frac{1}{4} +\frac{1}{2} (1-s)+$ and $u \in X_{\delta}^{s,b(s)}$ \begin{align}\label{eq loc nonlinear} \norm{I (\abs{u}^2 u)}_{X_{\delta}^{\alpha , -b(s)}} \lesssim \norm{I u}_{X_{\delta}^{\alpha , b(s)}}^3 . \end{align} By duality argument, it is sufficient to prove for $v \in X^{-\alpha , b(s)}$ \begin{align*} \abs{\int_{{\Bbb{R}} \times \Theta} \overline{v} \, I (\abs{u}^2 u) \, dx dt } \lesssim \norm{v}_{X^{-\alpha , b(s)}} \norm{I u}_{X_{\delta}^{\alpha , b(s)}}^3 . \end{align*} We will frequently make use of a dyadic decomposition in frequency using the orthonomal basis $e_n$'s of the radial Dirichlet Laplacian $-\Delta$, writing \begin{align*} v_0 = \sum_{N_0 \leq \inner{z_n} < 2N_0} P_n v, \end{align*} and \begin{align}\label{eq u decomp} u_i = \sum_{N_i \leq \inner{z_n} < 2 N_i} P_n u, \quad \text{for } i =1,2,3. \end{align} After the dyadic frequency decomposition, we take a typical term and compute for the quadruple $\underline{N} = (N_0, N_1, N_2, N_3)$ \begin{align*} L(\underline{N}) & :=\abs{\int_{{\Bbb{R}} \times \Theta} \overline{v_0} \, I(u_1 \overline{u_2} u_3) \, dxdt } . \end{align*} In order to distribute the I-operator inside the nonlinear term, we first move the I-operator on $v_0$, then introduce $m(N_i)$ (instead of I-operator) into each $u_i$. \begin{align} L(\underline{N}) & = \abs{\int_{{\Bbb{R}} \times \Theta} \overline{I v_0} \, (u_1 \overline{u_2} u_3) \, dxdt } \notag\\ & = \frac{1}{m( N_1) m( N_2) m( N_3)} \abs{\int_{{\Bbb{R}} \times \Theta} \overline{I v_0} \, \cdot m( N_1) u_1 \cdot \overline{m( N_2)u_2} \cdot m( N_3) u_3 \, dxdt } . \label{eq loc2} \end{align} We will explain the reason why we brought in $m$ instead of I-operator in this calculation in Remark \ref{rmk abuse}. By symmetry argument and because the presence of complex conjugates will play no role here, we can assume $N_1 \geq N_2 \geq N_3$. Then we can reduce the sum into the following two cases: \begin{enumerate} \item $ N_0 \lesssim N_1$ \item $ N_0 \gtrsim N_1$. \end{enumerate} {\bf Case 1:} $N_0 \lesssim N_1$ Recall Remark \ref{rmk inter bilinear} where by taking $\beta = s$, we have \begin{align}\label{eq bi1} \norm{ f_i f_j}_{L_{t,x}^2 ((0,\delta) \times \Theta)} \lesssim \min \{N_i, N_j \}^{s} \norm{f_i}_{X_{\delta}^{0, b(s)} } \norm{f_j}_{X_{\delta}^{0, b(s)} } , \end{align} where \begin{align*} b(s) = \frac{1}{4} +\frac{1}{2} (1-s)+ , \quad s \in ( \frac{1}{2} , 1]. \end{align*} Using \eqref{eq bi1} and Definition \ref{defn Xsb}, we write \eqref{eq loc2} as \begin{align} L (\underline{N}) & \lesssim \frac{m( N_0)}{m( N_1) m( N_2) m( N_3)} \int_{{\Bbb{R}} \times \Theta} \abs{ \overline{v_0} \, \cdot m( N_1) u_1 \cdot \overline{m( N_2)u_2} \cdot m( N_3) u_3} \, dxdt \notag\\ & \lesssim \frac{m (N_0)}{m (N_1) m (N_2) m (N_3)} \norm{v_0 m(N_2)u_2}_{L_{t,x}^2} \norm{m(N_1)u_1 \cdot m(N_3)u_3}_{L_{t,x}^2} \notag\\ & \lesssim \frac{m (N_0 )}{m (N_1 ) m (N_2 ) m (N_3 )} (N_2N_3)^{s} \norm{v_0}_{X_{\delta}^{0, b(s) }} \prod_{i=1}^3 \norm{Iu_i}_{X_{\delta}^{0, b(s) }} \notag\\ & \lesssim \frac{m (N_0 )}{m (N_1 ) m (N_2 ) m (N_3 )} (N_0 N_1^{-1})^{\alpha} (N_2 N_3)^{s-\alpha} \norm{v_0}_{X_{\delta}^{-\alpha, b(s) }} \prod_{i=1}^3 \norm{Iu_i}_{X_{\delta}^{\alpha, b(s) }}\label{eq loc1} . \end{align} Note that here we used $\norm{Iu_i}_{X^{s,b}} \sim \norm{m(N_i) u_i}_{X^{s,b}}$. To continue the computation, we then consider the following two scenarios for $N_2$ and $N_3$ \begin{align*} \frac{N_i^{s -\alpha}}{m(N_i )} = \begin{cases} N_i^{s -\alpha} & \text{ if } N_i \leq N\\ (N^{-1}N_i)^{\alpha -s} N_i^{s -\alpha} = N^{s -\alpha} & \text{ if } N_i > 2N . \end{cases} \end{align*} This observation implies that $L(\underline{N})$ is summable in $N_2$ and $N_3$. That is, by taking out the terms in $L(\underline{N})$ in \eqref{eq loc1} that only depend on frequencies $N_2$ and $N_3$, we see \begin{align*} \sum_{N_3 \leq N_2} \frac{(N_2 N_3)^{s-\alpha} }{m (N_2 ) m (N_3 )} \norm{Iu_2}_{X^{\alpha, b(s) }} \norm{Iu_3}_{X^{\alpha, b(s) }} \lesssim \norm{Iu}_{X^{\alpha, b(s) }}^2 . \end{align*} Now we focus on the sum over $N_0$ and $N_1$ in \eqref{eq loc1}. First write \begin{align*} \frac{m (N_0 )}{m (N_1 ) } \parenthese{\frac{N_0}{N_1}}^{\alpha} & \lesssim \begin{cases} \parenthese{\frac{N_0}{N_1}}^{\alpha} & \text{ if } N_0 \lesssim N_1 \leq N\\ \parenthese{\frac{N_1}{N}}^{\alpha -s} \parenthese{\frac{N_0}{N_1}}^{\alpha} & \text{ if } N_0 \leq N \leq N_1 \\ \parenthese{\frac{N_1}{N_0}}^{\alpha -s} \parenthese{\frac{N_0}{N_1}}^{\alpha} & \text{ if } N \leq N_0 \leq N_1 \end{cases}\\ & \lesssim \parenthese{\frac{N_0}{N_1}}^{\alpha -s} , \end{align*} then the sum over $N_0$ and $N_1$ in \eqref{eq loc1} becomes \begin{align*} \sum_{N_0 \lesssim N_1} \frac{m (N_0 )}{m (N_1 ) } \parenthese{\frac{N_0}{N_1}}^{\alpha} \norm{v_0}_{X^{-\alpha, b(s) }} \norm{Iu_1}_{X^{\alpha, b(s) }} & \lesssim \sum_{N_0 \lesssim N_1} \parenthese{\frac{N_0}{N_1}}^{\alpha -s} \norm{v_0}_{X^{-\alpha, b(s) }} \norm{Iu_1}_{X^{\alpha, b(s) }} \\ & \lesssim \norm{v}_{X^{-\alpha , b(s)}} \norm{I u}_{X_{\delta}^{\alpha , b(s)}} . \end{align*} Here is a quick remark on the calculation in \eqref{eq loc2} and what follows. We originally planned to introduce the I-operator into \eqref{eq loc2} instead of $m(N_i)$. But this needs to bring the absolute value sign inside of the integral in \eqref{eq loc2}. {\bf Case 2: } $N_0 \gtrsim N_1$. First recall Green's theorem, \begin{align*} \int_{\Theta} \Delta f g - f \Delta g \, dx = \int_{\mathbb{S}} \frac{\partial f}{\partial v} g - f \frac{\partial g}{\partial v} \, d \sigma . \end{align*} Note that \begin{align*} -\Delta e_k = z_k^2 e_k , \end{align*} where $ z_k^2$'s are the eigenvalues defined in \eqref{eq z_n}. Then we write \begin{align*} Iv_{0} = -\frac{\Delta}{N_0^2} \sum_{z_{n_0} \sim N_0} c_{n_0} (\frac{N_0}{z_{n_0}})^2 e_{n_0} . \end{align*} Define \begin{align*} T (Iv_{0 }) & = \sum_{z_{n_0} \sim N_0} c_{n_0} (\frac{N_0}{z_{n_0}})^2 e_{n_0} , \qquad V(Iv_{0 }) = \sum_{z_{n_0} \sim N_0} c_{n_0} (\frac{z_{n_0}}{N_0})^2 e_{n_0} . \end{align*} It is easy to see that for all $s$ \begin{align*} TV (Iv_{0 }) & = VT (Iv_{0 }) = (Iv_{0 }) ,\\ \norm{T (Iv_{0 })}_{H_x^s} & \sim \norm{ (Iv_{0 }) }_{H_x^s} \sim \norm{V (Iv_{0 }) }_{H_x^s} . \end{align*} Using this notation, we write \begin{align*} v_{0} = -\frac{\Delta}{N_0^2} T v_{0} \end{align*} and \begin{align*} L(\underline{N}) \lesssim \frac{1}{m (N_1 ) m (N_2 ) m (N_3 )} \frac{1}{N_0^2} \int_{{\Bbb{R}} \times \Theta} T (Iv_{0 }) \Delta (\prod_{j=1}^{3} m(N_j)u_{j}) . \end{align*} By the product rule and the assumption that $N_1 \geq N_2 \geq N_3$, we only need to consider the two largest cases of $\Delta ( u_1 u_{2} u_{3}) $. They are \begin{enumerate} \item $(\Delta u_{1}) u_{2} u_{3}$, \item $(\nabla u_{1}) \cdot (\nabla u_{2}) u_{3} $. \end{enumerate} We denote \begin{align*} J_{11} (\underline{N}) & = \int_{{\Bbb{R}} \times \Theta} T (Iv_{0}) (\Delta m(N_1)u_{1}) (m(N_2)u_{2}) (m(N_3) u_{3}), \\ J_{12} (\underline{N}) & =\int_{{\Bbb{R}} \times \Theta} T (Iv_{0}) (\nabla m(N_1)u_{1}) \cdot (\nabla m(N_2)u_{2}) (m(N_3)u_{3}) . \end{align*} Using $\Delta u_i = -N_i^2 V u_i$ and \eqref{eq bi1} under the similar calculation as in \eqref{eq loc1}, we obtain \begin{align*} \frac{1}{N_0^2} \abs{J_{11} (\underline{N})} \lesssim m(N_0) (\frac{N_1}{N_0})^2 (N_0 N_1^{-1})^{\alpha} (N_2 N_3)^{s-\alpha} \norm{v_0}_{X^{-\alpha, b(s) }} \prod_{i=1}^3 \norm{Iu_i}_{X^{\alpha, b(s) }}. \end{align*} Now for $\abs{J_{12} (\underline{N})} $, we estimate it in a similar fashion and obtain that \begin{align*} \frac{1}{N_0^2} \abs{J_{12} (\underline{N})} \lesssim m(N_0) \frac{N_1 N_2}{N_0^2} (N_0 N_1^{-1})^{\alpha} (N_2 N_3)^{s-\alpha} \norm{v_0}_{X^{-\alpha, b(s) }} \prod_{i=1}^3 \norm{Iu_i}_{X^{\alpha, b(s) }}. \end{align*} Therefore, we have \begin{align*} L(\underline{N}) \lesssim \frac{m (N_0 )}{m (N_1 ) m (N_2 ) m (N_3 )} \parenthese{\frac{N_1}{N_0}}^2 (N_0 N_1^{-1})^{\alpha} (N_2 N_3)^{s-\alpha} \norm{v_0}_{X^{-\alpha, b(s) }} \prod_{i=1}^3 \norm{Iu_i}_{X^{\alpha, b(s) }}. \end{align*} To sum $N_2$ and $N_3$, we can do exactly the same thing as in {\bf Case 1}. Then the sum over $N_0$ and $N_1$ becomes \begin{align*} \sum_{N_0 \gtrsim N_1} \frac{m (N_0 )}{m (N_1 ) } \parenthese{\frac{N_0}{N_1}}^{\alpha} \parenthese{\frac{N_1}{N_0}}^2 \norm{v_0}_{X^{-\alpha, b(s) }} \norm{Iu_1}_{X^{\alpha, b(s) }} & \lesssim \sum_{N_0 \gtrsim N_1} \parenthese{\frac{N_1}{N_0}}^{2-\alpha } \norm{v_0}_{X^{-\alpha, b(s) }} \norm{Iu_1}_{X^{\alpha, b(s) }} \\ & \lesssim \norm{v}_{X^{-\alpha , b(s)}} \norm{I u}_{X_{\delta}^{\alpha , b(s)}} . \end{align*} Therefore \begin{align*} \abs{\int_{{\Bbb{R}} \times \Theta} \overline{v} I (\abs{u}^2 u) \, dx dt } \lesssim \sum_{\underline{N}} L(\underline{N}) \lesssim \norm{v}_{X^{-\alpha , b(s)}} \norm{I u}_{X_{\delta}^{\alpha , b(s)}}^3 . \end{align*} which implies \eqref{eq loc nonlinear}. This finishes the proof of Lemma \ref{lem nonlinear est}. \end{proof} \begin{rmq}\label{rmk abuse} Here is a quick remark on the calculation in \eqref{eq loc2} and what follows. We originally planned to introduce the I-operator into \eqref{eq loc2} instead of $m(N_i)$. But this needs to bring the absolute value sign inside of the integral in \eqref{eq loc2}, which may ruin the Green's theorem. So with the calculation in this proof, we justify the `legality' of bringing I-operators inside. In the rest of this paper, we will use a slight abuse of calculation -- moving the I-operator without justification. For example, in \eqref{eq loc2} \begin{align*} \abs{\int_{{\Bbb{R}} \times \Theta} \overline{I v_0} \, (u_1 \overline{u_2} u_3) \, dxdt } \lesssim \frac{m(N_0)}{m( N_1) m( N_2) m( N_3)} \abs{\int_{{\Bbb{R}} \times \Theta} \overline{ v_0} \, \cdot I u_1 \cdot \overline{ Iu_2} \cdot I u_3 \, dxdt } . \end{align*} \end{rmq} \section{Weak interaction between functions localized in uncomparable frequencies}\label{sec weak} Before we start the I-method argument, let us first understand the interaction between the functions at separate frequencies, which will be heavily used our proof and simplifies a lot of the case classification in Section \ref{sec energy increment}. Recall that we have no similar convolution properties as what we have on ${\Bbb{R}}^d$, which results in losing control of the frequency connection. In fact, after taking Fourier transformation on the nonlinear term, the convolution property implies that the frequencies in each function are linearly connected $\xi = \xi_1 -\xi_2 + \xi_3$. However this is no longer true under our setting, which means that for instance the maximum frequency could be extremely large instead of being controlled by a linear form of other lower frequencies. To deal with this bad scenario, let us take a closer look at the interaction among the nonlinearity. This weak interaction is inspired by Lemma 2.6 in \cite{bgtBil}, that is \begin{lem} There exists $C >0$ such that, if for any $j = 1,2,3$, $C z_{n_j} \leq z_{n_0}$, then for every $p >0$ there exists $C_p > 0$ such that for every $w_j \in L^2 (\mathcal{M})$, $j =0,1,2,3$, \begin{align}\label{eq BGT} \abs{\int_{\mathcal{M}} P_{n_0} w_0 P_{n_1} w_1 P_{n_2} w_2 P_{n_3} w_3 \, dx} \leq C_p z_{n_0}^{-p} \prod_{j=0}^3 \norm{w_j}_{L^2} . \end{align} \end{lem} Notice that on the right-hand side of \eqref{eq BGT}, the factor $z_{n_0}^{-p}$ gives a huge decay, which means that the interaction in fact is weak. Now let us present our version of such weak interaction. \begin{prop}[Weak interaction]\label{prop weak} For the frequency quadruple $\underline{n} = (n_0 , n_1 , n_2 , n_3) \in {\Bbb{N}}^4$ and $n_0 \gg n_1 \geq n_2 \geq n_3$ (`$n_0 \gg n_1$' means $n_0 \geq 2 n_1$), we consider the functions $w, f, g, h \in X^{0,b}$, $b=\frac{1}{2}+$ with their frequencies localized at $n_0, n_1, n_2 , n_3$. More precisely, \begin{align*} w (t,x) = w_{n_0} (t) e_{n_0}(x), \quad f (t,x) =f_{n_1} (t) e_{n_1}(x), \quad g (t,x) = g_{n_2} (t) e_{n_2}(x), \quad h(t,x) = h_{n_3} (t) e_{n_3}(x). \end{align*} Define the interaction between these functions as follows \begin{align}\label{eq J} J(\underline{n}) : = \abs{ \int_{{\Bbb{R}} \times \Theta} \overline{w} \cdot f \cdot \overline{g} \cdot h \, dx dt }. \end{align} Then this interaction $J$ satisfies \begin{align}\label{eq J bdd} J(\underline{n}) \lesssim \frac{n_2 n_3}{n_0^2} \frac{\norm{w}_{X^{0, b}} \norm{f}_{X^{0, b}} \norm{g}_{X^{0, b}} \norm{h}_{X^{0, b}} }{ \inner{ n_0^{2\alpha} - n_1^{2\alpha} + n_2^{2\alpha} - n_3^{2\alpha}}^{b}} . \end{align} \end{prop} \begin{proof}[Proof of Proposition \ref{prop weak}] Notice that \eqref{eq J} can be written as \begin{align*} J (\underline{n}) & = \int_{{\Bbb{R}} \times \Theta} \overline{w_{n_0} (t)} e_{n_0} (x) \cdot f_{n_1}(t) e_{n_1}(x) \cdot \overline{g_{n_2}(t) } e_{n_2} (x) \cdot h_{n_3}(t) e_{n_3} (x) \, dxdt \\ & = \parenthese{ \int_{{\Bbb{R}}} \overline{w_{n_0} (t)} \cdot f_{n_1}(t) \cdot \overline{g_{n_2}(t)} \cdot h_{n_3}(t) \, dt} \parenthese{\int_{\Theta} e_{n_0} (x) e_{n_1}(x) e_{n_2} (x) e_{n_3}(x) \, dx} = : A \times B . \end{align*} We will estimate the contributions of the two terms in \eqref{eq J} in Lemma \ref{lem weak e_n} and Lemma \ref{lem weak fn} separately. \begin{lem}[Weak interaction among separated eigenfunctions]\label{lem weak e_n} If $n_0 \gg n_1 \geq n_2 \geq n_3 $, then \begin{align*} B : = \int_{\Theta} e_{n_0} e_{n_1} e_{n_2} e_{n_3} \, dx < \mathcal{O} \parenthese{\frac{n_2 n_3}{(n_0-n_1)^2}} < \mathcal{O} \parenthese{\frac{n_2 n_3}{n_0^2}} . \end{align*} \end{lem} \begin{lem}[Interaction between frequency localized functions]\label{lem weak fn} Under the same assumption as in Proposition \ref{prop weak}, the interaction between their coefficients at frequencies $n_0, n_1, n_2 , n_3$ satisfies \begin{align*} A : = \abs{\int_{{\Bbb{R}}} \overline{w_{n_0} (t)} f_{n_1}(t) \overline{g_{n_2}(t)} h_{n_3}(t) \, dt} \lesssim \frac{\norm{w}_{X^{0, b}} \norm{f}_{X^{0, b}} \norm{ g}_{X^{0, b}} \norm{h}_{X^{0, b}} }{ \inner{ n_0^{2\alpha} - n_1^{2\alpha} + n_2^{2\alpha} - n_3^{2\alpha}}^{b}} . \end{align*} \end{lem} Assuming the two lemmas above, it is easy to see the weak interaction as in \eqref{eq J bdd}. \end{proof} Now we are left to present the proofs of Lemma \ref{lem weak e_n} and Lemma \ref{lem weak fn}. \begin{proof}[Proof of Lemma \ref{lem weak e_n}] Before proving it, let us first make an observation. In fact, a naive estimate from \eqref{eq e_n bdd} gives that \begin{align*} \int_{\Theta} e_{n_0} e_{n_1} e_{n_2} e_{n_3} \, dx \lesssim \prod_{i=0}^3 \norm{e_{n_i}}_{L^4} \lesssim n_0^+ n_1^+ n_2^+ n_3^+ . \end{align*} However, we should expect a much weaker interaction when the frequencies are separate ($n_0 \gg n_1 \geq n_2 \geq n_3 $). To prove this lemma, we first write \begin{align*} \phi (r) & = e_{n_2} (r) \, e_{n_3} (r) ,\\ F(r) & = \int_0^r \gamma \, e_{n_0}(\gamma) \, e_{n_1}(\gamma) \, d \gamma , \end{align*} then performing an integration by parts, we see that \begin{align*} \int_{\Theta} e_{n_0} e_{n_1} e_{n_2} e_{n_3} \, dx & = \int_0^1 (r e_{n_0}(r) e_{n_1}(r)) \phi(r) \, dr = F(r) \phi(r) \Big|_0^1 - \int_0^1 F(r) \phi'(r) \, dr . \end{align*} To see the weak interaction, we claim that \begin{claim}\label{claim e_n} \begin{enumerate} \item The boundary terms are zeros \begin{align*} F(1) \phi(1) = F(0) \phi(0) =0 ; \end{align*} \item The growth of $\phi$ is bounded \begin{align*} \norm{\phi'}_{L^{\infty}} \lesssim n_2 n_3 ; \end{align*} \item The integral of $F$ is also bounded \begin{align*} \int_0^1 \abs{F(r)} \, dr \lesssim \frac{1}{(n_0 - n_1)^2} . \end{align*} \end{enumerate} \end{claim} Assuming Claim \ref{claim e_n}, we conclude the weak interaction under $n_0 \gg n_1 \geq n_2 \geq n_3 $ \begin{align*} \abs{\int_{\Theta} e_{n_0} e_{n_1} e_{n_2} e_{n_3} \, dx } & = \abs{ \int_0^1 F(r) \phi'(r) \, dr }< \mathcal{O} \parenthese{\frac{\norm{\phi'}_{L^{\infty}}}{(n_0 - n_1)^2}} < \mathcal{O} \parenthese{\frac{n_2 n_3}{(n_0-n_1)^2}} < \mathcal{O} \parenthese{\frac{n_2 n_3}{n_0^2}} . \end{align*} \begin{proof}[Proof of Claim \ref{claim e_n}] For {\it (1)}, the zero boundary conditions can be justified since $F(r)=0$ at $r=0$ and $\phi(r) =0$ at $r=1$ (recall $e_n (1) =0$). For {\it (2)}, by \eqref{eq e_n approx}, we have \begin{align*} \phi(r) = e_{n_2} (r) \, e_{n_3} (r) \sim \begin{cases} (n_2 n_3)^{\frac{1}{2}} & \text{ when $r$ small}\\ r^{-1} \cos((n_2-\frac{1}{2}) \pi r- \frac{\pi}{4}) \cos((n_3-\frac{1}{2}) \pi r- \frac{\pi}{4}) & \text{ when $r$ large}. \end{cases} \end{align*} Hence when $r$ is small, $\phi(r)$ has almost no growth, hence $\norm{\phi'}_{L^{\infty}} \lesssim n_2 n_3$. When $r$ is large, we write \begin{align*} \phi'(r) & \sim - \frac{1}{r} \parenthese{(n_2 -\frac{1}{2}) \sin((n_2-\frac{1}{2}) \pi r- \frac{\pi}{4}) \cos((n_3-\frac{1}{2}) \pi r- \frac{\pi}{4})} \\ & \quad - \frac{1}{r} \parenthese{ (n_3 -\frac{1}{2}) \cos((n_2-\frac{1}{2}) \pi r- \frac{\pi}{4}) \sin((n_3-\frac{1}{2}) \pi r- \frac{\pi}{4}) } \\ & \quad - \frac{1}{r^2} \cos((n_2-\frac{1}{2}) \pi r- \frac{\pi}{4}) \cos((n_3-\frac{1}{2}) \pi r- \frac{\pi}{4}) . \end{align*} Therefore, \begin{align*} \norm{\phi'}_{L^{\infty}} \lesssim n_2 n_3 . \end{align*} Now let us move on to {\it (3)}. Similar analysis as in $\phi(r)$ gives \begin{align*} \gamma \, e_{n_0}(\gamma) \, e_{n_1}(\gamma) \sim \begin{cases} (n_0 n_1)^{\frac{1}{2}} \gamma & \text{ when $\gamma$ small}\\ \cos((n_0 + n_1-1) \pi r- \frac{\pi}{2}) + \cos((n_0 -n_1) \pi r) & \text{ when $\gamma$ large} , \end{cases} \end{align*} then its anti-derivative is given by \begin{align*} F(r) & = \int_0^r \gamma \, e_{n_0}(\gamma) \, e_{n_1}(\gamma) \, d \gamma \\ & \lesssim \begin{cases} (n_0 n_1)^{\frac{1}{2}} r^2 & \text{ when $r$ small} \\ (n_0 +n_1 -1)^{-1} \cos((n_0 + n_1 -1) \pi r) + (n_0 -n_1)^{-1} \sin ((n_0 -n_1) \pi r) & \text{ when $r$ large} . \end{cases} \end{align*} Therefore, if $r \ll 1$ the integral is very small. Otherwise \begin{align*} \abs{\int_0^1 F(r) \phi'(r) \, dr} & \leq \norm{\phi'}_{L^{\infty}} \int_0^1 \frac{\abs{\sin ((n_0 -n_1) \pi r)}}{n_0 -n_1} \, dr < \mathcal{O} \parenthese{\frac{\norm{\phi'}_{L^{\infty}}}{(n_0 - n_1)^2}} < \mathcal{O} \parenthese{\frac{N_2 N_3}{(N_0-N_1)^2}} . \end{align*} Now the proof of Claim \ref{claim e_n} is complete. \end{proof} Hence we finish the proof of Lemma \ref{lem weak e_n}. \end{proof} Let us move on to the proof of Lemma \ref{lem weak fn}. \begin{proof}[Proof of Lemma \ref{lem weak fn}] Using Plancherel and convolution theorem, we write \begin{align*} A & := \int_{{\Bbb{R}}} \overline{w_{n_0} (t)} f_{n_1}(t) \overline{g_{n_2}(t)} h_{n_3}(t) \, dt\\ & = \int_{{\Bbb{R}}} \widehat{\overline{w}_{n_0} }(\tau) \parenthese{ f_{n_1} \overline{g}_{n_2} h_{n_3} }^{\wedge} (\tau)\, d\tau\\ & = \int_{{\Bbb{R}}} \widehat{\overline{w}_{n_0} }(\tau) \int_{\tau_1 - \tau_2 + \tau_3 = \tau} \widehat{f_{n_1}}(\tau_1) \widehat{ \overline{g}_{n_2}}(\tau_2) \widehat{h_{n_3}}(\tau_3) \, d \tau_1 d \tau_2 d \tau_3 d\tau . \end{align*} To make up $X^{0,b}$ norms out of $A$, we introduce the following new notations \begin{align}\label{eq wfgh} \begin{aligned} \widetilde{w}_{n_0}(\tau , n_0) & = \widehat{\overline{w}_{n_0} }(\tau) \inner{\tau - n_0^{2\alpha}}^{b} & \widetilde{f}_{n_1} (\tau_1 , n_1) & = \widehat{f_{n_1}}(\tau_1) \inner{\tau_1 - n_1^{2\alpha}}^{b} \\ \widetilde{g}_{n_2} (\tau_2 , n_2) & = \widehat{ \overline{g}_{n_2}}(\tau_2) \inner{\tau_2 - n_2^{2\alpha}}^{b} & \widetilde{h}_{n_3} (\tau_3 , n_3) & = \widehat{h_{n_3}}(\tau_3) \inner{\tau_3 - n_3^{2\alpha}}^{b} . \end{aligned} \end{align} Note that the $L_{n_i}^2 L_{\tau}^2$ norms of the functions above are in fact the $X^{0,b}$ norms of $w, f, g,h$. Then using the new notations and H\"older inequality, we obtain \begin{align*} A & = \int_{{\Bbb{R}}} \int_{\tau_1 - \tau_2 + \tau_3 = \tau} \frac{\widetilde{w}_{n_0} (\tau , n_0) }{\inner{\tau - n_0^{2\alpha}}^{b} } \, \frac{\widetilde{f}_{n_1} (\tau_1 , n_1) }{\inner{\tau_1 - n_1^{2\alpha}}^{b}} \, \frac{\widetilde{g}_{n_2} (\tau_2 , n_2)}{\inner{\tau_2 - n_2^{2\alpha}}^{b}} \, \frac{\widetilde{h}_{n_3} (\tau_3 , n_3)}{ \inner{\tau_3 - n_3^{2\alpha}}^{b}} \, d \tau_1 d \tau_2 d \tau_3 d\tau \\ & \lesssim \norm{\widetilde{w}_{n_0} (\tau , n_0) \widetilde{f}_{n_1} (\tau_1 , n_1) \widetilde{g}_{n_2} (\tau_2 , n_2) \widetilde{h}_{n_3} (\tau_3 , n_3)}_{L_{\tau}^1 L_{\tau_1, \tau_2,\tau_3 ({\tau_1 - \tau_2 + \tau_3 = \tau})}^2} \\ & \quad \times \norm{\frac{1}{\inner{\tau - n_0^2}^{b} \inner{\tau_1 - n_1^2}^{b} \inner{\tau_2 - n_2^2}^{b} \inner{\tau_3 - n_3^2}^{b} }}_{L_{\tau}^{\infty} L_{\tau_1, \tau_2,\tau_3 ({\tau_1 - \tau_2 + \tau_3 = \tau}) }^2} \\ & =:A_1 \times A_2 . \end{align*} For $A_1$ term, by Cauchy-Schwarz inequality, Young's convolution inequality and Definition \ref{defn Xsb}, we get \begin{align*} A_1 & = \int_{{\Bbb{R}}} \widetilde{w}_{n_0} (\tau , n_0) \parenthese{ \int_{\tau_1 - \tau_2 + \tau_3 = \tau} \parenthese{ \widetilde{f}_{n_1} (\tau_1 , n_1) \widetilde{g}_{n_2} (\tau_2 , n_2) \widetilde{h}_{n_3} (\tau_3 , n_3)}^2 \, d \tau_1 d \tau_2 d \tau_3 }^{\frac{1}{2}} d\tau\\ & \lesssim \norm{\widetilde{w}_{n_0} }_{L_{\tau}^2} \norm{\parenthese{\widetilde{f}_{n_1}^2 * \widetilde{g}_{n_2}^2 * \widetilde{h}_{n_3}^2 }^{\frac{1}{2}}}_{L_{\tau }^2} \\ & = \norm{\widetilde{w}_{n_0} }_{L_{\tau}^2} \norm{\widetilde{f}_{n_1}^2 * \widetilde{g}_{n_2}^2 * \widetilde{h}_{n_3}^2 }_{L_{\tau }^1}^{\frac{1}{2}} \\ & \lesssim \norm{\widetilde{w}_{n_0} }_{L_{\tau}^2} \norm{\widetilde{f}_{n_1}}_{L_{\tau }^2} \norm{ \widetilde{g}_{n_2}}_{L_{\tau }^2} \norm{ \widetilde{h}_{n_3}}_{L_{\tau }^2} \\ & = \norm{w}_{X^{0, b}} \norm{f}_{X^{0, b}} \norm{ g}_{X^{0, b}} \norm{h}_{X^{0, b}} . \end{align*} For the $A_2$ term, a quick observation gives the integrability of the integrand \begin{align*} A_2 \lesssim 1 , \end{align*} since $b = \frac{1}{2}+$. To be more precise, we estimate of the term $A_2$ using the following lemma from \cite{DET}. See also the related treatment for similar integrals in \cite{KPV}. \begin{lem}[Lemma 1 in \cite{DET}] If $\gamma \geq 1$, then \begin{align*} \int_{{\Bbb{R}}} \frac{1}{\inner{\tau -k_1}^{\gamma} \inner{\tau- k_2}^{\gamma}} \, d \tau \lesssim \inner{k_1 - k_2}^{-\gamma} . \end{align*} \end{lem} Continuing the computation using the summing lemma above, we arrive at \begin{align*} A_2 & = \sup_{\tau} \parenthese{ \int_{\tau_1 - \tau_2 + \tau_3 = \tau} \frac{1}{\inner{\tau - n_0^{2\alpha}}^{2b} \inner{\tau_1 - n_1^{2\alpha}}^{2b} \inner{\tau_2 - n_2^{2\alpha}}^{2b} \inner{\tau_3 - n_3^{2\alpha}}^{2b} } \, d \tau_1 d \tau_2 d \tau_3 }^{\frac{1}{2}} \\ & \lesssim \parenthese{ \int \frac{1}{\inner{ \tau_2 - \tau_3 + n_0^{2\alpha} - n_1^{2\alpha}}^{2b} \inner{\tau_2 - n_2^{2\alpha}}^{2b} \inner{\tau_3 - n_3^{2\alpha}}^{2b} } \, d \tau_2 d \tau_3 }^{\frac{1}{2}} \\ & \lesssim \parenthese{ \int \frac{1}{\inner{ \tau_3 - n_0^{2\alpha} + n_1^{2\alpha} - n_2^{2\alpha}}^{2b} \inner{\tau_3 - n_3^{2\alpha}}^{2b} } \, d \tau_3 }^{\frac{1}{2}} \\ & \lesssim \frac{1}{\inner{ n_0^{2\alpha} - n_1^{2\alpha} + n_2^{2\alpha} - n_3^{2\alpha}}^{b}} . \end{align*} Now putting the calculation on $A_1$ and $A_2$ together, we finish the proof of Lemma \ref{lem weak fn}. \end{proof} \begin{rmq}[Variations on assumptions in Proposition \ref{prop weak}]\label{rmk weak} \begin{enumerate} \item In fact, if assuming $n_0 \geq n_1 \geq n_2 \geq n_3$ and $n_0 \geq 2 n_3$ in Proposition \ref{prop weak}, we should expect a very similar bound with slight modification in the proof of Claim \ref{claim e_n} \begin{align}\label{eq J bdd'} J(\underline{n}) = \int_{{\Bbb{R}} \times \Theta} \overline{w} \cdot f \cdot \overline{g} \cdot h \, dx dt \lesssim \frac{n_1 n_2}{n_0^2} \frac{\norm{w}_{X^{0, b}} \norm{f}_{X^{0, b}} \norm{g}_{X^{0, b}} \norm{h}_{X^{0, b}} }{ \inner{ n_0^{2\alpha} - n_1^{2\alpha} + n_2^{2\alpha} - n_3^{2\alpha}}^{b}} . \end{align} \item Instead of $X^{0,b}$, we can take $w \in L_{t,x}^2$, in which case, the only difference will be in the change of variables in \eqref{eq wfgh}. In fact, by not touching on $\widetilde{w}_{n_0}(\tau , n_0) = \widehat{\overline{w}_{n_0} }(\tau) $, the rest of the argument follows perfectly, but resulting the appearance of $L_{t,x}^2$ norm of $w$ in the bound \begin{align}\label{eq J bdd''} J(\underline{n}) = \int_{{\Bbb{R}} \times \Theta} \overline{w} \cdot f \cdot \overline{g} \cdot h \, dx dt \lesssim \frac{n_2 n_3}{n_0^2} \norm{w}_{L_{t,x}^2} \norm{f}_{X^{0, b}} \norm{g}_{X^{0, b}} \norm{h}_{X^{0, b}} . \end{align} \end{enumerate} \end{rmq} \section{Energy increment}\label{sec energy increment} In this section, we compute the energy increment of the I-operator modified equation on a short time interval. This will be the key ingredient in this iterative argument in Section \ref{sec gwp}. \begin{prop}[Energy increment]\label{prop energy increment} Consider $v$ as in \eqref{fNLSI} defined on $[0, \delta ] \times \Theta$, then for $s > $ and sufficiently large $N$, the solution $v$ satisfies the following energy increment \begin{align*} E(Iu(\delta)) -E(Iu(0)) \lesssim N^{1- 2\alpha } \norm{Iu_0}_{H^{\alpha}}^4 + N^{s- 3\alpha } N^{-\frac{4(\alpha -s)(b- b(\alpha) )}{1 + 2b -4b(s)}} \norm{Iu_0}_{H^{\alpha}}^6 . \end{align*} \end{prop} \begin{proof}[Proof of Proposition \ref{prop energy increment}] We first start with writing the energy conservation \begin{align*} \frac{d}{dt} E( u(t)) & = \re \int_{\Theta} \overline{u_t} (\abs{u}^2 u + (-\Delta)^{\alpha} u) \, dx = \re \int_{\Theta} \overline{u_t} (\abs{u}^2 u + (-\Delta)^{\alpha} u - i u_t) \, dx = 0 . \end{align*} Similarly we can compute the rate of change in the energy of the modified equation \eqref{fNLSI}. \begin{align*} \frac{d}{dt} E( I u(t)) & = \re \int_{\Theta} \overline{Iu_t} (\abs{Iu}^2 Iu + (-\Delta)^{\alpha} Iu - i Iu_t) \, dx = \re \int_{\Theta} \overline{Iu_t} (\abs{Iu}^2 Iu - I(\abs{u}^2 u) ) \, dx \\ & = \im \int_{\Theta} \overline{(-\Delta)^{\alpha} I u} (\abs{Iu}^2 Iu - I(\abs{u}^2 u) ) \, dx + \im \int_{\Theta} \overline{ I (\abs{u}^2 u) } (\abs{Iu}^2 Iu - I(\abs{u}^2 u) ) \, dx . \end{align*} Then by the fundamental theorem of calculus, we obtain \begin{align} & \quad E(Iu)(\delta) - E(Iu)(0) \notag\\ & = \im \int_0^{\delta} \int_{\Theta} \overline{(-\Delta)^{\alpha} I u} (\abs{Iu}^2 Iu - I(\abs{u}^2 u) ) \, dx dt + \im \int_0^{\delta} \int_{\Theta} \overline{ I (\abs{u}^2 u) } (\abs{Iu}^2 Iu - I(\abs{u}^2 u) ) \, dx dt\notag \\ & = : \text{Term I} + \text{Term II} . \label{eq 2term} \end{align} To conclude the energy increment in this proposition, we just need to estimate the two terms in \eqref{eq 2term}. \subsection{Estimate on Term I} First we decompose each $u$ in Term I as we did in \eqref{eq u decomp} in Lemma \ref{lem nonlinear est}, then write for the quadruple $\underline{N} = (N_0, N_1, N_2, N_3)$, \begin{align*} \text{Term I} \sim \sum_{\underline{N}} \int_0^{\delta} \int_{\Theta} \overline{(-\Delta)^{\alpha} u_{0}} (Iu_{1} \overline{Iu_{2}} Iu_{3} - I (u_1 \overline{u_2} u_3) ) \, dxdt , \end{align*} where $u_{i} = \sum_{N_i \leq \inner{z_n} < 2N_i} P_n u$, $i = 0, 1 ,2, 3$. Let \begin{align*} \text{Term I} (\underline{N}) : = \int_0^{\delta} \int_{\Theta} \overline{(-\Delta)^{\alpha} u_{0}} (Iu_{1} \overline{Iu_{2}} Iu_{3} - I (u_1 \overline{u_2} u_3) ) \, dxdt . \end{align*} Without loss of generality, we assume $N_1 \geq N_2 \geq N_3$, and analyze the following different scenarios. Let us outline the cases that we will be considering. \begin{enumerate}[-] \item Trivial cases in {\bf Case I-1}; \item The maximum frequency is much larger than the second highest frequency in {\bf Case I-2}; \item The largest two frequencies are comparable; \begin{enumerate}[$*$] \item The largest two frequencies $N_1 = N_0$ in {\bf Case I-3}, \item The largest two frequencies $N_1 =N_2$ n {\bf Case I-4}. \end{enumerate} \end{enumerate} \noindent {\bf Case I-1:} The trivial cases. After the decomposition in frequencies, we have the following two trivial cases: \begin{itemize} \item All frequencies are not comparable to $N$, that is $N_0, N_1 , N_2, N_3 \ll N$, \item $N_0 = N_1 \geq N \geq N_2 \geq N_3$. \end{itemize} In both cases, we have \begin{align*} \text{Term I} (\underline{N}) =0 , \end{align*} hence \begin{align*} \sum_{\underline{N} \in \textbf{ Case I-1}} \text{Term I} (\underline{N}) =0 . \end{align*} Now we focus on the regime where at least the maximum frequency is larger than $N$, that is $\max \{ N_0, N_1 , N_2, N_3 \} \gtrsim N$. Using the abuse the notation in Remark \ref{rmk abuse}, we write \begin{align} \text{Term I} (\underline{N}) & \leq \abs{ \int_0^{\delta} \int_{\Theta} \overline{(-\Delta)^{\alpha} u_{0}} (Iu_{1} \overline{Iu_{2}} Iu_{3} ) \, dxdt} + \abs{\int_0^{\delta} \int_{\Theta} \overline{(-\Delta)^{\alpha} u_{0}} I (u_1 \overline{u_2} u_3) \, dxdt} \notag\\ & \lesssim \abs{ \int_0^{\delta} \int_{\Theta} \overline{(-\Delta)^{\alpha} u_{0}} (Iu_{1} \overline{Iu_{2}} Iu_{3} ) \, dxdt} + \frac{m (N_0)}{m (N_1) m (N_2) m (N_3)} \abs{\int_0^{\delta} \int_{\Theta} \overline{(-\Delta)^{\alpha} u_{0}} (I u_1 \overline{I u_2} Iu_3) \, dxdt} \notag\\ & = (1+ \frac{m (N_0)}{m (N_1) m (N_2) m (N_3)}) \abs{\int_0^{\delta} \int_{\Theta} \overline{(-\Delta)^{\alpha} u_{0}} (I u_1 \overline{I u_2} Iu_3) \, dxdt} \notag \\ & = : M (\underline{N}) \times \text{Term I}' (\underline{N}) . \label{eq term I} \end{align} \noindent {\bf Case I-2:} The maximum frequency is much larger than the second highest frequency. \noindent {\bf Case I-2a:} $\max \{ n_0, n_1 , n_2, n_3 \} =n_0 \gtrsim N$ and $n_0 \gg n_1$. Take $\textbf{Term I}' (\underline{N})$ in \eqref{eq term I} first. Applying the weak interaction between frequency localized functions in Proposition \ref{prop weak} (where we take $w = P_{n_0} I u$, $f = P_{n_1} I u$, $g =P_{n_2} I u$ and $h =P_{n_3} I u$) and taking out the derivative on $P_{n_0} u$, we are able to write \begin{align*} \text{Term I}' (\underline{n}) \lesssim n_0^{2\alpha} \frac{n_2 n_3}{n_0^2} \frac{\norm{P_{n_0} I u}_{X^{0, b}} \norm{P_{n_1} I u}_{X^{0, b}} \norm{P_{n_2} I u}_{X^{0, b}} \norm{P_{n_3} I u}_{X^{0, b}} }{ \inner{ n_0^{2\alpha} - n_1^{2\alpha} + n_2^{2\alpha} - n_3^{2\alpha}}^{b}} , \end{align*} where $b = \frac{1}{2}+$. Note that in the rest of the proof we will constantly use the notation $b = \frac{1}{2}+$ for simplicity. Then we estimate $M(\underline{n})$ by \begin{align}\label{eq Mn} M(\underline{n}) \lesssim \begin{cases} \parenthese{\frac{N}{n_0}}^{\alpha -s} & \text{ if } n_3 \leq n_2 \leq n_1 \leq N \lesssim n_0\\ \parenthese{\frac{n_1}{n_0}}^{\alpha -s} & \text{ if } n_3 \leq n_2 \leq N \leq n_1 \lesssim n_0\\ \parenthese{\frac{n_1 n_2}{n_0 N}}^{\alpha -s} & \text{ if } n_3 \leq N \leq n_2 \leq n_1 \lesssim n_0\\ \parenthese{\frac{n_1 n_2 n_3}{n_0 N^2}}^{\alpha -s} & \text{ if } N \leq n_3 \leq n_2 \leq n_1 \lesssim n_0 . \end{cases} \end{align} Now summing over all $n_i$ using Cauchy-Schwarz inequality, Bernstein inequality and Definition \ref{defn Xsb}, we have \begin{align*} \sum_{\underline{n}} \text{Term I} (\underline{n}) & \lesssim \sum_{\underline{n} } M(\underline{n}) \cdot n_0^{2\alpha} \frac{n_2 n_3 }{n_0^2} \frac{\norm{P_{n_0} I u}_{X^{0, b}} \norm{P_{n_1} I u}_{X^{0, b}} \norm{P_{n_2} I u}_{X^{0, b}} \norm{P_{n_3} I u}_{X^{0, b}} }{ \inner{ n_0^{2\alpha} - n_1^{2\alpha} + n_2^{2\alpha} - n_3^{2\alpha}}^{b}} \\ & \lesssim \parenthese{\sum_{\underline{n}} \parenthese{\frac{n_2 n_3}{n_0^2} \frac{M(\underline{n}) \cdot n_0^{2\alpha} }{n_0^{\alpha} n_1^{\alpha} n_2^{\alpha} n_3^{\alpha} \inner{ n_0^{2\alpha} - n_1^{2\alpha} + n_2^{2\alpha} - n_3^{2\alpha}}^{b}} }^2}^{\frac{1}{2}} \norm{Iu}_{X_{\delta}^{\alpha, b}}^4 . \end{align*} Notice that $n_0 \gg n_1$ implies $\inner{ n_0^{2\alpha} - n_1^{2\alpha} + n_2^{2\alpha} - n_3^{2\alpha}} \gtrsim \inner{n_0^{2\alpha}} $. Then we compute the sum. In the first case $M(\underline{n}) = \parenthese{\frac{N}{n_0}}^{\alpha -s} $, we have \begin{align*} & \quad \sum_{\underline{n}} \parenthese{ \frac{n_2 n_3}{n_0^2} \parenthese{ \frac{N}{n_0}}^{\alpha-s} \frac{ n_0^{2\alpha}}{n_0^{\alpha} n_1^{\alpha} n_2^{\alpha} n_3^{\alpha} \inner{ n_0^{2\alpha} - n_1^{2\alpha} + n_2^{2\alpha} - n_3^{2\alpha}}^{b}} }^2 \lesssim \sum_{\underline{n}} \frac{n_2^2 n_3^2}{n_0^4} \frac{ N^{2\alpha -2s} \cdot n_0^{2\alpha}}{ n_0^{2\alpha -2s} n_1^{2\alpha} n_2^{2\alpha} n_3^{2\alpha} \inner{ n_0^{2\alpha} }^{1+}} \\ & \lesssim N^{2\alpha -2s} \sum_{n_0} n_0^{-4-2\alpha +2s -}\sum_{n_1} n_1^{-2\alpha}\sum_{n_2} n_2^{2-2\alpha} \sum_{n_3} n_3^{2-2\alpha}\\ & \lesssim N^{2\alpha -2s} \sum_{n_0} n_0^{3 - 8 \alpha +2s- } \lesssim N^{4-6\alpha -} . \end{align*} The same bound holds for other cases of $M(\underline{n})$ in \eqref{eq Mn}. Therefore, \begin{align*} \sum_{\underline{n} \in \textbf{ Case I-2a}} \text{Term I} (\underline{n}) \lesssim N^{2-3\alpha - } \norm{Iu}_{X_{\delta}^{\alpha, b}}^4 . \end{align*} \noindent {\bf Case I-2b:} $\max \{ n_0, n_1 , n_2, n_3 \} =n_1 \gtrsim N $ and $n_1 \gg \max \{ n_0 , n_2 \}$. This case is in fact similar to the previous {\bf Case I-2a}. Using \begin{align}\label{eq Mn2} M(\underline{n}) \lesssim \begin{cases} \parenthese{\frac{n_1}{N}}^{\alpha -s} & \text{ if } n_3 , n_2 , n_0 \leq N \lesssim n_1 \\ \parenthese{\frac{n_1}{n_0}}^{\alpha -s} & \text{ if } n_3 \leq n_2 \leq N \lesssim n_0 \leq n_1\\ \parenthese{\frac{n_1 n_2}{ N^2}}^{\alpha -s} & \text{ if } n_3, n_0 \leq N \lesssim n_2 \leq n_1 \\ \parenthese{\frac{n_1 n_2 n_0}{ N^3}}^{\alpha -s} & \text{ if } n_3 \leq N \lesssim n_2 \leq n_1, N \leq n_0 \\ \parenthese{\frac{n_1 n_2 n_3}{N^3}}^{\alpha -s} & \text{ if } n_0 \leq N \lesssim n_3 \leq n_2 \leq n_1 \\ \parenthese{\frac{n_1 n_2 n_3}{n_0 N^2}}^{\alpha -s} & \text{ if } N \lesssim n_3 \leq n_2 \leq n_1, N \leq n_0 . \end{cases} \end{align} and similar calculation in {\bf Case I-2a}, we see that \begin{align*} \sum_{\underline{n} \in \textbf{ Case I-2b}} \text{Term I} (\underline{n}) \lesssim N^{2- 3\alpha- } \norm{Iu}_{X_{\delta}^{\alpha, b}}^4 . \end{align*} Therefore, by Cauchy-Schwarz inequality and Definition \ref{defn Xsb} we have in {\bf Case I-2} \begin{align*} \sum_{\underline{N} \in \textbf{ Case I-2}} \text{Term I} (\underline{N}) \lesssim N^{2- 3\alpha- } \norm{Iu}_{X_{\delta}^{\alpha, b}}^4 . \end{align*} Now we focus on the case when the largest two frequencies are comparable. Under our assumption on $N_0 , N_1 , N_2 ,N_3$, there are only following two possibilities and we will discuss them separately. \begin{itemize} \item {\bf Case I-3:} $N_1 = N_0 \gtrsim N$ and $N_1 \geq N_2 \geq N_3$; \item {\bf Case I-4:} $N_1 = N_2 \gtrsim N$ and $N_1 \geq N_0$. \end{itemize} \noindent {\bf Case I-3:} The largest two frequencies are comparable. $N_1 = N_0 \gtrsim N$. In this case, we write \begin{align*} M(\underline{N}) \lesssim \frac{m(N_0)}{ m(N_1) m (N_2) m (N_3)} \sim \frac{1}{m (N_2) m (N_3)} . \end{align*} Recall the bilinear estimate that we obtained in \eqref{eq inter bilinear}, \begin{align*} \norm{ f_1 f_2}_{L_{t,x}^2 ((0, \delta) \times \Theta)} \lesssim N_2^{\beta} \delta^{2(b- b(\beta) )} \norm{f_1}_{X_{\delta}^{0, b} (\Theta)} \norm{f_2}_{X_{\delta}^{0, b} (\Theta)} . \end{align*} for $b(\beta) =\frac{1}{4}+ (1-\beta)\frac{1}{2}+$, $\beta \in (\frac{1}{2} , 1]$. Taking $\beta = \alpha$ and combining with $\delta \sim N^{-\frac{2(\alpha -s)}{1 + 2b -4b(s)}} $ in Proposition \ref{prop LWPI}, we have \begin{align}\label{eq bi6} \norm{ f_1 f_2}_{L_{t,x}^2 ((0,\delta) \times \Theta)} \lesssim N_2^{\alpha} N^{-\frac{4(\alpha -s)(b- b(\alpha) )}{1 + 2b -4b(s)}} \norm{f_1}_{X_{\delta}^{0, b} (\Theta)} \norm{f_2}_{X_{\delta}^{0, b} (\Theta)} . \end{align} \noindent {\bf Case I-3a:} $N_1 = N_0 \gtrsim N \geq N_2 \geq N_3$. This is trivial as showed in {\bf Case I-1}. \noindent {\bf Case I-3b:} $N_1 = N_0 \geq N_2 \gtrsim N \geq N_3$. Using \begin{align*} M(\underline{N}) \lesssim (N^{-1} N_2)^{\alpha -s} \end{align*} with H\"older inequality, Proposition \ref{prop bilinear}, \eqref{eq bi6} and Bernstein inequality, we have \begin{align*} \text{Term I} (\underline{N}) & \lesssim N_0^{2\alpha} (N^{-1} N_2)^{\alpha -s} \norm{Iu_0 Iu_2}_{L_{t,x}^2 ([0,\delta] \times \Theta)} \norm{Iu_1 Iu_3}_{L_{t,x}^2 ([0,\delta] \times \Theta)} \\ & \lesssim N_0^{2\alpha} (N^{-1} N_2)^{\alpha -s} N_2^{\frac{1}{2}+} N_3^{\alpha} N^{-\frac{4(\alpha -s)(b- b(\alpha) )}{1 + 2b -4b(s)}} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{0 , b} } \\ & \lesssim \frac{1}{N^{\alpha -s}}\frac{N_0^{2\alpha} N_2^{\frac{1}{2}+} N_3^{\alpha-}}{N_0^{\alpha} N_1^{\alpha} N_2^{\alpha} N_3^{\alpha}} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b} } \\ & \lesssim N^{\frac{1}{2} - 2\alpha+s+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} N_2^{0-} N_3^{0-} \parenthese{\frac{N_0}{ N_1}}^{\alpha} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b} } . \end{align*} Therefore, by Cauchy-Schwarz inequality and Definition \ref{defn Xsb} we have \begin{align*} \sum_{\underline{N} \in \textbf{ Case I-3b}} \text{Term I} (\underline{N}) \lesssim N^{\frac{1}{2} - 2\alpha+s +} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 . \end{align*} \noindent {\bf Case I-3c:} $N_1 = N_0 \geq N_2 \geq N_3 \gtrsim N$. Using \begin{align*} M(\underline{N}) \lesssim (N^{-2} N_2 N_3)^{\alpha -s} \end{align*} H\"older inequality, Proposition \ref{prop bilinear} and Bernstein inequality, we have \begin{align*} \text{Term I} (\underline{N}) & \lesssim N_0^{2\alpha} (N^{-2} N_2 N_3)^{\alpha -s} \norm{Iu_0 Iu_2}_{L_{t,x}^2 ([0,\delta] \times \Theta)} \norm{Iu_1 Iu_3}_{L_{t,x}^2 ([0,\delta] \times \Theta)} \\ & \lesssim N_0^{2\alpha} (N^{-2} N_2 N_3)^{\alpha -s}(N_2 N_3)^{\frac{1}{2}+} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{0 , b} } \\ & \lesssim N^{-2(\alpha -s)} \frac{N_0^{2\alpha} (N_2 N_3)^{\frac{1}{2}+} (N_2 N_3)^{\alpha -s} }{N_0^{\alpha} N_1^{\alpha} N_2^{\alpha} N_3^{\alpha}} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b} } \\ & \lesssim N^{1-2 \alpha+} N_2^{0-} N_3^{0-} \parenthese{ \frac{N_0}{N_1}}^{\alpha} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b} } . \end{align*} Therefor, by Cauchy-Schwarz inequality and Definition \ref{defn Xsb} we have \begin{align*} \sum_{\underline{N} \in \textbf{ Case I-3c}} \text{Term I} (\underline{N}) \lesssim N^{1-2 \alpha+} \norm{Iu}_{X_{\delta}^{\alpha , b} }^4 . \end{align*} To sum up, the bound in { \bf Case I-3} is given by \begin{align*} \sum_{\underline{N} \in \textbf{ Case I-3}} \text{Term I} (\underline{N}) \lesssim N^{\frac{1}{2} - 2\alpha+s +} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \norm{Iu}_{X_{\delta}^{\alpha , b} }^4 + N^{1-2 \alpha+} \norm{Iu}_{X_{\delta}^{\alpha , b} }^4 . \end{align*} \noindent {\bf Case I-4:} The largest two frequencies are comparable. $N_1 = N_2 \gtrsim N$ and $N_1 \geq N_0$. In this case, we write \begin{align*} M(\underline{N}) & \lesssim \frac{m (N_0)}{m (N_1) m (N_2) m (N_3)} \lesssim \frac{m (N_0)}{m (N_3)} (N^{-2} N_1 N_2)^{\alpha -s} , \end{align*} where \begin{align}\label{eq Mn03} \frac{m (N_0)}{m (N_3)} \lesssim \begin{cases} 1 & \text{ if } N_0 , N_3 \lesssim N\\ \parenthese{\frac{N}{N_0}}^{\alpha -s} & \text{ if } N_3 \lesssim N \lesssim N_0 \\ \parenthese{\frac{N_3}{N}}^{\alpha -s} & \text{ if } N_0 \lesssim N \lesssim N_3 \\ \parenthese{\frac{N_3}{N_0}}^{\alpha -s} & \text{ if } N \lesssim N_0 , N_3 . \end{cases} \end{align} Take the first case in \eqref{eq Mn03}, then by H\"older inequality, Proposition \ref{prop bilinear}, \eqref{eq bi6} and Bernstein inequality, we write \begin{align*} \text{Term I} (\underline{N}) & \lesssim N_0^{2\alpha} (N^{-2} N_1 N_2)^{\alpha -s} \norm{Iu_0 Iu_2}_{L_{t,x}^2 ([0,\delta] \times \Theta)} \norm{Iu_1 Iu_3}_{L_{t,x}^2 ([0,\delta] \times \Theta)} \\ & \lesssim N_0^{2\alpha} (N^{-2} N_1 N_2)^{\alpha -s} N_0^{\frac{1}{2}+} N_3^{\alpha-} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{0 , b} } \\ & \lesssim N^{-2(\alpha -s)} \frac{N_0^{2\alpha} (N_1 N_2 )^{\alpha -s} N_0^{\frac{1}{2}+} N_3^{\alpha-} }{N_0^{\alpha} N_1^{\alpha} N_2^{\alpha} N_3^{\alpha}} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b} } \\ & \lesssim N^{\frac{1}{2}-\alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} N_0^{0-} N_1^{0-} N_3^{0-} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b} } . \end{align*} The same bound hold for the second case in \eqref{eq Mn03}. For the third case in \eqref{eq Mn03}, using H\"older inequality, Proposition \ref{prop bilinear} and Bernstein inequality, we have \begin{align*} \text{Term I} (\underline{N}) & \lesssim N_0^{2\alpha} (N^{-2} N_1 N_2)^{\alpha -s} \parenthese{\frac{N_3}{N}}^{\alpha -s} \norm{Iu_0 Iu_2}_{L_{t,x}^2 ([0,\delta] \times \Theta)} \norm{Iu_1 Iu_3}_{L_{t,x}^2 ([0,\delta] \times \Theta)} \\ & \lesssim N_0^{2\alpha} (N^{-2} N_1 N_2)^{\alpha -s} \parenthese{\frac{N_3}{N}}^{\alpha -s} N_0^{\frac{1}{2}+} N_3^{\frac{1}{2}+} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{0 , b} } \\ & \lesssim N^{-3(\alpha -s)} \frac{N_0^{2\alpha} (N_1 N_2 N_3)^{\alpha -s} N_0^{\frac{1}{2}+} N_3^{\frac{1}{2}+} }{N_0^{\alpha} N_1^{\alpha} N_2^{\alpha} N_3^{\alpha}} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b} } \\ & \lesssim N^{1-2\alpha+} N_0^{0-} N_1^{0-} N_3^{0-} \prod_{i=0}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b} } . \end{align*} The same bound hold for the forth case in \eqref{eq Mn03}. Therefor, by Cauchy-Schwarz inequality and Definition \ref{defn Xsb} we have that in {\bf Case I-4}. \begin{align*} \sum_{\underline{N} \in \textbf{ Case I-4}} \text{Term I} (\underline{N}) \lesssim N^{\frac{1}{2}-\alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \norm{Iu}_{X_{\delta}^{\alpha , b} }^4 + N^{1-2\alpha +} \norm{Iu}_{X_{\delta}^{\alpha , b} }^4 . \end{align*} Now we summarize the estimation on {\bf Term I}. \begin{align*} \text{Term I} & \lesssim \parenthese{ \sum_{\underline{N} \in \textbf{ Case I-2}} + \sum_{\underline{N} \in \textbf{ Case I-3}} + \sum_{\underline{N} \in \textbf{ Case I-4}} } \text{Term I} (\underline{N}) \\ & \lesssim N^{\frac{1}{2} - \alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \norm{Iu}_{X_{\delta}^{\alpha , b} }^4 + N^{2- 3\alpha- } \norm{Iu}_{X_{\delta}^{\alpha , b} }^4 . \end{align*} \subsection{Estimate on Term II} Now let us focus on {\bf Term II}: \begin{align*} \im \int_0^{\delta} \int_{\Theta} \overline{ I (\abs{u}^2 u) } (\abs{Iu}^2 Iu - I(\abs{u}^2 u) ) \, dx . \end{align*} Similarly, we decompose each $u$ in {\bf Term II} using the orthonomal basis $e_n$'s of the radial Dirichlet Laplacian $-\Delta$. That is, for the quadruple $\underline{N} = (N_0, N_1, N_2 , N_3)$ \begin{align*} \text{Term II} \sim \sum_{\underline{N}} \int_0^{\delta} \int_{\Theta} \overline{I P_{N_0}(\abs{u}^2 u)} (Iu_{1} \overline{Iu_{2}} Iu_{3} - I (u_1 \overline{u_2} u_3) ) \, dxdt , \end{align*} where $u_{i} = \sum_{N_i \leq \inner{z_n} < 2N_i} P_n u$, $i = 1 ,2, 3$. Let \begin{align*} \text{Term II} (\underline{N}) : = \int_0^{\delta} \int_{\Theta} \overline{I P_{N_0}(\abs{u}^2 u)} (Iu_{1} \overline{Iu_{2}} Iu_{3} - I (u_1 \overline{u_2} u_3) ) \, dxdt . \end{align*} Without loss of generality, we assume $N_1 \geq N_2 \geq N_3$. Let us outline the cases that we will be considering. \begin{enumerate}[-] \item Trivial cases in {\bf Case II-1}; \item The maximum frequency is much larger than the minimum frequency in {\bf Case II-2}; \item All frequencies are comparable in {\bf Case II-3} . \end{enumerate} Again, we start with the trivial cases. \noindent {\bf Case II-1:} The trivial cases. We have two trivial cases as in {\bf Term I} \begin{itemize} \item All frequencies are not comparable to $N$, that is $N_0, N_1 , N_2, N_3 \ll N$. \item $N_0 = N_1 \geq N \geq N_2 \geq N_3$ \end{itemize} In both cases, we have \begin{align*} \sum_{\underline{N} \in \textbf{ Case II-1}} \text{Term II} (\underline{N}) =0 . \end{align*} This implies that at least the maximum frequency is larger than the cutoff frequency $N$, that is $\max \{ N_0, N_1 , N_2, N_3 \} \gtrsim N$. Similar as in \eqref{eq term I}, we write with abuse notation in Remark \ref{rmk abuse}. \begin{align*} \text{Term II} (\underline{N}) & \lesssim (1+ \frac{m (N_0)}{m (N_1) m (N_2) m (N_3)}) \abs{\int_0^{\delta} \int_{\Theta} \overline{I P_{N_0}(\abs{u}^2 u)} (I u_1 \overline{I u_2} Iu_3) \, dxdt}\\ & = : M (\underline{N}) \times \text{Term II}' (\underline{N}) . \end{align*} \noindent {\bf Case II-2:} The maximum frequency is much larger than minimum frequency. \noindent {\bf Case II-2a:} $\max \{ n_0, n_1 , n_2, n_3 \} = n_0 \gtrsim N $ and $n_0 \gg n_3$. In this case, using the same estimate of $M(\underline{N})$ as in \eqref{eq Mn} and Remark \ref{rmk weak}, we write \begin{align*} \text{Term II} (\underline{n}) & \lesssim M(\underline{n}) \frac{n_1 n_2}{n_0^2} \frac{1 }{n_1^{\alpha} n_2^{\alpha} n_3^{\alpha}} \norm{I P_{n_0}(\abs{u}^2 u)}_{L_{t,x}^2} \prod_{i=1}^3 \norm{P_{n_i}Iu}_{X^{\alpha, b}} . \end{align*} In the following claim, we compute the term $\norm{I P_{n_0}(\abs{u}^2 u)}_{L_{t,x}^2} $ separately. \begin{claim}\label{claim B} \begin{align*} B : =\norm{I P_{n_0}(\abs{u}^2 u)}_{L_{t,x}^2} & \lesssim m(n_0) B(N) \norm{Iu}_{X_{\delta}^{\alpha , b}}^3, \end{align*} where \begin{align*} B(N) : = N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} + N^{ \frac{3}{2} -3\alpha +} + N^{-\alpha + } . \end{align*} \end{claim} Assuming Claim \ref{claim B}, we can write \begin{align*} \text{Term II} (\underline{n}) & \lesssim M(\underline{n}) \frac{n_1 n_2}{n_0^2} \frac{ m(n_0)}{n_1^{\alpha} n_2^{\alpha} n_3^{\alpha}} \prod_{i=1}^3 \norm{P_{n_i}Iu}_{X^{\alpha, b}} \parenthese{ B(N) \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 } . \end{align*} Using Cauchy-Schwarz inequality, \eqref{eq Mn} and Definition \ref{defn Xsb}, we obtain that \begin{align*} \sum_{\underline{n} \in \textbf{ Case II-2a}} \text{Term II} (\underline{n}) & \lesssim \parenthese{\sum_{\underline{n}} \parenthese{\frac{n_1 n_2}{n_0^2} \frac{M(\underline{n}) \cdot m(n_0)}{n_1^{\alpha} n_2^{\alpha} n_3^{\alpha} } }^2}^{\frac{1}{2}} B(N) \norm{Iu}_{X_{\delta}^{\alpha, b}}^6 \\ & \lesssim N^{\frac{3}{2} -2\alpha} B(N) \norm{Iu}_{X_{\delta}^{\alpha , b}}^6 . \end{align*} Now we present the calculation for Claim \ref{claim B}. \begin{proof}[Proof of Claim \ref{claim B}] Decompose $B$ into dyadic frequencies, then we have \begin{align*} \norm{I P_{n_0}(u_4 \overline{u_5} u_6)}_{L_{t,x}^2} \sim \frac{m (n_0)}{m (N_4) m (N_5) m (N_6)} \norm{Iu_4 Iu_5 Iu_6}_{L_{t,x}^2} = : m(n_0) \times B(\underline{N_{456}}) . \end{align*} Let us focus on $B (\underline{N_{456}}) $. We fist rewrite it using H\"older inequality and Sobolev embedding \begin{align*} B(\underline{N_{456}}) & \lesssim \frac{1}{m (N_4) m (N_5) m (N_6)} \norm{Iu_4 Iu_5}_{L_{t,x}^2} \norm{Iu_6}_{L_{t,x}^{\infty}} \\ & \lesssim \frac{1}{m (N_4) m (N_5) m (N_6)} N_6^{1-\alpha} \norm{Iu_4 Iu_5}_{L_{t,x}^2} \norm{Iu_6}_{X_{\delta}^{\alpha , b}} . \end{align*} Without loss of generality, we assume $N_4 \geq N_5 \geq N_6$. Similar as the computation that we did earlier, we will consider the following two cases \begin{enumerate}[-] \item All frequencies are smaller than $N$ in {\bf Case B-1}; \item At least on frequency is larger than $N$ in {\bf Case B-2}. \end{enumerate} \noindent {\bf Case B-1:} $N_6 \leq N_5 \leq N_4 \ll N$. Using \begin{align*} \frac{1}{m (N_4) m (N_5) m (N_6)} = 1, \end{align*} with \eqref{eq bi6} and Bernstein inequality, we obtain \begin{align*} B(\underline{N_{456}}) & \lesssim N_5 N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} N_6^{1-\alpha} \norm{Iu_4}_{X_{\delta}^{0 ,b}} \norm{Iu_5}_{X_{\delta}^{0 , b}} \norm{Iu_6}_{X_{\delta}^{\alpha ,b}} \\ & \lesssim N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} \frac{N_5 N_6^{1-\alpha}}{N_4^{\alpha} N_5^{\alpha}} \prod_{i=4}^6 \norm{Iu_i}_{X_{\delta}^{\alpha , b}} . \end{align*} Then by Cauchy-Schwarz inequality with Definition \ref{defn Xsb}, \begin{align*} \sum_{N_4, N_5 , N_6 \in \textbf{ Case B-1}} B(\underline{N_{456}}) \lesssim NN^{-\frac{\alpha -s}{1 + 2b -4b(s)}} \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 . \end{align*} \noindent {\bf Case B-2:} $N \lesssim N_4$. We compute \begin{align}\label{eq Mn3} \frac{1}{m (N_4) m (N_5) m (N_6)} & \lesssim \begin{cases} (N^{-1}N_4)^{\alpha -s} & \text{ if } N_6 \leq N_5 \ll N \leq N_4\\ (N^{-2}N_4 N_5)^{\alpha -s} & \text{ if } N_6 \ll N \leq N_5 \leq N_4\\ (N^{-3}N_4 N_5 N_6)^{\alpha -s} & \text{ if } N \ll N_6 \leq N_5 \leq N_4 . \end{cases} \end{align} Now take the first case in \eqref{eq Mn3}, with the bilinear estimate in Proposition \ref{prop bilinear}, we then obtain \begin{align*} B(\underline{N_{456}}) & \lesssim (N^{-1}N_4)^{\alpha -s} N_5^{\frac{1}{2} +} N_6^{1-\alpha} \norm{Iu_4}_{X_{\delta}^{0 ,b}} \norm{Iu_5}_{X_{\delta}^{0 , b}} \norm{Iu_6}_{X_{\delta}^{\alpha ,b}} \\ & \lesssim (N^{-1}N_4 )^{\alpha -s} \frac{ N_5^{\frac{1}{2} +} N_6^{1-\alpha} }{N_4^{\alpha} N_5^{\alpha}} \prod_{i=4}^6 \norm{Iu_i}_{X_{\delta}^{\alpha , b}} \\ & \lesssim \begin{cases} N^{\frac{3}{2}-3\alpha+} N_4^{0-} N_5^{0-} N_6^{0-} \prod_{i=4}^6 \norm{Iu_i}_{X_{\delta}^{\alpha , b}} & \text{ if } \alpha < \frac{3}{4} \\ N^{-\alpha+} N_4^{0-} N_5^{0-} N_6^{0-} \prod_{i=4}^6 \norm{Iu_i}_{X_{\delta}^{\alpha , b}} & \text{ if } \alpha \geq \frac{3}{4} \end{cases}\\ & = \max \{ N^{\frac{3}{2}-3\alpha+} , N^{-\alpha+} \} N_4^{0-} N_5^{0-} N_6^{0-} \prod_{i=4}^6 \norm{Iu_i}_{X_{\delta}^{\alpha , b}} . \end{align*} Similarly, the second and third cases are bounded receptively by \begin{align*} \max \{ N^{\frac{3}{2}-3\alpha+} , N^{-2\alpha+s+} \} N_4^{0-} N_5^{0-} N_6^{0-} \prod_{i=4}^6 \norm{Iu_i}_{X_{\delta}^{\alpha , b}} \end{align*} and \begin{align*} \max \{ N^{\frac{3}{2}-3\alpha+} , N^{-3\alpha+2s+} \} N_4^{0-} N_5^{0-} N_6^{0-} \prod_{i=4}^6 \norm{Iu_i}_{X_{\delta}^{\alpha , b}} . \end{align*} Then by Cauchy-Schwarz inequality with Definition \ref{defn Xsb}, \begin{align*} \sum_{N_4, N_5 , N_6 \in \textbf{ Case B-2}} B(\underline{N_{456}}) \lesssim \max \{ N^{ \frac{3}{2} -3\alpha +}, N^{-\alpha + } \} \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 . \end{align*} Therefore, by putting the two cases together, we finish the proof of Claim \ref{claim B}. \begin{align*} \sum_{N_4, N_5 , N_6} B(\underline{N_{456}}) & \lesssim \parenthese{\sum_{N_4, N_5 , N_6 \in \textbf{ Case B-1}} + \sum_{N_4, N_5 , N_6 \in \textbf{ Case B-2}} } B(\underline{N_{456}}) \\ & \lesssim N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 + N^{ \frac{3}{2} -3\alpha +} \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 + N^{-\alpha + } \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 . \end{align*} \end{proof} \noindent {\bf Case II-2b:} $\max \{ n_0, n_1 , n_2, n_3 \} = n_1 \gtrsim N $ and $n_1 \gg \min \{ n_0, n_3 \}$ Writing \begin{align*} M(\underline{n}) \lesssim \frac{m (n_0)}{m (n_1) m (n_2) m (n_3)} \end{align*} and using Remark \ref{rmk weak} and Claim \ref{claim B}, we have \begin{align*} \text{Term II} (\underline{N}) & \lesssim M(\underline{n}) \frac{n_2 n_3}{n_1^2} \frac{1 }{n_1^{\alpha} n_2^{\alpha} n_3^{\alpha}} \norm{I P_{n_0}(\abs{u}^2 u)}_{L_{t,x}^2} \prod_{i=1}^3 \norm{Iu_i}_{X^{\alpha, b}} \\ & \lesssim M(\underline{n}) \frac{n_2 n_3}{n_1^2} \frac{m(n_0)}{n_1^{\alpha} n_2^{\alpha} n_3^{\alpha}} \prod_{i=1}^3 \norm{Iu_i}_{X^{\alpha, b}} \parenthese{ B(N) \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 } . \end{align*} Applying the analysis as in \eqref{eq Mn2} and same calculation as in {\bf Case II-2a}, we obtain the bound \begin{align*} \sum_{\underline{n} \in \textbf{ Case II-2b}} \text{Term II} (\underline{n}) \lesssim N^{2 -3\alpha} B(N) \norm{Iu}_{X_{\delta}^{\alpha , b}}^6 . \end{align*} Now we have the only case left. \noindent {\bf Case II-3:} All frequencies are comparable. $N_0 = N_1 = N_2 = N_3 \gtrsim N$. In this last case, we have \begin{align*} M(\underline{N}) \lesssim \frac{m (N_0)}{m (N_1) m (N_2) m (N_3)} \sim \frac{1}{ m (N_2) m (N_3)} \lesssim (N^{-2} N_2 N_3)^{\alpha -s} . \end{align*} Then using the $M(\underline{N})$ bound above, Proposition \ref{prop bilinear} and Claim \ref{claim B}, we write \begin{align*} \text{Term II} (\underline{N}) & \lesssim (N^{-2} N_2 N_3)^{\alpha -s} \abs{\int_0^{\delta} \int_{\Theta} \overline{I P_{N_0}(\abs{u}^2 u)} (I u_1 \overline{I u_2} Iu_3) \, dxdt} \\ & \lesssim (N^{-2} N_2 N_3)^{\alpha -s} \norm{Iu_1 Iu_2 Iu_3}_{L_{t,x}^2} \norm{I P_{N_0}(\abs{u}^2 u)}_{L_{t,x}^2} \\ & \lesssim \parenthese{\frac{N_2 N_3}{N^2}}^{\alpha -s} \parenthese{ \frac{ N_2^{\frac{1}{2} +} N_3^{1-\alpha} }{N_1^{\alpha} N_2^{\alpha}} \prod_{i=1}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b}} } \parenthese{ m(N_0) B(N) \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 } \\ & \lesssim \parenthese{\frac{N_2 N_3}{N_0 N}}^{\alpha -s} \frac{ N_2^{\frac{1}{2} +} N_3^{1-\alpha} }{N_1^{\alpha} N_2^{\alpha}} B(N) \prod_{i=1}^3 \norm{Iu_i}_{X_{\delta}^{\alpha , b}} \norm{Iu}_{X_{\delta}^{\alpha , b}}^3 . \end{align*} Then by Cauchy-Schwarz inequality with Definition \ref{defn Xsb}, \begin{align*} \sum_{\underline{N} \in \textbf{ Case II-3}} \text{Term II} (\underline{N}) \lesssim N^{ \frac{3}{2} -3\alpha} B(N) \norm{Iu}_{X_{\delta}^{\alpha , b}}^6 . \end{align*} Therefore, we summarize the estimation on all the cases in {\bf Term II} , \begin{align*} \text{Term II} & \lesssim \parenthese{ \sum_{\underline{N} \in \textbf{ Case II-2a}} + \sum_{\underline{N} \in \textbf{ Case II-2b}} + \sum_{\underline{N} \in \textbf{ Case II-3}} } \text{Term II} (\underline{N}) \\ & \lesssim N^{ 2-3\alpha} B(N) \norm{Iu}_{X_{\delta}^{\alpha , b}}^6 \lesssim N^{ 2-3\alpha} B(N) \norm{Iu_0}_{H^{\alpha}}^6 . \end{align*} Let us finish the calculation on the energy increment by combining the computation for both {\bf Term I} and {\bf Term II} \begin{align*} & \quad E(Iu(\delta)) - E(Iu(0)) \\ & \lesssim N^{\frac{1}{2} - \alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \norm{Iu_0}_{H^{\alpha}}^4 + N^{2- 3\alpha- } \norm{Iu_0}_{H^{\alpha}}^4 \\ & \quad + N^{ 2-3\alpha} N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} \norm{Iu_0}_{H^{\alpha}}^6 + N^{ 2-3\alpha} N^{ \frac{3}{2} -3\alpha +} \norm{Iu_0}_{H^{\alpha}}^6 + N^{ 2-3\alpha} N^{-\alpha + } \norm{Iu_0}_{H^{\alpha}}^6 \\ & \lesssim N^{\frac{1}{2} - \alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \norm{Iu_0}_{H^{\alpha}}^4 + N^{2- 3\alpha- } \norm{Iu_0}_{H^{\alpha}}^4 \\ & \quad + N^{ 2-3\alpha} N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} \norm{Iu_0}_{H^{\alpha}}^6 + N^{ \frac{7}{2} -6\alpha +} \norm{Iu_0}_{H^{\alpha}}^6 + N^{ 2-4\alpha +} \norm{Iu_0}_{H^{\alpha}}^6 . \end{align*} The last inequality follows from Remark \ref{rmk ID}. Then we finish the proof of Proposition \ref{prop energy increment}. \end{proof} \section{Global well-posedness}\label{sec gwp} In this section, we finally show the global well-posedness result stated in Theorem \ref{thm GWP} by iteration. We also will see the choices of the parameters in previous sections and the constrain on the regularity. As a consequence of these choices, we obtain a polynomial growth as presented in Theorem \ref{thm GWP}. \begin{proof}[Proof of Theorem \ref{thm GWP}] By the definition of energy and Gagliardo–Nirenberg interpolation inequality, we have \begin{align*} E(Iu_0) = \frac{1}{2} \norm{Iu_0}_{\dot{H}^{\alpha}}^2 + \frac{1}{4} \norm{Iu_0}_{L^4}^4 \lesssim \norm{Iu_0}_{\dot{H}^{\alpha}}^2 + \norm{Iu_0}_{L^2}^{4-\frac{2}{\alpha}} \norm{Iu_0}_{\dot{H}^{\alpha}}^{\frac{2}{\alpha}} \lesssim \norm{Iu_0}_{\dot{H}^{\alpha}}^{\frac{2}{\alpha}} \lesssim N^{(\alpha -s) \frac{2}{\alpha}} . \end{align*} Then the energy increment obtained in Proposition \ref{prop energy increment} becomes \begin{align} E(Iu(\delta)) - E(Iu(0)) & \lesssim N^{\frac{1}{2} - \alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} \norm{Iu_0}_{H^{\alpha}}^4 + N^{2- 3\alpha- } \norm{Iu_0}_{H^{\alpha}}^4 \notag \\ & \quad + N^{ 2-3\alpha} N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} \norm{Iu_0}_{H^{\alpha}}^6 + N^{ \frac{7}{2} -6\alpha +} \norm{Iu_0}_{H^{\alpha}}^6 + N^{ 2-4\alpha +} \norm{Iu_0}_{H^{\alpha}}^6 \notag\\ & \lesssim N^{\frac{1}{2} - \alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} N^{4(\alpha -s) } + N^{2- 3\alpha- } N^{4(\alpha -s) } \notag\\ & \quad + N^{ 2-3\alpha} N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} N^{6(\alpha -s) } + N^{ \frac{7}{2} -6\alpha +} N^{6(\alpha -s) } + N^{ 2-4\alpha +} N^{6(\alpha -s) } . \label{eq incr} \end{align} In order to reach a fixed time $T \gg 1$, the number of steps in the iteration is at most, \begin{align} \frac{T}{\delta} \sim T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} \sim T N^{\frac{2(\alpha -s)}{2s-1}} . \label{eq step} \end{align} Combining \eqref{eq incr} and \eqref{eq step}, we write the modified energy at time $T$ as \begin{align*} E(Iu(T)) & \lesssim E(Iu(0)) + \frac{T}{\delta} \parenthese{ N^{\frac{1}{2} - \alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} N^{4(\alpha -s) } + N^{2- 3\alpha- } N^{4(\alpha -s) }} \\ & \quad + \frac{T}{\delta} \parenthese{ N^{ 2-3\alpha} N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} N^{6(\alpha -s) } + N^{ \frac{7}{2} -6\alpha +} N^{6(\alpha -s) } + N^{ 2-4\alpha +} N^{6(\alpha -s) } } \\ & \lesssim N^{(\alpha -s) \frac{2}{\alpha}} + T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} \parenthese{ N^{\frac{1}{2} - \alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} N^{4(\alpha -s) } + N^{2- 3\alpha- } N^{4(\alpha -s) }} \\ & \quad + T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} \parenthese{ N^{ 2-3\alpha} N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} N^{6(\alpha -s) } + N^{ \frac{7}{2} -6\alpha +} N^{6(\alpha -s) } + N^{ 2-4\alpha +} N^{6(\alpha -s) } } . \end{align*} In order to keep this iteration valid at each step , we have to ensure that the total energy increment is always being controlled by the initial energy $E(Iu(0))$, that is \begin{align*} & \quad T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} \parenthese{ N^{\frac{1}{2} - \alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} N^{4(\alpha -s) } + N^{2- 3\alpha- } N^{4(\alpha -s) }} \\ & \quad + T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} \parenthese{ N^{ 2-3\alpha} N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} N^{6(\alpha -s) } + N^{ \frac{7}{2} -6\alpha +} N^{6(\alpha -s) } + N^{ 2-4\alpha +} N^{6(\alpha -s) } } \\ & \lesssim E(Iu(0)) \lesssim N^{(\alpha -s) \frac{2}{\alpha}} . \end{align*} The requirement above implies the following five inequalities \begin{align*} \begin{cases} T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} N^{\frac{1}{2} - \alpha+} N^{-\frac{4(\alpha -s)(b- b(\alpha-) )}{1 + 2b -4b(s)}} N^{4(\alpha -s) } \lesssim N^{(\alpha -s) \frac{2}{\alpha}} \\ T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} N^{2- 3\alpha- } N^{4(\alpha -s) } \lesssim N^{(\alpha -s) \frac{2}{\alpha}} \\ T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} N^{ 2-3\alpha} N^{-\frac{\alpha -s}{1 + 2b -4b(s)}} N^{6(\alpha -s) } \lesssim N^{(\alpha -s) \frac{2}{\alpha}} \\ T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} N^{ \frac{7}{2} -6\alpha +} N^{6(\alpha -s) } \lesssim N^{(\alpha -s) \frac{2}{\alpha}} \\ T N^{\frac{2(\alpha -s)}{1 + 2b -4b(s)}} N^{ 2-4\alpha +} N^{6(\alpha -s) } \lesssim N^{(\alpha -s) \frac{2}{\alpha}} . \end{cases} \end{align*} whose solutions are give by \begin{align*} \begin{cases} s > s_1(\alpha) : = \frac{1}{4} \parenthese{\frac{4\alpha^2 - \alpha - 1}{2\alpha-1} + \sqrt{ \frac{5\alpha^2 -4\alpha +1}{(2\alpha -1)^2}} } \\ s > s_2 (\alpha) : = \frac{1}{4} \parenthese{ \frac{\alpha^2 + \alpha -1}{2\alpha -1} + \sqrt{\frac{\alpha^4 + 10 \alpha^3 -5\alpha^2 - 2\alpha +1}{(2\alpha-1)^2}}} , \alpha > \frac{2}{3}\\ s > s_3 (\alpha) : = \frac{1}{8} \parenthese{\frac{6\alpha^2 + 5\alpha -2}{3\alpha -1} + \sqrt{\frac{36 \alpha^4 - 36 \alpha^3 +33 \alpha^2 -20 \alpha +4}{(3\alpha -1)^2}}} , \alpha > \frac{2}{3}\\ s > s_4 (\alpha) : = \frac{1}{8} \parenthese{\frac{7\alpha -2}{3\alpha -1} + \sqrt{\frac{96\alpha^3 - 55\alpha^2 -4 \alpha +4}{(3\alpha -1)^2}}} , \alpha > \frac{7}{12}\\ s > s_5 (\alpha) : = \frac{2\alpha^2 + 2\alpha -1}{6\alpha -2} . \end{cases} \end{align*} Let us remark here that the restriction $\alpha \in (\frac{2}{3} , 1]$ on the fractional Laplacian in this paper comes from the solutions above. If we track further back to the place where we obtained the second and the third conditions, we see that in {\bf Case I-2}, the bounds of both {\bf Case I-2a} and {\bf Case I-2b} are $N^{2-3\alpha-} \norm{Iu}_{X_{\delta}^{\alpha,b}}^4$. In the almost conservation law of energy, we anticipate a small factor $N^{2-3\alpha-}$, hence $2-3\alpha <0$ and $\alpha > \frac{2}{3}$. We can see the global well-posedness index easily from the picture below. \begin{center} \begin{tikzpicture} \begin{axis}[ axis lines = left, xlabel = $\alpha$, ylabel = {$s_i(\alpha)$}, ] \addplot [ domain=2/3:1, samples=100, color=red, ] { 0.25* ((-4 *x^2 + x + 1)/(1 - 2* x) + sqrt((5 *x^2 - 4 *x + 1)/(1 - 2* x)^2)) }; \addlegendentry{$s_1$} \addplot [ domain=2/3:1, samples=100, color=blue, ] {(0.25 *(x^2 + x - 1))/(2 *x - 1) + 0.25 *sqrt((x^4 + 10 *x^3 - 5 *x^2 - 2 *x + 1)/(2 *x - 1)^2) }; \addlegendentry{$s_2$} \addplot [ domain=2/3:1, samples=100, color=orange, ] {(0.125 *(6 * x^2 + 5 *x - 2))/(3 *x - 1) + 0.125 * sqrt((36 *x^4 - 36 *x^3 + 33 *x^2 - 20 *x + 4)/(3 *x - 1)^2) }; \addlegendentry{$s_3$} \addplot [ domain=2/3:1, samples=100, color=purple, ] {0.125 *sqrt((96 *x^3 - 55 *x^2 - 4 *x + 4)/(3 *x - 1)^2) + (0.125 * (7 *x - 2))/(3 *x - 1) }; \addlegendentry{$s_4$} \addplot [ domain=2/3:1, samples=100, color=green, ] {(2*x^2 + 2* x -1)/(6 *x -2)}; \addlegendentry{$s_5$} \end{axis} \end{tikzpicture} \end{center} Therefore, the global well-posedness index that we obtain in this work is \begin{align*} s > s_* (\alpha) = \max \{ s_1(\alpha) , s_2(\alpha) \} . \end{align*} Moreover, with the choice of $N$ solved from $s_1(\alpha) , s_2(\alpha)$, we have \begin{align*} T \sim \min \{ N^{(\alpha -s)(\frac{2}{\alpha} -4 - \frac{2\alpha+1}{2s-1}) + \alpha - \frac{1}{2}-} , N^{(\alpha -s) (\frac{2}{\alpha} -4) + 3\alpha -2+} \} = : N^p. \end{align*} Then as a consequence of the I-method, we establish the following polynomial bound of the global solution \begin{align*} \norm{u(T)}_{H^s (\Theta)} \lesssim \norm{Iu(T)}_{H^{\alpha} (\Theta)} \lesssim E(Iu(T))^{\frac{1}{2}} \sim N^{(\alpha -s)\frac{1}{\alpha}} = T^{(\alpha -s)\frac{p}{\alpha}} . \end{align*} Now we finish the proof of Theorem \ref{thm GWP}. \end{proof}
\section{Introduction}\label{scIntroduction} Reaching superconductivity at room temperature has {been the} focus of intense research activities in the last few years (see \cite{Pickard2020,Flores2020} for recent surveys). Especially promising results have been {achieved for} superhydrides, such as H$_3$S, with a transition temperature of 203 K at a pressure of 155 GPa \cite{Drozdov2015}, LaH$_{10}$ with a $T_c$ around 250 K at a pressure of 170 GPa or higher \cite{Liu2017,Drozdov2019,Somayazulu2019}, and YH$_{6}$ with {$T_c\simeq220\,\mathrm{K}$} at 166 - 237 GPa \cite{Troyan2019,Kong2019}. Very recent studies report room-temperature superconductivity (287 K) in a carbonaceous sulfur hydride at 267 GPa \cite{Snider2020}, and possibly even {a} higher {critical temperature} in a La superhydride mixed with ammonia borane \cite{Grockowiak2020}. A unifying aspect of these recently discovered high-temperature superconductors is the prevalent conventional electron-phonon mechanism that is responsible for the record high critical temperatures \cite{Flores2020}. The quest for room-temperature superconductivity in hydrides goes back to a proposal by Ashcroft \cite{Ashcroft1968}, stating that dense metallic atomic hydrogen could exhibit superconductivity at a very high critical temperature. The existence of {a} metallic {phase of} atomic hydrogen was first conceived by Wigner and Huntington in 1935 \cite{Wigner1935}. Since these seeding works, massive efforts have been devoted to {experimentally confirm such predictions at high pressures (see \cite{Mao1994,McMahon2012RMP}), eventually aiming for the final demonstration of high temperature superconductivity in this material. However, the formation of metallic atomic hydrogen at high pressure has been difficult to establish in diamond-anvil pressure cells. So far, some evidence for a metallic phase has been presented at various pressures, from 250 GPa to 495 GPa \cite{Hemley1989,Goncharov2001,Eremets2016,Dias2017,Ji2019,Loubeyre2020}, but the findings of these works are not yet unambiguously accepted by the entire scientific community. To understand better the formation of superconductivity in hydrides at high transition temperatures, theory can provide valuable insight. It is widely accepted that the conventional electron-phonon mechanism is at play, being enhanced by the small ionic mass of hydrogen, the large electron-ion Coulomb interaction, and relatively weak electron-electron interaction. Although the appearance of superconductivity has not yet been reported, first-principles crystal structure investigations have determined that atomic hydrogen will adopt the $I4_1/amd$ structure for a large pressure interval of $500 - 1000$ GPa \cite{Pickard2007,McMahon2011,Degtyarenko2016}. Advanced quantum Monte-Carlo calculations estimated the transition pressure of 374 GPa for the transition from the molecular phase to the atomic $I4_1/amd$ phase \cite{Azadi2014}. Superconductivity in the latter phase has been investigated using first-principles electronic structure calculations of the electron and phonon bands and their coupling, using the semi-empirical McMillan and Allen-Dynes equation \cite{McMahon2011,McMahon2012err,Yan2011} or by solving the isotropic Eliashberg equations \cite{Durajski2014,Borinaga2016}. The obtained transition temperatures $T_c$ are around room temperature for a Coulomb pseudopotential value $\mu^{\star} =0.10$. In this work we present a phonon-mode resolved analysis of metallic hydrogen in the superconducting state at pressures of $400$ and $600\,\mathrm{GPa}$, where the $I4_1/amd$ phase is prevalent. Our calculations are carried out with the Uppsala Superconductivity (\textsc{uppsc}) code \cite{UppSC,Aperis2015,Schrodi2018,Schrodi2019,Schrodi2020_2,Schrodi2020_3,Schrodi2020_4}. Specifically, we solve here the anisotropic Migdal-Eliashberg equations using first-principles electron energies, phonon frequencies, and electron-phonon couplings as input. The total electron-phonon coupling constant $\lambda\simeq2.32$ at $400\,\mathrm{GPa}$ contains dominant contributions from the $B_{1g}$ phonon mode, while the $A_{2u}$ mode has the smallest impact. The remaining $E_u$ and $E_g$ modes both contribute with comparable and substantial magnitude to $\lambda$. We find $T_c$ approximately as room temperature for a reasonable range of Coulomb pseudopotential values $\mu^{\star}$, which is consistent with previous investigations \cite{McMahon2012err,Durajski2014,Borinaga2016}. Selectively investigating each of the phonon modes reveals that the $E_u$ mode contributes most to the $T_c$, despite having a subdominant role concerning the electron-phonon coupling strength. We provide a further proof of this observation by increasing the pressure to $600\,\mathrm{GPa}$, where the critical temperature slightly increases, despite a reduction in electron-phonon coupling strength $\lambda\simeq2.09$. In accordance with the just described picture, our mode-resolved Eliashberg calculations reveal that this stems from an enhanced contribution from the $E_u$ mode. \section{Methodology}\label{scTheory} \subsection{First-principles calculations} We {perform} first-principles calculations within the density functional theory (DFT) framework using the Quantum Espresso package \cite{Giannozzi2009}. We {adopt} the $I4_1/amd$ crystal structure of atomic hydrogen that was predicted to be the stable structure over a large pressure range of 400 to 1000 GPa \cite{McMahon2011}. The exchange-correlation energy functional {is} treated within the generalized gradient corrected scheme of Perdew-Burke-Ernzerhof \cite{Perdew1996}. The interactions between valence electrons and core {are} treated within the projector-augmented-wave (PAW) approach and {the} plane wave basis set {is} constructed using an energy cut-off of 80 Ry. The Brillouin zone (BZ) integrations {are} carried out using a uniform dense $24\times24\times24$ Monkhorst-Pack {$\mathbf{k}$}-point grid. The phonon dispersions and electron-phonon couplings {are} calculated on a dense $12\times12\times12$ {$\mathbf{q}$}-point grid using density functional perturbation theory (DFTP). All free parameters of the crystal lattice {are} optimized at 400 and 600 GPa. Since anharmonicity shows a minor effect on the critical temperature \cite{Borinaga2016}, these effects are not considered in the present calculations. \subsection{Eliashberg theory calculations} From {\em ab initio} calculations we obtain branch $\nu$ and wave vector $\mathbf{q}$ dependent phonon frequencies $\omega_{\mathbf{q},\nu}$, as well as electron-phonon coupling constants $\lambda_{\mathbf{q},\nu}$ and quasiparticle lifetimes $\gamma_{\mathbf{q},\nu}$. By defining bosonic Matsubara frequencies $q_l=2\pi Tl$, $l\in\mathbb{Z}$, at temperature $T$ we obtain the dynamic electron-phonon couplings via \begin{align} \lambda_{\mathbf{q},l} = \sum_{\nu} \lambda_{\mathbf{q},\nu} \frac{\omega_{\mathbf{q},\nu}^2}{\omega_{\mathbf{q},\nu}^2 + q_l^2} ~. \label{lambdaql} \end{align} In the above we use the notation $g(\mathbf{q},iq_l)=g_{\mathbf{q},l}$ for any function $g$ for the sake of brevity. The couplings calculated from Eq.\,(\ref{lambdaql}) serve as input for the self-consistent anisotropic Eliashberg equations \begin{align} Z_{\mathbf{k},m} &= 1 + \frac{\pi T}{\omega_m} \sum_{\mathbf{k}',m'} \frac{\delta(\xi_{\mathbf{k}'})}{N_0} \lambda_{\mathbf{k}-\mathbf{k}',m-m'} \frac{\omega_{m'}}{\sqrt{\omega_{m'}^2 + \Delta_{\mathbf{k}',m'}^2}} ~, \label{z} \\ \Delta_{\mathbf{k},m} &= \frac{\pi T}{Z_{\mathbf{k},m}} \sum_{\mathbf{k}',m'} \frac{\delta(\xi_{\mathbf{k}'})}{N_0} \left[ \lambda_{\mathbf{k}-\mathbf{k}',m-m'} - \mu^{\star}(\omega_c)\right] \nonumber\\ & ~~~~~~~~~~ \times \frac{\Delta_{\mathbf{k}',m'}}{\sqrt{\omega_{m'}^2 + \Delta_{\mathbf{k}',m'}^2}} , \label{delta} \end{align} describing the electron mass renormalization $Z_{\mathbf{k},m}$ and superconducting gap function $\Delta_{\mathbf{k},m}$ \cite{Aperis2015}. Again we write $f(\mathbf{k},i\omega_m)=f_{\mathbf{k},m}$, now with fermion Matsubara frequencies $\omega_m=\pi T(2m+1)$, $m\in\mathbb{Z}$. We use $\mu^{\star}$ as Anderson-Morel Coulomb pseudopotential, which enters Eq.\,(\ref{delta}) with a Matsubara frequency cutoff $\omega_c$. The critical temperature $T_c$ is defined as the smallest $T$ at which the self-consistent solution to Eqs.\,(\ref{z}-\ref{delta}) yields a vanishing superconducting gap. The electron density of states $N_0$ at the Fermi level is calculated via the adaptive smearing method, namely \begin{align} N_0 = \sum_{\mathbf{k},n} \frac{1}{\sqrt{2\pi}} \frac{1}{W_{\mathbf{k},n}} \exp\Big( -\frac{\xi_{\mathbf{k},n}^2}{2W_{\mathbf{k},n}^2} \Big) ~, \label{N0} \end{align} where the broadening tensor is defined as \begin{align} W_{\mathbf{k},n} = a\cdot\Delta k\cdot \Big| \frac{\partial \xi_{\mathbf{k},n}}{\partial \mathbf{k}} \Big| ~, \label{wsmear} \end{align} in combination with the Methfessel-Paxton scheme\,\cite{Methfessel1989}. In Eq.\,(\ref{wsmear}), $\Delta k$ is the momentum resolution and $a$ can be chosen $\mathcal{O}(1)$\,\cite{Yates2007}. Furthermore, $\xi_{\mathbf{k},n}$ is the electron dispersion as computed from DFT, with $\mathbf{k}$ a Brillouin zone (BZ) momentum and $n$ a band index. We consider here only electronic states at the Fermi level, hence our calculations are carried out for the two partially occupied energy bands (shown further below). We obtain a more simplified estimate of $T_c$ by employing the semi-empirical McMillan equation\,\cite{McMillan1968}, including a modification due to Allen and Dynes \cite{Allen1975}, \begin{align} T_c = \frac{\omega_{\mathrm{\log}}}{1.2} \exp\Big( \frac{-1.04(1+\lambda)}{\lambda (1 - 0.62\mu^{\star}) - \mu^{\star}}\Big) ~. \end{align} Here $\lambda$ is the total electron-phonon coupling constant, \begin{align} \lambda &=\sum_{\mathbf{q},\nu}\lambda_{\mathbf{q},\nu} \label{lambda_sum} \\ & = 2\int_0^{\infty} \frac{\alpha^2F(\omega)}{\omega} \mathrm{d}\omega \label{lambda_int} , \end{align} and $\alpha^2 F(\omega)$ is the real-frequency $\omega$ dependent Eliashberg function, given as \begin{align} \alpha^2F(\omega) = \frac{1}{2\pi N_0} \sum_{\mathbf{q},\nu} \delta(\omega-\omega_{\mathbf{q},\nu}) \frac{\gamma_{\mathbf{q},\nu}}{\omega_{\mathbf{q},\nu}} ~. \label{a2F} \end{align} The characteristic phonon energy scale $\omega_{\rm log}$ is defined as \begin{align} \omega_{\mathrm{log}} = \exp\Big( \frac{2}{\lambda} \int_0^{\infty} \frac{\mathrm{d}\omega}{\omega} \alpha^2F(\omega) \log(\omega) \Big) ~. \end{align} \begin{figure}[h!] \includegraphics[width=0.7709\linewidth]{fig1.png} \caption{\textit{Ab initio} calculated electronic properties of metallic hydrogen at 400 GPa. (a) Electronic band structure along the high symmetry directions of the BZ. (b) Electronic density of states (DOS) function, shown in orange color. The light green curve show the DOS results of the $Cmca$-8 phase of molecular hydrogen {\cite{Pickard2007}}. (c) Computed Fermi-surface sheets, colored according to the Fermi velocity values, starting with blue (lowest velocities) -- light green (medium range velocities) -- red colors (highest velocities), (d) The COHP functions. The orange shaded area shows the result for atomic hydrogen and the blue curve (grey shaded) shows the result for the $Cmca$-8 phase of molecular hydrogen. Negative COHP values represent bonding interactions and positive COHP values represent anti-bonding interactions.} \end {figure} \section{Results}\label{scResults} We begin by calculating the electronic properties of metallic hydrogen in the optimized $I4_1/amd$ structure. The results of our electronic structure calculations at a pressure of 400 GPa are presented in Fig.\ 1. Figure 1(a) shows the computed electronic bandstructure plotted along high-symmetry directions in the {BZ}. The electronic states are highly dispersive forming very wide bands which reflects their nearly free electron nature. Two bands cross the Fermi level and are responsible for metallicity. One band corresponds to the bonding $s$-orbital and {the} other {one} to {the} antibonding $s$-orbital. The antibonding state is mostly unoccupied and crosses along the $\Gamma -$Z direction. Our calculated electron bandstructure agrees well with previously reported results \cite{Borinaga2016,Degtyarenko2016,Kudryashov2017}. The electronic density of states (DOS), shown in Fig.\ 1(b) with the orange color, is also consistent with the free electron behavior, being nearly parabolic below to Fermi level. Note that the DOS value at the Fermi level is higher than that of metallic molecular hydrogen in the $Cmca$-8 phase \cite{Pickard2007}, shown in green. Figure 1(c) shows the calculated 3D Fermi surface of atomic hydrogen metal at 400 GPa, which consists of two sheets corresponding to the bonding and antibonding states. The bonding state leads to ribbon-like hole Fermi surface sheets and the antibonding state leads to a concave lens shaped electron Fermi sheet at Z (the Fermi surfaces were rendered using FermiSurfer software \cite{Kawamura2019}). The sheets in Fig.\ 1(c) are colored according to the values of the Fermi velocities; the high Fermi velocities correspond to the free electron nature. The covalent character of the H-H bonds is investigated by calculating the crystal orbital Hamiltonian populations (COHP) functions \cite{Dronskowski1993,Grechnev2003,Steinberg2018,Andersen,Verma2018} which counts the population of wavefunctions on two atomic orbitals of a pair of atoms (shown in Fig.\ 1(d)). In a given energy window, negative values of COHP describe bonding interactions whereas positive values of COHP describe anti-bonding interactions. This analysis shows that the overlap of nearest hydrogen states below the Fermi level are bonding states. The H-H bond in molecular phase (grey shaded area) has stronger covalent character than that of atomic phase. The integrated COHP values (computed with the code of Ref.\ \cite{Andersen}) are 1.30 and 3.27 eV/H-H for atomic and molecular phases, respectively. \FloatBarrier The computed phonon dispersions of metallic hydrogen at 400 GPa (not shown) are very similar to the previously reported phonon dispersions \cite{Borinaga2016}. After this we turn our attention to the superconducting properties of metallic hydrogen. In Fig.\,\ref{lambdaA2F}(a) we show our convergence study of {the global electron-phonon coupling strength} $\lambda$, as obtained from Eq.\,(\ref{lambda_sum}), as function of smearing $\sigma$, which is used by Quantum Espresso to compute the electron-phonon coupling coefficients $\lambda_{\mathbf{q},\nu}$. The results show good convergence for $\sigma\gtrsim0.04\,\mathrm{Ry}$, and therefore this value of $\sigma$ will be used from here on. The converged value of $\lambda$ is {2.32, which} is in agreement with the {$\lambda=2.08$} computed for molecular hydrogen at 450 GPa \cite{Cudazzo2008}. The coupling coefficient is also consistent with values for H$_3$S at 200 GPa (2.19) \cite{Duan2014}, LaH$_{10}$ at 250 GPa (2.29) \cite{Liu2017}, and YH$_{10}$ at 400 GPa (2.41) \cite{Peng2017}. \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{fig2.pdf} \caption{(a) Global coupling constant as function of broadening. (b) Frequency dependent Eliashberg function for a broadening value of 0.04 Ry (blue curve). The red curve shows the cumulative coupling strength calculated from Eq.\,(\ref{lambda_int}). The purple dashed line represents $\lambda$ as obtained from Eq.\,(\ref{lambda_sum}).}\label{lambdaA2F} \end{figure} Now we turn to the analysis of how individual phonon modes contribute to the electron-phonon couplings. For this we {calculate} the Eliashberg function $\alpha^2F(\omega)$, shown in Fig.\,\ref{lambdaA2F}(b) in blue. The most prominent contributions appear at $\omega\sim100\,\mathrm{meV}$ and $\omega\sim250\,\mathrm{meV}$. This is further emphasized by the red curve representing the cumulative electron-phonon coupling as calculated from Eq.\,(\ref{lambda_int}). The aforementioned frequencies lead to the steepest increase in $\lambda$ with $\omega$. As crosscheck, we calculated the total electron-phonon coupling using Eq.\,(\ref{lambda_sum}), shown in dashed purple. Both calculations yield identical values of $\lambda$. \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{fig3.pdf} \caption{(a) Self-consistently computed maximum superconducting gap for atomic hydrogen at 400 GPa as function of $T$ and screened Coulomb potential $\mu^{\star}$. The critical temperatures according to Eliashberg theory and the modified McMillan equation are drawn in solid red and dashed black lines, respectively. (b) Calculated critical temperature $T_c$ plotted against $\mu^{\star}$.}\label{gaptc} \end{figure} Next we solve the Eliashberg equations as function of $T$ and pseudopotential $\mu^{\star}$, using the first-principles input computed for $\sigma=0.04\,\mathrm{Ry}$. We show the result for the maximum superconducting gap $\Delta=\mathrm{max}_{\mathbf{k}}\,\Delta_{\mathbf{k},m=0}$ in Fig.\,\ref{gaptc}(a). In solid red we indicate the onset of superconductivity, hence the critical temperatures. Above we mention another recipe of calculating $T_c$ by means of the modified McMillan equation; the outcome is plotted as dashed black line. We make the $\mu^{\star}$-dependence of $T_c$ explicit in Fig.\,\ref{gaptc}(b), where we show the $T_c$ corresponding to room temperature in solid blue. As apparent, when using the modified McMillan equation we underestimate the critical temperature, in comparison to the solution of the more accurate Eliashberg equations, for all values of $\mu^{\star}$. The dashed black line stays below room temperature even in the complete absence of pair-breaking Coulomb repulsion. The red solid line, on the other hand, predicts room temperature superconductivity for values of $\mu^{\star}$ up to $\sim0.14$. We now turn to the question about the most significant phonon branches. The irreducible representations in this system are $B_{1g}$ (one mode), $E_g$ (two modes), $A_{2u}$ (one mode), and $E_u$ (two modes). We split $\lambda_{\mathbf{q},\nu}$, $\gamma_{\mathbf{q},\nu}$, and $\omega_{\mathbf{q},\nu}$ according to these subsets, and repeat the calculation of $\lambda$ and $\alpha^2F(\omega)$, respectively via Eq.\,(\ref{lambda_sum}) and Eq.\,(\ref{a2F}). The relative contribution to the electron-phonon coupling due to the different phonon modes is shown as inset in Fig.\,\ref{splita2F}. In the main graph, we plot the partial Eliashberg functions arising from each irreducible representation. Concerning $\alpha^2F(\omega)$, we clearly see that each subset of phonon modes contributes mainly in a respective characteristic frequency range. As for the magnitude of $\lambda$, the largest (smallest) contributions are due to $B_{1g}$ ($A_{2u}$), while $E_u$ and $E_g$ are on a comparable intermediate level. \begin{figure}[t!] \centering \includegraphics[width=1\linewidth]{fig4.pdf} \caption{Frequency dependent Eliashberg function of atomic hydrogen at 400 GPa produced by the four different irreducible representations. The inset shows the contributions of $B_{1g}$, $E_g$, $A_{2u}$ and $E_u$ to the global electron-phonon coupling constant. The same colors are used for the main graph and the inset.}\label{splita2F} \end{figure} \begin{figure}[b!] \centering \includegraphics[width=1\linewidth]{fig5.pdf} \caption{Calculated transition temperatures as function of the Coulomb pseudopotential. The red thick line represents our result for including all modes in the system. The remaining four solid lines are found by neglecting one subset of phonon modes at a time, see legend. The dotted yellow curve is calculated from the $E_u$ modes only.}\label{splitTc} \end{figure} We want to examine how the different phonon modes affect the superconducting transition temperature. The $\mu^{\star}$ dependent result for $T_c$ as obtained from the full calculation is shown in Fig.\,\ref{splitTc} as fat, light red curve. The calculations are now repeated by selectively leaving out one particular subset of phonon modes. For example, the blue line in Fig.\,\ref{splitTc} is found by taking into account only the $E_g$, $A_{2u}$ and $E_u$ irreducible representations, i.e., neglecting any influence due to $B_{1g}$. From this we observe that the smallest decrease in $T_c$ is found when leaving out either the $B_{1g}$ or the $A_{2u}$ modes, hence their significance for superconductivity is comparatively minor. The largest loss in $T_c$ is found when excluding the $E_u$ modes, see the dark red curve. To investigate this representation closer, we perform calculations with the $E_u$ modes only, shown as dotted yellow curve in Fig.\,\ref{splitTc}, and find a maximum $T_c\sim140\,\mathrm{K}$. This contribution is significantly larger than found for any other isolated irreducible representation (not shown). Hence we conclude that phonon modes belonging to the $E_u$ representation are most important for the high-temperature superconducting state. \begin{figure}[b!] \centering \includegraphics[width=1\linewidth]{fig6.pdf} \caption{(a) Computed global coupling constant $\lambda$ as function of broadening for atomic hydrogen at a pressure of 600 GPa. (b) The critical temperature versus Coulomb pseudopotential $\mu^{\star}$ computed with Eliashberg theory and with the modified McMillan equation. (c) As Fig.\,\ref{splita2F}, but for atomic hydrogen at {a pressure of 600} GPa.}\label{lambdaTc600GPa} \end{figure} We performed similar calculations for atomic hydrogen at 600 GPa. As shown in Fig.\ \ref{lambdaTc600GPa}(a), we find a slight decrease in the value of electron-phonon coupling, $\lambda\,=\,2.09$, but overall, as can be seen in Fig.\ \ref{lambdaTc600GPa}(b)}, $T_c$ increases slightly. Although this behavior may seem counter-intuitive, it can be explained by the way different phonon modes contribute to superconductivity. Despite the small decrease in the total electron-phonon coupling, we now find an increased coupling to $E_u$ symmetry modes, as illustrated in the inset of Fig.\ \ref{lambdaTc600GPa}(c), which leads to the increase in transition temperature. Hence, this underlines the dominant contribution stemming from the $E_u$ phonon modes. \\ \section{CONCLUSIONS} In summary, we have reported a detailed analysis of the superconducting properties of metallic atomic hydrogen under high pressure conditions. To this end, we solved the anisotropic Migdal-Eliashberg equations, in combination with first-principles input results for the electron energies, phonons, and electron-phonon couplings. Our calculations show that, although the H-H covalent bond has weakened in atomic hydrogen metal as compared to the molecular hydrogen phase, it still has a substantial amount of covalent character. Further we find that metallic atomic hydrogen exhibits above room temperature superconductivity for reasonable values of the screened Coulomb pseudopotential $\mu^{\star}$. Analyzing which modes contribute most, we find that the high transition temperature is mainly due to the $E_u$ phonon modes. The critical transition temperature shows only a slight increase with pressure. \begin{acknowledgments} A.K.V.\ and P.M.\ acknowledge the support of ANUPAM supercomputing facility of BARC. F.S., A.A., and P.M.O.\ acknowledge support by the Swedish Research Council (VR), the R{\"o}ntgen-{\AA}ngstr{\"o}m Cluster, the Knut and Alice Wallenberg Foundation (No.\ 2015.0060), and the Swedish National Infrastructure for Computing (SNIC). \end{acknowledgments} \bibliographystyle{apsrev4-1}
\section{Introduction} Heavy-ion collision experiments conducted at particle accelerators allow us to study the properties of fundamental constituents of nature, such as quarks and gluons. Experiments at Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) indicate the formation of such a deconfined medium of quarks and gluons. The partonic medium so produced, behaves like a strongly interacting liquid with a small value of shear viscosity ($\eta$) to entropy density ($s$) ratio $(\eta/s)$, cools down with expansion, undergoes a transition to the hadronic phase and finally free streams to the detector. One of the successful descriptions of the bulk evolution of such strongly interacting matter has been through relativistic hydrodynamics. Transport coefficients are important input parameters that enter in such a dissipative hydrodynamic description as well as in transport simulations that have been used to describe the evolution of such matter produced in a heavy-ion collision. Hydrodynamic studies of the heavy-ion collisions suggest that the medium produced has a very small ratio of shear viscosity to entropy density ($\eta/s$)\cite{Heinz:2013th, Romatschke:2007mq, Kovtun:2004de}. It is amongst the smallest of known materials suggesting the quark-gluon plasma (QGP) formed is the most perfect fluid. The value of this ratio estimated from experiments is also found to be close to the conjectured KSS bound on the value of $\eta/s$\cite{Kovtun:2004de}. Just like shear viscosity determines the response to transverse momentum gradients there are other transport coefficients such as bulk viscosity, electrical conductivity, etc. which determine the response of the system to other such perturbations. Bulk viscosity \cite{Dobado:2012zf, Sasaki:2008fg, Sasaki:2008um,Karsch:2007jc,Finazzo:2014cna,Jeon:1995zm} determines the response to bulk stresses. It scales with the conformal anomaly ($\frac{\epsilon - 3P}{T^4}$) and is expected to be large near the phase transition as inferred from lattice calculations \cite{Bazavov:2009zn,Bazavov:2010pg}. The effect of such a large bulk viscosity to entropy ratio have been investigated on the particle spectrum and flow coefficients \cite{Bozek:2009dw,Rose:2014fba}. Electrical conductivity ($\sigma_{el}$) \cite{Tuchin:2010gx, Tuchin:2010vs, Inghirami:2016iru, Das:2017qfi, Greif:2016skc, Greif:2014oia, Puglisi:2014pda, Puglisi:2014sha, Cassing:2013iz, Steinert:2013fza, Aarts:2014nba, Aarts:2007wj, Amato:2013naa, Gupta:2003zh, Ding:2010ga, Kaczmarek:2013dya, Qin:2013aaa, Marty:2013ita, FernandezFraile:2005ka} is also important as the heavy-ion collisions may be associated with large electromagnetic fields. The magnetic field produced in non-central collisions has been estimated to be of the order of $\sim m_\pi^2$ at RHIC energy scales \cite{Kharzeev:2007jp,Skokov:2009qp,Li:2016tel,Inghirami:2019mkc,Inghirami:2018ziv,Shokri:2017xxn,Shokri:2018qcu,Tabatabaee:2020efb}. Such magnetic fields are amongst the strongest magnetic fields produced in nature and can affect various properties of the strongly interacting medium. They may also lead to interesting CP-violating effects such as chiral magnetic effect etc \cite{Kharzeev:2012ph}. In a conducting medium, the evolution of the magnetic field depends on the electrical conductivity. Electrical conductivity modifies the decay of the magnetic field substantially in comparison with the decay of the magnetic field in vacuum. Hence the estimation of the electrical conductivity of the strongly interacting medium is important regarding the decay of the magnetic field produced at the initial stage of heavy ion collision. These transport coefficients have been estimated in perturbative QCD and effective models\cite{Greif:2017byw, Prakash:1993bt, Wiranata:2012br, Chakraborty:2010fr, Khvorostukhin:2010aj, Plumari:2012ep, Gorenstein:2007mw, NoronhaHostler:2012ug, Tiwari:2011km, Ghosh:2013cba, Lang:2015nca, Ghosh:2014qba, Wiranata:2014kva, Wiranata:2012vv, NoronhaHostler:2008ju, Kadam:2014cua, Kadam:2014xka, Ghosh:2014yea, Rose:2017bjz, Wesp:2011yy, Greif:2014oia}. At finite baryon densities, the other transport coefficient that is relevant is the coefficient of thermal conductivity and has been studied in \cite{Denicol:2012vq, Kapusta:2012zb} both in the hadronic matter as well as partonic matter. In the present investigation, we focus on the thermoelectric response of the strongly interacting quark matter produced in a heavy-ion collision. It is well known from condensed matter systems that in a conducting medium, a temperature gradient can result in the generation of an electric current known as the Seebeck effect. Due to a temperature gradient, there is a non zero gradient of charge carriers leading to the generation of an electric field. A measure of the electric field produced in such a conducting medium due to a temperature gradient is the Seebeck coefficient which is defined as the ratio of the electric field to the temperature gradient in the limit of vanishing electric current. Seebeck effect has been extensively studied in condensed matter systems such as superconductors, quantum dots, high-temperature cuprates, superconductor-ferromagnetic tunnel junctions, low dimensional organic metals, \cite{seebconds1,seebconds2,seebconds3,seebconds4,seebconds5,seebconds6,seebconds7,seebconds8,seebconds9}. Such a phenomenon could also be present in the thermal medium created in heavy-ion collisions. It may further be noted that, in condensed matter systems a temperature gradient is sufficient for thermoelectric effect as there is only one type of dominant charge carriers in these systems. In the strongly interacting medium produced in heavy-ion collisions, on the otherhand, both positive and negative charges contribute to transport phenomena. For vanishing baryon chemical potential (quark chemical potential) with equal numbers of particles and antiparticles there is no net thermoelectric effect. A finite baryon chemical potential (quark chemical potential) is required for the thermoelectric effect to be observed. The strongly interacting matter at finite baryon density can be produced in low energy heavy-ion collisions at finite, e.g. at FAIR and NICA. Along with the temperature gradient, we also consider a gradient in the baryon (quark) chemical potential to estimate the Seebeck coefficient of the partonic medium. The gradient in the chemical potential has effects similar to the temperature gradient. Using Gibbs Duhem relation for a static medium one can express gradient in the baryon (quark) chemical potential to a gradient in temperature. Effect of the chemical potential gradient significantly affects the thermoelectric coefficients as has been demonstrated in Ref.\cite{Das:2020beh} for hadronic system. Seebeck effect in the hadronic matter has been investigated previously by some of us within the framework of the Hadron resonance gas model \cite{Bhatt:2018ncr,Das:2020beh}. However, the Hadron resonance gas model can only describe the hadronic medium at chemical freezeout whereas one expects deconfined partonic medium at the early stages of the heavy-ion collisions. In this investigation, we estimate the thermoelectric behavior of the partonic medium within the framework of the NJL model. Seebeck coefficient has also been estimated for the partonic matter including effects of a magnetic field within a relaxation time approximation in Ref.s\cite{Dey:2020sbm,Zhang:2020efz}. However, this has been attempted with the relaxation time estimated within perturbative QCD which may be valid only for asymptotically high temperatures. Further, it ought to be mentioned that, the vacuum structure of QCD remain nontrivial near the critical temperature region with nonvanishing values for the quark-antiquark condensates associated with chiral symmetry breaking as well as Polyakov loop condensates associated with the physics of statistical confinement \cite{Singha:2017jmq,Abhishek:2017pkp,Singh:2018wps,Ratti:2005jh}. Indeed, within the ambit of the NJL model, it was shown that the temperature dependence of viscosity coefficients exhibits interesting behavior of phase transition with the shear viscosity to entropy ratio showing a minimum while the coefficient of bulk viscosity showing a maximum at the phase transition \cite{Deb:2016myz,Singha:2017jmq,Abhishek:2017pkp}. The crucial reason for this behavior was the estimation of relaxation time using medium dependent masses for the quarks as well as the exchanged mesons which reveal nontrivial dependence before and after the transition temperature. This motivates us to investigate the behavior of thermoelectric transport coefficients within the NJL model which takes into account the medium dependence of quark and meson masses. This model has been used to study different transport properties of quark matter at high temperatures \cite{Marty:2013ita, Deb:2016myz, Rehberg:1995kh, Sasaki:2008um} and high densities\cite{Tsue:2012jz, Tatsumi2011, Menezes:2008qt, Chatterjee:2011ry, Mandal:2009uk, Mandal:2012fq, Mandal:2016dzg, Coppola:2017edn}. We organize the paper in the following manner. In Sec.~\eqref{formalism}, we discuss the Boltzmann equation within relaxation time approximation to have the expressions for the different thermoelectric transport coefficients when the quasi-particles have a medium dependent masses. In Sec.~\eqref{NJLmodel} we discuss thermodynamics and estimation of relaxation time within the two flavor NJL model. In Sec.~\eqref{results} we present the results of different transport coefficients. Finally, we give a possible outlook of the present investigation and conclude in Sec.~\eqref{conclusion}. \section{ Boltzmann equation in relaxation time approximation and transport coefficients} \label{formalism} Within a quasiparticle approximation, a kinetic theory treatment for the calculation of transport coefficient can be a reasonable approximation that we shall be following similar to that in Refs. \cite{Sasaki:2008fg,Sasaki:2008um,Chakraborty:2010fr,Khvorostukhin:2010aj,Khvorostukhin:2012kw,Zhuang:1995uf}. The plasma can be described by a phase space density for each species of particle. Near equilibrium, the distribution function can be expanded about a local equilibrium distribution function for the quarks as, $$ f(\vec{x},\vec{p},t) =f^{(0)}(\vec{x},\vec{p})+\delta f(\vec{x},\vec{p}, t), $$ where the local equilibrium distribution function $f^{(0)}$ is given as \begin{equation} f^{(0)}(\vec{x},\vec{p})=\left[\exp\left(\beta(\vec x)\left(u_\nu p^\nu\mp\mu(\vec{x})\right)\right)+1\right]^{-1}. \label{f0} \end{equation} Here, $u^\mu=\gamma_u(1,\vec u)$, is the flow four-velocity, where, $\gamma_u=(1-\vec u^2)^{1/2}$; $\mu$ is the chemical potential associated with a conserved charge. Here $\mu$ denotes the quark chemical potential and $\beta=1/T$ is the inverse of temperature. Further, $p^{\mu}=(E,\vec{p})$ is the particle four momenta, single particle energy $E=\sqrt{p^2+M^2}$ with $p=|\vec{p}|$. $M$ is the mass of the particle which in general is medium dependent. The departure from the equilibrium is described by the Boltzmann equation, \begin{equation} \frac{df_a(\vec{x},\vec{p},t)}{dt}=\frac{\partial f_a}{\partial t}+\frac{dx^i}{dt}\frac{\partial f_a}{\partial x^i} +\frac{dp^i}{dt}\frac{\partial f_a}{\partial p^i}=C^a[f], \label{boltzeq} \end{equation} where we have introduced the species index `$a$' on the distribution function. The right-hand side is the collision term which we shall discuss later. The left-hand side of the Boltzmann equation involves the trajectory $\vec{x}(t)$ and the momentum $\vec{p}(t)$. This trajectory, in general, not a straight line as the particle is moving in a mean-field, which, in general, can be space time-dependent. The velocity of the particle `$a$' is given by $$\frac{d x^i}{d t}=\frac{\partial E_a}{\partial p_a^i}=\frac{p_a^i}{E_a}=v_a^i.$$ Next, the time derivative of momentum , the force, in presence of an electric field $(\vec{\mathcal{E}})$, magnetic field $(\vec{B})$ and a mean field dependent mass can be written as $$\frac{dp^i}{dt}=-\frac{\partial E_a}{\partial x^i}+q_a(\mathcal{E}^i+\epsilon^{ijk}v_jB_k).$$ The time derivatives of $\vec x$ and $\vec p$ can be substituted on the left-hand side of the Boltzmann equation Eq.(\ref{boltzeq}) and the same reduces to \begin{equation} \frac{\partial f_a}{\partial t}+ v^i \frac{\partial f_a}{\partial x^i} +\frac{\partial f_a}{\partial p^i}\left(-\frac{M_a}{E_a}\frac{\partial M_a}{\partial x^i}+ q_a(\mathcal{E}^i+\epsilon^{ijk}v_jB_k)\right)=C^a[f]. \label{boltzeq1} \end{equation} For the collision term on the right-hand side, we shall be limiting ourselves to $2\rightarrow 2$ scatterings only. In the relaxation time approximation the collision term for species $a$, all the distribution functions are given by the equilibrium distribution function except the distribution function for particle $a$. The collision term, to first order in the deviation from the equilibrium function, will then be proportional to $\delta f_a$, given the fact that $C^a[f^{(0)}]=0$ by the principle of local detailed balance. In that case, the collision term is given by \begin{equation} C[f]=-\frac{\delta f_a}{\tau_a}, \end{equation} where, $\tau_a$, the relaxation time for particle `$a$'. In general relaxation time is a function of energy. We shall discuss more about it in the subsequent subsection where we estimate it within the NJL model. Returning back to the left-hand side of Eq.(\ref{boltzeq1}), we keep up to the first order in gradients in space-time. The left-hand side of the Boltzmann equation Eq.(\ref{boltzeq1}), is explicitly small because of the gradients and we, therefore, may replace $f_a$ by $f^{(0)}_a$. While the spatial derivative of the distribution function is given by, \begin{equation} \frac{\partial f^{(0)}_a}{\partial x^i}=-f^{(0)}_a(1-f^{(0)}_a)\partial_i(\beta E_a-\beta\mu_a)= -f^{(0)}_a(1-f^{(0)}_a)\left(-\frac{E_a}{T^2}\partial_i T+\beta\frac{M_a}{E_a}\frac{\partial M_a}{\partial x^i} -\partial_i(\beta\mu_a)\right), \label{delxf} \end{equation} here $\mu_a = b_a\mu$, $b_a$ being the quark number, i.e. $b_a=1$ for quarks and $b_a=-1$ for antiquarks. The momentum derivative of the equilibrium distribution function is given by, \begin{equation} \frac{\partial f^{(0)}_a}{\partial p^i}=-\frac{1}{T}f^{(0)}_a(1-f^{(0)}_a)v_a^i. \label{delpf} \end{equation} Substituting Eqs.\eqref{delpf} and \eqref{delxf} in the Boltzmann equation Eq.(\ref{boltzeq1}) for the static case (where the distribution function is not an explicit function of time) in the absence of magnetic field we have \begin{equation} -f_a^{(0)}(1-f_a^{(0)})\left[v_a^i\left(-\frac{1}{T^2}\partial_iT E_a-\partial_i (\beta\mu_a)\right) +q_a\beta v_a^i\mathcal{E}^i\right]=-\frac{\delta f_a}{\tau_a}. \label{boltztau} \end{equation} The spatial gradients of temperature and chemical potential can be related using momentum conservation in the system and Gibbs Duhem relation. Momentum conservation in a steady-state leads to $\partial_i P=0$ ( $P$, being the pressure)\cite{Gavin:1985ph}. Using Gibbs Duhem relation, the pressure gradient can be written as, with the enthalpy $\omega=\epsilon+P$, \begin{equation} \partial _i P= \frac{\omega}{T}\partial_i T+T n_q\partial_i(\mu/T) \end{equation} which vanishes in steady-state. $n_q$ denotes the net quark number density and $\epsilon$ is the energy density. The above equation relates the spatial gradient of temperature to the spatial gradients in chemical potential as, \begin{equation} \partial_i\mu=\left(\mu-\frac{\omega}{n_q}\right)\frac{\partial_i T}{T}. \label{tmuder} \end{equation} Using Eq.\eqref{tmuder} and Eq.\eqref{boltztau}, $\delta f_a$, the deviation of the distribution function is given as, \begin{equation} \delta f_a=\frac{\tau_a f_a^0(1-f_a^0)}{T}\left[q_a\vec{v}_a\cdot\vec{\mathcal{E}}-\left(E_a-b_a\frac{\omega}{n_q}\right)\frac{\vec v_a\cdot\vec{\nabla} T}{T}\right]. \label{delfa} \end{equation} The nonequilibrium part of the distribution function gives rise to transport coefficients. The electric current is now given as, \begin{eqnarray} \vec J& = &\sum_a g_a\int \frac{d^3p_a}{(2\pi)^3}~q_a\vec{v}_a~\delta f_a\nonumber\\ &=& \sum_a\frac{g_aq_a^2}{3T}\int \frac {d^3 p_a}{(2\pi)^3}~v_a^2\tau_a f_a^0(1-f_a^0)~ \vec{\mathcal{E}}\nonumber\\ &-& \sum_a\frac{g_aq_a}{3T^2}\int \frac{d^3 p_a}{(2\pi)^3}~\tau_a\left(E_a-b_a\frac{\omega}{n_q}\right)f_a^0(1-f_a^0) v_a^2~\vec{\nabla}T. \label{jelec} \end{eqnarray} In Eq.\eqref{jelec} we have used $ v_a^iv_a^j=\frac{1}{3}v_a^2\delta^{ij}$ as because the integrand only depends on the magnitude of momenta. Further, the sum is over all flavors including antiparticles. The degeneracy factor $g_a=6$ corresponding to color and spin degrees of freedom. $b_a$ is the quark number i.e. $b_a=\pm 1$ for quarks and antiquarks respectively. Next, we write down the heat current $\vec{\mathcal{I}}$ associated with the conserved quark number. For a relativistic system, thermal current arises corresponding to a conserved particle number. The thermal conduction due to quarks arises when there is energy flow relative to enthalpy~\cite{Gavin:1985ph}. Therefore the heat current is defined as \cite{Gavin:1985ph}, \begin{equation} \mathcal{I}^i=\sum_aT_a^{0i}-\frac{\omega}{n_q}\sum_ab_a J_{qa}^i. \label{hcurr} \end{equation} Here, $n_q$ is the net quark number density. The energy flux is given by $T^{0i}$, the spatio-temporal component of energy-momentum tensor ($T^{\mu\nu}$)\cite{Gavin:1985ph}, \begin{equation} T^{0i}_a=g_a\int\frac{d^3 p_a}{(2\pi)^3}p_a^if_a. \label{EMtensor} \end{equation} While, quark current is given $\vec J_q$ is given by \begin{equation} J^i_{qa}=g_a\int\frac{d^3 p_a}{(2\pi)^3}\frac{p^i_a}{E_a}f^{}_a b_a, \label{qcurr} \end{equation} Clearly, the contribution to the energy flux and quark current vanishes arising from the equilibrium distribution function $f_a^{(0)}$ due to symmetry consideration and it is only the nonequilibrium part $\delta f_a$ that contribute to the energy flux and quark current in Eqs.\eqref{EMtensor} and Eq.\eqref{qcurr} respectively. Substituting the expression for $\delta f_a$ from Eq.\eqref{delfa} in Eq.\eqref{hcurr}, the heat current $\vec{\mathcal{I}}$ is given as, \begin{equation} \vec{\mathcal{I}}=\sum_a \frac{g_a}{3T}\int \frac{d^3 p_a}{(2\pi)^3}f_a^0(1-f_a^0) v_a^2\tau_a\left[q_a\left(E_a-b_a\frac{\omega}{n_q}\right)\vec{\mathcal{E}} -\left(E_a-b_a\frac{\omega}{n_q}\right)^2\frac{\vec{\nabla}T}{T}\right] \label{hcurr1} \end{equation} The Seebeck coefficient $ S$ is defined by setting the electric current $\vec J=0$ in Eq.(\ref{jelec}) so that the electric field becomes proportional to the temperature gradient i.e. \begin{equation} \vec{\mathcal{E}}=S\vec{\nabla}T. \end{equation} Therefore the Seebeck coefficient for the quark matter in the presence of a gradient in temperature and chemical potential can be expressed as, \begin{equation} S=\frac{\sum_a\frac{g_aq_a}{3T}\int \frac {d^3 p_a}{(2\pi)^3}\tau_a v^2 \left(E_a-b_a\frac{\omega}{n_q}\right)f_a^{(0)}(1-f_a^{(0)})}{T \sum_a\frac{g_a}{3T}q_a^2\int\frac{d^3 p_a}{(2\pi)^3} v^2 \tau_a f_a^{(0)}(1-f_a^{(0)})} \label{seeq} \end{equation} The denominator of the Seebeck coefficient in the above may be identified as $T \sigma_{el}$, where the electrical conductivity $\sigma_{el}$ is given by\cite{Puglisi:2014sha,Kadam:2017iaz}, \begin{equation} \sigma_{el}= \sum_a\frac{g_a}{3T}q_a^2\int\frac{d^3 p_a}{(2\pi)^3} \left(\frac{p_a}{E_a}\right)^2 \tau_a f_a^{(0)}(1-f_a^{(0)}) \label{econd} \end{equation} which may be identified from Eq.\eqref{jelec}. Let us note that, while the denominator of the Seebeck coefficient is positive definite, the numerator is not so as it is linearly dependent on the electric charge of the species as well as on the difference $(E_a-b_a\frac{\omega}{n_q})$. This makes the Seebeck coefficient not always positive definite. This is also observed in different condensed matter systems \cite{Zhou:2020}. In terms of the electrical conductivity and the Seebeck coefficient, the electric current Eq.(\ref{jelec}) can be written as \begin{equation} \vec J=\sigma_{el}\vec{\mathcal{E}}-\sigma_{el}S\vec\nabla T. \label{equnew19} \end{equation} In a similar manner, the heat current as given in Eq.(\ref{hcurr1}) can be written as, \begin{equation} \vec{\mathcal{I}}=T\sigma_{el}S\vec{\mathcal{E}}-\kappa_0 \vec\nabla T, \label{equnew20} \end{equation} where, $\kappa_0$, the thermal conductivity can be written as\cite{Gavin:1985ph} \begin{equation} \kappa_0=\sum_a\frac{g_a}{3T^2}\int \frac{d^3 p_a}{(2\pi)^3} \tau_a\left(\frac{p_a}{E_a}\right)^2 \left(E_a-b_a\frac{\omega}{n_q}\right)^2f_a^{(0)}(1-f_a^{(0)}). \label{tcond} \end{equation} Using Eqs.\eqref{equnew19} and \eqref{equnew20}, we can express the heat current ($\vec{\mathcal{I}}$) in terms of electric current ($\vec{J}$) in the following way, \begin{equation} \vec{\mathcal{I}}=T S \vec{J}-\left(\kappa_{0}-T \sigma_{el} S^{2}\right) \vec{\nabla} T. \label{equnew22} \end{equation} From Eq.\eqref{equnew22} we can identify the Peltier coefficient ($\Pi$) and thermal conductivity ( $k$ ) in the presence of nonvanishing Seebeck coefficient as, \begin{equation} \Pi=T S, ~~ \kappa=\kappa_{0}- T \sigma_{el} S^{2}. \label{equnew23} \end{equation} Note that the relation between the Peltier coefficient ($\Pi$) and the Seebeck coefficient as given in Eq.\eqref{equnew23} can be considered as the consistency relation. Also, note that the thermal conductivity in the absence of any thermoelectric effect as given in Eq.\eqref{tcond} matches with the expression of the thermal conductivity as reported in \cite{Gavin:1985ph}. The Seebeck coefficient ($S$), thermal conductivity ($\kappa_0$), and the electrical conductivity ($\sigma_{el}$) depend upon, the estimation of the relaxation time as well as the quark masses that goes into the distribution functions through the single-particle energies and are medium dependent. We estimate these quantities in the Nambu-Jona-Lasinio model which is described in the next section. \section{Estimation of relaxation time in NJL model} \label{NJLmodel} We model the partonic medium using the two flavor Nambu-Jona-Lasinio (NJL) model and estimate the thermodynamic quantities, the quasi particle masses in the medium and the relaxation time. The two flavour NJL model with $u$ and $d$ quark, can be described by the following Lagrangian~\cite{Buballa:2003qv}, \begin{equation} \mathcal {L}=\bar\psi(i\slashed{\partial}-m_q)\psi+G\left[(\bar\psi\psi)^2+(\bar\psi i\gamma^5\vec\tau\psi)^2\right]. \label{njllag} \end{equation} Here, $\psi$ is the doublet of $u$ and $d$ quarks; $m_q$ is the current quark mass matrix which is diagonal with elements $m_u$ and $m_d$ and we take them to be same as $m_0$ assuming isospin symmetry; $\vec \tau$ are the Pauli matrices in the flavor space; $G$ is the scalar coupling. NJL model is a QCD inspired effective model which incorporates various aspects of the chiral symmetry of QCD. The NJL model Lagrangian as given in Eq.~\eqref{njllag} is symmetric under the chiral symmetry group $SU(2)_V\times SU(2)_A\times U(1)_V$. The thermodynamic quantities e.g., pressure ($P$), energy density ($\epsilon$) and the number density $(n_q)$ can be obtained once we know the thermodynamic potential of the NJL model. In a grand canonical ensemble, the thermodynamic potential ($\Omega$) or equivalently the pressure ($P$) can be expressed as, \begin{equation} -P=\Omega(\beta,\mu)=\frac{(M-m_0)^2}{4G}-\frac{2N_cN_f}{(2\pi)^3\beta}\int d\vec k\left[\log(1+e^{-\beta(E-\mu)}) +\log(1+e^{-\beta(E+\mu)})\right]-\frac{2N_c N_f}{(2\pi)^3}\int d\vec k \sqrt{\vec k^2+M^2}, \label{pres} \end{equation} here in the intergrals $d\vec k$ denotes $d^3k$. In the above, $N_c=3$ is the number of colors and $N_f=2$ is the number of flavors, $E=\sqrt{\vec k^2+M^2}$ is the single particle energy with `constituent' quark mass M which satisfies the self consistent gap equation \begin{equation} M=m_0+\frac{2N_cN_f}{(2\pi)^3}\int d\vec k \frac{M}{\sqrt{ k^2+M^2}} (1-f^{(0)}-\bar f^{(0)}). \label{gapeq} \end{equation} In the above equations $f^{(0)}=(1+\exp(\beta\omega_-))^{-1}$ and $\bar f^{(0)}=(1+\exp(\beta\omega_+))^{-1}$ are the equilibrium distribution functions for quarks and antiquarks respectively and we have written $\omega_{\pm}( k)=E( \vec k)\pm\mu$ with $k\equiv|\vec{k}|$. The energy density $\epsilon$ is given by, \begin{equation} \epsilon=-\frac{2 N_cN_f}{(2\pi)^3}\int d\vec k E(k)(1-f^{(0)}-\bar f^{(0)})+\frac{(M-m_0)^2}{4G}, \end{equation} so that enthalpy $\omega=\epsilon+P$ is also defined once the solution to the mass gap equation Eq.(\ref{gapeq}) is known. In these calculations, we have taken a three momentum cutoff $\Lambda$ for the for calculations of integrals not involving the Fermi distribution functions. The net number density of quarks $n_q$ is given as \begin{equation} n_q=\frac{2 N_c N_f}{(2\pi)^3}\int d\vec{k} (f^{(0)}-\bar f^{(0)}). \end{equation} This completes the discussion on the all the bulk thermodynamic quantities defined for NJL model which enters in the definitions for Seebeck coefficient, electrical conductivity and thermal conductivity. Next we discuss the estimation of relaxation time and as mentioned earlier we consider two particle scattering processes only. For a process $a+b\rightarrow c+d$, the relaxation time for the particle $a$ i.e. $\tau_a(E_a)$ is given by~\cite{Deb:2016myz}, \begin{equation} \tau_a^{-1}(E_a)\equiv\tilde{\omega}(E_a)=\frac{1}{2 E_a}\sum_b\int d\vec\pi_b W_{ab}f_b^{(0)}(E_b), \label{relaxa} \end{equation} where, the summation is over all species other than the particle $``a"$. Further, in Eq.(\ref{relaxa}), we have introduced the notation $d\vec\pi_i=\frac{d^3 p_i}{(2\pi)^3 2 E_i}$ and $W_{ab}$ is the dimensionless transition rate for the processes with $a,b$ as the initial states. $W_{ab}$ which is Lorentz invariant and a function of the Mandelstam variable ($s$) can be given by, \begin{equation} W_{ab}(s)=\frac{1}{1+\delta_{ab}}\int d\vec\pi_cd\vec\pi_d (2\pi)^4\delta(p_a+p_b-p_c-p_d)|\mathcal{M}|_{ab\rightarrow cd}^2 (1-f_c^{(0)}(p_c))(1-f_d^{(0)}(p_d)). \end{equation} In the expression of $W_{ab}$ the Pauli blocking factors have been considered. The quantity $W_{a b}$ can be related to the cross sections of various scattering processes. In the present case within the NJL model, the quark-quark, quark-antiquark and antiquark-antiquark scattering cross sections are calculated to order $1/N_c$ which occur through the $\pi$ and $\sigma$ meson exchanges in ``$s$'' and ``$t$'' channels. The meson propagators that enters into the scattering amplitude is calculated within the random phase approximation and includes their masses and the widths. The mass of the meson is estimated from the pole of the meson propagator at vanishing three momentum i.e., \begin{equation} 1-2G~\text{Re}\Pi_{\tilde m}(M_{\tilde m },0)=0. \label{mesonmass} \end{equation} where $\tilde{m}$ denotes $\sigma, \pi$ for scalar and pseudoscalar channel mesons, respectively. Polarization function in the corresponding mesonic channel is expressed as $\Pi_{\tilde{m}}$. The explicit expressions for $\text{Re}\Pi_{\tilde{m}}$ and the imaginary part $\text{Im}\Pi_{\tilde{m}}$ is given in Ref.\cite{Deb:2016myz} and we do not repeat here. While, the relaxation time is energy dependent, one can also define an energy independent mean relaxation time by taking a thermal average as, \begin{equation} \bar\omega_a\equiv\bar \tau^{-1}_a=\frac{1}{n_a}\int \frac{d^3 p_a}{(2\pi)^3} f^{(0)}_a(E_a)\tilde{\omega}_a(E_a)\equiv\sum_b n_b \bar W_{ab}, \label{bartau} \end{equation} to get an estimate of the average relaxation time. In the above equation, the sum is over all the particles other than `a'; $$n_a=\int \frac{d^3 p_a}{(2\pi)^3} f^{(0)}_a(E_a),$$ is the number density of the species ``$a$'' apart from the degeneracy factor. Here, $\bar W_{ab}$ is the thermal-averaged transition rate given as \begin{equation} \bar W_{ab}=\frac{1}{n_an_b}\int d\vec\pi_a d\vec\pi_b f(E_a)f(E_b) W_{ab}. \label{barw} \end{equation} For the case of two flavors, there are 12 different processes but the corresponding matrix elements can be related using i-spin symmetry, charge conjugation and crossing symmetries with only two independent matrix elements. We have chosen them, as in Refs.\cite{Zhuang:1995uf,Deb:2016myz}, to be the processes $u\bar u\rightarrow u\bar u$ and $u\bar d\rightarrow u\bar d$. The explicit expressions for the matrix elements are given in Refs.\cite{Zhuang:1995uf,Deb:2016myz}. In the meson propagators we have kept both the mass and the width of the meson resonances which are medium dependent. It is important to mention that while the matrix elements of different scattering processes are related, the thermal-averaged rates are not. This is because the thermal averaged rates involve also the thermal distribution functions for the initial states along with the Pauli blocking factors for the final states. \section{Results} \label{results} The two flavor NJL model as given in Eq.(\ref{njllag}) has three parameters, the four fermions coupling $G$, the three momenta cut off ($\Lambda$) to regularize the momentum integral in vacuum and the current quark mass $m_0$. These values are adjusted to fit the physical values of the pion mass ($m_{\pi}$=135 MeV), the pion decay constant ($f_{\pi}$=94 MeV) and the value of the quark condensate in vacuum, $\langle \bar u u\rangle=\langle\bar d d\rangle=(-241 ~\text{MeV})^3$ . We have considered here the value of the parameters as $m_0=5.6$ MeV, $\Lambda=587.9$ MeV and $G\Lambda^2=2.44$ \cite{Buballa:2003qv}. This leads to the constituent quark mass for $u$ and $d$ type quarks, $M=397$ MeV in vacuum ($T=0,\mu=0$). \begin{figure} \begin{minipage}{.485\textwidth} \centering \includegraphics[width=1.1\textwidth]{massq.eps} \end{minipage} \begin{minipage}{.485\textwidth} \centering \includegraphics[width=1.1\textwidth]{dmdtq.eps} \end{minipage} \caption{Left plot: temperature dependence of the masses of constituent quarks ($M$) for different chemical potentials. Right plot: variation of $dM/dT$ with temperature for different chemical potentials. The nonmonotonic variation of $dM/dT$ with a peak structure indicate the pseudo critical temperature for the chiral transition. Note that for the NJL model parameter set and the range of temperature and chemical potential considered here the chiral transition is a smooth crossover.} \label{fig:1} \end{figure} To analyze the variation of different transport coefficients with temperature and quark chemical potential, we have first plotted in the left plot of Fig.~\eqref{fig:1}, the constituent quark masses ($M$) as a function of temperature ($T$) for different values of the quark chemical potential ($\mu$). The constituent quark mass ($M$) results as a solution to the gap equation, Eq.(\ref{gapeq}). Constituent quark masses for $u$ and $d$ quarks are the same and they are related to the quark-antiquark condensate $\langle\bar\psi\psi\rangle$. In the right plot of Fig.~\eqref{fig:1}, we have plotted $dM/dT$ with temperature for different values of the chemical potential. For the range of temperature and chemical potential considered here the chiral transition is a smooth crossover. The chiral crossover temperature may be defined by the position of the peak in the variation of $dM/dT$ with temperature. For $\mu=$ 0, 100 and 200 MeV, the corresponding chiral crossover temperatures turns out to be $\sim $ 188 MeV, 180 MeV and 153 MeV respectively. It is expected that with an increase in chemical potential the crossover temperature decreases. Note that we have considered here the values of the chemical potential which are lower than the chemical potential corresponding to the speculated critical endpoint of the quark-hadron phase transition in the QCD phase diagram. \begin{figure} \centering \includegraphics[width=0.65\textwidth]{mass.eps} \caption{Variation of $\sigma$ and $\pi$ meson masses with temperature for different values of the chemical potentials. The solid lines correspond to $M_{\sigma}$ while the dashed lines correspond to pion masses, $M_\pi$.} \label{fig:2} \end{figure} In Fig.~\eqref{fig:2} we have plotted the meson masses $M_\pi$ and $M_\sigma$ as a function of temperature for different values of chemical potential as solutions of Eq.(\ref{mesonmass}). Note that pions are pseudo-Goldstone modes, therefore in the chiral symmetry broken phase pion mass varies weakly. But $M_\sigma$ decreases rapidly near the crossover temperature. At higher temperatures, $M_\pi$ and $M_\sigma$, being chiral partners, become approximately degenerate and increase with temperature. Further one can define a characteristic temperature, the ``Mott temperature" ($T_M$) where the pion mass becomes twice that of quark mass i.e. at Mott temperature $M_\pi(T_M)=2M(T_M)$. The Mott temperatures for $\mu$=0, 100 and 200 MeV turns out to be $\sim$ $198$ MeV, 192 MeV and 166 MeV respectively. As we shall see later it is the Mott temperature that becomes relevant while estimating the relaxation times of the quarks using thermal scattering rates of the quarks through meson exchange. \begin{figure} \centering \includegraphics[width=0.65\textwidth]{taut.eps} \caption{Variation of thermal averaged relaxation times for quarks and antiquarks with temperature for different chemical potentials. Solid lines correspond to the relaxation time for quarks while the dotted lines correspond to relaxation time for antiquarks. For $\mu=0$ the thermal averaged relaxation times for the quarks and antiquarks are same. Difference between the relaxation times of quarks and antiquarks appears only at finite chemical potential.} \label{fig:3} \end{figure} In Fig.~\eqref{fig:3}, we show the variation of average relaxation time as defined in Eq.(\ref{bartau}), for quarks (solid lines) and antiquarks(dashed lines) with temperature for different chemical potentials. Let us note that the relaxation time of given particle '$a$', as shown in Eq.(\ref{bartau}), depends both on the scattering rates $\bar W_{ab}$ as well as on the number density $n_b$ of the particles other than '$a$' in the initial state i.e. number density of scatterers. It turns out that, for the scattering processes considered here, the process $u\bar{d}\rightarrow u\bar{d}$ \cite{Deb:2016myz}, through charged pion exchange in the s-channel gives the largest contribution to the scattering rate $\bar W_{ab}$ as compared to other channels. As mentioned earlier,by crossing symmetry arguments, this also means that the $ud\rightarrow u d$ scattering rate also contribute dominantly to the thermally averaged scattering rate. Let us discuss first the behaviour of the relaxation time below the Mott temperature $T_M$. Below $T_M$, the average scattering rate is suppresed mostly due to thermal distribution with large constituent quark masses apart from the suppression from the sigma meson propagators with large $M_\sigma$ in the scattering amplitudes. As one approaches $T_M$ from lower temperature, the scattering rates become larger as the constituent quark mass decreases leading to a decrease of the relaxation time for quarks as well as antiquarks. Further, as the chemical potential increases, the densities of antiquarks gets suppressed leading to larger relaxation time for quarks compared to antiquarks. This is what is observed for the behaviour of relaxation time as a function of $T$ and $\mu$ in Fig. ~\eqref{fig:3} below thw Mott temperature. Above $T_M$, the meson propagator develop a pole in the s-channel leading to an enhancement of the scatterring rate. However, at large temperature beyond $T_M$, there will be a suppression due to the large meson masses which increase with temperature. This results in a maximum scattering rate at $T_M$ or a minimum in the average relaxation time as generically seen in Fig.~\eqref{fig:3}. \begin{figure} \begin{minipage}{.485\textwidth} \centering \includegraphics[width=1.1\textwidth]{elec.eps} \end{minipage} \begin{minipage}{.485\textwidth} \centering \includegraphics[width=1.1\textwidth]{thermcond.eps} \end{minipage} \caption{Left plot: Variation of normalized electrical conductivity ($\sigma_{el}/T$) with temperature for different values of the chemical potential. Right plot: Variation of normalized thermal conductivity ($\kappa_0/T^2$) with temperature for different values of the chemical potential.} \label{fig:4} \end{figure} At finite quark chemical potentials, beyond the Mott temperature, the quark-antiquark scattering still contributes dominantly to the scattering $\bar W_{ab}$. However, at finite densities, there are few antiquarks as compared to quarks so that the quarks have fewer antiquarks to scatter off. This leads to a smaller cross-section giving rise to a larger relaxation time for quarks compared to $\mu=0$ case. Due to the enhancement of quark densities at finite $\mu$, the cross-section for quark-quark scattering becomes larger resulting in a smaller relaxation time for the quarks compared to the case at vanishing chemical potential below the Mott temperature. The antiquark relaxation time, on the other hand, is always smaller compared to $\mu=0$ case as there are more quarks to scatter off at finite chemical potential. In the left plot of Fig.\eqref{fig:4} we show the behavior of normalized electrical conductivity $\sigma_{el}/T$ with temperature for different values of chemical potential. The generic behavior of relaxation time of Fig.\eqref{fig:3} is reflected in the behavior of electrical conductivity, having a minimum at Mott transition temperature. Apart from this, it is also observed that $\sigma_{el}/T$ increases with chemical potential. This is because the contribution to the electrical conductivity arises dominantly from quarks rather than antiquarks at finite chemical potential, as the antiquark contribution gets suppressed due to the distribution function. This apart, there is an enhancement of the relaxation time at finite $\mu$ beyond the Mott transition. Both, due to an increase of dominant charge carrier densitiy and an increase in relaxation time with $\mu$ lead to enhancement of electrical conductivity beyond the Mott temperature. On the other hand, below the Mott temperature, although the relaxation time decrease with chemical potential for a given temperature, the increase in the quark number density makes the coefficient of electrical conductivity increasing with chemical potential. Further, in the high-temperature range i.e. for temperatures much greater than the constituent quark mass $M$, $\sigma_{el}/T$ as given in Eq.(\ref{econd}), can be shown to be $\sigma_{el}/T\sim T\tau \exp(\mu/T)$. Therefore for a temperatures larger than the $T_M$, $\sigma_{el}/T$ increase with temperature essentially due to increase in relaxation time. Further, at high temperatures it increases with chemical potential due to the factor of $\exp(\mu/T)$ as seen in Fig.\eqref{fig:4}. In the right plot of Fig.\eqref{fig:4} we show the variation of the normalized thermal conductivity ($\kappa_0/T^2$) with temperature. The ratio shows again a nonmonotonic variation with temperature. The origin of such behavior again lies with the variation of relaxation time with temperature. Beyond the Mott temperature, the thermal conductivity increases sharply with temperature. This can be understood as follows. For large temperatures, when quark masses can be neglected, it can be easily shown that the enthalpy to the net quark number density ratio behaves as $\omega/n_q\sim T\coth(\mu/T)$. Further, in the expression of the thermal conductivity as in Eq.(\ref{tcond}), $(E-\frac{\omega}{n_q})^2\sim (\frac{\omega}{n_q})^2$, due to the fact that single-particle energy ($E$) is much smaller than enthalpy per particle i.e. $\omega/n_q$. Therefore, the variation of the normalized thermal conductivity with temperature and chemical potential is essentially determined by the variation of relaxation time, $\omega/n_q$, and the distribution function with temperature and/or chemical potential. It can be shown as earlier, in the high-temperature limit the normalized thermal conductivity, $\kappa_0/T^2$ can be approximately expressed as, $\kappa_0/T^2\sim T\tau \exp(\mu/T) (\coth(\mu/T))^2$. Thus,beyond $T_M$, the increasing behavior of $\tau$ determines the increasing behavior of $\kappa_0/T^2$. On the other hand for $\mu<<T$, $\coth(\mu/T)\sim T/\mu$ in the leading order. Therefore in the high-temperature limit, $\kappa_0/T^2$ decreases with chemical potential. \begin{figure} \begin{minipage}{.485\textwidth} \centering \includegraphics[width=1.1\textwidth]{seebeck.eps} \end{minipage} \begin{minipage}{.485\textwidth} \centering \includegraphics[width=1.1\textwidth]{wid.eps} \end{minipage} \caption{Left plot: Variation of the Seebeck coefficient with temperature for different values of the chemical potential. Right plot: Variation of the Lorenz number, $L=\kappa_0/(\sigma_{el}T)$ with temperature for different values of the chemical potential.} \label{fig:5} \end{figure} We next show the behavior of the Seebeck coefficient as a function of temperature for different values of quark chemical potential in the left plot of Fig.~\eqref{fig:5}. This coefficient, which is dimensionless, decreases monotonically with temperature. The variation of the Seebeck coefficient with temperature can be understood as follows. First, it may be noted that this coefficient is a ratio of two quantities each of which is proportional to the relaxation time. When we consider the relaxation time as the average relaxation time, the ratio becomes independent of the average relaxation time. Further, at finite chemical potential quark contribution to the Seebeck coefficient is dominant compared to the antiquark contribution. Therefore, contrary to the nonmonotonic variation of $\sigma_{el}/T$ and $\kappa_0/T^2$ with temperature, where the nonmonotonic variation has its origin stemming from the behavior of relaxation time with temperature, the variation of the Seebeck coefficient is not expected to be nonmonotonic. Further, unlike other transport coefficients, the positivity of the Seebeck coefficient is not guaranteed. This is because in the expression of the Seebeck coefficient as given in Eq.(\ref{seeq}), the integrand in the numerator has the factor which is linear in $(E_a-b_a \omega/n_q)$. Therefore for the quarks, this factor becomes $(E-\omega/n_q)$, and the single-particle energy $E$ is much smaller than $\omega/n_q$. Therefore, the term $(E-\omega/n_q)$ is negative which makes the Seebeck coefficient negative. However, it is important to note that the expression of thermal conductivity also contains a term $(E-\omega/n_q)$, but it comes as a square. Therefore,the coefficient of thermal conductivity is positive define. In condensed matter system, the Seebeck coefficient can be both positive and negative, e.g. for electron and holes the Seebeck coefficients are of opposite sign. Further, for a bipolar medium with multiple charge carriers the sign of the Seebeck coefficient depends on the range of temperature considered \cite{Zhou:2020}. Similar to the case of thermal conductivity, one can do an analysis regarding the behavior of the Seebeck coefficient in the massless limit. In the massless limit, it can be shown that $S\sim -\coth(\mu/T)$. Therefore for high temperatures, the leading order contribution to the Seebeck coefficient is $S\sim -T/\mu$. Hence with increasing temperature the Seebeck coefficient decreases, on the other hand with an increase in chemical potential Seebeck coefficient increases. Finally, in the right plot of Fig.~\eqref{fig:5} we have plotted the ratio $L=\kappa_0/(\sigma_{el}T)$, as a function of temperature. This is nothing but the Wiedemann-Franz law. In condensed matter systems, this ratio is a constant and is known as the Lorenz number. In the present case, however, it is observed that the ratio increases monotonically with temperature. Similar to the Seebeck coefficient, in the constant relaxation time approximation, the ratio $L$, is independent of relaxation time. Further, in the high temperature limit $\kappa_0/(\sigma T)\sim (\coth(\mu/T))^2$. Therefore in the leading order for $\mu/T$, $\kappa_0/(\sigma T)\sim T^2/\mu^2$. Hence at high temperatures the ratio $L$ increases with temperature but decreases with quark chemical potential. \section{Conclusion} \label{conclusion} In the present investigation, we have estimated the Seebeck coefficient in a hot and dense partonic medium modeled by the Nambu-Jona-Lasinio model. Here, we have considered thermoelectric effect arising from a temperature gradient as well as a gradient in the chemical potential. Apart from the Seebeck coefficient, we have also estimated electrical conductivity, thermal conductivity, and Lorenz number associated with the Wiedemann–Franz law. Although electrical conductivity and thermal conductivity always remain positive, the Seebeck coefficient is negative for the range of temperature and chemical potential considered here. Also the variation of electrical conductivity and thermal conductivity with temperature and quark chemical potential is intimately related to the variation of the relaxation time with temperature and chemical potential. But the variation of the Seebeck coefficient and the Lorenz number are not sensitive to the variation of relaxation time with temperature and quark chemical potential. In the presence of thermoelectric effects in a conducting medium, the temperature gradient can be converted into an electrical current and vice versa. Seebeck coefficient represents the efficiency of any conducting medium to convert a temperature gradient into an electrical current. Therefore, for a nonvanishing Seebeck coefficient, the electrical current as well as the heat current get modified. The electrical current in the presence of Seebeck effect becomes, $\vec{J}=\sigma_{el} \vec{\mathcal{E}}-\sigma_{e l} S \vec{\nabla} T$. It is important to note that the electrical conductivity $\sigma_{el}$ is always positive due to the contributions of both the particles and the antiparticles. Positivity of the electrical conductivity can be shown using entropy production i.e. second law of thermodynamics. By demanding that in the presence of electromagnetic field $T \partial_{\mu} s^{\mu} \geq 0,$ where $s^{\mu}$ is the entropy current, it can be shown that the electrical conductivity is positive definite \cite{PhysRevD.81.045015}. For a negative Seebeck coefficient in the presence of a positive temperature gradient the electric current gets enhanced. Therefore, the net electric current increases if the electric current due to the thermoelectric effect and the electric current due to the external electric field contributes constructively.The thermal conductivity in the presence of the thermoelectric effect also gets modified. In the presence of a nonvanishing Seebeck coefficient, the net thermal conductivity is given as $\kappa=\kappa_{0}-T \sigma_{el} S^{2}$, indicating the nonvanishing value of the Seebeck coefficient reduces the effective thermal conductivity. It is important to note that the thermal conductivity is required to be positive for the theory to be consistent with the second law of thermodynamics, i.e., $T \partial_{\mu}s^{\mu} \geq 0$. Using the formalism of viscous hydrodynamics and viscous magnetohydrodynamics positivity of the electrical conductivity and the thermal conductivity has been shown explicitly \cite{PhysRevD.81.045015,Gavin:1985ph}. But the contributions to the entropy current coming from the thermoelectric effects are not considered in these investigations. Therefore in the context of entropy production in the viscous hydrodynamics and magnetohydrodynamics, it will be interesting to study the effects of thermoelectric coefficients. Thermoelectric coefficients could also be relevant in the context of the spin Hall effect (SHE). Spin Hall effect is an important ingredient for the generation of spin current and it is a key concept in spintronics. In the generation of spin current, spin Hall effect plays an important role. In spin Hall effect an electric field induces a transverse spin current perpendicular to the direction of the electric field. Spin Hall effect has been investigated recently in a hot and dense nuclear matter in the context of heavy-ion collisions \cite{Liu:2020dxg}. It has been argued that due to SHE, a spin current will be produced proportional to the electric field. This also means external electric field $\vec{\mathcal{E}}$ will induce a local spin polarization and the spin polarization distribution function of fermions (antifermions) in momentum space will feature a dipole distribution. Therefore, there will be a spin flow in the plane transverse to the direction of the electric field. Observation of spin Hall effect may open a new direction in the exploration of the many body quantum effects in hot and dense nuclear matter. However, the life-time of the electric field originating in heavy-ion collisions may be of a small value of order 1 fm. Therefore, the idea of the observation of the spin Hall effect becomes speculative. However, due to the presence of nonvanishing thermoelectric coefficients, any temperature gradient as well as a gradient in the chemical potential can give rise to an effective electric field which may contribute to the spin Hall effect. Therefore a detailed analysis of the thermoelectric property of the hot and dense matter produced in a heavy ion collision experiment could be relevant for spin Hall effect and needs further investigation. \section*{Acknowledgments} We thank Prof. Ajit M. Srivastava for suggesting and initiating discussions on the idea of thermoelectric coefficient in the context of heavy-ion collision. The authors would like to thank Prof. Jitesh R. Bhatt for useful discussions. The authors would also like to thank Sabyasachi Ghosh, Abhishek Atreya, Chowdhury Aminul Islam, Rajarshi Ray for many discussions on the topic of Seebeck coefficient during working group activities at WHEPP 2017, IISER Bhopal. The work of A.D. is supported by the Polish National Science Center Grant No. 2018/30/E/ST2/00432. \bibliographystyle{utphys}
\section{Methodology} \label{sec:theory} Soft-gluon corrections get logarithmically large at the absolute production threshold when $\sqrs$ approaches $M \equiv 4 m_t$, with $m_t$ the mass of the top quark. This corresponds to the limit $\rhohat \to 1$ of the partonic threshold variable, $\rhohat \equiv M^2/\hat s$. The theory of $2\to 4$ threshold resummation builds on the theory resummation for $2\to 2$ processes~\cite{Kidonakis:1998bk, Contopanagos:1996nh, Kidonakis:1998nf,Bonciani:2003nt}. We work in Mellin space, where the hadronic cross section $\sigma_{t\bar{t}t\bar{t}}(N)$ is the Mellin transform w.r.t.~the variable $\rho \equiv M^2/s $ \begin{align} \label{eq:mellin-space} \sigma_{t\bar{t}t\bar{t}}(N) = \int_0^1 {\rm d}\rho \,\rho^{N-1}\sigma_{t\bar{t}t\bar{t}}(\rho)\, \end{align} of the hadronic cross section in momentum space \begin{align} \label{eq:normal-space} \sigma_{t\bar{t}t\bar{t}}(\rho) = \sum_{i,j} & \int_0^1 {\rm d}x_1 f_i(x_1,\mu_F^2)\int_0^1 {\rm d}x_2 f_j(x_2,\mu_F^2) \nonumber\\ & \times \int_{\rho}^1{\rm d}\rhohat \, \delta\left(\rhohat-\frac{\rho}{x_1x_2}\right)\,\hat{\sigma}_{ij\to t\bar{t}t\bar{t}}(\rhohat)\,. \end{align} Here we use $f_i$ to denote parton distribution functions (PDFs), $\mu_F$ the factorisation scale, and $x_{1,2}$ the momentum fraction of the two colliding partons $i,j$. Only two partonic channels contribute at leading order (LO), $ij=\{ q\bar q ,gg\}$. The cross section $\hat{\sigma}_{ij\to t\bar{t}t\bar{t}}(N)$ is a purely perturbative function that obeys a refactorisation in the soft and collinear limits into functions containing information on particular modes of dynamics. Correspondingly, one can identify a soft function $\mathbf{S}$, containing corrections originating from soft gluon radiation, a collinear / jet function for each initial-state leg $\Delta_i$, containing corrections from collinear gluon radiation. All terms that are non-logarithmic in the soft-gluon limit are collected in the hard function $\mathbf{H}$. These functions are defined at the cross section level, i.e.~they include the necessary phase-space integrals. The factorisation in Mellin space takes the form \begin{align} \label{eq:resummationdef} \hat{\sigma}^{\rm res}_{ij\to t\bar{t}t\bar{t}}(N) = \,\,&\Delta_i(N+1) \Delta_j(N+1) \\ &\hspace{0.3cm} \times \, {\rm Tr} \left[ \mathbf{\bar{S}}_{ij\to t\bar{t}t\bar{t}}(N+1) \otimes \mathbf{H}_{ij\to t\bar{t}t\bar{t}}(N)\right]\,, \nonumber \end{align} suppressing the dependence of the various ingredients on the factorisation and renormalisation scales. As the jet and soft functions both capture soft-collinear enhancements, care must be taken to subtract the overlap contributions. In practice, this is done by dividing out the eikonal jet functions $\mathcal{J}_i$ from the soft function. This results in a new soft-collinear subtracted soft function that is denoted by $\mathbf{\bar{S}}$, and related to the full soft function as \begin{align} \mathbf{\bar{S}}(N+1) = \frac{\mathbf{S}(N+1)}{\mathcal{J}_1(N+1) \mathcal{J}_2(N+1) }\,. \end{align} The soft and hard functions are generally matrices in colour space, as indicated by their bold font, and colour-connected, indicated by the $\otimes$-symbol. We now briefly go over the definition for each of the ingredients in Eq.~\eqref{eq:resummationdef}. The hard function $\mathbf{H}_{ij\to t\bar{t}t\bar{t}}$ in Eq.~\eqref{eq:resummationdef} obeys the perturbative expansion \begin{eqnarray} \mathbf{H}_{ij\to t\bar{t}t\bar{t}} = \mathbf{H}_{ij\to t\bar{t}t\bar{t}}^{(0)} + \frac{\alpha_s}{\pi}\mathbf{H}_{ij\to t\bar{t}t\bar{t}}^{(1)} + \mathcal{O}(\alpha_s^2)\,.\,\,\,\,\,\, \end{eqnarray} At the NLL accuracy we need $\mathbf{H}_{ij\to t\bar{t}t\bar{t}}^{(0)}$, defined as a matrix in colour space with an element $IJ$ \begin{align} \mathbf{H}_{ij\to t\bar{t}t\bar{t},IJ}^{(0)} = \,\,&\frac{1}{2\hat{s}} \int_0^1{\rm d}\hat{\rho} \, \hat{\rho}^{N-1} \int {\rm d}\Phi^B \sum_{{\rm colour, spin}} \mathcal{A}^{(0)}_I \mathcal{A}^{\dagger(0)}_J\,, \end{align} where we sum (average) over final(initial)-state colour and polarization degrees of freedom. The Born phase space is denoted by $\Phi^B$. The object $\mathcal{A}^{(0)}_I=\langle c_I |\mathcal{A}^{(0)}\rangle$ is the colour-stripped amplitude projected to the colour-vector $c_I$, with $|\mathcal{A}^{(0)}\rangle$ the amplitude in the corresponding colour basis, while $\mathcal{A}^{(0)\dagger}_J$ is its complex conjugate projected to $c^{\dagger}_J$. We obtain the squared matrix elements numerically from aMC@NLO~\cite{Frederix:2018nkq,Alwall:2014hca}, after selecting a suitable colour basis as discussed below. The coefficient $\mathbf{H}_{ij\to t\bar{t}t\bar{t}}^{(1)}$ enters formally at next-to-next-to-leading logarithmic accuracy but can be used to supplement the NLL expressions, resulting in NLL$^\prime$ precision. It consists of virtual one-loop corrections, $\mathbf{V}_{ij\to t\bar{t}t\bar{t}}^{(1)}$, and constant terms stemming from collinear-enhanced contributions $\mathbf{C}_{ij\to t\bar{t}t\bar{t}}^{(1)}$ that are not yet captured by the initial-state jet functions $\Delta_i$, i.e. \begin{align} \label{eq:h1} \mathbf{H}_{ij\to t\bar{t}t\bar{t}}^{(1)} = \mathbf{V}_{ij\to t\bar{t}t\bar{t}}^{(1)} + \mathbf{C}_{ij\to t\bar{t}t\bar{t}}^{(1)}\,. \end{align} While the $\mathbf{C}_{ij\to t\bar{t}t\bar{t}}^{(1)}$ coefficient is calculated analytically, the $\mathbf{V}_{ij\to t\bar{t}t\bar{t}}^{(1)}$ coefficient is obtained numerically using MadLoop~\cite{Hirschi:2011pa, Ossola:2007ax, Cascioli:2011va, Denner:2016kdg}. We have explicitly checked that the infrared pole structure of the MadLoop calculation, using FKS subtraction~\cite{Frixione:1995ms, Frixione:1997np, Frederix:2009yq}, matches that of our resummed calculation. For the incoming jet functions we use the well-known expressions that can be found in e.g.~\cite{Catani:1989ne, Catani:1998tm, Catani:2003zt}, which are a function of $\lambda = \alpha_s b_0 \ln \bar{N}$ with $\bar{N} \equiv N{\rm e}^{\gamma_E}$. The soft function is given by~\cite{Contopanagos:1996nh, Kidonakis:1998bk} \begin{align} \label{eq:softdefinition} \mathbf{S}_{ij\to t\bar{t}t\bar{t}}= \mathbf{\overline{U}}_{ij\to t\bar{t}t\bar{t}}\,\,\mathbf{\widetilde{S}}_{ij\to t\bar{t}t\bar{t}}\,\,\mathbf{U}_{ij\to t\bar{t}t\bar{t}} \,, \end{align} with the evolution matrix written as a path-ordered exponential \begin{align} \label{eq:uevol} \mathbf{U}_{ij\to t\bar{t}t\bar{t}} = P {\rm exp}\left[\frac{1}{2}\int_{\mu_R^2}^{M^2/\bar{N}^2}\frac{{\rm d}q^2}{q^2} \mathbf{\Gamma}_{ij\to t\bar{t}t\bar{t}}(\alpha_s(q^2))\right], \end{align} and $\mathbf{\Gamma}_{ij\to t\bar{t}t\bar{t}}(\alpha_s(q^2))$ the soft anomalous dimension matrix. To achieve NLL($^\prime$) resummation we need to know the one-loop contribution $\mathbf{\Gamma}_{ij\to t\bar{t}t\bar{t}}^{(1)}$ in Eq.~\eqref{eq:uevol}. This object consists of a kinematic part and a colour-mixing part, which accounts for the change in colour of the hard system, i.e. \begin{eqnarray} \label{eq:sad} \mathbf{\Gamma}_{ij\to t\bar{t}t\bar{t}, IJ}^{(1)} = \sum_{k,l=1}^6 {\rm Tr}\left[c_I \mathbf{T}_k \cdot \mathbf{T}_l c^{\dagger}_J \right]\Gamma_{kl}\,, \end{eqnarray} where $\mathbf{T}_k$ are colour operators. The explicit expression for $\mathbf{\Gamma}_{ij\to t\bar{t}t\bar{t}, IJ}^{(1)}$ depends on a choice of basis tensors represented by $c_{I}$ (and $c_J^{\dagger}$ for the complex conjugate) for the underlying hard scattering process $ij \to t\bar{t}t\bar{t}$. The kinematic part, $\Gamma_{kl}$, is given by the residue of the UV-divergent part of the one-loop eikonal contributions~\cite{BOTTS198962, Kidonakis:1997gm, Kidonakis:1996aq}. The matrix $\mathbf{\widetilde{S}}_{ij\to t\bar{t}t\bar{t}}$ in Eq.~\eqref{eq:softdefinition} represents the boundary condition for the solution of the renormalisation group equation at $\mu_R = M/\bar{N}$ from which Eq.~\eqref{eq:softdefinition} follows. Like $\mathbf{H}$, it obeys a perturbative expansion which reads \begin{align} \mathbf{\widetilde{S}}_{ij\to t\bar{t}t\bar{t}} =\mathbf{\widetilde{S}}_{ij\to t\bar{t}t\bar{t}}^{(0)} + \frac{\alpha_s}{\pi} \mathbf{\widetilde{S}}_{ij\to t\bar{t}t\bar{t}}^{(1)} + \mathcal{O}(\alpha_s^2)\,. \end{align} The lowest-order contribution $\mathbf{\widetilde{S}}_{ij\to t\bar{t}t\bar{t}}^{(0)}$ is given by the trace of the colour basis vectors for the underlying hard process. For NLL$^\prime$ resummation we also need the first-order correction $\mathbf{\widetilde{S}}_{ij\to t\bar{t}t\bar{t}}^{(1)}$, which is calculated analytically by considering the eikonal corrections to $\mathbf{\widetilde{S}}_{ij\to t\bar{t}t\bar{t}}^{(0)}$. The major difficulty in the resummed calculations for the $t\bar{t}t\bar{t}$ production cross section stems from the complicated colour structure of the underlying hard process, involving six coloured particles. The colour structure of the $q\bar{q} \to t\bar{t}t\bar{t} $ process is \begin{eqnarray} \mathbf{3}\otimes \mathbf{\bar{3}} &=& \mathbf{3}\otimes \mathbf{\bar{3}} \otimes \mathbf{3}\otimes \mathbf{\bar{3}}. \end{eqnarray} The decomposition into irreducible representations reads \begin{eqnarray} \label{eq:reductionqqbar} \mathbf{1}\oplus\mathbf{8} = (2\times\mathbf{1}) \oplus (2\times\mathbf{8}) \oplus \mathbf{8}_S \oplus \mathbf{8}_A \oplus \mathbf{10} \oplus \mathbf{\overline{10}}\oplus \mathbf{27}\,. \,\,\,\,\,\,\,\,\,\,\, \end{eqnarray} For the $gg$ channel we have \begin{eqnarray} \mathbf{8}\otimes \mathbf{8} &=& \mathbf{3}\otimes \mathbf{\bar{3}} \otimes \mathbf{3}\otimes \mathbf{\bar{3}} \,, \end{eqnarray} and in terms of irreducible representations \begin{eqnarray} \mathbf{0}\oplus \mathbf{1}\oplus \mathbf{8}_S \oplus \mathbf{8}_A \oplus \mathbf{10} \oplus \mathbf{\overline{10}}\oplus \mathbf{27} &=& \\ &&\hspace{-4.5cm} \mathbf{0}\oplus (2\times\mathbf{1}) \oplus (2\times\mathbf{8}) \oplus \mathbf{8}_S \oplus \mathbf{8}_A \oplus \mathbf{10} \oplus \mathbf{\overline{10}}\oplus \mathbf{27}\,. \nonumber \end{eqnarray} From this we infer that the $q\bar{q}$ colour space is $6$-dimensional, whereas the $gg$ one is $14$-dimensional, directly translating into the dimensions of the soft anomalous dimension matrices of Eq.~\eqref{eq:sad}. Moreover, the one-loop soft anomalous dimension matrices $\mathbf{\Gamma}_{ij\to t\bar{t}t\bar{t}}^{(1)}$ are in general not diagonal. Solving Eq.~\eqref{eq:uevol} in terms of standard exponential functions requires changing the colour bases to $R$ where $\mathbf{\Gamma}_{ij\to t\bar{t}t\bar{t},R}^{(1)}$ is diagonal~\cite{Kidonakis:1998bk}. We find such orthonormal bases using the technique outlined in Ref.~\cite{Keppeler:2012ih}. The resulting one-loop soft anomalous dimension matrices for $N_c = 3$ in the threshold limit become~\footnote{Their full forms will be given in Ref.~\cite{4top:upcomingpaper}.} \begin{subequations} \begin{align} & 2{\rm Re}[\overline{\mathbf{\Gamma}}_{q\bar{q}\to t\bar{t}t\bar{t},R}] = \text{diag}\left(0,0,-3,-3,-3,-3\right), \\ & 2{\rm Re}[\overline{\mathbf{\Gamma}}_{gg\to t\bar{t}t\bar{t},R}] = \text{diag}(-8, -6, -6, -4, -3,-3,\\ & \hspace{3cm} -3,-3,-3,-3,-3,-3,0,0). \nonumber \end{align} \end{subequations} The values above are the negative values of the quadratic Casimir invariants for the irreducible representations in which the colour structure of the final state can be decomposed in SU(3). This result corresponds to a physical picture where the soft gluon is only sensitive to the total colour charge of a system at threshold, and constitutes a strong check of our calculations. We have also verified that the virtual corrections obtained from MadLoop, rewritten in the new basis $R$, are consistently $0$ for the base vector corresponding to a representation whose dimension is zero for $N_c=3$, which is another important consistency check of our work. With this, the contribution of the soft-collinear-subtracted soft function in Mellin space reads \begin{align} \mathbf{\bar{S}}_{ij\to t\bar{t}t\bar{t}, R} (N) &= \\ &\hspace{-0.2cm} \mathbf{\bar{\widetilde{S}}}_{ij\to t\bar{t}t\bar{t}, R}\, \exp\left[\frac{{\rm Re}[\mathbf{\bar{\Gamma}}_{ij\to t\bar{t}t\bar{t}, R}^{(1)}]}{b_0 \pi}\ln\left(1-2\lambda\right)\right], \nonumber \end{align} where $\mathbf{\bar{\Gamma}}_{ij\to t\bar{t}t\bar{t}, R}^{(1)}$ is related to $\mathbf{\Gamma}_{ij\to t\bar{t}t\bar{t}, R}^{(1)}$ after subtracting the soft-collinear contributions~\cite{Kidonakis:1998nf}. Note that the hard function in Eq.~\eqref{eq:resummationdef} also needs to be written in terms of the colour tensor basis $R$, requiring us to transform from the trace-basis used in aMC@NLO to our new basis. The last step to calculate a physical cross section in momentum space involves taking the inverse Mellin transform of the $N$-space expression \begin{align} \label{Eq:match} \sigma^{\rm f.o. +res}_{t\bar{t}t\bar{t}}(\rho) & = \sigma^{\rm f.o.}_{t\bar{t}t\bar{t}}(\rho) +\\ & \sum_{ij} \int_{\mathcal{C}} \frac{{\rm d}N}{2\pi i}\rho^{-N} f_i(N+1,\mu_F^2) f_j(N+1,\mu_F^2) \nonumber\\ & \times \left[ \hat{\sigma}_{ij\to t\bar{t}t\bar{t}}^{\rm res}(N) - \hat{\sigma}_{ij\to t\bar{t}t\bar{t}}^{\rm res}(N) \Big|_{\mathcal{O}(\alpha_s^n) } \right]\,, \nonumber \end{align} where `res' denotes LL, NLL or NLL$^\prime$. To retain the full available information from the perturbative calculation, we match the resummed result to the fixed-order cross section $ \sigma^{\rm f.o.}$, leading to the `f.o.~+res' accuracy. To avoid double-counting, the resummed result is expanded up to $\mathcal{O}(\alpha_s^n)$ (denoted as $\hat{\sigma}_{ij\to t\bar{t}t\bar{t}}^{\rm res}(N) \big|_{\mathcal{O}(\alpha_s^n)}$), with $n =4$ for f.o.=LO or $n=5$ for f.o.=NLO. The inverse Mellin transform in Eq.~(\ref{Eq:match}) relies on the so-called Minimal Prescription~\cite{Catani:1996yz} and is evaluated numerically on a contour $\mathcal{C}$ parameterised by $C_{\rm MP}$ and $\phi_{\rm MP}$ as \begin{align} N = C_{\rm MP} + y {\rm e}^{i\phi_{\rm MP}}\,, \end{align} with $y \in [0,\infty)$. We calculated results for various values $C_{\rm MP}$ and $\phi_{\rm MP}$ to verify the independence of the result on the choice of the contour. \section{Numerical results} \label{sec:res} The phenomenological studies reported in this letter are performed using the central member of the LUXqed\_plus\_PDF4LHC15\_nnlo\_100 PDF set~\cite{Manohar:2016nzj, Manohar:2017eqh} for both the pure QCD results and the QCD + EW results. This PDF set is based on the PDF4LHC15 PDF set~\cite{Butterworth:2015oua, NNPDF:2014otw, Harland-Lang:2014zoa, Dulat:2015mca} and includes the photon content of the proton, needed for the calculation of the EW corrections. We use the $\alpha_s$ value corresponding to the PDF set, take the mass of the top quark $m_t = 172.5$~GeV (unless stated otherwise) and choose the central factorisation and renormalisation scale $\mu_{F,0} = \mu_{R,0} = 2m_t$. The theoretical uncertainty is estimated by varying $\mu_R$ and $\mu_F$ using a $7$-point scale variation. To this end, we consider the minimal and maximum cross section values calculated for \begin{align} \label{eq:7-point-variations} \left( \frac{\mu_R} { \mu_{R, 0}}, \frac{\mu_F}{\mu_{F,0}} \right)_{7-{\rm point}} \in& \{(0.5,0.5),(0.5,1),(1,0.5), \nonumber \\ & (1,1),(1,2),(2,1),(2,2)\}\,. \end{align} The fixed-order results are obtained using aMC@NLO~\cite{Frederix:2018nkq,Alwall:2014hca}. Since our calculation concerns pure QCD corrections, we present the LO and NLO QCD results for comparison. However, our final resummation-improved cross section incorporates the NLO(QCD+EW) result, where the electroweak corrections are included up to $\mathcal{O}(\alpha^2)$~\cite{Frederix:2017wme}.~\footnote{In the notation of Ref.~\cite{Frederix:2017wme}, we include up to (N)LO$_{3}$.} We show our results for $\bar{N}$ resummation, but did confirm that those for $N$-resummation show qualitatively the same behaviour. We defer a detailed discussion of the subtle differences between $N$ and $\bar{N}$ resummation to an upcoming publication~\cite{4top:upcomingpaper}. In Fig.~\ref{fig:summary} we show the scale dependence of various fixed-order and matched resummed results for $\sigma_{t\bar{t}t\bar{t}}$ under the assumption $\mu_R = \mu_F$. While the NLL corrections only moderately improve the scale dependence of the NLO QCD cross section, the scale sensitivity of the NLO+NLL$^\prime$ result is dramatically reduced. NLL$^\prime$ contributions increase the $\sigma_{t\bar{t}t\bar{t}}$ predictions by 16\% w.r.t.~the pure NLO QCD result, and by 15\% w.r.t.~the complete NLO (QCD+EW) result, see the reported ${\rm K_{NLL^{\prime}}}$ factors in Table~\ref{tab:summary}. These corrections are more than twice the size of the previously calculated complete EW effects at NLO. \begin{figure}[t] \centering \includegraphics[page=9, width=0.45\textwidth]{plot-summary.pdf} \caption{Scale dependence of the pure QCD LO (gray dashed), NLO (gray solid), LO+LL (purple dashed), NLO+NLL (light-blue dash-dotted), NLO+NLL$^\prime$ (blue solid) and NLO(QCD+EW)+NLL$^\prime$ (dark-blue solid) cross sections at $\sqrt{s} = 13$~TeV, obtained by varying $\mu =\mu_R = \mu_F $ with a factor of $2$ around the central scale of $\mu_0 = 2m_t$.} \label{fig:summary} \end{figure} \begin{figure}[t] \centering \includegraphics[page=7, width=0.45\textwidth]{plot-summary.pdf} \caption{Predictions for the total $pp\rightarrow t\bar{t}t\bar{t}$ cross section at $\sqrt{s}=13$ TeV for fixed-order calculations and resummation-improved results, obtained using the $7$-point scale variation as indicated in Eq.~\eqref{eq:7-point-variations}. } \label{fig:summary2} \end{figure} \begin{table*}[th] \renewcommand{\arraystretch}{1.5} \centering \begin{ruledtabular} \begin{tabular}{ccccc} $\sqrt{s}$ (TeV) & $\sigma_{t\bar{t}t\bar{t}}^{\rm NLO}$ (fb) & $\sigma_{t\bar{t}t\bar{t}}^{\rm NLO+NLL}$ (fb) & $\sigma_{t\bar{t}t\bar{t}}^{\rm NLO+NLL^{\prime}}$ (fb) & $ {\rm K_{NLL^{\prime}}}$ \\ \hline 13 & $11.00(2)_{-24.5\%}^{+25.2\%}$~fb& $11.46(2)_{-17.7\%}^{+ 21.3\%}$~fb& $12.73(2)_{-11.8\%}^{+4.1\%}$ ~fb& $1.16$\\ 13.6 & $13.14(2)_{-24.4\%}^{+25.1\%}$~fb& $13.81(2)_{-20.1\%}^{+ 20.7\%}$~fb& $15.16(2)_{-11.9\%}^{+2.5\%}$ ~fb& $1.15$\\ \hline \hline $\sqrt{s}$ (TeV) & $\sigma_{t\bar{t}t\bar{t}}^{\rm NLO(QCD+EW)}$ (fb) & $\sigma_{t\bar{t}t\bar{t}}^{\rm NLO(QCD+EW)+NLL}$ (fb) & $\sigma_{t\bar{t}t\bar{t}}^{\rm NLO(QCD+EW)+NLL^{\prime}}$ (fb) & ${\rm K_{NLL^{\prime}} }$ \\ \hline 13 & $11.64(2)^{+23.2\%}_{-22.8\%}$~fb & $12.10(2)^{+19.5\%}_{-16.3\%}$~fb & $13.37(2)^{+3.6\%}_{-11.4\%}$~fb & $1.15$\\ 13.6 & $13.80(2)^{+22.6\%}_{-22.9\%}$~fb & $14.47(2)^{+18.5\%}_{-19.1\%}$~fb & $15.82(2)^{+1.5\%}_{-11.6\%}$~fb & $1.15$\\ \end{tabular} \end{ruledtabular} \caption{\label{tab:summary} Fixed and resummed-and-matched total cross sections in fb for $pp\to t\bar{t}t\bar{t}$ with $\sqrt{s} = 13$~TeV and $\sqrt{s} = 13.6$~TeV, the central scale value of $\mu_0 = 2m_t$ and $m_t = 172.5$~GeV. The number in parenthesis indicates the statistical uncertainty on the last digit whereas the percentage error indicates the $7$-point scale uncertainty, obtained using the variations indicated in Eq.~\eqref{eq:7-point-variations}. The ${\rm K_{NLL^{\prime}}}$ factor is the ratio of the resummation-improved cross section at NLO+NLL$^\prime$ to the NLO cross section.} \end{table*} Next we examine the reduction of the theoretical error of the resummation-improved cross section using the 7-point method. In Table~\ref{tab:summary} we quote the central values of the NLO, NLO(QCD+EW), NLO+NLL$^\prime$ and NLO(QCD+EW)+NLL$^\prime$ cross sections together with the corresponding error due to scale variation. This information is graphically represented in Fig.~\ref{fig:summary2}. We see that the 7-point method scale error gets smaller with increasing accuracy of the calculations. Remarkably, the scale error of the NLO+NLL$^\prime$ predictions is reduced compared to NLO predictions by more than a factor of 2. Including the PDF uncertainty of $\pm 6.9\%$, our state-of-the-art predictions for $\sqrt s=13$ TeV and $m_t = 172.5$~GeV read \small \begin{align} \sigma^{\rm NLO(QCD+EW) + NLL'}_{t\bar{t}t\bar{t}} =13.37(2) \,^{+0.48}_{-1.52} {\rm (scale)}\pm 0.92 {\rm (pdf)}\ {\rm fb}, \nonumber \end{align} \normalsize or, adding the two theoretical errors in quadrature \begin{align} \sigma^{\rm NLO(QCD+EW) + NLL'}_{t\bar{t}t\bar{t}} =13.37(2) \,^{+1.04}_{-1.78}\ {\rm fb}. \end{align} In Table~\ref{tab:summary} we also report the obtained cross section for the LHC CM energy of $13.6$~TeV. Including the scale uncertainty of $\pm 6.7\%$ we obtain \small \begin{align} \sigma^{\rm NLO(QCD+EW) + NLL'}_{t\bar{t}t\bar{t}} & = 15.82(2) \,^{+0.24}_{-1.83} {\rm (scale)}\pm 1.06 {\rm (pdf)}\ {\rm fb} \nonumber \\ & = 15.82(2)\,^{+1.09}_{-2.11}\ {\rm fb}, \nonumber \end{align} \normalsize which is an increase of $18.3\%$ w.r.t.~the obtained cross section for $\sqrt{s} = 13$~TeV. We have also studied the effect of varying the value of the top mass in the window of $[170-175]$~GeV. The resulting predictions are shown in Fig.~\ref{fig:mt-variation} for $\sqrt{s} = 13$~TeV. We observe that the correction stemming from soft-gluon resummation is flat under variation of the top quark mass. \begin{figure}[t] \centering \includegraphics[page=1, width=0.45\textwidth]{plot-summary-mt.pdf} \caption{Cross section for the $pp\to t\bar{t}t\bar{t}$ process with $\sqrt{s} = 13$~TeV for different values of $m_t$. Shown are the LO, NLO and NLO+NLL' predictions (QCD + EW). The bands indicates the scale uncertainty calculated using the $7$-point method, where the central scale is taken to be $\mu_0 = 2m_t$.} \label{fig:mt-variation} \end{figure} \section{Conclusion} \label{sec:conclusions} In this letter, we have obtained predictions for the total cross section of the four top production process at NLO+NLL$^\prime$ accuracy, including electroweak corrections for the fixed-order prediction. This is the first time that the framework of threshold resummation has been applied to a $2\to 4$ process containing six coloured partons at leading order. We present our results both at a collider energy of $13$ and $13.6$ TeV, and vary the top mass in the window of $170-175$ GeV. Setting $m_t = 172.5$~GeV and $\sqrt{s} = 13.6$~TeV, we find the total cross section $\sigma^{\rm NLO(QCD + EW)+ NLL^\prime}_{t\bar{t}t\bar{t}} = 15.8^{+1.5\%}_{-11.6\%}$~fb, where the indicated error is estimated using the $7$-point scale uncertainty. When compared to the NLO(QCD+EW)-only prediction, $\sigma^{\rm NLO(QCD+EW)}_{t\bar{t}t\bar{t}} = 13.8^{+22.6\%}_{-22.9\%}$~fb, we find that the central value is increased with a $K$-factor of $1.15$. The uncertainty stemming from scale variation is reduced by more than a factor of two. Including the PDF error in quadrature we reduce the total theoretical uncertainty from $(+23.6\%,-23.9\%)$ at NLO(QCD+EW) to $(+6.8\%,-13.4\%)$ at NLO(QCD + EW)+ NLL$^\prime$, which lies comfortably below the current experimental uncertainty. These predictions will play an important role in stress-testing the SM, especially in view of the latest experimental results obtained for $t\bar{t}t\bar{t}$ production. \section*{Acknowlegdements} We are grateful to Marco Zaro and Davide Pagani for their help in extracting the NLO electroweak corrections from aMC@NLO. This work has been supported in part by the DFG grant KU 3103/2. MvB acknowledges support from a Royal Society Research Professorship (RP/R1/180112) and from the Science and Technology Facilities Council under grant ST/T000864/1, while LMV acknowledges support from the DFG Research Training Group “GRK 2149: Strong and Weak Interactions - from Hadrons to Dark Matter". AK gratefully acknowledges the support and the hospitality of the CERN Theoretical Physics Department.
\section{Introduction and Notation} Throughout this paper, we let ${\mathbb N}^*$ denote the set of positive integers and $\mathscr{P}$ the set of prime numbers. We let $\card \mathscr{A}$ denote the cardinal of a given finite set $\mathscr{A}$. For a given prime number $p$, we let $\vartheta_p$ denote the usual $p$-adic valuation. For $x \in {\mathbb R}$, we let $\lfloor x\rfloor$ denote the integer-part of $x$. For $N , b \in {\mathbb N}$, with $b \geq 2$, the expansion of $N$ in base $b$ is denoted by $N = \overline{a_k a_{k - 1} \dots a_1 a_0}_{(b)}$, meaning that $N = a_0 + b a_1 + b^2 a_2 + \dots + b^k a_k$ (with $k \in {\mathbb N}$, $a_0 , a_1 , \dots , a_k \in \{0 , 1 , \dots , b - 1\}$ and $a_k \neq 0$). In such a context, we let $S_b(N)$ denote the sum of base-$b$ digits of $N$, that is $S_b(N) := a_0 + a_1 + \dots + a_k$. Further, we let $\pi$ and $\theta$ respectively denote the prime-counting function and the Chebyshev theta function, defined by: $$ \pi(x) := \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq x \end{subarray}} 1 ~~~~\text{and}~~~~ \theta(x) := \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq x \end{subarray}} \log{p} ~~~~~~~~~~ (\forall x \in {\mathbb R}^+) . $$ The prime number theorem states that $\pi(x) \sim_{+ \infty} \frac{x}{\log{x}}$. Other equivalent statements are: $\theta(x) \sim_{+ \infty} x$ and $\log\mathrm{lcm}(1 , 2 , \dots , n) \sim_{+ \infty} n$ (see e.g., \cite[Chapter 4]{kon}). The weaker estimates $\pi(x) = O\left(\frac{x}{\log{x}}\right)$, $\theta(x) = O(x)$ and $\log\mathrm{lcm}(1 , 2 , \dots , n) = O(n)$ are called Chebyshev's estimates. In section \ref{sec4}, we use extensively Landau's big $O$ notation which we sometimes specify as follows: if $f$ and $g$ are two real functions, with $g > 0$, defined on some interval $I$ of ${\mathbb R}$, and depending on a parameter $t$, then we write $f = O_{\perp t}(g)$ if there exists a positive constant $M$, \underline{not depending on $t$}, such that $\vert f(x)\vert \leq M g(x)$ ($\forall x \in I$). In number theory, it is common that a prime factorisation of some special numbers $N$ makes appear, as exponents of each prime $p$, expressions of the form $\lfloor\frac{u_N}{f(p)}\rfloor$ or a sum of such expressions. The most famous example is perhaps the Legendre formula stating that for any natural number $n$, we have \begin{equation}\label{eq24} n! = \prod_{p \text{ prime}} p^{\lfloor\frac{n}{p}\rfloor + \lfloor\frac{n}{p^2}\rfloor + \lfloor\frac{n}{p^3}\rfloor + \dots} , \end{equation} which may be also reformulate in terms of base expansions as follows: \begin{equation}\label{eq25} n! = \prod_{p \text{ prime}} p^{\frac{n - S_p(n)}{p - 1}} . \end{equation} (See e.g., \cite[pages 76-77]{moll}). Another famous example is the formula of the least common multiple of the first consecutive positive integers: \begin{equation}\label{eq26} \mathrm{lcm}(1 , 2 , \dots , n) = \prod_{p \text{ prime}} p^{\lfloor\frac{\log{n}}{\log{p}}\rfloor} ~~~~~~~~~~ (\forall n \in {\mathbb N}) . \end{equation} Among other examples which are less known, we can cite the following \begin{equation}\label{eq27} \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ k \in {\mathbb N} , i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , i_1 + i_2 + \dots + i_k \leq n\Big\} = \prod_{p \text{ prime}} p^{\lfloor\frac{n}{p}\rfloor} , \end{equation} which is pointed out in the book of Cahen and Chabert \cite[page 246]{cah} and also by Farhi \cite{far} in the context of the integer-valued polynomials. Basing on the remark that in both Formulas \eqref{eq24}, \eqref{eq26} and \eqref{eq27}, the right-hand side (which is a product taken over the primes) is interpreted without any reference to prime numbers, we may naturally ask if an expression of a general type $\prod_{p \text{ prime}} p^{\lfloor\frac{x}{f_1(p)}\rfloor + \lfloor\frac{x}{f_2(p)}\rfloor + \dots}$ (where $x \in {\mathbb R}^+$ and ${(f_i)}_i$ is a sequence of positive functions, satisfying some regularity conditions) possess the same property; that is, it has an interpretation without reference to the primes. In this paper, we study only the case of the products $$ \pi_f(x) := \prod_{p \text{ prime}} p^{\left\lfloor\frac{x}{f(p)}\right\rfloor} , $$ for which we answer affirmatively to the previous question (under some hypothesis on $f$). After giving several applications of our result, we focus our study on the two particular cases $f(p) = p$ and $f(p) = p - 1$. Because in both cases, there is no loss of generality to take $x$ an integer, we are led to define for any $n \in {\mathbb N}$: $$ \rho_n := \prod_{p \text{ prime}} p^{\left\lfloor\frac{n}{p}\right\rfloor} ~~~~\text{and}~~~~ \sigma_n := \prod_{p \text{ prime}} p^{\left\lfloor\frac{n}{p - 1}\right\rfloor} . $$ We begin with the arithmetic study of $\rho_n$ and $\sigma_n$ by establishing several arithmetic properties concerning them; especially, we obtain for $\sigma_n$ a nontrivial divisor and a nontivial multiple. Moreover, we determine the $p$-adic valuations of the integers $\frac{\sigma_n}{n!}$ when the prime $p$ is large enough compared to $\sqrt{n}$; we discover that the prime numbers of the form $\lfloor\frac{n}{k} + 1\rfloor$ ($k \in {\mathbb N}^*$, $k < \sqrt{n + 1} + 1$) play a vital role in the arithmetic nature of the $\sigma_n$'s (this phenomenon will pop up again when studying analytically the $\sigma_n$'s). In another direction, we find asymptotic estimates for $\log{\rho_n}$ and $\log{\sigma_n}$. However, due to the difficulties encountered in counting the prime numbers of the form $\lfloor\frac{n}{k} + 1\rfloor$ ($1 \leq k \leq \sqrt{n}$), the optimal estimate of $\log{\sigma_n}$ is only given conjecturally by leaning on heuristic reasoning. We finally conclude the paper by pointing out the connection between our arithmetic and analytic studies concerning the numbers $\sigma_n$. \section{An expression of $\pi_f$ using the $\mathrm{lcm}$'s} Our stronger result of expressing $\pi_f$ in terms of the $\mathrm{lcm}$'s (without any reference to prime numbers) is the following: \begin{thm}\label{t1} Let $f : {\mathbb N}^* \rightarrow {\mathbb R}_+$ be an arithmetic function such that $f({\mathbb N}^* \setminus \{1\}) \subset {\mathbb R}^*_+$ (i.e., $f$ does not vanish except at $1$ eventually). Consider the set ${\mathbb N}^* \setminus \{1\}$ equipped with the partial order relation ``divide'' and the set ${\mathbb R}^*_+$ equipped with the usual total order relation ``$\leq$'', and suppose that the map: $$ \begin{array}{rcl} \widetilde{f} :~ {\mathbb N}^* \setminus \{1\} & \longrightarrow & {\mathbb R}^*_+ \\[2mm] n & \longmapsto & \dfrac{f(n)}{\log{n}} \end{array} $$ is nondecreasing with respect to these two orders. Then, we have for any $x \in {\mathbb R}^+$: \begin{multline*} \prod_{p \text{ prime}} p^{\left\lfloor\frac{x}{f(p)}\right\rfloor} = \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ k \in {\mathbb N} , i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , \\[-5mm] f(i_1) + f(i_2) + \dots + f(i_k) \leq x\Big\} . \end{multline*} \end{thm} In order to present a clean proof of Theorem \ref{t1}, we go through the following lemma: \begin{lemma}\label{l1} Let $f : {\mathbb N}^* \rightarrow {\mathbb R}_+$ as in Theorem \ref{t1}. Then, for any prime number $p$ and any positive integer $a$, we have $$ \vartheta_p(a) \leq \frac{f(a)}{f(p)} . $$ \end{lemma} \begin{proof} Let $p$ be a prime number and $a$ be a positive integer. Since the inequality of the lemma is trivial when $\vartheta_p(a) = 0$, we may suppose that $\vartheta_p(a) \geq 1$; that is $p \mid a$. Setting $\alpha = \vartheta_p(a)$, we can write $a = b p^{\alpha}$ for some $b \in {\mathbb N}^*$ with $p \nmid b$. Thus \begin{equation}\label{eq1} \alpha \leq \alpha + \frac{\log{b}}{\log{p}} = \frac{\log(b p^{\alpha})}{\log{p}} = \frac{\log{a}}{\log{p}} . \end{equation} Next, the fact that $p \mid a$ implies (according to our assumptions on $f$) that: $$ \frac{f(p)}{\log{p}} \leq \frac{f(a)}{\log{a}} ; $$ that is \begin{equation}\label{eq2} \frac{\log{a}}{\log{p}} \leq \frac{f(a)}{f(p)} . \end{equation} Combining \eqref{eq1} and \eqref{eq2}, we get $$ \alpha \leq \frac{f(a)}{f(p)} , $$ as required. This completes the proof of the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{t1}] Let $x \in {\mathbb R}_+$ be fixed. For a given prime number $p$, the $p$-adic valuation of the left-hand side of the identity of the theorem is equal to $\lfloor\frac{x}{f(p)}\rfloor$, while the $p$-adic valuation of the right-hand side of the same identity is equal to $\ell_p := \max\{\vartheta_p(i_1 i_2 \dots i_k) ; k \in {\mathbb N} , i_1 , \dots , i_k \in {\mathbb N}^* , f(i_1) + \dots + f(i_k) \leq x\}$. So, we have to show that $\ell_p = \lfloor\frac{x}{f(p)}\rfloor$ (for any prime number $p$). To do so, we are going to prove the two inequalities $\ell_p \geq \lfloor\frac{x}{f(p)}\rfloor$ and $\ell_p \leq \lfloor\frac{x}{f(p)}\rfloor$ (where $p$ is a given prime number). First, for a given prime number $p$, let us show that $\ell_p \geq \lfloor\frac{x}{f(p)}\rfloor$. By considering the particular natural number: $$ k = \left\lfloor\frac{x}{f(p)}\right\rfloor $$ and the particular positive integers: $$ i_1 = i_2 = \dots = i_k = p , $$ we get $$ f(i_1) + f(i_2) + \dots + f(i_k) = k f(p) = \left\lfloor\frac{x}{f(p)}\right\rfloor f(p) \leq x . $$ Thus (according to the definition of $\ell_p$): $$ \ell_p \geq \vartheta_p\left(i_1 i_2 \cdots i_k\right) = \vartheta_p\left(p^k\right) = k = \left\lfloor\frac{x}{f(p)}\right\rfloor , $$ as required. Now, for a given prime number $p$, let us show that $\ell_p \leq \lfloor\frac{x}{f(p)}\rfloor$. For any $k \in {\mathbb N}$ and any $i_1 , i_2 , \dots , i_k \in {\mathbb N}^*$, with $f(i_1) + f(i_2) + \dots + f(i_k) \leq x$, we have \begin{align*} \vartheta_p\left(i_1 i_2 \cdots i_k\right) & = \vartheta_p(i_1) + \vartheta_p(i_2) + \dots + \vartheta_p(i_k) \\ & \leq \frac{f(i_1)}{f(p)} + \frac{f(i_2)}{f(p)} + \dots + \frac{f(i_k)}{f(p)} ~~~~~~~~~~ (\text{according to Lemma \ref{l1}}) \\ & = \frac{f(i_1) + f(i_2) + \dots + f(i_k)}{f(p)} \\ & \leq \frac{x}{f(p)} ; \end{align*} but since $\vartheta_p(i_1 i_2 \cdots i_k) \in {\mathbb N}$, it follows that: $$ \vartheta_p\left(i_1 i_2 \cdots i_k\right) \leq \left\lfloor\frac{x}{f(p)}\right\rfloor . $$ The definition of $\ell_p$ concludes that $$ \ell_p \leq \left\lfloor\frac{x}{f(p)}\right\rfloor , $$ as required. This completes the proof. \end{proof} \begin{rmks}\label{rmks1} Let us put ourselves in the situation of Theorem \ref{t1}. \begin{enumerate} \item If the map $\widetilde{f}$ is nondecreasing in the usual sense (i.e., with respect to the usual orders of the two sets ${\mathbb N}^* \setminus \{1\}$ and ${\mathbb R}_+^*$) then it remains nondecresing in the sense imposed by Theorem \ref{t1} (this immediately follows from the implication: $a \mid b \Rightarrow a \leq b$, $\forall a , b \in {\mathbb N}^*$). \item More generally than the previous item, if the restriction of the map $\widetilde{f}$ on ${\mathbb N}^* \setminus \{1 , 2\}$ is nondecreasing in the usual sense and $\widetilde{f}(2) \leq \widetilde{f}(4)$ then $\widetilde{f}$ is nondecreasing in the sense imposed by Theorem \ref{t1}. \end{enumerate} \end{rmks} Now, from Theorem \ref{t1}, we derive the following corollary in which the condition imposed on $f$ is made simpler. \begin{coll}\label{coll1} Let $f : {\mathbb N}^* \rightarrow {\mathbb R}_+$ be an arithmetic function satisfying $f({\mathbb N}^* \setminus \{1\}) \subset {\mathbb R}_+^*$. Suppose that the map $$ \begin{array}{rcl} {\mathbb N}^* \setminus \{1\} & \longrightarrow & {\mathbb R}_+^* \\[2mm] n & \longmapsto & \dfrac{f(n)}{n} \end{array} $$ is nondecreasing in the usual sense (i.e., with respect to the usual order of ${\mathbb R}$). Then we have for any $x \in {\mathbb R}_+$: \begin{multline*} \prod_{p \text{ prime}} p^{\left\lfloor\frac{x}{f(p)}\right\rfloor} = \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ k \in {\mathbb N} , i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , \\[-5mm] f(i_1) + f(i_2) + \dots + f(i_k) \leq x\Big\} . \end{multline*} \end{coll} \begin{proof} We use Theorem \ref{t1} together with Item 2 of Remarks \ref{rmks1}. We remark that $\widetilde{f}$ (defined as in Theorem \ref{t1}) is the product of the two functions: $n \mapsto \frac{f(n)}{n}$ (supposed nondecreasing in the usual sense on ${\mathbb N}^* \setminus \{1\}$) and $n \mapsto \frac{n}{\log{n}}$ (which is nondecreasing on ${\mathbb N}^* \setminus \{1 , 2\} = \{3 , 4 , 5 , \dots\}$, as a simple study of function shows). So, $\widetilde{f}$ is nondecreasing on ${\mathbb N}^* \setminus \{1 , 2\}$ in the usual sense. In addition, we have $$ \widetilde{f}(2) = \frac{f(2)}{\log{2}} = \frac{f(2)}{2} \cdot \frac{2}{\log{2}} = \frac{f(2)}{2} \cdot \frac{4}{\log{4}} \leq \frac{f(4)}{4} \cdot \frac{4}{\log{4}} $$ (since $n \mapsto \frac{f(n)}{n}$ is supposed nondecreasing in the usual sense on ${\mathbb N}^* \setminus \{1\}$). That is $$ \widetilde{f}(2) \leq \frac{f(4)}{\log{4}} = \widetilde{f}(4) . $$ The conclusion follows from Item 2 of Remarks \ref{rmks1} and Theorem \ref{t1}. \end{proof} \subsection*{Some applications:} {\large\bf 1.} By applying Theorem \ref{t1} for $f(m) = \log{m}$ (which cleary satisfies the required conditions), we obtain that for any $x \in {\mathbb R}_+$, we have \begin{align*} \prod_{p \text{ prime}} p^{\left\lfloor\frac{x}{\log{p}}\right\rfloor} & = \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ k \in {\mathbb N} , i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , \\[-5mm] & \hspace*{6cm} \log{i_1} + \log{i_2} + \dots + \log{i_k} \leq x\Big\} \\ & = \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ k \in {\mathbb N} , i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , i_1 i_2 \cdots i_k \leq e^x\Big\} \\ & = \mathrm{lcm}\Big\{1 , 2 , \dots , \left\lfloor e^x\right\rfloor\Big\} . \end{align*} By taking in particular $x = \log{n}$ ($n \in {\mathbb N}^*$), we obtain the well-known formula: $$ \prod_{p \text{ prime}} p^{\left\lfloor\frac{\log{n}}{\log{p}}\right\rfloor} = \mathrm{lcm}\left\{1 , 2 , \dots , n\right\} ~~~~~~~~~~ (\forall n \in {\mathbb N}^*) . $$ \noindent{\large\bf 2.} By applying Corollary \ref{coll1} for the function $f(m) = m$ (which clearly satisfies the imposed conditions), we obtain in particular that for all $n \in {\mathbb N}$, we have \begin{multline}\label{eq3} \prod_{p \text{ prime}} p^{\left\lfloor\frac{n}{p}\right\rfloor} = \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ k \in {\mathbb N} , i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , \\[-5mm] i_1 + i_2 + \dots + i_k \leq n\Big\} , \end{multline} which is already pointed out by Cahen and Chabert \cite{cah} and by Farhi \cite{far}. \medskip \noindent{\large\bf 3.} (Generalization of \eqref{eq3}). Let $\alpha \geq 1$. By applying Corollary \ref{coll1} for the function $f(m) = m^{\alpha}$ (which clearly satisfies the imposed conditions), we obtain in particular that for all $n \in {\mathbb N}$, we have \begin{multline*}\label{eq4} \prod_{p \text{ prime}} p^{\left\lfloor\frac{n}{p^{\alpha}}\right\rfloor} = \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ k \in {\mathbb N} , i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , \\[-5mm] i_1^{\alpha} + i_2^{\alpha} + \dots + i_k^{\alpha} \leq n\Big\} . \end{multline*} \noindent{\large\bf 4.} For all $n , k \in {\mathbb N}$, with $n \geq k$, let us define (as in \cite{far}): $$ q_{n , k} := \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , i_1 + i_2 + \dots + i_k \leq n\Big\} . $$ Note that these numbers have been already encountered and studied by Farhi \cite{far} in a context relating to the integer-valued polynomials. By applying Corollary \ref{coll1} for the function $f(m) = m - 1$ (which clearly satisfies the imposed conditions), we obtain that for all $n \in {\mathbb N}$, we have \begin{eqnarray} \prod_{p \text{ prime}} p^{\left\lfloor\frac{n}{p - 1}\right\rfloor} & = & \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ k \in {\mathbb N} , i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , \notag \\[-5mm] & ~ & \hspace*{3.5cm} (i_1 - 1) + (i_2 - 1) + \dots + (i_k - 1) \leq n\Big\} \notag \\ & \hspace*{-2.4cm} = & \hspace*{-1.2cm} \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ k \in {\mathbb N} , i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , i_1 + i_2 + \dots + i_k \leq n + k\Big\} \notag \\ & \hspace*{-2.4cm} = & \hspace*{-1.2cm} \mathrm{lcm}\Big\{q_{n + k , k} ~;~ k \in {\mathbb N} \Big\} , \label{eq5} \end{eqnarray} which remarkably represents the least common multiple of the $n$\up{th} diagonal of the arithmetic triangle of the $q_{i , j}$'s, beginning as follows (see \cite{far}): \begin{table}[!h] $$ \begin{array}{llllllll} 1 & ~ & ~ & ~ & ~ & ~ & ~ & \\ 1 & 1 & ~ & ~ & ~ & ~ & ~ & \\ 1 & 2 & 1 & ~ & ~ & ~ & ~ & \\ 1 & 6 & 2 & 1 & ~ & ~ & ~ & \\ 1 & 12 & 12 & 2 & 1 & ~ & ~ & \\ 1 & 60 & 12 & 12 & 2 & 1 & ~ & \\ 1 & 60 & 360 & 24 & 12 & 2 & 1 & \\ 1 & 420 & 360 & 360 & 24 & 12 & 2 & 1 \end{array} $$ \caption{The triangle of the $q_{n , k}$'s for $0 \leq k \leq n \leq 7$} \end{table} For a given $n \in {\mathbb N}$, let $D_n = {(d_{n , k})}_{k \in {\mathbb N}}$ denote the sequence of the $n$\up{th} diagonal of the above triangle, that is \begin{eqnarray} d_{n , k} & := & q_{n + k , k} \notag \\ & = & \mathrm{lcm}\Big\{i_1 i_2 \cdots i_k ~;~ i_1 , i_2 , \dots , i_k \in {\mathbb N}^* , i_1 + i_2 + \dots + i_k \leq n + k\Big\} \label{eq6} \end{eqnarray} ($\forall k \in {\mathbb N}$). In order to simplify Formula \eqref{eq5}, we are going to show that the sequences $D_n$ ($n \in {\mathbb N}$) are all nondecreasing in the divisibility sense and eventually constant. Precisely, we have the following proposition: \begin{prop}\label{p1} For all $n , k \in {\mathbb N}$, we have $$ d_{n , k} ~~\text{divides}~~ d_{n , k + 1} . $$ If in addition $k \geq n$, then we have $$ d_{n , k} = d_{n , n} . $$ \end{prop} \begin{proof} Let $n , k \in {\mathbb N}$ be fixed and let $i_1 , i_2 , \dots , i_k \in {\mathbb N}^*$ such that $i_1 + i_2 + \dots + i_k \leq n + k$. By setting $i_{k + 1} = 1$, we have $i_1 + i_2 + \dots + i_k + i_{k + 1} \leq n + k + 1$; thus (by \eqref{eq6}) $d_{n , k + 1}$ is a multiple of $i_1 i_2 \cdots i_k i_{k + 1} = i_1 i_2 \cdots i_k$. Since this holds for any $i_1 , i_2 , \dots , i_k \in {\mathbb N}^*$ such that $i_1 + i_2 + \dots + i_k \leq n + k$, we derive that $d_{n , k + 1}$ is a multiple of $d_{n , k}$, as required. Now, let us prove the second part of the proposition. So, suppose that $k \geq n$ and let us prove that $d_{n , k} = d_{n , n}$. It follows from an immediate induction leaning on the result of the first part of the proposition (proved above) that $d_{n , n} \mid d_{n , k}$. So, it remains to prove that $d_{n , k} \mid d_{n , n}$. Let $i_1 , i_2 , \dots , i_k \in {\mathbb N}^*$ such that $i_1 + i_2 + \dots + i_k \leq n + k$. Let $\ell \in {\mathbb N}$ denote the number of indices $i_r$ ($1 \leq r \leq k$) which are equal to $1$; so we have exactly $(k - \ell)$ indices $i_r$ which are $\geq 2$. Thus we have $$ i_1 + i_2 + \dots + i_k \geq \ell + 2 (k - \ell) = 2 k - \ell . $$ But since $i_1 + i_2 + \dots + i_k \leq n + k$, we derive that $2 k - \ell \leq n + k$, which gives $\ell \geq k - n$. This proves that we have at least $(k - n)$ indices $i_r$ which are equal to $1$. By assuming (without loss of generality) that those indices are $i_{n + 1} , i_{n + 2} , \dots , i_k$ (i.e., $i_{n + 1} = i_{n + 2} = \dots = i_k = 1$), we get \begin{align*} i_1 i_2 \cdots i_n & = i_1 i_2 \cdots i_k \\[-5mm] \intertext{and} \\[-1cm] i_1 + i_2 + \dots + i_n & = \left(i_1 + i_2 + \dots + i_k\right) - (k - n) \\ & \leq (n + k) - (k - n) \\ & = 2 n . \end{align*} This shows that each product $i_1 i_2 \cdots i_k$ occurring in the definition of $d_{n , k}$ reduces (by permuting the $i_r$'s and eliminate those of them which are equal to $1$) to a product $j_1 j_2 \cdots j_n$ which occurs in the definition of $d_{n , n}$. Consequently $d_{n , k} \mid d_{n , n}$, as required. This completes the proof of the proposition. \end{proof} Using Proposition \ref{p1}, we have for any $n \in {\mathbb N}$: \begin{align*} \mathrm{lcm}\Big\{q_{n + k , k} ~;~ k \in {\mathbb N}\Big\} & = \mathrm{lcm}\Big\{d_{n , k} ~;~ k \in {\mathbb N}\Big\} \\ & = d_{n , n} \\ & \hspace*{-5mm} = \mathrm{lcm}\Big\{i_1 i_2 \cdots i_n ~;~ i_1 , i_2 , \dots , i_n \in {\mathbb N}^* , i_1 + i_2 + \dots + i_n \leq 2 n\Big\} . \end{align*} \noindent This concludes to the following interesting corollary, simplifying Formula \eqref{eq5}: \begin{coll}\label{coll2} For any $n \in {\mathbb N}$, we have \begin{equation} \prod_{p \text{ prime}} p^{\left\lfloor\frac{n}{p - 1}\right\rfloor} = \mathrm{lcm}\Big\{i_1 i_2 \cdots i_n ~;~ i_1 , i_2 , \dots , i_n \in {\mathbb N}^* , i_1 + i_2 + \dots + i_n \leq 2 n\Big\} . \tag*{$\square$} \end{equation} \end{coll} \section[Arithmetic results]{Arithmetic results on the numbers $\rho_n$ and $\sigma_n$}\label{sec3} A certain number of arithmetic properties concerning the numbers $\rho_n$ and $\sigma_n$ are either immediate or quite easy to prove. We have gathered them in the following proposition: \begin{prop}\label{p2} For any natural number $n$, we have \begin{enumerate} \item[{\rm (i)}] $\rho_n \mid \rho_{n + 1}$, $\sigma_n \mid \sigma_{n + 1}$, and $\rho_n \mid \sigma_n$; \item[{\rm (ii)}] $\rho_n \mid n!$; \item[{\rm (iii)}] $n! \mid \sigma_n$ and $\sigma_n \mid (2 n)!$; \item[{\rm (iv)}] $\sigma_{2 n + 1} = 2 \sigma_{2 n}$. \end{enumerate} \end{prop} \begin{proof} Let $n \in {\mathbb N}$ be fixed. The properties of Item (i) are trivial. The property of Item (ii) immediately follows from the Legendre formula providing the decomposition of $n!$ into a product of prime factors. For Item (iii), the fact that $n! \mid \sigma_n$ follows from the inequality: $$ \frac{n}{p - 1} = \frac{n}{p} + \frac{n}{p^2} + \frac{n}{p^3} + \dots \geq \left\lfloor\frac{n}{p}\right\rfloor + \left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots $$ together with the Legendre formula. Next, to prove that $\sigma_n \mid (2 n)!$, we use Corollary \ref{coll2}. For any $i_1 , i_2 , \dots , i_n \in {\mathbb N}^*$ satisfying $i_1 + i_2 + \dots + i_n \leq 2 n$, we have that $i_1 i_2 \cdots i_n \mid i_1! i_2! \cdots i_n! \mid (i_1 + i_2 + \dots + i_n)! \mid (2 n)!$. Thus $\mathrm{lcm}\{i_1 i_2 \cdots i_n ~;~ i_1 , i_2 , \dots , i_n$ $\in {\mathbb N}^* , i_1 + i_2 + \dots + i_n \leq 2 n\} \mid (2 n)!$; that is (according to Corollary \ref{coll2}): $\sigma_n \mid (2 n)!$. Let us finally prove Item (iv). First, we have $\vartheta_2(\sigma_{2 n + 1}) = \lfloor\frac{2 n + 1}{2 - 1}\rfloor = 2 n + 1$ and $\vartheta_2(2 \sigma_{2 n}) = 1 + \vartheta_2(\sigma_{2 n}) = 1 + \lfloor\frac{2 n}{2 - 1}\rfloor = 2 n + 1$; hence $\vartheta_2(\sigma_{2 n + 1}) = \vartheta_2(2 \sigma_{2 n})$. Next, for any odd prime $p$, since the odd number $(2 n + 1)$ cannot be a multiple of the even number $(p - 1)$ then we have $$ \left\lfloor\frac{2 n + 1}{p - 1}\right\rfloor = \left\lfloor\frac{2 n}{p - 1}\right\rfloor ; $$ that is $$ \vartheta_p(\sigma_{2 n + 1}) = \vartheta_p(\sigma_{2 n}) = \vartheta_p(2 \sigma_{2 n}) . $$ Consequently, we have $\vartheta_q(\sigma_{2 n + 1}) = \vartheta_q(2 \sigma_{2 n})$ (for any prime number $q$), concluding that $\sigma_{2 n + 1} = 2 \sigma_{2 n}$, as required. This completes the proof of the proposition. \end{proof} In the following proposition, we shall improve Item (iii) of Proposition \ref{p2}. It appears that this improvement is optimal (understanding that we use uniquely simple expressions). \begin{prop}\label{p3} For any natural number $n$, we have $$ (n + 1)! ~\mid~ \sigma_n ~~\text{and}~~ \sigma_n ~\mid~ n! \, \mathrm{lcm}(1 , 2 , \dots , n , n + 1) . $$ \end{prop} \begin{proof} Let $n \in {\mathbb N}$ be fixed. We have to show that for any prime $p$, we have \begin{equation}\label{eq7} \vartheta_p\left((n + 1)!\right) \leq \vartheta_p\left(\sigma_n\right) \leq \vartheta_p\left(n! \, \mathrm{lcm}(1 , 2 , \dots , n , n + 1)\right) . \end{equation} Let $p$ be a fixed prime number and let us prove \eqref{eq7}. By setting $e$ the largest prime number satisfying $p^e \leq n + 1$, we have that $\vartheta_p(n!) = \sum_{i = 1}^{e} \left\lfloor\frac{n}{p^i}\right\rfloor$, $\vartheta_p((n + 1)!) = \sum_{i = 1}^{e} \left\lfloor\frac{n + 1}{p^i}\right\rfloor$ (according to the Legendre formula), $\vartheta_p(\sigma_n) = \left\lfloor\frac{n}{p - 1}\right\rfloor$ (by definition of $\sigma_n$), and $\vartheta_p\left(\mathrm{lcm}(1 , 2 , \dots , n + 1)\right) = e$. So \eqref{eq7} reduces to \begin{equation}\label{eq8} \sum_{i = 1}^{e} \left\lfloor\frac{n + 1}{p^i}\right\rfloor \leq \left\lfloor\frac{n}{p - 1}\right\rfloor \leq \sum_{i = 1}^{e} \left\lfloor\frac{n}{p^i}\right\rfloor + e . \end{equation} On the one hand, we have $$ \sum_{i = 1}^{e} \left\lfloor\frac{n + 1}{p^i}\right\rfloor \leq \sum_{i = 1}^{e} \frac{n + 1}{p^i} = \frac{n + 1}{p - 1} \left(1 - \frac{1}{p^e}\right) \leq \frac{n}{p - 1} $$ (since $p^e \leq n + 1$). But since $\sum_{i = 1}^{e} \left\lfloor\frac{n + 1}{p^i}\right\rfloor$ is an integer, we derive that $$ \sum_{i = 1}^{e} \left\lfloor\frac{n + 1}{p^i}\right\rfloor \leq \left\lfloor\frac{n}{p - 1}\right\rfloor , $$ confirming the left inequality in \eqref{eq8}. On the other hand, by leaning on the refined inequality $\left\lfloor\frac{a}{b}\right\rfloor \geq \frac{a + 1}{b} - 1$, which holds for any positive integers $a , b$, we have \begin{align*} \left\lfloor\frac{n}{p - 1}\right\rfloor - \sum_{i = 1}^{e} \left\lfloor\frac{n}{p^i}\right\rfloor & \leq \frac{n}{p - 1} - \sum_{i = 1}^{e} \left(\frac{n + 1}{p^i} - 1\right) \\[1mm] & = \frac{n}{p - 1} - \frac{n + 1}{p - 1} \left(1 - \frac{1}{p^e}\right) + e \\[1mm] & = \frac{1}{p - 1} \left(\frac{n + 1}{p^e} - 1\right) + e . \end{align*} But from the definition of $e$, we have $p^{e + 1} > n + 1$, that is $\frac{n + 1}{p^e} < p$. By reporting this into the last estimate, we get $$ \left\lfloor\frac{n}{p - 1}\right\rfloor - \sum_{i = 1}^{e} \left\lfloor\frac{n}{p^i}\right\rfloor < e + 1 . $$ Next, since $\lfloor\frac{n}{p - 1}\rfloor - \sum_{i = 1}^{e} \lfloor\frac{n}{p^i}\rfloor \in{\mathbb Z}$, we conclude to $$ \left\lfloor\frac{n}{p - 1}\right\rfloor - \sum_{i = 1}^{e} \left\lfloor\frac{n}{p^i}\right\rfloor \leq e , $$ confirming the right inequality of \eqref{eq8}. This completes this proof. \end{proof} From Proposition \ref{p3}, we derive an asymptotic estimate for the number $\log{\sigma_n}$ when $n$ tends to infinity. We have the following \begin{coll}\label{coll3} We have $$ \log\sigma_n ~\sim_{+ \infty}~ n \log{n} . $$ \end{coll} \begin{proof} According to Proposition \ref{p3}, we have for any $n \in {\mathbb N}^*$: $$ \log{(n + 1)!} \leq \log{\sigma_n} \leq \log{(n!)} + \log\mathrm{lcm}(1 , 2 , \dots , n , n + 1) . $$ Then the asymptotic estimate of the corollary follows from the facts: $\log{(n + 1)!}$ $\sim_{+ \infty} \log(n!) \sim_{+ \infty} n \log{n}$ (according to Stirling's formula) and $\log\mathrm{lcm}(1 , 2 , \dots , n ,$ \linebreak $n + 1) \sim_{+ \infty} n$ (according to the prime number theorem). \end{proof} Note that the asymptotic estimate of the above corollary will be specified in §\ref{sec4}. We now turn to establish a result evaluating the $p$-adic valuations of the positive integers $\frac{\sigma_n}{n!}$ ($n \in {\mathbb N}^*$) for sufficiently large prime numbers. We discover as a remarkable phenomenon that primes of a special type play a vital role. We find again this phenomenon in §\ref{sec4} when estimating asymptotically $\log{\sigma_n}$. We have the following theorem: \begin{thm}\label{t2} Let $n$ be a positive integer and $p$ be a prime number such that: $$ \sqrt{n + 1} < p \leq n + 1 . $$ Then, we have $$ \vartheta_p\left(\frac{\sigma_n}{n!}\right) \in \{0 , 1\} . $$ Besides, the equality $\vartheta_p\left(\frac{\sigma_n}{n!}\right) = 1$ holds if and only if $p$ has the form $$ p = \left\lfloor\frac{n}{k} + 1\right\rfloor , $$ with $k \in {\mathbb N}^*$ and $k < \sqrt{n + 1} + 1$. \end{thm} \begin{proof} By the definition of $\sigma_n$ and the Legendre formula \eqref{eq25}, we have that \begin{align} \vartheta_p\left(\frac{\sigma_n}{n!}\right) & = \vartheta_p\left(\sigma_n\right) - \vartheta_p\left(n!\right) \notag \\[1mm] & = \left\lfloor\frac{n}{p - 1}\right\rfloor - \frac{n - S_p(n)}{p - 1} \notag \\[1mm] & = \left\lfloor\frac{n}{p - 1} - \frac{n - S_p(n)}{p - 1}\right\rfloor ~~~~~~~~~~ \left(\text{since } \frac{n - S_p(n)}{p - 1} = \vartheta_p(n!) \in {\mathbb Z}\right) \notag \\[1mm] & = \left\lfloor\frac{S_p(n)}{p - 1}\right\rfloor . \label{eq9} \end{align} The first part of the theorem is then equivalent to the fact $\left\lfloor\frac{S_p(n)}{p - 1}\right\rfloor \in \{0 , 1\}$. So, let us prove this last fact. The hypothesis on $p$ insures that $n < p^2 - 1$, which implies that the representation of the positive integer $n$ in base $p$ has the form $n = \overline{a_1 a_0}_{(p)}$, with $a_0 , a_1 \in \{0 , 1 , \dots , p - 1\}$ and $(a_0 , a_1) \neq (p - 1 , p - 1)$. Consequently, we have $S_p(n) = a_0 + a_1 < 2 (p - 1)$, implying that $\frac{S_p(n)}{p - 1} < 2$; hence $\left\lfloor\frac{S_p(n)}{p - 1}\right\rfloor \in \{0 , 1\}$, as required. This achieves the proof of the first part of the theorem. Now, let us prove the second part of the theorem. \\ \textbullet{} Suppose that $\vartheta_p\left(\frac{\sigma_n}{n!}\right) = 1$ and let us show the existence of $k \in {\mathbb N}^*$, with $k < \sqrt{n + 1} + 1$ such that $p = \left\lfloor\frac{n}{k} + 1\right\rfloor$. As seen above, the representation of $n$ in base $p$ has the form $n = \overline{a_1 a_0}_{(p)} = a_0 + p a_1$, where $a_0 , a_1 \in \{0 , 1 , \dots , p - 1\}$ and $(a_0 , a_1) \neq (p - 1 , p - 1)$. We will show that $k = a_1 + 1$ is suitable. According to \eqref{eq9}, we have $\vartheta_p\left(\frac{\sigma_n}{n!}\right) = \left\lfloor\frac{S_p(n)}{p - 1}\right\rfloor = \left\lfloor\frac{a_0 + a_1}{p - 1}\right\rfloor$. So the supposition $\vartheta_p\left(\frac{\sigma_n}{n!}\right) = 1$ implies that $\frac{a_0 + a_1}{p - 1} \geq 1$, that is $a_0 + a_1 \geq p - 1$. This last inequality together with $a_0 < p$ imply that $$ p - 1 \leq \frac{a_0 + a_1 p}{a_1 + 1} < p , $$ which is equivalent to $$ \left\lfloor\frac{n}{a_1 + 1}\right\rfloor = p - 1 . $$ Thus $$ p = \left\lfloor\frac{n}{a_1 + 1} + 1\right\rfloor . $$ Besides, we have $a_1 = \left\lfloor\frac{n}{p}\right\rfloor \leq \frac{n}{p} < \sqrt{n + 1}$ (since $p > \sqrt{n + 1} > \frac{n}{\sqrt{n + 1}}$). Thus $k = a_1 + 1$ satisfy the required properties (i.e., $p = \left\lfloor\frac{n}{k} + 1\right\rfloor$ and $k < \sqrt{n + 1} + 1$). \\ \textbullet{} Conversely, suppose that there exists $k \in {\mathbb N}^*$, with $k < \sqrt{n + 1} + 1$, such that $p = \left\lfloor\frac{n}{k} + 1\right\rfloor$, and let us show that $\vartheta_p\left(\frac{\sigma_n}{n!}\right) = 1$. Setting $a_0 := n - (k - 1) p$ and $a_1 := k - 1$, we first show that the representation of $n$ in base $p$ is $n = \overline{a_1 a_0}_{(p)}$. Since it is immediate that $n = a_0 + p a_1$, it just remains to prove that $a_0 , a_1 \in \{0 , 1 , \dots , p - 1\}$. Since $k < \sqrt{n + 1} + 1 < p + 1$ then $k - 1 < p$; that is $a_1 \in \{0 , 1 , \dots , p - 1\}$. Next, since $p = \left\lfloor\frac{n}{k} + 1\right\rfloor$ then $$ p \leq \frac{n}{k} + 1 < p + 1 , $$ implying that $$ p - k \leq n - (k - 1) p < p , $$ that is $$ p - k \leq a_0 < p . $$ But $p - k = (p - 1) - a_1 \geq 0$; thus $a_0 \in \{0 , 1 , \dots , p - 1\}$. We have confirmed that the representation of $n$ in base $p$ is $n = \overline{a_1 a_0}_{(p)}$. Consequently, we have (according to \eqref{eq9}): $$ \vartheta_p\left(\frac{\sigma_n}{n!}\right) = \left\lfloor\frac{S_p(n)}{p - 1}\right\rfloor = \left\lfloor\frac{a_0 + a_1}{p - 1}\right\rfloor = \left\lfloor\frac{n - (k - 1)(p - 1)}{p - 1}\right\rfloor = \left\lfloor\frac{n}{p - 1}\right\rfloor - k + 1 . $$ Then, since $\frac{n}{p - 1} \geq k$ (because $\frac{n}{k} + 1 \geq \left\lfloor\frac{n}{k} + 1\right\rfloor = p$), it follows that $\vartheta_p\left(\frac{\sigma_n}{n!}\right) \geq 1$. But since $\vartheta_p\left(\frac{\sigma_n}{n!}\right) \in \{0 , 1\}$ (according to the first part, already proved, of the theorem), we conclude that $\vartheta_p\left(\frac{\sigma_n}{n!}\right) = 1$, as required. This completes the proof of the theorem. \end{proof} \section[Analytic estimates]{Analytic estimates of the numbers $\log{\rho_n}$ and $\log{\sigma_n}$}\label{sec4} Throughout this section, we let $\mathrm{c}$ denote the absolute positive constant given by: $$ \mathrm{c} := \sum_{p \text{ prime}} \frac{\log{p}}{p (p - 1)} = 0.755\dots . $$ Our goal is to find asymptotic estimates for $\log{\rho_n}$ and $\log{\sigma_n}$ as $n$ tends to infinity. The obtained main results are the following: \begin{thm}\label{t3} We have $$ \log{\rho_n} = n \log{n} - (\mathrm{c} + 1) n + O\left(\sqrt{n}\right) . $$ \end{thm} \begin{thm}\label{t4} We have $$ \log{\sigma_n} = n \log{n} - n + O\left(\sqrt{n \log{n}}\right) . $$ \end{thm} \begin{conj}[improving Theorem \ref{t4}]\label{conj1} We have $$ \log{\sigma_n} = n \log{n} - n + O\left(\sqrt{n}\right) . $$ \end{conj} Note that an explanation for the validity of Conjecture \ref{conj1} is given latter; actually, it depends on a conjecture on counting the prime numbers of a certain form, which is heuristically plausible. To establish the above results, we need the following auxiliary results: \begin{lemma}\label{l2} For any $x \geq 1$, we have $$ \sum_{\begin{subarray}{c} p \text{ prime} \\ p > x \end{subarray}} \frac{\log{p}}{p (p - 1)} = O\left(\frac{1}{x}\right) . $$ \end{lemma} \begin{proof} Since $\frac{\log{p}}{p (p - 1)} \leq 2 \frac{\log{p}}{p^2}$ (for any prime number $p$), then it suffices to show that $\sum_{p \text{ prime, } p > x} \frac{\log{p}}{p^2} = O\left(\frac{1}{x}\right)$. According to the Abel summation formula (see e.g., \cite[Proposition 1.4]{kon}), we have for any positive real numbers $x , y$, with $x < y$: \begin{align*} \sum_{\begin{subarray}{c} p \text{ prime} \\ x < p \leq y \end{subarray}} \frac{\log{p}}{p^2} & = \left(\sum_{\begin{subarray}{c} p \text{ prime} \\ x < p \leq y \end{subarray}} \log{p}\right) \frac{1}{y^2} - \bigints_{x}^{y} \left(\sum_{\begin{subarray}{c} p \text{ prime} \\ x < p \leq t \end{subarray}} \log{p}\right) \left(\frac{1}{t^2}\right)' \, d t \\[2mm] & = \frac{\theta(y) - \theta(x)}{y^2} + 2 \int_{x}^{y} \frac{\theta(t) - \theta(x)}{t^3} \, d t . \end{align*} Then, by setting $y$ to infinity, it follows (since $\theta(y) = O(y)$) that: $$ \sum_{\begin{subarray}{c} p \text{ prime} \\ p > x \end{subarray}} \frac{\log{p}}{p^2} = 2 \int_{x}^{+ \infty} \frac{\theta(t) - \theta(x)}{t^3} \, d t = 2 \int_{x}^{+ \infty} \frac{\theta(t)}{t^3} \, d t - \frac{\theta(x)}{x^2} . $$ Using finally $\theta(t) = O(t)$, we get $$ \sum_{\begin{subarray}{c} p \text{ prime} \\ p > x \end{subarray}} \frac{\log{p}}{p^2} = O\left(\int_{x}^{+ \infty} \frac{d t}{t^2}\right) + O\left(\frac{1}{x}\right) = O\left(\frac{1}{x}\right) , $$ as required. The proof is complete. \end{proof} Lemma \ref{l2} above is used in the proof of the following proposition: \begin{prop}\label{p10} For any positive integer $n$, we have $$ \sum_{p \text{ prime}} \left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots\right) \log{p} = \mathrm{c} \cdot n + O\left(\sqrt{n}\right) . $$ \end{prop} \begin{proof} Let $n$ be a fixed positive integer. For any prime number $p$, let $e_p$ denote the greatest natural number satisfying $p^{e_p} \leq n$; explicitly $e_p = \lfloor\frac{\log{n}}{\log{p}}\rfloor$. So we have $p^{e_p + 1} > n$. On the one hand, we have \begin{align*} \sum_{p \text{ prime}} \left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots\right) \log{p} & \leq \sum_{p \text{ prime}} \left(\frac{n}{p^2} + \frac{n}{p^3} + \dots\right) \log{p} \\ & = \sum_{p \text{ prime}} \frac{n}{p (p - 1)} \log{p} ; \end{align*} that is \begin{equation}\label{eq10} \sum_{p \text{ prime}} \left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots\right) \log{p} \leq \mathrm{c} \cdot n . \end{equation} On the other hand, we have (according to the definition of the $e_p$'s): \begin{align*} \sum_{p \text{ prime}} \left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots\right) \log{p} & = \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots + \left\lfloor\frac{n}{p^{e_p}}\right\rfloor\right) \log{p} \\ & \hspace*{-5cm} \geq \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left[\left(\frac{n}{p^2} - 1\right) + \left(\frac{n}{p^3} - 1\right) + \dots + \left(\frac{n}{p^{e_p}} - 1\right)\right] \log{p} \\ & \hspace*{-5cm} = n \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left(\frac{1}{p^2} + \frac{1}{p^3} + \dots + \frac{1}{p^{e_p}}\right) \log{p} - \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left(e_p - 1\right) \log{p} \\ & \hspace*{-5cm} = n \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left(\frac{1}{p (p - 1)} - \frac{1}{p^{e_p} (p - 1)}\right) \log{p} - \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left(e_p - 1\right) \log{p} \\ & \hspace*{-5cm} = n \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \frac{\log{p}}{p (p - 1)} - n \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \frac{\log{p}}{p^{e_p} (p - 1)} - \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left(e_p - 1\right) \log{p} \\ & \hspace*{-5cm} = n \left(\mathrm{c} - \sum_{\begin{subarray}{c} p \text{ prime} \\ p > \sqrt{n} \end{subarray}} \frac{\log{p}}{p (p - 1)}\right) - n \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \frac{\log{p}}{p^{e_p} (p - 1)} - \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left(e_p - 1\right) \log{p} ; \end{align*} that is \begin{multline}\label{eq28} \sum_{p \text{ prime}} \left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots\right) \log{p} \geq \mathrm{c} \, n - n \sum_{\begin{subarray}{c} p \text{ prime} \\ p > \sqrt{n} \end{subarray}} \frac{\log{p}}{p (p - 1)} - n \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \frac{\log{p}}{p^{e_p} (p - 1)} \\ - \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left(e_p - 1\right) \log{p} . \end{multline} But, by using Lemma \ref{l2}, we have \begin{equation}\label{eq29} \sum_{\begin{subarray}{c} p \text{ prime} \\ p > \sqrt{n} \end{subarray}} \frac{\log{p}}{p (p - 1)} = O\left(\frac{1}{\sqrt{n}}\right) . \end{equation} Next, by using the fact $p^{e_p} > \frac{n}{p}$ (for any prime $p$), we have \begin{equation}\label{eq30} \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \frac{\log{p}}{p^{e_p} (p - 1)} < \frac{1}{n} \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \frac{p}{p - 1} \log{p} \leq \frac{2}{n} \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \log{p} = \frac{2}{n} \theta\left(\sqrt{n}\right) = O\left(\frac{1}{\sqrt{n}}\right) , \end{equation} and by using the fact $e_p - 1 < e_p := \lfloor\frac{\log{n}}{\log{p}}\rfloor \leq \frac{\log{n}}{\log{p}}$, we have \begin{equation}\label{eq31} \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \left(e_p - 1\right) \log{p} < \sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n} \end{subarray}} \log{n} = (\log{n}) \pi(\sqrt{n}) = O\left(\sqrt{n}\right) . \end{equation} Then, by inserting \eqref{eq29}, \eqref{eq30} and \eqref{eq31} into \eqref{eq28}, we get \begin{equation}\label{eq13} \sum_{p \text{ prime}} \left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots\right) \log{p} \geq \mathrm{c} \, n + O\left(\sqrt{n}\right) . \end{equation} Finally, \eqref{eq10} and \eqref{eq13} conclude to $$ \sum_{p \text{ prime}} \left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots\right) \log{p} = \mathrm{c} \cdot n + O\left(\sqrt{n}\right) , $$ as required. \end{proof} We are now able to prove Theorem \ref{t3}. \begin{proof}[Proof of Theorem \ref{t3}] For any sufficiently large integer $n$, we have according to Legendre's formula: \begin{align*} \log{\rho_n} = \sum_{p \text{ prime}} \left\lfloor\frac{n}{p}\right\rfloor \log{p} & = \sum_{p \text{ prime}}\left(\left\lfloor\frac{n}{p}\right\rfloor + \left\lfloor\frac{n}{p^2}\right\rfloor + \dots\right) \log{p} \\ & \hspace*{3cm} - \sum_{p \text{ prime}}\left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots\right) \log{p} \\[2mm] & = \log(n!) - \sum_{p \text{ prime}}\left(\left\lfloor\frac{n}{p^2}\right\rfloor + \left\lfloor\frac{n}{p^3}\right\rfloor + \dots\right) \log{p} . \end{align*} Then, the weaker form of Stirling's approximation formula $\log(n!) = n \log{n} - n + O(\log{n})$ and Proposition \ref{p10} conclude to: $$ \log{\rho_n} = n \log{n} - (\mathrm{c} + 1) n + O(\sqrt{n}) , $$ as required. \end{proof} We now turn to estimate $\log{\sigma_n}$ by leaning on the estimate of $\log{\rho_n}$ (given by Theorem \ref{t3} proved above). To do so, we shall first establish an important formula relating $\log{\rho_n}$ and $\log{\sigma_n}$. This is done with the following proposition: \begin{prop}\label{p4} For any positive integer $n$, we have \begin{align} \log{\rho_n} & = \sum_{k = 1}^{n} \theta\left(\frac{n}{k}\right) , \label{eq14} \\ \log{\sigma_n} & = \sum_{k = 1}^{n} \theta\left(\frac{n}{k} + 1\right) , \label{eq15} \\ \log{\sigma_n} - \log{\rho_n} & = \sum_{\begin{subarray}{c} 1 \leq k \leq n \\ \lfloor\frac{n}{k} + 1\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\!\log\left\lfloor\frac{n}{k} + 1\right\rfloor . \label{eq16} \end{align} \end{prop} \begin{proof} Let $n$ be a fixed positive integer. We have \begin{multline*} \log{\rho_n} = \sum_{p \text{ prime}} \left\lfloor\frac{n}{p}\right\rfloor \log{p} = \sum_{p \text{ prime}} \left(\sum_{1 \leq k \leq \frac{n}{p}} 1\right) \log{p} = \sum_{1 \leq k \leq n} \left(\sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \frac{n}{k} \end{subarray}} \log{p}\right) \\ = \sum_{1 \leq k \leq n} \theta\left(\frac{n}{k}\right) , \end{multline*} proving \eqref{eq14}. Similarly, we have \begin{multline*} \log{\sigma_n} = \sum_{p \text{ prime}} \left\lfloor\frac{n}{p - 1}\right\rfloor \log{p} = \sum_{p \text{ prime}} \left(\sum_{1 \leq k \leq \frac{n}{p - 1}} 1\right) \log{p} = \sum_{1 \leq k \leq n} \left(\sum_{\begin{subarray}{c} p \text{ prime} \\ p \leq \frac{n}{k} + 1 \end{subarray}} \log{p}\right) \\[1mm] = \sum_{1 \leq k \leq n} \theta\left(\frac{n}{k} + 1\right) , \end{multline*} proving \eqref{eq15}. Finally, using \eqref{eq14} and \eqref{eq15}, let us prove \eqref{eq16}. We have \begin{align*} \log{\sigma_n} - \log{\rho_n} & = \sum_{1 \leq k \leq n} \theta\left(\frac{n}{k} + 1\right) - \sum_{1 \leq k \leq n} \theta\left(\frac{n}{k}\right) \\ & = \sum_{1 \leq k \leq n} \left(\theta\left(\frac{n}{k} + 1\right) - \theta\left(\frac{n}{k}\right)\right) \\ & = \sum_{1 \leq k \leq n} \left(\sum_{\begin{subarray}{c} p \text{ prime} \\ \frac{n}{k} < p \leq \frac{n}{k} + 1 \end{subarray}} \log{p}\right) . \end{align*} But since for any $1 \leq k \leq n$, the interval $(\frac{n}{k} , \frac{n}{k} + 1]$ contains a unique integer which is $\lfloor\frac{n}{k} + 1\rfloor$, it follows that: $$ \log{\sigma_n} - \log{\rho_n} = \sum_{\begin{subarray}{c} 1 \leq k \leq n \\ \lfloor\frac{n}{k} + 1\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\!\log\left\lfloor\frac{n}{k} + 1\right\rfloor , $$ proving \eqref{eq16}. The proof is complete. \end{proof} To deduce an asymptotic estimate for $\log{\sigma_n}$ from that of $\log{\rho_n}$ by means of Proposition \ref{p4}, we must estimate (asymptotically) the sum $$ \mathcal{S}(n) := \sum_{\begin{subarray}{c} 1 \leq k \leq n \\ \lfloor\frac{n}{k} + 1\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\!\log\left\lfloor\frac{n}{k} + 1\right\rfloor . $$ To do so, we split $\mathcal{S}$ into two sums: $$ \mathcal{S}_1(n) := \sum_{\begin{subarray}{c} 1 \leq k \leq \sqrt{n} \\ \lfloor\frac{n}{k} + 1\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\!\log\left\lfloor\frac{n}{k} + 1\right\rfloor ~~~~\text{and}~~~~ \mathcal{S}_2(n) := \sum_{\begin{subarray}{c} \sqrt{n} < k \leq n \\ \lfloor\frac{n}{k} + 1\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\!\log\left\lfloor\frac{n}{k} + 1\right\rfloor . $$ The estimate of $\mathcal{S}_2(n)$ is within our reach; it is given by the following proposition: \begin{prop}\label{p5} For any positive integer $n$, we have $$ \mathcal{S}_2(n) = \mathrm{c} \, n + O\left(\sqrt{n}\right) . $$ \end{prop} \begin{proof} For a given $n \in {\mathbb N}^*$, we have $$ \mathcal{S}_2(n) := \sum_{\begin{subarray}{c} \sqrt{n} < k \leq n \\ \lfloor\frac{n}{k} + 1\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\!\log\left\lfloor\frac{n}{k} + 1\right\rfloor = \sum_{\begin{subarray}{c} p \text{ prime} \\ p < \sqrt{n} + 1 \end{subarray}} \sum_{\begin{subarray}{c} \sqrt{n} < k \leq n \\ \left\lfloor\frac{n}{k} + 1\right\rfloor = p \end{subarray}} \!\!\!\! \log{p} . $$ But since for any prime number $p$ satisfying $p < \sqrt{n} + 1$ and any integer $k$ satisfying $\sqrt{n} < k \leq n$, we have $$ \left\lfloor\frac{n}{k} + 1\right\rfloor = p \Longleftrightarrow p \leq \frac{n}{k} + 1 < p + 1 \Longleftrightarrow \frac{n}{p} < k \leq \frac{n}{p - 1} , $$ it follows that: \begin{equation}\label{eq17} \mathcal{S}_2(n) = \sum_{\begin{subarray}{c} p \text{ prime} \\ p < \sqrt{n} + 1 \end{subarray}} \sum_{\begin{subarray}{c} \sqrt{n} < k \leq n \\ \frac{n}{p} < k \leq \frac{n}{p - 1} \end{subarray}} \!\!\!\! \log{p} = \sum_{\begin{subarray}{c} p \text{ prime} \\ p < \sqrt{n} + 1 \end{subarray}} \!\!\left(\sum_{\max\left(\sqrt{n} , \frac{n}{p}\right) < k \leq \frac{n}{p - 1}} 1\right) \log{p} . \end{equation} Next, we remark that for any prime number $p$ satisfying $p < \sqrt{n} + 1$, we have $\frac{n}{p} > \frac{n}{\sqrt{n} + 1} > \sqrt{n} - 1$, implying that $\frac{n}{p} \leq \max\left(\sqrt{n} , \frac{n}{p}\right) < \frac{n}{p} + 1$. Consequently, the interval $\left(\max\left(\sqrt{n} , \frac{n}{p}\right) , \frac{n}{p - 1}\right]$ contains at least \\ $\frac{n}{p - 1} - \max\left(\sqrt{n} , \frac{n}{p}\right) - 1 > \frac{n}{p - 1} - \frac{n}{p} - 2 = \frac{n}{p (p - 1)} - 2$ integers and at most \\ $\frac{n}{p - 1} - \max\left(\sqrt{n} , \frac{n}{p}\right) + 1 \leq \frac{n}{p - 1} - \frac{n}{p} + 1 = \frac{n}{p(p - 1)} + 1$ integers. So, for any prime number $p$ satisfying $p < \sqrt{n} + 1$, we have $$ \sum_{\max\left(\sqrt{n} , \frac{n}{p}\right) < k \leq \frac{n}{p - 1}} 1 = \frac{n}{p (p - 1)} + O_{\perp p}(1) . $$ By reporting this into \eqref{eq17}, we get \begin{align*} \mathcal{S}_2(n) & = \sum_{\begin{subarray}{c} p \text{ prime} \\ p < \sqrt{n} + 1 \end{subarray}} \left(\frac{n}{p (p - 1)} + O_{\perp p}(1)\right) \log{p} \\ & = \left(\sum_{\begin{subarray}{c} p \text{ prime} \\ p < \sqrt{n} + 1 \end{subarray}} \frac{\log{p}}{p (p - 1)}\right) n + O\left(\theta(\sqrt{n} + 1)\right) \\ & = \left(\mathrm{c} - \sum_{\begin{subarray}{c} p \text{ prime} \\ p \geq \sqrt{n} + 1 \end{subarray}} \frac{\log{p}}{p (p - 1)}\right) n + O\left(\theta(\sqrt{n} + 1)\right) . \end{align*} But since $$ \sum_{\begin{subarray}{c} p \text{ prime} \\ p \geq \sqrt{n} + 1 \end{subarray}} \frac{\log{p}}{p (p - 1)} \leq 2 \!\!\!\sum_{\begin{subarray}{c} p \text{ prime} \\ p \geq \sqrt{n} + 1 \end{subarray}} \frac{\log{p}}{p^2} = O\left(\frac{1}{\sqrt{n}}\right) ~~~~~~~ (\text{according to Lemma \ref{l2}}) $$ and $$ \theta\left(\sqrt{n} + 1\right) = O\left(\sqrt{n}\right) ~~~~~~~~~~ (\text{according to the Chebyshev estimates}) , $$ we conclude to $$ \mathcal{S}_2(n) = \mathrm{c} \, n + O\left(\sqrt{n}\right) , $$ as required. \end{proof} We now turn to estimate the crucial sum $\mathcal{S}_1(n)$. To facilitate the task, we begin by estimating $\mathcal{S}_1(n)$ in terms of the cardinality of a specific set of prime numbers. For any $n \in {\mathbb N}^*$, we set $$ \mathscr{A}_n := \left\{\left\lfloor\frac{n}{k} + 1\right\rfloor ~;~ k \in {\mathbb N}^* , k \leq \sqrt{n}\right\} \cap \mathscr{P} $$ (in other words, $\mathscr{A}_n$ is the set of prime numbers having the form $\left\lfloor\frac{n}{k} + 1\right\rfloor$, where $k \leq \sqrt{n}$ is a positive integer). Above all, it is important to note that for any $n \in {\mathbb N}^*$, the positive integers $\left\lfloor\frac{n}{k} + 1\right\rfloor$ ($k \in {\mathbb N}^*$, $k \leq \sqrt{n}$) (appearing in the definition of $\mathscr{A}_n$) are pairwise distinct. Indeed, for all $k , \ell \in {\mathbb N}^*$, with $k \leq \sqrt{n}$, $\ell \leq \sqrt{n}$, and $k \neq \ell$, we have $$ \left\vert\left(\frac{n}{k} + 1\right) - \left(\frac{n}{\ell} + 1\right)\right\vert = \frac{n \vert \ell - k\vert}{k \ell} \geq \frac{n}{k \ell} \geq 1 , $$ implying that: $$ \left\lfloor\frac{n}{k} + 1\right\rfloor \neq \left\lfloor\frac{n}{\ell} + 1\right\rfloor . $$ It follows from this fact that $\mathscr{A}_n$ ($n \in {\mathbb N}^*$) has the same cardinality with the set of positive integers $k$ satisfying $k \leq \sqrt{n}$ and for which $\left\lfloor\frac{n}{k} + 1\right\rfloor$ is prime. That is \begin{equation}\label{eq18} \card \mathscr{A}_n = \sum_{\begin{subarray}{c} 1 \leq k \leq \sqrt{n} \\ \left\lfloor\frac{n}{k} + 1\right\rfloor \text{ is prime} \end{subarray}} 1 . \end{equation} The estimate of $\mathcal{S}_1(n)$ in terms of $\card \mathscr{A}_n$ ($n \in {\mathbb N}^*$) is given by the following proposition: \begin{prop}\label{p6} We have $$ \mathcal{S}_1(n) = \left(\card \mathscr{A}_n\right) \cdot O\left(\log{n}\right) . $$ \end{prop} \begin{proof} Let $n \in {\mathbb N}^*$ be fixed. From the obvious double inequality $$ \sqrt{n} < \left\lfloor\frac{n}{k} + 1\right\rfloor \leq n + 1 ~~~~~~~~~~ (\forall k \in {\mathbb N}^* ,\text{ with } k \leq \sqrt{n}) , $$ we have that $$ \frac{1}{2} \log(n) \cdot \!\!\!\!\!\!\sum_{\begin{subarray}{c} 1 \leq k \leq \sqrt{n} \\ \left\lfloor\frac{n}{k} + 1\right\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\! 1 \leq \mathcal{S}_1(n) \leq \log(n + 1) \cdot \!\!\!\!\!\! \sum_{\begin{subarray}{c} 1 \leq k \leq \sqrt{n} \\ \left\lfloor\frac{n}{k} + 1\right\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\! 1 ; $$ which immediately implies (taking into account \eqref{eq18}): $$ \mathcal{S}_1(n) = \left(\card \mathscr{A}_n\right) \cdot O\left(\log{n}\right) , $$ as required. \end{proof} By combining Formula \eqref{eq16} of Proposition \ref{p4}, Theorem \ref{t3}, Proposition \ref{p5}, and Proposition \ref{p6}, we immediately derive the following proposition: \begin{prop}\label{p7} We have \begin{equation} \log{\sigma_n} = n \log{n} - n + O\left(\sqrt{n}\right) + \left(\card \mathscr{A}_n\right) \cdot O\left(\log{n}\right) . \tag*{$\square$} \end{equation} \end{prop} At this point, the whole problem is now to estimate $\card \mathscr{A}_n$ ($n \in {\mathbb N}^*$). First, let us do it heuristically. For $n \in {\mathbb N}^*$, since the numbers constituting the set $\left\{\left\lfloor\frac{n}{k} + 1\right\rfloor ~;~ k \in {\mathbb N}^* , k \leq \sqrt{n}\right\}$ (of cardinality $\lfloor\sqrt{n}\rfloor$) do not satisfy (apparently) any particular congruence, we may conjecture with considerable confidence that the quantity prime numbers in it (i.e., $\card \mathscr{A}_n$) is $O\left(\frac{\sqrt{n}}{\log{\sqrt{n}}}\right) = O\left(\frac{\sqrt{n}}{\log{n}}\right)$. We precisely make the following \begin{conj}\label{conj2} There exist two positive absolute constants $\alpha$ and $\beta$ {\rm(}with $\alpha < \beta${\rm)} such that for any sufficiently large positive integer $n$, we have $$ \alpha \left(\frac{\sqrt{n}}{\log{n}}\right) \leq \card \mathscr{A}_n \leq \beta \left(\frac{\sqrt{n}}{\log{n}}\right) . $$ \end{conj} The proof of Conjecture \ref{conj1} through Conjecture \ref{conj2} is then immediate: \begin{proof}[Proof of Conjecture \ref{conj1} through Conjecture \ref{conj2}] It suffices to insert the estimate $\card \mathscr{A}_n = O\left(\frac{\sqrt{n}}{\log{n}}\right)$ (provided by Conjecture \ref{conj2}) into the estimate of Proposition \ref{p7}. \end{proof} Unfortunately, we were unable to confirm Conjecture \ref{conj2}, so the best result we have achieved is Theorem \ref{t4}. Before setting out the proof of that theorem, it is important to note that the trivial estimate $\card \mathscr{A}_n = O\left(\sqrt{n}\right)$ leads us (through Proposition \ref{p7}) to the estimate: $$ \log{\sigma_n} = n \log{n} - n + O\left(\sqrt{n} \log{n}\right) . $$ We shall improve the later by counting more intelligently the elements of $\mathscr{A}_n$. We have the following \begin{prop}\label{p8} We have $$ \card \mathscr{A}_n = O\left(\sqrt{\frac{n}{\log{n}}}\right) . $$ \end{prop} \begin{proof} Let $n$ be a fixed sufficiently large positive integer and let $t$ be a real parameter with $1 \leq t \leq \sqrt{n}$ (we will choose $t$ later in terms of $n$ in order to optimize the result). We have (according to \eqref{eq18}) \begin{align} \card \mathscr{A}_n & = \sum_{\begin{subarray}{c} 1 \leq k \leq \sqrt{n} \\ \left\lfloor\frac{n}{k} + 1\right\rfloor \text{ is prime} \end{subarray}} 1 \notag \\[1mm] & = \sum_{\begin{subarray}{c} 1 \leq k \leq \frac{\sqrt{n}}{t} \\ \left\lfloor\frac{n}{k} + 1\right\rfloor \text{ is prime} \end{subarray}} 1 + \sum_{\begin{subarray}{c} \frac{\sqrt{n}}{t} < k \leq \sqrt{n} \\ \left\lfloor\frac{n}{k} + 1\right\rfloor \text{ is prime} \end{subarray}} 1 . \label{eq19} \end{align} Next, we have on the one hand: \begin{equation}\label{eq20} \sum_{\begin{subarray}{c} 1 \leq k \leq \frac{\sqrt{n}}{t} \\ \left\lfloor\frac{n}{k} + 1\right\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\! 1 \leq \sum_{1 \leq k \leq \frac{\sqrt{n}}{t}} 1 \leq \frac{\sqrt{n}}{t} , \end{equation} and on the other hand: \begin{align*} \sum_{\begin{subarray}{c} \frac{\sqrt{n}}{t} < k \leq \sqrt{n} \\ \left\lfloor\frac{n}{k} + 1\right\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\! 1 & \leq \card\!\!\left\{p \text{ prime } ~;~ p \leq \sqrt{n} \, t + 1\right\} ~~~~~~ \left(\text{by putting } p = \left\lfloor\frac{n}{k} + 1\right\rfloor\right) \\ & = \pi\left(\sqrt{n} \, t + 1\right) \\ & = O\left(\frac{\sqrt{n} \, t}{\log(\sqrt{n} \, t)}\right) ~~~~~~~~~~ (\text{according to the Chebyshev estimate}) , \end{align*} implying (since $1 \leq t \leq \sqrt{n}$) that: \begin{equation}\label{eq21} \sum_{\begin{subarray}{c} \frac{\sqrt{n}}{t} < k \leq \sqrt{n} \\ \left\lfloor\frac{n}{k} + 1\right\rfloor \text{ is prime} \end{subarray}} \!\!\!\!\!\! 1 = O\left(\frac{\sqrt{n} \, t}{\log{n}}\right) . \end{equation} By inserting \eqref{eq20} and \eqref{eq21} into \eqref{eq19}, we get $$ \card \mathscr{A}_n = O\left(\frac{\sqrt{n}}{t}\right) + O\left(\frac{\sqrt{n}}{\log{n}} t\right) . $$ To obtain an optimal result, we must take $t = O\left(\sqrt{\log{n}}\right)$, providing $$ \card \mathscr{A}_n = O\left(\sqrt{\frac{n}{\log{n}}}\right) , $$ as required. \end{proof} We are finally ready to prove Theorem \ref{t4}. \begin{proof}[Proof of Theorem \ref{t4}] It suffices to insert the estimate of Proposition \ref{p8} into that of Proposition \ref{p7}. \end{proof} \section[Concluding remarks]{Concluding remarks about the connection between the arithmetic and the analytic studies} In this section, we briefly explain how we can derive our asymptotic estimates concerning $\log{\sigma_n}$ (i.e., Theorem \ref{t4} and Conjecture \ref{conj1}) rather from our arithmetic study. For the sequel, we let $\mathrm{c}_1 , \mathrm{c}_2 , \mathrm{c}_3$, etc. denote suitable absolute constants greater than $1$. For any $n \in {\mathbb N}^*$, we can write: \begin{equation}\label{eq22} \frac{\sigma_n}{n!} = \prod_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n + 1} \end{subarray}} p^{\vartheta_p\left(\frac{\sigma_n}{n!}\right)} \cdot \prod_{\begin{subarray}{c} p \text{ prime} \\ \sqrt{n + 1} < p \leq n + 1 \end{subarray}} p^{\vartheta_p\left(\frac{\sigma_n}{n!}\right)} . \end{equation} For the primes $p \leq \sqrt{n + 1}$, we estimate $\vartheta_p\left(\frac{\sigma_n}{n!}\right)$ as follows: let $n = \overline{a_r a_{r - 1} \dots a_0}_{(p)}$ be the representation of $n$ in base $p$ ($r \in {\mathbb N}$, $a_0 , a_1 , \dots , a_r \in \{0 , 1 , \dots , p - 1\}$, and $a_r \neq 0$). Then we have (according to the Legendre formula \eqref{eq25}): \begin{multline*} \vartheta_p\left(\frac{\sigma_n}{n!}\right) = \vartheta_p\left(\sigma_n\right) - \vartheta_p(n!) = \left\lfloor\frac{n}{p - 1}\right\rfloor - \frac{n - S_p(n)}{p - 1} \leq \frac{S_p(n)}{p - 1} \leq \frac{(p - 1) (r + 1)}{p - 1} \\ = r + 1 . \end{multline*} Thus $$ p^{\vartheta_p\left(\frac{\sigma_n}{n!}\right)} \leq p^{r + 1} \leq p \, n . $$ Consequently, we have \begin{equation}\label{eq23} \prod_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n + 1} \end{subarray}} p^{\vartheta_p\left(\frac{\sigma_n}{n!}\right)} \leq \left(\prod_{\begin{subarray}{c} p \text{ prime} \\ p \leq \sqrt{n + 1} \end{subarray}} p\right) \cdot n^{\pi(\sqrt{n + 1})} \leq \mathrm{c}_1^{\sqrt{n}} \end{equation} (according to the Chebyshev estimates). \\ However, for the primes $p > \sqrt{n + 1}$, we estimate $\vartheta_p\left(\frac{\sigma_n}{n!}\right)$ by using Theorem \ref{t2} saying us that for any such prime $p$, we have $\vartheta_p\left(\frac{\sigma_n}{n!}\right) \in \{0 , 1\}$ and $\vartheta_p\left(\frac{\sigma_n}{n!}\right) = 1$ if and only if $p \in \mathscr{A}'_n$, where $$ \mathscr{A}'_n := \left\{\left\lfloor\frac{n}{k} + 1\right\rfloor ~;~ k \in {\mathbb N}^* , k < \sqrt{n + 1} + 1\right\} \cap \mathscr{P} . $$ (Note the resemblance between $\mathscr{A}'_n$ and $\mathscr{A}_n$). So we have $$ \prod_{\begin{subarray}{c} p \text{ prime} \\ \sqrt{n + 1} < p \leq n + 1 \end{subarray}} p^{\vartheta_p\left(\frac{\sigma_n}{n!}\right)} = \prod_{p \in \mathscr{A}'_n} p . $$ By using the results of §\ref{sec4} concerning $\card \mathscr{A}_n$ (almost the same with $\card \mathscr{A}'_n$), we easily derive that the quantity $\prod p^{\vartheta_p\left(\frac{\sigma_n}{n!}\right)}$, where the product is over the primes $p$ satisfying $\sqrt{n + 1} < p \leq n + 1$, is conjecturally bounded below by $\mathrm{c}_2^{\sqrt{n}}$ and bounded above by $\mathrm{c}_3^{\sqrt{n \log{n}}}$ (or conjecturally by $\mathrm{c}_3^{\sqrt{n}}$). By inserting this together with \eqref{eq23} into \eqref{eq22}, we conclude that $\frac{\sigma_n}{n!}$ is conjecturally bounded below by $\mathrm{c}_2^{\sqrt{n}}$ and bounded above by $\mathrm{c}_4^{\sqrt{n \log{n}}}$ (or conjecturally by $\mathrm{c}_4^{\sqrt{n}}$). Notice that this is exactly what obtained in §\ref{sec4}. By this approach, the constant $\mathrm{c}$ (present in the analytic approach when estimating $\log{\rho_n}$ and then eliminated when estimating $\log{\sigma_n}$) does remarkably not appear! \section*{Acknowledgement} The authors acknowledge support from the Algerian DGRSDT (Direction G\'en\'erale de la Recherche Scientifique et du D\'eveloppement Technologique). \rhead{\textcolor{OrangeRed3}{\it References}}
\section{Introduction} Since the discovery of the Cosmic Microwave Background radiation (CMB) by Penzias and Wilson(1965), investigations of this radiation has been a central feature in order to get new information about the earliest evolution of the Universe. Due to technical limitations, until recently most of the observational effort has been concentrated to obtain very accurate temperature measurements. Rees (1968) showed that polarization observations of CMB on larger scales could be a very important way to study the very early history of the Universe. Symmetries in the production and growth of the polarization signal are constraining the configurations of the CMB polarization. Density (scalar) perturbations produce temperature (T) fluctuations and E (curl - free) polarization modes, while tensor (gravitational waves) produce both T, E and B (divergence free) modes. It is generally assumed that the initial inhomogeneities in the Universe were Gaussian distributed. Linear theory predicts that the CMB fluctuations are also Gaussian, and that the CMB spectrum can be fully described by 4 power spectra TT, EE, BB and TE while the TB and EB power spectra should zero due to parity constraints (e.g. Kamionkowski et. (1997). CMB polarization measurements were provided by the WMAP satellite, the final results given in Bennett et al. (2013). For r, the tensor-to-scalar ratio, Bennett et al. found, for WMAP-only data, an upper limit of 0.38. Ade et al. (2014) report a detection of B-modes in a 400 sq. degree area on the sky with the BICEP2 instruments, observing from the South Pole. These observations were done at only one frequency, 150 GHz, implying that their estimate of polarized Galactic emission in this small area on the sky is uncertain. Ade et al. (2015) attacked this problem by combining BICEP2 + Keck 150 GHz data and Planck 30 GHz-353 GHz observations in this area. This investigation determined the amplitude of the lensing spectrum 1.12 $\pm$ 0.18, relative to the standard $\lambda$CDM model, and an upper limit on the tensor-to-scalar ratio r $<$ 0.13, with 95 percent confidence. The Planck Collaboration XV (2016) determined the CMB lensing potential at a level of 40$\sigma$, while the Planck Collaboration XIII (2016) found an upper limit of 0.11 on the tensor-to-scalar ratio. For the Planck Collaboration, a key issue has always been to carefully investigate the data flows from the detectors and find ways to correct for systematic errors. Since the release of the Planck data in 2015, this effort has been continued and significant improvement in the final Planck frequency maps has been obtained. Since no new observations have been obtained between 2015 and 2017 , the improvements are mainly due to removal of remaining systematic errors. In a series of papers, the capabilities of neural networks for dealing with mm/submm observations have been investigated by N{\o}rgaard - Nielsen \& J{\o}rgensen (2008), N{\o}rgaard - Nielsen \& Hebert (2009), N{\o}rgaard - Nielsen (2010), and N{\o}rgaard - Nielsen (2012). The last two papers showed, by using data from the WMAP satellite, that simple neural networks can extract the CMB temperature and polarization signals with excellent accuracy. In N{\o}rgaard - Nielsen (2016, hereafter NN1) the Planck polarization frequency maps, released in 2015, the Stoke Q and U parameters were extracted by means of simple neural networks. The BB power spectrum was detected in 100 $\leq$ l $\leq$ 275 with S/N = 4.5. It was demonstrated that the contribution from Galactic emission and remaining systematic errors in the maps were very small, but they could not be completely ruled out. Due to the improvement in the Planck 2017 polarizations maps, it is feasible to repeat the 2016 analysis. As in NN1, the this paper is concentrated on the reliability of the detected polarization power spectra. The structure of the paper is the following: Sect. 2 and 3 give a short overview of the different analysis tools applied in NN1, the power spectra from the extracted Q and U maps are presented in Sect. 4, while the fully calibrated power spectra are shown in Sect. 5, the contamination of the power spectra is discussed in Sect. 6. Conclusions are given in Sect.7 . \section{The final Planck polarization maps} In the present analysis, the final Planck frequency maps (named DX12) have been exploited: \begin{itemize} \item{only the final Q and U maps in the frequency range 30 GHz - 353 GHz has been exploited} \item{no attempt has been performed to correct the Planck frequency maps to the same point spread function} \item{the analysis has been performed exclusively on the Planck frequency maps, implying that no auxiliary data or information has been exploited} \end{itemize} From the final set of Planck frequency maps, 4 sets have been exploited in this paper: the 2 half mission maps (HM1 and HM2) and the maps combined from the ODD years and the EVEN years of the mission. The precise definition of this data sets can be found in Planck Collaboration I (2017) \section{The road from frequency maps to CMB power spectra} In this paper, the procedure for extracting the Stoke parameters Q and U from the final Planck frequency maps follows closely the procedure used in NN1. In this section, the key features of this procedure are outlined. \subsection{The applied foreground model} As for the previous investigations, the main purpose is only to extract the CMB signal. Therefore, meaningful physical models of the Galactic foreground components are not required, only a coherent mathematical model of the non - CMB signal (all signals that are not related to the CMB) is needed. The Independent Component Analysis (Hyvarinen et al. 2001, ICA) has turned out to be well suited to provide a simple model of the non - CMB signal. Briefly, ICA is a mathematical method to extract a set of independent components, in this case from the 7 Planck frequency maps. Like in NN1, each of the HM1, HM2, ODD and EVEN data sets are split into independent components by ICA. By analysing these component maps, it is evident that 2 components contain nearly all the extended Galactic emission. For each of these components, ICA provides a normalized spectrum and an amplitude per sky pixel, which are used in setting up the neural networks, see Sect. 3.2. \subsection{The neural network method} In the series of papers referred above, it has been demonstrated that neural networks are a very effective tool for extracted CMB from mm/submm observations. The setup of the neural networks is the same as previously: \begin{itemize} \item{simple multi layer perceptron neural networks with 2 hidden layers have been exploited} \item{the input for the neural networks are seven inputs channels (30 GHz - 353 GHz) and one output channel (Q or U), all in units of $\mu$K} \item{the standard non - linear activation function 'tansig' has been exploited} \end{itemize} The scheme for setting up the data for training the neural network is: \begin{enumerate} \item{defining the required intervals of the amplitudes of the 2 ICA components and for Q or U} \item{draw 3 random numbers within these intervals} \item{calculate the combined spectrum} \item{for each input channel, add random Gaussian noise within the range corresponding to the Planck hit maps in the different data sets} \item{run these steps N(train) times, to get reasonable number of spectra to train the network} \item{train the network to establish the transformation between the input channels and the output channel} \item{test the accuracy of the network by running an independent TEST data set, with N(test) spectra obtained in the same way as the TRAIN data set, through the network.} \item{if the accuracy is satisfactory (meaning the systematic errors as function of the input parameters are very small and the residuals are Gaussian distributed), the Planck data set is run through the network} \end{enumerate} The main advantages of using neural networks are: \begin{itemize} \item{the number of weights needed to set up the full transformation between the 7 input channels and 1 output channel (Q or U) is in our case $~$ 80, small compared to the number of pixels in each of the Planck frequency maps ($\sim$ 12.000.000)} \item{the foreground model is only using the input frequency maps themselves (neither auxiliary data nor assumptions about e.g. the spectral behaviour of the different foreground components are applied)} \item{a small number of parameters are needed in the foreground model, since a physical meaningful model is not required } \item{the network is set up to extract signals with the same spectral behaviour as CMB} \item{the available frequency range is exploited} \end{itemize} \begin{figure}[ht] \centering \includegraphics[width= 3.0in]{fig_1_up.pdf} \caption{The CC HM and OE EE power spectra together with the mean. It is seen that these spectra fit nicely together up to l $=$ 1700. Y - axis: $l(l +1)/(2\pi) C^{EE}_{l}[\mu K^{2}]$} \label{fig_1} \end{figure} \subsection{Applied sky mask} In order to avoid residual contamination by emission from the MIlky Way in the extracted power spectra, the apodized CAMSPEC mask used in NN1, covering about 63 per cent of the sky, has also been applied here. Since the contribution from point sources is expected to be insignificant, it is also neglected in this work. \subsection{The Planck simulations} In the process of establishing the Planck 2015 data, the whole Planck data chain has been extensively simulated (FFP8). In NN1, this data set was mainly used for calibrating the derived power spectra to an absolute scale. The basic simulations behind FFP8 has been improved, but from a calibration point of view, the improvements are not significant. Therefore, the NN1 calibration scheme has also been followed here. \begin{table}[h] \caption{Definition of the 'standard' Planck high pass filter, defined with $l_{1}$ = 20 and $l_{2}$ = 40 (Planck 2015 IX)} \centering \begin{tabular}{c c} l range & value\\ \hline $l < l_{1}$ & 0.0\\ $l_{1} < l < l_{2}$ & $0.5[1~-~cos(\pi\frac{l~-~l_{1}}{l_{2}~-~l_{1}})]$ \\ $l > l_{2}$& 1.0\\ \hline \label{high_pass} \end{tabular} \end{table} \subsection{The high pass filter} It turned out that special treatments of the Planck 2015 maps at multipoles less than 20 were needed, mainly due to the uncertainty in the contribution of large scale systematic errors still in the frequency maps. Although there have been significant improvements in the frequency maps since 2015, the discussion of this range of multipole moments is outside the goal of this investigation, and the 'standard' Planck high pass filter has been introduced (Table 1) as in NN1. \section{The power spectra of the observed Planck polarization maps} The accuracy of cross correlation (CC) power spectra is superior compared to auto correlation (AC) power spectra. Therefore, the discussions in this paper are concentrated on the HM1 $<$-$>$ HM2 (HM) and the ODD $<$-$>$ EVEN (OE) CC power spectra. All power spectra analysed in this paper have been extracted with the HEALPix \textbf{anafast} IDL routine (\citet{Gors05}). The l - interval of the power spectra are the same as for the Planck 2015 EE power spectrum. \begin{figure}[!] \centering \includegraphics[width=3.0 in]{fig_2_up.pdf} \caption{The CC HM and OE BB power spectra, compared with the 2015 BB power spectrum (NN1). Taking the uncertainty into account, a reasonable agreement is seen between the $<$2015$>$ and $<$2017$>$ spectra. Y - axis: $l(l +1)/(2\pi) C^{BB}_{l}[\mu K^{2}]$} \label{fig_2} \end{figure} \subsection{The EE power spectra} The CC HM and EO EE power spectra together with the mean are shown in Fig. \ref{fig_1}. It is seen that the HM and OE power spectra are very similar up to l $\sim$ 1700. \subsection{The BB power spectra} The derived CC HM and OE BB power spectra and the mean are shown in Fig. \ref{fig_2}, together with the CC BB power spectra from NN1. Taking the accuracy of these power spectra into account a reasonable agreement is seen. It is seen that the main features in the 2015 spectrum (NN1) is still present. \begin{figure}[!] \centering \includegraphics[width=2.5 in]{fig_3_up.pdf} \caption{The CC EB power spectrum. Y - axis: $l(l +1)/(2\pi) C^{EB}_{l}[\mu K^{2}]$} \label{fig_3} \end{figure} \subsection{The EB power spectrum} The CC EB power spectrum shows 2 features around l = 675 and l = 375, each detected with a S/N $\sim$4 (Fig.\ref{fig_3}). The positions of these feaures correpond to the postions of the first 2 peaks in the EE power spectrum (Fig.\ref{fig_1}). \begin{figure}[!] \centering \includegraphics[width=3.1 in]{fig_tb_nn.pdf} \caption{The CC TB power spectrum, it is evident that this spectrum is consistent with a zero signal. Y - axis: $l(l +1)/(2\pi) C^{TB}_{l}[\mu K^{2}]$} \label{fig_4} \end{figure} \subsection{The TB power spectrum} The CC HM TB spectrum is presented in Fig. \ref{fig_4}. It is clear that no part of this spectrum has been significantly detected. For 100 $\leq$ l $\leq$ 750 the average power is -0.14 $\pm$ 0.13 , \begin{figure}[!] \centering \includegraphics[width=3.0 in]{fig_5_up.pdf} \caption{The fully calibrated CC $<$HM, OE$>$ EE power spectrum, compared with the Planck 2015 EE power spectrum (Planck 2015 XI) including error bars. An excellent agreement is seen for l $<$ 1650. Y - axis: $l(l +1)/(2\pi) C^{EE}_{l}[\mu K^{2}]$} \label{fig_5} \end{figure} \section{The calibration of the power spectra} The basic calibration scheme used in NN1 (mainly corrections for the sky mask, the sensitivity of the Planck detectors and for the effective point spread function of the NN method) has been applied. Briefly, these corrections were estimated by combining a set of simulated FFP8 frequency maps with realistic noise maps and theoretical tensor-to-scalar maps, calculated by the CAMB software package (see http://CAMB.info). \section{The fully calibrated EE, BB and EB power spectra} The mean, fully calibrated, CC $<$HM, OE$>$ EE spectrum are given in Fig. \ref{fig_5} together with the Planck 2015 EE power spectrum, a good agreement is seen. As in NN1, the error bars given in the figures are extracted from the scatter around the spectra in each l - interval. \begin{figure}[!] \centering \includegraphics[width=3.0 in]{fig_6_up.pdf} \caption{The fully calibrated CC $<$HM, OE$>$ BB power spectrum, compared with PLANCK 2015 estimates of the BB power spectrum. CAMB BB tensor spectra with tensor to scalar ratios of 0.5 and 0.3 are also shown. Y - axis: $l(l +1)/(2\pi) C^{BB}_{l}[\mu K^{2}]$} \label{fig_6} \end{figure} In Fig. \ref{fig_6}, the mean, fully calibrated CC $<$HM, OE$>$ BB power spectrum is shown. The excess from l = 100 to l = 275, discussed in NN1, are confirmed with improved S/N = 6.6 (compared to 4.5 in NN1 ), the feature 175 $\leq$ l $\leq$ 275 has a S/N of 4.1 . The theoretical CAMB spectra with tensor to scalar ratios of 1.0, 0.5, 0.3 are also shown together with the CAMB lensing spectrum. It is clear that a combination of these CAMB spectra will have difficulties in fitting the observed spectrum. In Fig. \ref{fig_7}, the difference in S/N between the BB power spectrum found in NN1 and the spectrum given here is highlighted. The improvement is evident. \begin{figure}[!] \centering \includegraphics[width=3.0 in]{fig_sn_an_nn_bb_2017.pdf} \caption{The Signal to Noise of the 2017 BB power spectrum, compared with previous obtain spectrum. A considerable improvement is evident} \label{fig_7} \end{figure} The fully calibrated CC EB power spectrum is shown in Fig. \ref{fig_8}. The 2 features around l $=$ 675 and l $=$ 375 have $S/N$ $=$ 4.4, and 4.2, respectively. These features could indicate a violation of the parity assumption, but the coincidence with the positions with the first 2 peaks of the EE power spectrum could also indicate a leakage from E to B. To explain the strength of the EB features, a leakage of 1 - 2 per cent is sufficient. At this stage, it is difficult to exclude a leakage of this level. Such a leakage would give a insignificant contribution to the BB power spectrum compared to the noise level at the positions of the features. In Fig. \ref{fig_1}, it is seen that the EE power is much weaker than the first peaks in the l - range relevant for the BB spectrum discussed in this paper, implying that the E to B leakage gives no significant contribution to the detected BB power spectrum. \section{Remaining systematics effects from the reduction of the Planck frequency maps } As emphasized above, important improvements in removing residual systematics have been obtained by the LFI and HFI instrument teams since 2015. As emphasized in the two Planck 2015 detector papers (Planck Collaboration III and VIII (2016)) the systematic errors on larger scales are the most difficult to remove, but the high-pass-filter (Table \ref{high_pass}) is designed to remove this kind of systematics. Planck Collaboration III (2017) gives a detailed summary of all the systematic errors found during the reduction phase of the HFI data and limits on their amplitudes. From their Table 7 it is seen that main contributor of remaining systematic errors in the frequency maps is the uncertainties in the ADC linear corrections. For the 100 GHz, 143 GHz and 217 GHz channels (the most sensitive channels to the CMB signal on Planck) the residual level of systematic errors is 1 - 3 $10^{-3}$ $\mu \textrm{K}^{2}$, peaking at small l's, implying that they give only minimal contributions to the cross power spectra presented here. For the other important contribution to systematics e.g. calibration and bandpass mismatch are nearly an order of magnitude lower. Further, the neural networks are design to extract components with a flat frequency spectrum. Altogether, it is reasonable to assume that the observed BB and EB cross spectra are not polluted significantly by residual systematic errors. The E/B leakage due the uncertainties in the beams of the Planck optical system is estimated to be less than $3 x 10^{-3} \mu K^{2}$, which is small compared to the detected BB power spectrum (see Fig. 6). \section{The E/B leakage} Due to the fact that most, if not all, CMB experiments extract the power spectra from only a fraction of the sky (due to the strong signal from the MIlky Way in the relevant frequency range) the E modes will to some extend leak into the B modes. This problem has been discussed in a number of papers e.g. Kendrick (2006) and Grain et al. (2009). They show the problem can be significantly reduced by a proper apodization of the mask. Most of these investigations have analysed the problem if small areas on the sky, say 1 per cent, are observed, relevant for e.g. balloon flights. Since the Planck Mission is covering the whole sky, it is possible to analyse a large part of the sky, it is only necessary to exclude the brightest area of the Milky Way. As emphasized in Section 3.3, data from about 60 percent of the sky has been exploited in this paper. To investigate the level of leakage in the detected BB power spectrum, the BB power spectra has been calculated from the CAMB T/S = 1.0 models with different masks applied. The leakage (defined as the difference of the power spectrum included a mask minus the full sky power spectrum). The masks applied are similar to the mask described in Section 3.3, except for the size of the sky coverage. In Fig. 9, it is seen that the level of the E/B leakage for the mask used in calculated the BB power spectrum (the purple line) is insignificant compared to the detected spectrum(Fig. 6). \begin{figure}[!] \centering \includegraphics[width=2.5 in]{fig_eb_fin.pdf} \caption{The fully calibrated CC $<$HM, OE$>$ EB power spectrum. Y - axis: $l(l +1)/(2\pi) C^{EB}_{l}[\mu K^{2}]$} \label{fig_8} \end{figure} \section{Contamination by Galactic emission} As emphasized above, the neural networks are setup to extract a signal corresponding to a flat spectrum, but at low flux levels the noise of the observations presents a challenge for the neural networks. \begin{figure}[!] \centering \includegraphics[width=3.5 in]{eb_lk_up.pdf} \caption{The BB(mask) corrected - BB(full sky), for the CAMB T/S = 1 model. It is seen that the E/B leakage due to the applied mask is much smaller than the detected BB power spectrum. Y - axis: $l(l +1)/(2\pi) C^{EB}_{l}[\mu K^{2}]$} \label{fig_9a} \end{figure} \begin{figure}[!] \centering \includegraphics[width=3.0 in]{fig_mask_bb.pdf} \caption{The $<$HM, OE$>$ CC BB power spectra obtained with a set of CAMSPEC masks, with sky covering factors of 0.26, 0.40, 0.52, 0.61, 0.68. The spectra have been normalized to the covering factor of the mask used in obtained the HM and ODd/EVEN spectra. If the accuracy of the spectra is taken into account thee are no trend as function of the sky coverage. Y - axis: $l(l +1)/(2\pi) C^{BB}_{l}[\mu K^{2}]$} \label{fig_9} \end{figure} A simple test to estimate the contribution of the Galactic components is to investigate how the cross power spectra are depending of the Galactic latitude. In Fig. \ref{fig_9} the BB power spectra obtained within different CAMSPEC masks with different sky coverage (0.26 - 0. 68) are shown. It is seen that within the accuracy there is no variation of the spectra as function of the sky coverage, indicating a non - significant Galactic contribution. \begin{figure}[!] \centering \includegraphics[width=3.0 in]{cross_syn_nn_hm_12_e.pdf} \caption{The CC synchrotron HM 1 and NN HM 2 EE power spectrum. The synchrotron HM 1 map is provided by the Planck Commander team. Due to the observational errors in the LFI channels, they have applied a low pass filter removing multipolemoments above 250. It is evident that there is no evidence for a contamination of the NN maps by synchrotron emission from the Milky Way. X - axis: $\delta$l = 1, Y - axis: $l(l +1)/(2\pi) C^{EE}_{l}[\mu K^{2}]$} \label{fig_10} \end{figure} \begin{figure}[!] \centering \includegraphics[width=3.0 in]{cross_syn_nn_hm_12_b.pdf} \caption{The CC synchrotron HM 1 and NN HM 2 BB power spectrum. The synchrotron HM 1 map is provided by the Planck Commander team. Due to the observational errors in the LFI channels, they have applied a low pass filter removing multipolemoments above 250. It is evident that there is no evidence for a contamination of the NN maps by synchrotron emission from the Milky Way. X - axis: $\delta$l = 1,Y - axis: $l(l +1)/(2\pi) C^{BB}_{l}[\mu K^{2}]$} \label{fig_11} \end{figure} \subsection{The synchrotron emission} It is generally accepted that only the Galactic synchrotron emission and the thermal dust emission are polarized. The best available maps of these components have been derived by the Commander team from the Planck 2017 data set. Due to the uncertainties in the LFI channels they have a applied a low pass filter. In Figs. \ref{fig_10} and \ref{fig_11}, the CC EE and BB power spectra between the Commander synchrotron maps and the corresponding NN maps are shown. It is evident that there is negligible synchrotron contribution to the derived NN power spectra. \begin{figure}[h] \centering \includegraphics[width=3.0 in]{cross_dust_nn_hm_21_e.pdf} \caption{The CC dust HM 1 and NN HM 2 EE power spectrum. The dust HM 1 map is provided by the Planck Commander team. It is evident that there is no sign of a contamination of the NN maps by thermal dust emission from the Milky Way. X - axis: $\delta$l = 1,Y - axis: $l(l +1)/(2\pi) C^{EE}_{l}[\mu K^{2}]$} \label{fig_12} \end{figure} \begin{figure}[h] \centering \includegraphics[width=3.0 in]{cross_dust_nn_hm_21_b.pdf} \caption{The CC dust HM 1 and NN HM 2 BB power spectrum. The dust HM 1 map is provided by the Planck Commander team. It is evident that there is no evidence for a contamination of the NN maps by thermal dust emission from the Milky Way. X - axis: $\delta$l = 1,Y - axis: $l(l +1)/(2\pi) C^{BB}_{l}[\mu K^{2}]$} \label{fig_13} \end{figure} \subsection{The dust thermal emission} Similarly, in Figs. \ref{fig_12} and \ref{fig_13}, the CC EE and BB power spectra between the Commander thermal dust emission maps and the corresponding NN maps are shown. It is evident that there is also negligible contribution from dust to the derived NN power spectra. \section{Conclusions} It has been demonstrated that with the improved accuracy of the final Planck polarization maps compared the 2015 release, the detection of the BB power spectrum in NN1 is confirmed, with a higher confidence. Possible contamination from Galactic emission and remaining systematic errors in the Planck frequency maps has been ruled out. Two features in the EB power spectrum have been found each detected with a S/N $\sim$4 . At this stage, the most likely origin of these features is leakage from E modes to B modes of the order of 1 per cent. This level of leakage gives no significant contribution to the detected BB power spectrum. The TB power spectrum is found to be consistent with a zero spectrum. The confirmation of the BB power spectrum will, no doubt, give new strong arguments for the proposed polarization missions to follow up on Planck. \section{Acknowledgements} The author acknowledges that this work would not have been possible without the massive efforts by a lot of strongly committed scientists and engineers within the Planck Collaboration. A anonymous referee is acknowledged for valuable comments. This work has taken advantage of the HEAlPix software package.
\section{Introduction} Fractional Gaussian fields have inspired extensive research in spatial statistics. Fractional fields generalize the notion of fractional noise in two or higher dimensions and are particularly important for studying power laws and modeling long-range dependencies. The early mathematical development of fractional fields can be traced to the works of \citet{yagl:1957}, \citet{whit:1962}, \citet{mcke:1963}, \citet{gang:1986}, \citet{mand:vann:1968}, and others. Also notable are the works by \citet{dobr:1979}, \citet{yagl:1987}, \cite{gran:joye:1980}, \cite{hosk:1981}, \cite{gay:heyd:1990}, \citet{bera:1994}, \citet{ma:2003} and \citet{kelb:leon:2005}. Recent surveys on the topic are provided in \citet{chil:delf:2009}, \citet{cohe:ista:2013} and \citet{lodh:shef:2016}. Fractional Gaussian fields cover, as special cases, the de Wijs process or the Gaussian free fields \citep{math:1970,shef:2007,mond:2015}, the thin plate spline \citep{gu:wahb:1993}, higher-order intrinsic random fields \citep{math:1970,math:1973} and power variogram models. Fractional Gaussian fields can also be seen as limiting cases of the Mat\'ern models. Their applications range from agriculture, hydrology and environmental science to cosmology, statistical physics, and quantum mechanics. Advances in fractional Gaussian fields have been accompanied by the development of their discrete-space approximations. In one dimension, the discrete-space approximations emerged in the influential works of \citet{gran:joye:1980} and \citet{hosk:1981} on fractional differencing and have received extensive treatments in time series analysis. Furthermore, there is an impressive array of works on intrinsic autoregressions that can be understood as discrete-space approximations of various intrinsic random fields; see e.g. \citet{kuns:1987}, \citet{besa:koop:1995}, \citet{besa:mond:2005}, \citet{rue:held:2005}, \citet{lind:rue:2011}, \citet{cres:2015}, and \citet{mond:2018}. In a recent paper, \citet{dutt:mond:2015,dutt:mond:2016b} consider fraction Laplacian differencing as ways to approximate fractional Gaussian fields in two dimensions. These discrete-space approximations do not conflict, but rather establish a deeper connection with limiting, continuum fractional Gaussian fields, and help advance statistical computation. The intent of this book chapter is to provide a basic introduction to fractional Gaussian fields with an emphasis on their interpretation, their statistical properties and on exploring their discrete-space approximations. We start with a basic definition of fractional Gaussian fields in Sections \ref{sec:matern}, which arise when a fractional order of the Laplacian is applied to the Gaussian white noise on the two dimensional Euclidean space. We then present their spectral densities, variograms and, in Section \ref{sec:fracdiff}, consider their discrete-space approximations. These discrete-space approximations are obtained by restricting the random fields on regular lattices and by replacing the Laplacian operator on the two dimensional Euclidean space with discrete Laplacians on regular grids. We primarily focus on Gaussian fields with geometric anisotropies which occur when variogram contours are formed by concentric ellipses, and standard statistical analysis presents further challenges. In Section \ref{sec:MLestimation} we focus on a certain range of the fractional parameter and discuss maximum likelihood estimation for spatial models based on fractional Gaussian fields or their discrete-space approximations. We judge the effectiveness of the discrete-space approximations in terms of efficiency in approximating the continuum limits. For this, we consider simulation studies. Using computer experiments, we demonstrate that discrete-space approximations provide as good estimates as the limiting, continuum model based on the fractional Gaussian fields. We further demonstrate statistical scalability. In Section \ref{sec:data}, we present an analysis of the Indian Ocean surface temperature obtained from the Argo floats devices, further establishing the agreement between the models based on continuum fractional Gaussian fields and their discrete-space approximations. For ease of understanding and reproducibility, we provide all Matlab codes in the appendix. The dataset can be obtained from \url{ftp://usgodae.org/pub/outgoing/argo/geo/indian_ocean/} and also from the corresponding author. \section{Fractional Gaussian fields and their approximations} \subsection{Fractional Gaussian fields}\label{sec:matern} We follow \citet{lodh:shef:2016} and consider anisotropic two dimensional fractional Gaussian fields as \begin{equation}\label{eqn:imaternspde} \psi(u,v) = ( - \nabla)^{-\nu/2} \xi(u,v), \quad (u,v)\in\mathbb{R}^2, \end{equation} where $\xi(u,v)$, for $(u,v)\in\mathbb{R}^2$ represent Gaussian white noise with marginal variance $\sigma^2$, $\nu$ denotes the fractional or the long range dependence parameter, and $\nabla$ is the anisotropic Laplacian \begin{equation}\label{eqn:spdeoperator} \nabla = 4\alpha\frac{\partial^2 }{\partial u^2} + 4(\mbox{{\small$\frac{1}{2}$}}-\alpha)\frac{\partial^2 }{\partial v^2}. \end{equation} The parameter $0 < \alpha <1/2$ controls the degree of geometric anisotropy in both the $x$ and $y$ directions and the value $\alpha= 1/4$ corresponds to isotropic random fields. It is important to note that a Gaussian white noise is not pointwise defined, rather, it is a generalized random field \citep{math:1973,chil:delf:2009,lodh:shef:2016}. In fact, a white noise $\xi$ on $\mathbb{R}^2$ is defined such that for any pair of disjoint measurable sets $A$ and $B$, $\int_A\xi(u,v)dudv$ and $\int_B\xi(u,v)dudv$ are independent Gaussian random variables with zero means and variances $\sigma^2|A|$ and $\sigma^2|B|$ respectively, where $|A|$ and $|B|$, respectively, denote the areas of $A$ and $B.$ However, either pointwise or in a distributional sense, fractional Gaussian fields exist for all real values for $\nu$. Fractional Gaussian fields include many important models as special cases. In particular, $\nu=0$ corresponds to the White noise model, $\nu=1$ gives the de Wijs process or the Gaussian free fields, $\nu=2$ indicates thin plate splines (also known as bi-Laplacian random fields) and $\nu =3/2$ denotes the L\'evy Brownian motion. For all $\nu \le 1$, fractional Gaussian fields correspond to generalized random fields and are defined in a distributional sense. For $1 < \nu < 2$, fractional Gaussian fields have stationary (zeroth-order) increments. For $2 \le \nu < 3$, fractional Gaussian fields have stationary first-order increments, and so on. For non-negative $\nu$, it can be shown that the generalized spectral density of the fractional Gaussian fields in \eqref{eqn:imaternspde} is \begin{equation}\label{eqn:sdimatern} \rho(\omega,\eta) = \frac{\sigma^2}{\left(4\alpha\omega^2 + 4(\mbox{{\small$\frac{1}{2}$}}-\alpha)\eta^2\right)^\nu},~(\omega,\eta) \in \mathbb{R}^2. \end{equation} Thus, for $1 < \nu < 2$, standard Fourier integral formulas give an expression for variogram of $\Psi$ as \citep{dutt:mond:2016a} \begin{eqnarray}\label{eqn:variogPower} \gamma(h,k) & = &\mbox{{\small$\frac{1}{2}$}} \mathrm{var}~(\psi(u+h,v+k)-\psi(u,v)) = \int_{\mathbb{R}^2}\{1-\cos(h\omega+k\eta) \} \rho(\omega,\eta)d\omega d\eta\nonumber \\ & = & \dfrac{\sigma^2\Gamma(\nu-\mbox{{\tiny$\frac{1}{2}$}})}{16\sqrt{\pi\alpha(\mbox{{\small$\frac{1}{2}$}}-\alpha)}\Gamma(\nu)\Gamma(2\nu-1)\sin(-\nu\pi)} \left(\frac{h^2}{4\alpha}+ \frac{k^2}{4(\mbox{{\small$\frac{1}{2}$}}-\alpha)}\right)^{\nu-1}, \end{eqnarray} for any $(h,k)\in\mathbb{R}^2.$ By virtue of \eqref{eqn:variogPower}, fractional Gaussian fields, for values of $1< \nu <2$, correspond to widely used power variogram models in geostatistics. Fractional Gaussian fields can also be seen as a limiting case of Mat\'ern models. The latter emerge as a solution to the stochastic partial differential equation \begin{equation}\label{eqn:maternspde} (\kappa^2 - \nabla)^{\nu/2} \psi^\dagger(u,v) = \xi(u,v),~(u,v)\in\mathbb{R}^2, \end{equation} where $\kappa > 0$ is the inverse range parameter. The limiting cases, as $\kappa \to 0$, provide the fractional Gaussian fields in \eqref{eqn:imaternspde}. The spectral density \eqref{eqn:sdimatern} and the variogram function \eqref{eqn:variogPower} play an important role in all subsequent statistical computations. For example, for $1 < \nu <2$, the variogram function \eqref{eqn:variogPower} is key to computing the actual likelihood function, which we shall discuss in Section \ref{sec:MLiMatern}. \subsection{Lattice approximations}\label{sec:fracdiff} For $m\geq 1$, let $\mathbb{Z}^2_m$ denote the sub-lattice of the two-dimensional integer lattice $\mathbb{Z}^2$ with spacing $1/m$. Following \citet{dutt:mond:2016a}, let $\Delta_m$ be the Laplace difference operator on the sub-lattice $\mathbb{Z}^2_m$. Thus, for any real valued function $w$ defined at the lattice points of $\mathbb{Z}_m^2$, we get \[ \Delta_m w(u,v) = w(u,v) - [ \alpha_m \{ w(u+\mbox{{\small$\frac{1}{m}$}} ,v) + w(u-\mbox{{\small$\frac{1}{m}$}},v)\} + (\mbox{{\small$\frac{1}{2}$}} - \alpha_m) \{ w(u,v+\mbox{{\small$\frac{1}{m}$}})+w(u,v-\mbox{{\small$\frac{1}{m}$}}) \}] , \] where $0 \leq \alpha_m \leq 1/2$. Next, we consider \begin{equation}\label{eqn:fld} \psi^{(m)}(u,v) = \Delta_m^{- \nu/2}\; \xi^{(m)}_{u,v}, \hspace{.1in} \nu \ge 0, \end{equation} where $\xi^{(m)}_{u,v}$ is a Gaussian white noise on the sub-lattice ${\mathbb Z}_m^2$ with \[ \mathrm{var}~ \xi^{(m)}_{u,v} = \sigma^2_m /m^2. \] Then, the random field $\{ \psi^{(m)}(u,v)\}$ that arises from the above fractional Laplacian differencing can be interpreted as an approximation of the fractional Gaussian random fields \ref{eqn:imaternspde} on the sub-lattice ${\mathbb Z}_m^2$. It then follows from the standard theory on linear transformation or spectral representation that the generalized spectral density function of $\{ \psi^{(m)}(u,v)\}$ has the form \begin{equation}\label{eqn:sdffd} \rho_{m} (\omega,\eta) = \frac{\sigma_m^2}{ m^2 \Bigl[ 4\alpha_m\sin^2(\mbox{{\small$\frac{1}{2m}$}} \omega) + 4(\mbox{{\tiny$\frac{1}{2}$}}-\alpha_m)\sin^2(\mbox{{\small$\frac{1}{2m}$}} \eta)\Bigr]^\nu}, \end{equation} with $\omega, \eta \in (-\pi m, \pi m]$, $\sigma_m >0$ and $\nu > 0$. Under appropriate scaling of the parameters $\sigma^2_m$ the lattice random field converges to the fractional Gaussian fields \eqref{eqn:imaternspde}. That is, as $m\to\infty,$ \[ 4^\nu m^{2\nu-2}\sigma^2_m \to \sigma^2, \] and $\alpha_m \to \alpha,$ the spectral density $\rho_{m}$ converges to $\rho$ pointwise and in $L_p$ for all $p \leq 2/\lfloor\nu-1\rfloor.$ \begin{figure} \centering\includegraphics[width=\textwidth]{plots/variogram-diff.png} \caption{Plot of $\gamma_{m}(h,k)-\gamma(h,k)$ against the lag distance $\sqrt{h^2/\left(4\alpha\right)+k^2/\left(4(\mbox{{\small$\frac{1}{2}$}}-\alpha)\right)}$ for difference values of $\nu$ and $m.$} \label{fig:variodiff} \end{figure} The preceding result indicates that that continuum fractional Gaussian fields are scaling limits of fractionally differenced Gaussian random fields on regular lattices and it also explicitly describes the rescaling of parameters needed. For integer values $\nu=1,2, \ldots$, fractional Laplacian differencing corresponds to intrinsic autoregressions of order $\nu-1$ on the sub-lattice ${\mathbb Z}_m^2$. Furthermore, for $1 < \nu < 2$, fractional Laplacian differencing leads to a random field with stationary (zeroth-order) increments. Similarly, for $2 \le \nu < 3$, fractional Laplacian differencing gives rise to a random field with stationary first-order increments, and so on. For $1 < \nu <2$, the variogram function of $\psi^{(m)}$ takes the form of \begin{eqnarray}\label{eqn:variogsddif} \gamma_{m}(h,k) & = &\mbox{{\small$\frac{1}{2}$}}\mathrm{var}~\big(\psi^{(m)}(u+h,v+k)-\psi^{(m)}(u,v)\big) \nonumber \\ &= &\frac{1}{4\pi^2}\int_{-m\pi}^{m\pi}\int_{-m\pi}^{m\pi}\{1-\cos(\omega h+\eta k)\} \rho_{m}(\omega, \eta) d\omega d\eta. \end{eqnarray} for $(h,k)\in \mathbb{Z}^2$, and one can show that $\sup_{(h,k)\in\mathbb{Z}^2}\left|\gamma_{m}(h,k) - \gamma(h,k)\right| \to 0$ as $m\to \infty$. However, unlike \eqref{eqn:variogPower}, there is no such exact analytic formula available for \eqref{eqn:variogsddif}. Interestingly, we can apply the numerical method presented in \citet{dutt:mond:2016b} to calculate \eqref{eqn:variogsddif} and assess how well $\gamma_{m}$ in \eqref{eqn:variogsddif} approximate the limiting variogram function \eqref{eqn:variogPower}. The plots in Figure \ref{fig:variodiff} display the difference $\gamma_{m}(h,k)-\gamma(h,k)$ for $\sigma^2= 1,$ $\sigma^2_m = 4^{-\nu}m^{2-2\nu},$ $\alpha_m \equiv \alpha=0.1$ and various values of $\nu$ between $1$ and $2$ and, for $m=2,4$ and $8.$ We find that the difference is essentially a small constant independent of the spatial lag, but depending on $\nu$ and $\alpha.$ Because the variogram of a nugget effect is constant, these results thus suggest that, when augmented with a nugget effect, the fractionally differenced random field at the one-eighth lattice provides an excellent approximation of an fractional Gaussian field plus a nugget effect on the original lattice. This approximation result is consistent with the isotropic case discussed in \citet{dutt:mond:2016b}. \section{Model based geostatistics} In practice, the spatial random fields are often observed indirectly via some noise, blurring, treatment or covariate effects. Let the available data consist of values $y_1, \ldots, y_n$ at respective sites or sampling stations $s_1, \ldots, s_n$. Here each $s_i$ represents a small region (relative to the scale of sampling) and is often referenced by a point $(u_i,v_i)$ in $\mathbb{R}^2.$ In model based geostatistics \citep{digg:tawn:1998,digg:ribe:2007}, it is assumed that the observed data values are realizations of an explicitly specified stochastic model, such as the linear mixed model \begin{equation}\label{eqn:geostatmodel} y_i = \mu + \psi(u_i,v_i) + \epsilon_i, \end{equation} where $\mu$ is the overall mean, $\psi$ is the underlying fractional Gaussian field \eqref{eqn:imaternspde} with the variogram function given by equation \eqref{eqn:variogPower}, and, independent of $\psi$, random errors $\epsilon_1, \ldots, \epsilon_n$ are iid $N(0,\tau^{-1})$ residual or nugget components. The parameter $\tau^{-1}$ is popularly known as the nugget variance. Under the intrinsic assumption, the joint distribution of the contrasts observations $y_1,y_2,\ldots,y_n$ are then used for estimating the mean and spatial parameters, and conditional distribution of $\psi(u,v)$ given observed data values $y_1,\ldots,y_n$ is used to make predictions at an unsampled locations $(u,v) \in \mathbb{R}^2.$ For a suitable value of $m$, we next assume that the sampling stations $s_i$, $1\leq i \leq n$, can be embedded in the sublattice $\mathbb{Z}_m^2$. Furthermore, on the sublattice $\mathbb{Z}_m^2$, let the point $(u_{i,m}, v_{i,m})$ best represents the sampling station $s_i$. We can then consider a lattice approximation of the linear mixed model \eqref{eqn:geostatmodel} by replacing $\psi$ with $\psi^{(m)}$. This leads to an approximate model \begin{equation}\label{eqn:latticemodel} y_i = \mu^{(m)} + \psi^{(m)}(u_{i,m} , v_{i,m})+ \epsilon^{(m)}_i. \end{equation} In the above $\mu^{(m)}$ is now the overall mean, and random errors $\epsilon_1^{(m)}, \ldots, \epsilon_n^{(m)}$ are independent of $\psi^{(m)}$ and are iid $N(0,\tau^{-1}_m)$. The nugget variance $\tau^{-1}_m$ is analogous to $\tau^{-1}$. One important aspect of model based geostatistics is that it explicitly describes the joint distribution of the observations, thus providing a likelihood for the parameters. It provides a complete approach to inference based on variograms which is primarily used by practitioners. For more detail on model based geostatistics we refer the readers to \citet{digg:ribe:2007} and subsequent references. \subsection{Maximum likelihood estimation}\label{sec:MLestimation} Generally, the Gaussian linear mixed models allow maximum likelihood methods for estimating spatial parameters of interest thus facilitating model selection via information criteria, statistical inference, and more importantly, assessment of the uncertainty of the parameter estimates. However, maximum likelihood estimation for models \eqref{eqn:geostatmodel} and \eqref{eqn:latticemodel} presents significant challenges. In particular, exact MLE calculations for \eqref{eqn:geostatmodel} can be very challenging for any values of $\nu \ge 2$. Furthermore, $\nu \le 1$, fractional Gaussian fields are not defined pointwise but only in a distributional sense. This also presents additional complications. For MLE calculations with $\nu=1$, we refer to McCullagh and Clifford (2006) and Dutta and Mondal (2015). Here, for ease of exposition, we restrict our discussion to $1< \nu <2$. When $1< \nu <2$, fractional Gaussian fields have stationary increments. Thus, for this range of the fractional parameter, the marginal variances of the observations are infinite but all contrasts possess valid joint distribution. Moreover, in this case, the expected value of any contrast of the vector ${y} = (y_1,\ldots,y_n)^\top$ is zero. In the next subsection, we use these properties to advance MLE calculations. \subsubsection{MLE for fractional Gaussian fields }\label{sec:MLiMatern} We assume $1< \nu <2$. In the continuum model (8), the observations themselves do not possess a regular joint distribution because the marginal variances are not finite. However, in this case, all contrasts of the observations admit a non-singular multivariate normal distribution. To that end, suppose ${C}$ is an $(n-1)\times n$ matrix of orthogonal contrasts so that ${C1_n=0}$ and ${CC}^\top = {I}_{n-1},$ where $1_n$ is the $n\times 1$ vector of ones. Then the joint distribution of ${Cy}$ is multivariate normal with zero mean vector and covariance matrix ${C}\Sigma{C}^\top + \tau^{-1}{I}_{n-1},$ where the $(i,j)$th entry of $\Sigma$ arises from the variogram \eqref{eqn:variogPower} and is given by \[\sigma_{ij} = \dfrac{\sigma^2\pi\Gamma(\nu-\mbox{{\footnotesize$1/2$}})}{16\sqrt{\alpha(\mbox{{\small$\frac{1}{2}$}}-\alpha)}\Gamma(\nu)\Gamma(2\nu-1)\sin (\nu\pi)}\left(\frac{(u_{i}-u_{j})^2}{4\alpha} + \frac{(v_{i}-v_{j})^2}{4(\mbox{{\footnotesize$1/2$}}-\alpha)}\right)^{\nu-1},\] where $\sigma^2 > 0,$ $\tau >0,$ $1 < \nu < 2$ and $0 < \alpha < \mbox{{\footnotesize$1/2$}}.$ Note that, although $\Sigma$ is not non-negative definite, $C\Sigma C^\top$ is positive semi-definite. Consequently, the log-likelihood of the parameter $\theta = (\tau,\sigma^2,\nu,\alpha)$ is given by \begin{equation}\label{eqn:loglikMatern} 2\ell(\theta) = -(n-1)\log(2\pi) - \log\det({C}\Sigma{C}^\top + \tau^{-1}{I}_{n-1}) - {y}^\top{C}^\top({C}\Sigma{C}^\top + \tau^{-1}{I}_{n-1})^{-1}{Cy}. \end{equation} This likelihood function is invariant to the choice of the orthogonal contrast matrix ${C}$ because different choices change the log-likelihood by an additive constant that does not depend on the parameters. ML estimates of the parameters are obtained by maximizing $\ell$ within the domain. Because the parameters $\nu$ and $\alpha$ are constrained inside intervals, the limited memory Broyden–Fletcher–Goldfarb–Shanno algorithm with box constraints (L-BFGS-B) provides a practically useful tool for ML estimation. We also obtain the numerical hessian matrix as a byproduct of the algorithm and compute the standard errors of the parameters as the square roots of the diagonals of the inverse hessian matrix. There are some practical drawbacks of estimating the parameters using this method. First, the method requires inversion of an $(n-1)\times (n-1)$ covariance matrix, which is typically done using the dense Cholesky factorization, that requires $O(n^2)$ storage space in memory and has $O(n^3)$ computational complexity. Thus the method is only useful for moderate sample sizes. Second, the log-likelihood is not a concave function. Thus we cannot guarantee a global maximum. At the same time, maximization can run into boundary problems, meaning that maximum value is susceptible to occur at the boundary of the parameter space. Finally, for $\nu \ge 2$, MLE calculations get exceedingly difficult, as we need to consider a different contrast matrix $C$ that can generate all first-order increments of the observed data. \subsubsection{MLE with lattice approximations}\label{sec:MLfracdiff} Exact MLE calculations for the model \eqref{eqn:latticemodel} also presents challenges. This is because unlike \eqref{eqn:variogPower}, variogram calculations \eqref{eqn:variogsddif} require expensive numerical computation. However, on any finite regular lattice, \eqref{eqn:fld} provides another alternative way to approximate the model \eqref{eqn:latticemodel}. To that end, suppose that for a specific value of $m,$ the spatial domain is embedded in a finite regular rectangular array with $r$ rows and $c$ columns (both of which depend on $m$). Then, under a restriction of $\Delta_m$ to the finite $r\times c$ array, a solution $\varphi$ to \eqref{eqn:fld} has a precision matrix $\lambda_m R^\nu$ where $\lambda_m = m^2/\sigma_m^2$ and $R$ is the $rc\times rc$ matrix representing the restriction of $\Delta_m$ to the finite $r\times c$ array. Under a column major ordering of the entries of the $r\times c$ array, \citet{dutt:mond:2015}, have shown that the $rc\times rc$ matrix $R$ admits a spectral decomposition given by \[R = {P}^\top \left(4\alpha_m {D}_{01} + 4(\mbox{{\footnotesize$1/2$}}-\alpha_m){D}_{10}\right){P},\] with $P = P_c\otimes P_r,$ ${D}_{10} = {I}_c\otimes {D}_r$ and ${D}_{10} = {D}_c\otimes {I}_r,$ where for $l=r$ or $c,$ $P_l$ is the $l\times l$ orthogonal matrix with $(i,j)$th entry given by \[ p_{1,j} = l^{-\mbox{{\tiny$\frac{1}{2}$}}},\quad p_{i,j} = (2/l)^{-\mbox{{\tiny$\frac{1}{2}$}}}\cos \left\{\pi(i-1)(j-\mbox{{\tiny$\frac{1}{2}$}})/l \big\}, \quad i=2,\ldots,l, \quad j=1,\ldots,l,\] and ${D}_l$ is the $l\times l$ diagonal matrix with $i$th diagonal entry \[d_{i} = \sin^2\left\{\pi(i-1)/(2l)\right\},\quad 1\leq i \leq l.\] Consequently, suppressing $m$, we revise \eqref{eqn:latticemodel} using $\varphi$ as \[ {y} = \mu + {F}\varphi + {\varepsilon} \] where ${F}$ is the $n\times rc$ incidence matrix with $i$th row $f_i$ such that $f_i^\top\varphi$ gives the $\varphi-$values at $(u_{im},v_{im}),$ ${\varepsilon} = ({\varepsilon}_1^{(m)},\ldots,{\varepsilon}_n^{(m)})^\top,$ and the improper density for $\varphi$ is given by, \begin{equation}\label{eqn:densitypsi} f(\varphi) \varpropto \left|\lambda_m R^\nu \right|_{+}^{\mbox{{\footnotesize$1/2$}}} \exp\left(-\mbox{{\small$\frac{1}{2}$}}\lambda_m\varphi^\top{R}^\nu\varphi\right). \end{equation} In the above, we interpret the fractional power of $R$ via its spectral density, \begin{equation}\label{eqn:sddifprecision} R^\nu ={P}^\top \left(4\alpha_m {D}_{01} + 4(\mbox{{\footnotesize$1/2$}}-\alpha_m){D}_{10}\right)^\nu{P}. \end{equation} In order to estimate the parameters $\theta_m = (\tau_m,\lambda_m,\nu,\alpha_m)$, \citet{dutt:mond:2016a} takes an h-likelihood approach. Unlike the method described in Section \ref{sec:MLiMatern}, the above finite regular lattice approximations and the h-likelihood method are valid for all $\nu>0.$ The h-likelihood method goes as follows. Let ${B}$ denote the last $rc-1$ rows of the matrix ${M}$ so that ${B}\varphi$ is an $rc-1$ variate normal random vector with diagonal precision matrix ${G}$ consisting of the $rc-1$ non-zero eigen values of $\lambda_m{R}^\nu.$ Next, define the following matrices and vectors \[{X} = \begin{pmatrix} {1}_n & {F} \\ {0} & {B}\end{pmatrix},~ {z}=\begin{pmatrix} {y} \\ {0}\end{pmatrix},~ {\beta} = \begin{pmatrix}\mu^{(m)} \\ \varphi \end{pmatrix},~ {Q} = \begin{pmatrix} \tau_m{I}_n & {0}\\ {0} & {G}\end{pmatrix} \textrm{ and }{H=X(X^\top QX)^{\mathrm{-1}}X^\top Q}. \] \citet{dutt:mond:2016a} then obtain the residual likelihood (REML) function $\ell_R$ given by \begin{equation}\label{eqn:remlhlik} 2\ell_R(\tilde\theta) = \log\det {Q} - \log|X^\top QX|_+ - {(z-X\widehat{{\beta}})^\top Q (z-X\widehat{{\beta}})} \end{equation} where $\widehat{{\beta}}$ is the solution to \begin{equation}\label{eqn:blup} ({X^\top QX}){\beta} = {X^\top Qz}. \end{equation} Traditional maximization of the log REML function uses score equations which are obtained by equating the gradient of $\ell_R$ to zero. Thus suppose ${Q}_{1} = \partial {Q}/\partial \tau_m,$ ${Q}_{2} = \partial {Q}/\partial \lambda_m$, ${Q}_{3} = \partial {Q}/\partial \nu,$ and ${Q}_{4} = \partial {Q}/\partial \alpha_m,$. The score equations that maximize the log--REML function in \eqref{eqn:remlhlik} are then given by \[ \mbox{{\small$\frac{1}{2}$}}\mathrm{Tr} \left({Q}^{-1}{Q}_{i}\right) - \mbox{{\small$\frac{1}{2}$}}\mathrm{Tr}~\left\{ ({X}^{\top} {QX})^{-1}{X}^{\top} {Q}_{i}{X}\right\} - \mbox{{\small$\frac{1}{2}$}}({z-X}\widehat{\beta})^{\top}{Q}_{i}({z-X}\widehat{\beta}) = 0 \] for $i=1,\ldots,4$. Note that these score equations can also be expressed succinctly as \begin{equation}\label{eqn:scorehlik} \mbox{{\small$\frac{1}{2}$}}\mathrm{Tr}~{(I-H)Q}^{-1}{Q}_{i} - \mbox{{\small$\frac{1}{2}$}}({z-X}\widehat{\beta})^{\top}{Q}_{i}({z-X}\widehat{\beta}) = 0,\quad i=1,\ldots,4. \end{equation} Typically, Fisher's scoring method is used to solve the score equations and to obtain REML estimates. However, this also requires computation of the second derivatives of the log REML function or the information matrix ${\mathfrak{I}}$ whose $(i,j)$th entry is equal to \begin{equation}\label{eqn:informationhlik} \mathfrak{I}(i,j) = \mbox{{\small$\frac{1}{2}$}}\mathrm{Tr}\left\{{(I-H)Q}^{-1}{Q}_{i}{(I-H)Q}^{-1}{Q}_{j}\right\}, \end{equation} which can also be used to derive standard errors of the estimates. However, computing the trace terms either in \eqref{eqn:scorehlik} or \eqref{eqn:informationhlik} are not straightforward as they require computing the diagonal entries of the hat-matrix ${H}.$ For large values of $n,$ an exact computation of these trace terms has $O(n^3)$ computation complexity and requires $O(n^2)$ memory storage space. As a practical alternative, \citet{dutt:mond:2016a} then suggests instead solving the unbiased system of equations \begin{equation}\label{eqn:approxScoreEq} g_{i}(\theta_m) = \frac{1}{2K} \sum_{t=1}^{K}{u}_{t}^{\top}{Q}^{-1}{Q}_{i}({I}-{H}){u}_{t} - \mbox{{\small$\frac{1}{2}$}}({z-X}\widehat{\beta})^{\top}{Q}_{i}({z-X}\widehat{\beta}) = 0, \end{equation} where ${u}_t$'s are i.i.d Rademacher random vectors with entries $\pm 1$ with probability $\mbox{{\footnotesize$1/2$}}$ each. Here the number of Rademacher vectors, $K,$ should be large. However, the results of \citet{dutt:mond:2016a} suggests $K=50$ retain sufficient statistical efficiency of the estimates. \citet{dutt:mond:2016a} provide a sophisticated matrix-free trust-region algorithm for solving \eqref{eqn:approxScoreEq} that crucially depend on the matrix-free discrete cosine transformation for computing matrix-vector multiplications of the form $Pv$ and the matrix-free inverse discrete cosine transformation for computing $P^\top v$ for $v\in \mathbb{R}^{rc},$ \citep{rao:yip:1990,frig:john:2005} and a matrix-free preconditioned Lanczos algorithm for solving large system of linear equation \eqref{eqn:blup} and those involved in \eqref{eqn:approxScoreEq}. Furthermore, this computational framework yields the standard errors of the parameter estimates as well as the best linear unbiased predictions of the random field $\varphi$ that serves as the kriged surface of the random field. Overall, in contrast to the dense-matrix computations the computational complexity of the matrix-free algorithms is essentially $O(n(\log n)^2)$ using only $O(n)$ storage in memory. \section{Simulation studies}\label{sec:sim} We perform two simulation studies. The goal of the first simulation study is to derive the estimates for the fractionally differenced random field model when the data is generated from a continuum fractional Gaussian field plus a nugget effect, and to compare these estimates with the actual maximum likelihood estimates. The goal of the second simulation study is to demonstrate the scalability of statistical computation for fractional Laplacian differencing. \subsection{An experiment with power-law variogram}\label{sec:sim1} \begin{figure} \centering\includegraphics[width=0.8\textwidth]{plots/accuracy.png} \caption{Histograms of the estimates of $\nu$ (top), nugget precision (middle) and anisotropy parameters (bottom) using direct ML estimation of intrinsic Mat\'ern model (left column) and h-likelihood method on the lattice model (right column).} \label{fig:histAccuracy} \end{figure} \begin{table} \caption{Coverage probabilities and mean widths of 95\% confidence intervals based on normal approximations.} \label{tab:confintAccuracy} \begin{center} \begin{tabular}{|c |c c |c c |c c|} \hline & \multicolumn{2}{c|}{$\nu$} & \multicolumn{2}{c|}{$\log\tau~\vdots~\log\tau_m$} & \multicolumn{2}{c|}{$\alpha~\vdots~\alpha_m$} \\ Model & Coverage & Width & Coverage & Width & Coverage & Width \\ \hline Fractional Gaussian & 100 & 1.31 & 92.9 & 0.89 & 100 & 0.19\\ fields &&&&&&\\ \hline Fractional Laplacian & 96 & 0.27& 60.6 & 0.24 & 94 & 0.09 \\ differening &&&&&&\\ \hline \end{tabular} \end{center} \end{table} We generate data on 4000 randomly selected grid points in a 100x100 lattice embedding the unit square from an intrinsic Mat\'ern random field with $\nu = 1.25, \tau = 1,$ $\sigma^2 = 2,$ and $\alpha = 0.25.$ We compute the estimates of $\nu,\tau,\sigma^2$ and $\alpha$ using the method described in Section \ref{sec:MLiMatern}. Next, we fit the lattice model \eqref{eqn:latticemodel} the original $100\times 100$ array (so that $m=1$) and compute estimates of $\nu,$ $\tau_m,$ $\lambda_m$ and $\alpha_m.$ We repeat this process 100 times. Overall, the h-likelihood method was between 40--80 times faster than the direct ML estimation of intrinsic Mat\'ern model and in one of these simulations the direct ML method failed to converge, yielding estimates on the boundary. We discard this case from our analysis. It is expected that the analysis on the original scale with the fractionally differenced model would yield biased estimate of $\tau_m$ because it compensates or absorbs the difference between the lattice variogram and the continuum variogram as seen in Figure \ref{fig:variodiff}. Figure \ref{fig:histAccuracy} shows the histograms of these estimates from the two models along with the true values. These plots show that the lattice based fractionally differenced model provide practically useful estimates of $\nu$ and the anisotropy parameter. However, it over estimates the nugget variance (underestimates nugget precision). On the other hand, the confidence intervals and their average widths in Table \ref{tab:confintAccuracy} show that the fractionally differenced model provides shorter and more practically meaningful confidence intervals for $\nu$ and the anisotropy parameters. \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{plots/res1.png} \includegraphics[width=0.9\textwidth]{plots/res2.png} \end{center} \caption{Boxplots of the parameter estimates and their standard errors. True values of the parameters are: $\tau_m=4$ and $\lambda_m=8$ and $\nu=1.25$ in top panel and $\nu=1.5$ in bottom panel.} \label{fig:largesim} \end{figure} \subsection{Large scale computation with lattice approximations} In this section, we demonstrate the scalability of the likelihood computations using the fractionally differenced model. To that end, we now generate data on grid points of a $256\times 256$ regular rectangular array from two fractionally differenced models. We keep $\tau_m = 4,$ $\lambda_m=8$ and fix $\alpha_m=0.25$ and use two different values $\nu=1.25$ and $\nu=1.5.$ We randomly keep 60\% of the observations resulting in a sample of size around $39321\pm125$ (mean $\pm$ sd). The data is generated from the fractionally differenced model because such the method for generating from the intrinsic Mat\'ern model runs out of the memory. Similarly, the method for fitting the intrinsic Mat\'ern model using dense-matrix computations also fail on such large datasets. In contrast, the fractionally differenced model fits without any issue on a standard personal computer. The process is repeated 100 times for each choices of $\nu$ and the resulting boxplots of the estimates and their standard errors are shown in Figure \ref{fig:largesim}. We find that the estimates are very close to the true values of the parameters and are also unbiased. Furthermore, the standard errors are also very small suggesting the estimators are statistically efficient, a fact that is also noted in \citet{dutt:mond:2016a}. \section{Indian Ocean surface temperature from Argo floats} \label{sec:data} The Argo Program is part of the Global Ocean Observing System from an international collaboration among more than 30 countries from all continents that provides useful data on important ocean variables. Conceived in the early 2000s, the Argo fleet now consists of more than 4000 drifting battery-powered machines called Argo \emph{floats} that are deployed worldwide. These floats weigh around 20-30kg each, and typically probe the drifts at a depth where they are stabilized by their buoyant (around 1km). Every 10 days or so, these floats change their buoyant and dive to a depth of 2km and then rise to the water-surface measuring conductivity and temperature profiles as well as pressure, over about 6 hours. From the surface, they transmit their location as well as the collected data to satellites and dives back to their drifting depth. In this section, we analyze the monthly data on sea-surface temperature in the Indian Ocean obtained from April 1, 2020 till April 30, 2020. These data were collected and made freely available by the International Argo Program and the national programs that contribute to it \citep{argo2020}. After removing the erroneous measurements as described on the Argo website, we obtain 2525 observations of sea-surface temperature (in {\textsuperscript{o}C}) and plot them in the bottom left panel of Figure \ref{fig:argodata}. Note that the Argo floats are quite scattered over the Indian Ocean and the temperature are clearly spatially auto-correlated. Furthermore, from the top panel of Figure \ref{fig:argodata} we can see that the temperature variation seems to be more along the latitude compared to the longitude, as one would naturally expect. In fact, this suggests that an (intrinsically) stationary model may not be accurate. To account for this trend along the latitude, we fit a quadratic mean model \begin{equation}\label{eqn:quadratictrend} \mu(l) = a_0 + a_1 l + a_2 l^2 \end{equation} where $l$ denotes the latitude using ordinary least squares. Next, we obtain the residuals from this quadratic mean model and use them as response for the following spatial analysis. \begin{figure}[htp] \centering\includegraphics[width=0.95\textwidth]{plots/argo-coordinates.png} \centering\includegraphics[width=0.95\textwidth]{plots/argo-plot.png} \caption{Sea-surface temperature (in {\textsuperscript{o}C}) measured by the Argo floats in the Indian Ocean during April, 2020. Top: Observed temperature against the geographic coordinates. Bottom left: Image of observed temperature values and Right: Krigged sea-surface temperature.} \label{fig:argodata} \end{figure} First we run some exploratory analyses. We compute and plot the empirical variogram using the R package \textsf{geoR} along the four directions and plot it using the R package \textsf{ggplot2}. \begin{verbatim} library(geoR) library(dplyr) library(ggplot2) library(RColorBrewer) temp = read.table("april-data.txt") names(temp) = c("Latitude","Longitude","Temperature","Resid.quad") temp.geo = as.geodata(temp,data.col = 4) # Using the residuals from quadratic model as response vg4 = variog4(temp.geo) vg.gg = data.frame(h = vg4$`0`$u , v = vg4$`0`$v, Direction='0°') rbind(., data.frame(h = vg4$`45`$u , v = vg4$`45`$v, Direction='45°')) rbind(., data.frame(h = vg4$`90`$u , v = vg4$`90`$v, Direction='90°')) rbind(., data.frame(h = vg4$`135`$u , v = vg4$`135`$v, Direction='135°')) ggplot(vg.gg,aes(x=h,y=v,group=Direction,color=Direction,lty=Direction)) + geom_line(size=1.5) + xlab("Spatial lag") + ylab("Variogram") + theme_light(base_size = 16) + scale_color_brewer(palette="Dark2") + theme(legend.position = "bottom",legend.key.width = unit(2.5,"cm")) \end{verbatim} The dataset is also available by an email request to the corresponding author. The directional variograms are shown in Figure \ref{fig:argo-variogram}. We see that the variogram increases more along the 90\deg and the 45\deg directions supporting that there is more spatial variability across the latitude. Furthermore, the variograms along these directions do not seem to reach a sill, suggesting that an intrinsic model could be more appropriate for the data. \begin{figure}[htp] \centering\includegraphics[width=0.6\textwidth]{plots/argo-variogram.png} \caption{Directional variograms of the residual temperature values.} \label{fig:argo-variogram} \end{figure} \begin{table}[t] \caption{Estimates of the spatial parameters from the spatial linear mixed model based on fractional Gaussian field (FGF) and its lattice approximation (FLD). Standard errors are shown in parentheses.} \label{tab:argoestimates} \begin{center} \begin{tabular}{cccc} \hline Model & $\widehat{\nu}$ & $\widehat\alpha~\vdots~\widehat{\alpha}_m$ & $\widehat{\tau}~\vdots~\widehat{\tau}_m ({\textsuperscript{o}C}^{-2})$ \\ \hline FGF & 1.400 ( 0.114 ) & 0.058 ( 0.0314 ) & 3.59 ( 0.285 ) \\ FLD & 1.426 ( 0.051 ) & 0.074 ( 0.012 ) & 3.11 ( 0.267 ) \\ \hline \end{tabular} \end{center} \end{table} We first fit the intrinsic Mat\'ern model to the data using the method described in Section \ref{sec:MLiMatern}. The estimates of $\nu,\alpha$ and $\tau$ are shown in the first row of Table \ref{tab:argoestimates} and the estimate of $\sigma^{-2}$ is $\widehat{\sigma}^{-2} = 8.987{\textsuperscript{o}C}^{-2}$ (s.e. $1.19{\textsuperscript{o}C}^{-2}$). The estimate of $\alpha$ corroborates the observation that the temperature varies more across the latitude than the longitude. Next, we fit the fractionally differenced process to the data using the method described in Section \ref{sec:MLfracdiff}. To that end, we embed the region bounded between by 21\deg to the North, 67\deg to the South, 20\deg to the West and 145\deg to the East in a $128\times180$ regular rectangular array so that each pixel is approximately $0.6875\deg$ latitude by $0.694\deg$ longitude. Next we average the residuals from the quadratic model falling inside the same lattice pixel, resulting in around 7.43\% observed pixels. We use 50 Rademacher variables for stochastically approximate the score equations. Abusing the notation, we drop the subscript $m$ from $\tau_m,$ $\alpha_m$ and $\lambda_m,$ as $m$ is implicitly chosen via the array dimensions. The estimates of $\nu,\alpha$ and $\tau$ are shown in the second row of Table \ref{tab:argoestimates} and the estimate of $\lambda$ is $\widehat{\lambda} = 15.381{\textsuperscript{o}C}^{-2}$ (s.e. $3.35{\textsuperscript{o}C}^{-2}$). The results largely agree with the findings in Section \ref{sec:sim1}. In particular, although the estimates of $\nu$ and the anisotropy parameters are very close from the two methods, the fractionally differenced model yields smaller standard errors of these estimates. The slight discrepancy in the estimates of $\alpha$ occurs because the pixels are not exact squares. On the other hand, the estimate of the nugget precision is lower from the fractionally differenced model than the intrinsic Mat\'ern model. Note that as a byproduct of fitting the fractionally differenced model, we also obtain the best linear prediction $\widehat{\varphi}$ of the underlying spatial random field $\varphi$. The image $\widehat{\varphi}$ plus the quadratic mean from \eqref{eqn:quadratictrend} is shown on the right panel of Figure \ref{fig:argodata}. Note that the fine scale features of the temperature gradient are more prominent in the krigged map. Such interpolated maps are often useful in studying other oceanic and atmospheric activities. \section{Concluding remarks}\label{sec:discussion} This book chapter presents a brief review on fractional Gaussian fields, their lattice based approximations, and connections to the intrinsic and stationary M\'atern, power-law and other generalized random fields. Likelihood based inference methods have been developed for spatial linear mixed models based on the fractional Gaussian random fields and fractional Laplacian differencing on regular lattice. Computational methods for maximum likelihood estimation of parameters have been described and compared. Using both simulation and data examples, it is demonstrated that the lattice based model facilitate faster and more stable statistical computations of the maximum likelihood estimators than the model based on fractional Gaussian fields, while providing practically close estimates of dependence and anisotropy parameters. Moreover, the h-likelihood method for the model on regular lattice provide more useful estimates of uncertainty and confidence intervals for the aforementioned parameters than the maximum likelihood method for the geostatistical model based on the fractional Gaussian fields. It must be stressed that there are definite advantages in discretizing the space using a regular lattice instead of other well known ideas such as triangulation \citep{lind:rue:2011} and neighborhood selection \citep{datt:bane:2016} of irregularly distributed sampling stations. One advantage is the explicit spectral decomposition that allows for the use of fractional values $\nu$ and provides fast matrix-free computation in terms of discrete cosine transformation. Another advantage is accommodation of geometric anisotropies. It must be noted that irregular discretizations do not permit us to accommodate geometric anisotropies in any obvious way. Our presentation has focused on fractional models with long range dependence. To study short range dependence, we can consider stationary Mat\'ern covariance models \citep{hask:cull:2007,stei:2012,guin:mont:2017}. As fractional Gaussian fields are limiting cases of Mat\'ern models, we can also obtain lattice approximations of the latter. Under the same setup as Section \ref{sec:MLfracdiff}, the inverse variance covariance matrix of this approximate Mat\'ern model takes the form \begin{equation}\label{eqn:stationarycase} \lambda_mR^\nu = \lambda_m P^\top (\kappa_{m} + 4\alpha_m D_{01} + 4\alpha'_{m} D_{10})^\nu P \end{equation} where $\kappa_m,\alpha_m,\alpha'_m$ are non-negative and $\kappa_m + 4\alpha_m + 4\alpha'_m = 2.$ Thus, both Mat\'ern models and their lattice approximations contain an additional range parameter, and at first it may appear that Mat\'ern models and their lattice approximations have added flexibility due to the extra range parameter. However, inclusion of an unknown finite range parameter often leads to long flat ridges in the likelihood function, which in turn incur substantial numerical instability in the MLE computations. This, for example, has been observed in \citep{lim:chen:2017} and also in our own experiments with the lattice approximation \eqref{eqn:stationarycase}. Interestingly, the work of \citet{zhan:2004} suggest that the scale and the range cannot both be estimated consistently. In fractional fields, we set the range parameter at infinity. In the short range dependence case, we can also fix the range parameter to a finite number to lessen numerical instabilities and to enhance interpretability. We can then proceed computation as presented in Section~3 of this chapter. \section*{Acknowledgement} The authors thank an anonymous referee for helpful comments. Dutta's research was supported in part by the United States Department of Agriculture (USDA) National Institute of Food and Agriculture (NIFA) Hatch project IOW03617. Mondal's research was supported by the National Science Foundation (NSF) award DMS-1916448. The content presented in this chapter are those of the authors and do not necessarily reflect the views of NIFA, USDA and NSF. \bibliographystyle{bathx}
\section{Introduction} \label{intro} Even though ultrasound sonography is a low-cost, safe, and fast imaging technique that has been widely used around the globe in clinical diagnosis, surgical monitoring, medical robots etc., there are still some major drawbacks in ultrasound imaging. Due to the nature of how ultrasound images are captured, it can be hard to see the structures that are deep or underneath some highly reflective surfaces \cite{jensen1999linear}. Certain tissues or structures would bounce back or absorb the sound waves, resulting in dark regions underneath. Such tissues and structures can sometimes produce alterations in ultrasound images which do not represent the actual contents , i.e. artifacts \cite{kremkau1986artifacts}. Moreover, the directionality of ultrasound imaging can make some (parts of) structures difficult to image from certain directions, which may prevent ultrasound images from conveying a complete description of what is going on inside the patient's body. In addition, the directionality may also create confusion for clinicians or medical robots performing downstream tasks. For example, a bullet inside a patient's body would create significant reverberation artifacts that occlude what is underneath. Additionally, when a medical robot inserts a needle into a patient, the reverberation artifacts created by the needle might make the needle tracking algorithm fail or disrupt the identification of the structures of interest \cite{reusz2014needle}. Even though some artifacts have diagnostic significance, which could help clinicians localize certain structures or lesions inside patients' bodies \cite{ahuja1996clinical,baad2017clinical}, the artifacts become less meaningful once the objects of interest are identified. Furthermore, if we preserve the artifacts from different viewpoints, then they could substantially occlude real tissues and the image would be harder to interpret. When there are multiple viewpoints available in ultrasound imaging, we can reconstruct an ultrasound image that represents the underlying structures better while having fewer artifacts. However, no existing method can do the job perfectly. Relatively simple methods such as averaging the overlapping pixel values from different viewpoints \cite{trobaugh1994three} or taking the maximum of such pixels \cite{lasso2014plus} result in lower dynamic range or additional artifacts in the output images. Other more advanced ultrasound compounding algorithms \cite{gobl2018redefining,hennersperger2015computational} reconstruct the 3D volume of ultrasound using a tensor representation, but both of them still combine the overlapping pixels by simply averaging them or taking the maximum. The method proposed by zu Berge et al. \cite{zu2014orientation} utilizes the per-pixel confidence map proposed by Karamalis et al. \cite{karamalis2012ultrasound} as weights in compounding. While this method does not directly take the average or maximum, it does not take the contrast of the image into account. In the survey paper by Mozafari et al. \cite{mozaffari2017freehand}, a large number of 3D compounding methods are covered, but all of the methods deal with the overlapping pixels from different views by taking the average or maximum. Since both bright and dark regions contain useful information in ultrasound images, maximizing the contrast in those regions while lowering the intensity of noise and artifacts in other regions is essential in compounding. In all cases, every existing compounding algorithm tends to introduce new artifacts into the volumes and lower the dynamic range. These algorithms can only preserve either dark or bright regions, but in some clinical settings or in computer vision algorithms to guide downstream medical robots, \textit{both dark and bright regions are useful}. We do not want to naively take the maximum or average when dealing with overlapping pixels from different views, since doing so would lower the contrast or create new artifacts. Our goal is to clearly differentiate all structures, whether dark or bright, while suppressing artifacts and speckle noise to help with downstream computer vision tasks such as vessel segmentation. To better reconstruct the vessels and detect the bones, unlike prior work, we are less concerned with recovering the most ``accurate" individual pixel values but more concerned with enhancing the images by maximizing the contrast. We focus on preserving patches with the largest contrast, suppressing less-certain high frequency information to prevent piecemeal-stitching artifacts and reduce existing artifacts. Our most important contributions are: (1) Use more advanced methods when compounding overlapping pixels between different views instead of directly taking the average or maximum. (2) Keep the pixels and structures with the higher confidence when compounding. (3) Preserve the pixels or patches that have the largest local contrast among the overlapping values from different viewpoints. (4) Identify anatomic boundaries of structures and tissues and treat them differently while compounding. (5) Use Laplacian pyramid blending \cite{burt1983multiresolution} to remove discrepancy in pixel values from ultrasound images captured in different view points. (6) Make use of the advantages of different compounding methods in different frequency scales. \section{Related Work} As for freehand ultrasound compounding, in 1997, Rohling et al. \cite{rohling1997three} proposed to compound the freehand ultrasound images in the same plane iteratively, using an approach that is based on averaging. Later, the same group used interpolation to reconstruct 3D volumes of non-co-planar freehand ultrasound and still averaged the overlapping pixels \cite{rohling1999comparison}. As mentioned by Rohling et al. \cite{rohling19993d} and Mozaffari et al. \cite{mozaffari2017freehand}, the most common method in freehand compounding is to use interpolation to calculate the missing pixels and use averaging to calculate the overlapping pixels while this might not be the best approach. Grau et al. \cite{grau2005adaptive} came up with a compounding method based on phase information in 2005. Although this method is useful, access to Radio Frequency (RF) data is limited, preventing the algorithms from being widely adopted. Around the same time, Behar et al. \cite{behar2006statistical} showed that averaging the different view worked well if the transducer were set up in a certain way by simulation, but in practice, it would be extremely hard to set up the imaging settings that way. In recent years, Karamalis et al. \cite{karamalis2012ultrasound} came forward with a way to calculate physics-inspired confidence values for each ultrasound pixel using a graph representation and random walk \cite{grady2006random}, which Zu Berge et al. \cite{zu2014orientation} used as weights in a weighted average algorithm to compound ultrasound images from different viewpoints. Afterwards, Hung et al. \cite{hung2020ultrasound} proposed a new way to measure the per-pixel confidence based on directed acyclic graphs that can improve the compounding results. Hennersperger et al. \cite{hennersperger2015computational} and Göbl et al. \cite{gobl2018redefining} modeled the 3D reconstruction of ultrasound images based on more complete tensor representations, where they modeled the ultrasound imaging as a sound field. While these two recent papers made great advances in reconstructing ultrasound 3D volumes, they still compound overlapping pixels by averaging or taking the maximum. A review of freehand ultrasound compounding by Mozaffari et al. \cite{mozaffari2017freehand} summarized compounding methods using 2D and 3D transducers. However, few papers talked about how they deal with overlapping pixels, which is what our work mainly focuses on. In the case of robot control instead of freehand ultrasound, Virga et al. \cite{virga2018use} modeled the interpolation/inpainting problem as partial differential equations and solved them with a graph-based method purposed by Hennersperger et al. \cite{hennersperger2014quadratic,virga2018use}. They also did image compounding based on the tensor method by Hennersperger et al. \cite{hennersperger2015computational}. Although ultrasound artifacts have barely been directly considered in previous compounding approaches, it has been widely discussed in literature. Reverberation artifacts and shadowing are useful in diagnosis because those artifacts can help clinicians identify highly reflective surfaces and tissues with attenuation coefficients significantly different from normal tissues \cite{hindi2013artifacts}. Reverberation artifacts are most useful in identifying anomalies in lungs \cite{baad2017clinical,soldati2019role}, while it can also be used in thyroid imaging \cite{ahuja1996clinical}. Shadowing could be used in measuring the width of kidneys \cite{dunmire2016use}. However, artifacts and noise could occlude the view of other objects of interests \cite{mohebali2015acoustic} or hurt the performance of other tasks, such as registration \cite{roche2001rigid}, needle tracking \cite{reusz2014needle}, or segmentation \cite{xu2012ultrasound}. In recent years, several learning-based methods have been focusing on identifying artifacts and shadows and using this information to identify other objects \cite{hung2020weakly,meng2019weakly}, but they all need substantial labeling work and a relatively large dataset. Non-learning-based methods to remove the artifacts, either use RF data \cite{tay2011wavelet} or temporal data \cite{win2010identification} or fill in the artifact regions based on neighboring image content within the same image \cite{tay2006transform}. All of these methods make assumptions about what the missing data probably looks like, whereas our approach utilizes multi-view compounding to obtain actual replacement data for the artifact pixel locations. \section{Methods} \subsection{Identifying Good Boundaries} \label{good} Any sort of averaging between different views in which an object appears either too bright or dark in one view will lower the compounded object's contrast with respect to surrounding pixels. Even though artifacts could be suppressed, the useful structures would also be less differentiated, which is not the optimal approach. Therefore, identifying good anatomic boundaries, and treating them differently than other pixels in compounding, is essential to preserving the dynamic range and contrast of the image. Ultrasound transmits sound waves in the axial (e.g. vertical) direction, so sound waves are more likely to be bounced back by horizontal surfaces. Horizontal edges are also more likely to be artifacts, in particular reverberation artifacts \cite{quien2018ultrasound}. The trait of reverberation artifacts is that the true object is at the top with the brightest appearance compared to the lines beneath which are artificial. The distance between the detected edges of reverberation artifacts are usually shorter than other structures. Also, structures in ultrasound images are usually not a single line of pixels: they usually have thickness. Though reverberation artifact segmentation algorithms like \cite{hung2020weakly} could work well in identifying the bad boundaries, labeling images is a very time-consuming task. Besides, the exact contour of the structures in ultrasound images are ambiguous, which can be hard and time-consuming to label as well, so directly using manual labels would be less efficient and it might introduce new artifacts into the images. Therefore, We propose to refine the detected edges based on the appearances of reverberation artifacts. First we detect the horizontal boundaries through edge detection algorithms. To detect the actual structures in the ultrasound images instead of the edge of the structure, we calculate the gradient at pixel $(x,y)$ by taking the maximum difference between the current pixel and $\alpha$ pixels beneath, \begin{equation} \frac{\partial I(x,y)}{\partial y}=\max_{j=1,2,..,\alpha}|I(x,y)-I(x,y+j)| \end{equation} where in this paper, we set $\alpha$ to 15. We then group the pixels that are connected into clusters, such that pixels belonging to the same boundary are in the same cluster. We remove the clusters containing fewer than 50 pixels. After that, we only keep the clusters that do not have a cluster of pixels above itself in $\beta$ pixels. In this paper, $\beta=20$. A refinement is performed by iterating through the kept clusters and comparing the pixel values against that of the original image. A stack $s$ is maintained, and the pixels in the kept clusters with values greater than $threshold1$ are pushed into it. We pop the pixel $(x,y)$ at the top of the stack and examine the pixels in its 8-neighborhood $(x_n,y_n)$. If $(x_n,y_n)$ has never been examined before and satisfies $I(x_n,y_n)> threshold1$ and at the same time the gradient value is less than $threshold2$, i.e. $|I(x_n,y_n)-I(x,y)|<threshold2$, then we push $(x_n,y_n)$ into the stack $s$. We repeat this procedure until $s$ is empty. We add this step because the boundary detection might not be accurate enough and we can ignore detected boundaries with low pixel values to suppress false positives. In this paper, $threshold1$ and $threshold2$ are set to $30$ and $2$ respectively. The pseudocode for the described algorithm is shown in Algorithm~\ref{Algo_B}. We note that we assigned the values to the parameters based on empirical results. \begin{algorithm}[ht] \KwData{input image $I$} \KwResult{output boundary mask $B$} $edges=clustering(denoising(\frac{dI}{dy}))$;\\ $edges=cleanup(edges)$;\\ $mask=zeros(I.shape)$; $B=zeros(I.shape)$;\\ \For{$edge$ in $edges$}{ \uIf{$\forall e$ in $edges$ that are far enough from edges underneath}{ $mask[edge]=1$; } } \For{$[i,j]$ where $mask[i,j]==1$}{ \uIf{$B[i,j]==0$}{ stack $s$; \# initialize stack\\ \uIf{$I[i,j]>threshold1$}{ $s.push([i,j])$; $B[i,j]=1$; } \While{$s$ is not empty}{ $[x,y]=s.pop$;\\ \For{$[ii,jj]$ in the neighborhood of $[x,y]$}{ \uIf{$I[ii,jj]>threshold1$ and $\mid I[x,y]-I[ii,jj] \mid <threshold2$ and $B[ii,jj]==0$}{ $s.push([ii,jj])$; $B[ii,jj]=1$; } } } } } \caption{Horizontal-edge refinement} \label{Algo_B} \end{algorithm} \subsection{Compounding Algorithm} Attenuation reduces ultrasound image contrast in deeper regions. Simply taking the maximum, median or mean while compounding \cite{lasso2014plus} further undermines the contrast information, where structure information is stored. Taking the maximum also would create artifacts by emphasizing non-existent structures resulting from speckle noise in uncertain regions. Although uncertainty-based compounding approach by \cite{zu2014orientation} suppresses the artifacts and noise to some extent, it produces substantially darker images than the originals and lowers the dynamic ranges. Also, taking the maximum retains the bright regions, but some dark regions are also meaningful, so it would make more sense to preserve the patches with the largest local contrast than to simply select the pixels with maximum values. However, directly taking pixels with the largest contrast would lead to neighboring pixels inconsistently alternating between different source images. Besides, the neighbors of a pixel might all be noise, resulting in instability of the algorithm. Taking the maximum contrast might also emphasize the artifacts. We developed a novel Laplacian-pyramid \cite{burt1983multiresolution} approach to compound the images at different frequency bands and different scales. In this way, we can apply contrast maximization method at certain frequency bands while reconstructing from the pyramid. However, the pixels at extremely large scale in the pyramid represents a patch containing a huge number of pixels in the lower layers, so the contrast in this layer has less anatomical meaning. On the other hand, when the scale is small, the noise in the image would create large local contrast, so maximum weighted contrast might introduce new artifacts into the image. At extremely low and high scales, we thus consider contrast to be less important than intensity confidence measures. Another flaw of directly maximizing the contrast is that the large contrast region might contain artifacts and shadows, so we only maximize the contrast when the overlapping pixels have similar structural confidence values \cite{hung2020ultrasound}, otherwise we use the pixel with the larger structural confidence value in the compounded image, as low structural confidence value indicates that the pixel belongs to artifacts or shadows. Although some anatomic structures would be removed due to the low confidence values, artifacts and noises would also be removed in the compounded image. The anatomic structures are later compensated for in the later stage of the algorithm. Our novel ultrasound compounding method takes ultrasound images from multiple viewpoints and calculates their intensity and structural confidence maps \cite{hung2020ultrasound}, then calculates Laplacian and Gaussian \cite{toet1989image} pyramids of the original images and the Gaussian pyramid of confidence maps. Denote $L_{m,n}$ $GI_{m,n}$ as the n\textsuperscript{th} layer of the Laplacian pyramid and Gaussian pyramid of the m\textsuperscript{th} co-planar ultrasound image respectively, $GC_{m,n}$ $G\Gamma_{m,n}$ as the n\textsuperscript{th} layer of the Gaussian pyramid of the intensity and structural confidence map of m\textsuperscript{th} co-planar ultrasound image respectively, and, $L_k$ as the k\textsuperscript{th} layer of the Laplacian pyramid of the synthetic image. $M$ is the set of viewpoints, with $|M|$ views. Also denote $N(i,j)$ the 8-connected neighborhood of pixel $(i,j)$. Here we combine the weighted maximum contrast and weighted average together. For the k\textsuperscript{th} layer of the pyramid, if the difference across viewpoints between the maximum and minimum structural confidence values $G\Gamma_{m,k}(i,j)$, where $m\in M$, is less than a certain threshold $\gamma$ ($\gamma=0.05$ in this paper), we take the pixel $(i,j)$ with the largest contrast at this scale, since only when there is no artifact at the pixel does taking the largest contrast make sense \begin{equation} \widetilde{m}(i,j) = \argmax_{m \in M} \sum_{(a,b)\in N(i,j)} { \mid GI_{m,k}(a,b)-GI_{m,k}(i,j)\mid} \label{eq} \end{equation} If not, we take the pixel $(i,j)$ with the largest structural confidence at this scale \begin{equation} \widetilde{m}(i,j) = \argmax_{m \in M} G\Gamma_{m,k}(i,j) \end{equation} Denote the intensity-confidence weighted average at the k\textsuperscript{th} layer of the Laplacian pyramid as $La_k$, \begin{equation} La_k(i,j)=\frac{\sum_{m=1}^{\mid M \mid} GC_{m,k}(i,j)L_{m,k}(i,j)}{\sum_{m=1}^{\mid M \mid} GC_{m,k}(i,j)} \end{equation} Then the k\textsuperscript{th} layer of the Laplacian pyramid of the synthetic image can be calculated as, \begin{equation} L_k(i,j)=\phi(k) L_{\widetilde{m}(i,j),k}(i,j)+(1-\phi(k))La_k(i,j) \end{equation} where \begin{equation} \phi(k)=\frac{1}{0.4\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{(2k-K-1)^2}{0.16(K-1)^2})} \end{equation}is a weight function, and K is the number of total layers. This weight function is designed to assign lower weights to contrast maximization and higher weights to intensity-confidence-weighted average in extremely low and high scale. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{pipeline.png}\label{fig:subfig-1}} \caption[]{Compounding with Laplacian and Gaussian pyramid. The compounding is performed in each layer of the pyramid with the confidence map (intensity confidence or structural confidence) used as some sort of weights. The compounding results are reconstructed from the pyramid of the compounded image.} \label{subfig-1} \end{figure*} The compounding algorithm could be further generalized: \begin{equation} L_k(i,j)=\sum_{n=1}^N\phi_n(k)F_n(\{L_{m,k}\}_{m \leq \mid M \mid},\{G_{m,k}\}_{m \leq \mid M \mid}) \end{equation} where \begin{equation} \sum_{n=1}^N\phi_n(k)=1, 0< k \leq K, 0< n \leq N, 0 \leq \phi_n(k) \leq 1 \end{equation} $K$ is the total number of layers, $N$ is the total number of compounding methods, $p$ is the total number of viewpoints, $G_{m,k}$ denote any kind of confidence map at layer $k$ from viewpoint $m$, and $F_n$ denote a compounding method. We can use any weighting scheme to combine any number of compounding schemes in the Laplacian pyramid based on the application and data. The algorithm still takes some sort of confidence-based weighted averaging in some layers of the pyramid. During artifact-free contrast maximization, some anatomic boundaries would be removed incorrectly due to lower structural confidence. Therefore, even though this approach works well in preserving contrast and suppressing artifacts, the actual boundaries of structures still tend to get darker. In addition to what we just proposed above, the algorithm we purposed back in section~\ref{good} can also be incorporated. While reconstructing the image from the new Laplacian pyramid after getting the image from the third layer, the good boundaries are detected and values from the original images are taken. For overlapping pixels here, we take the maximum. We apply the same notation as above, and $GB_{m,k}$ is layer $k$ from viewpoint $m$ of the Gaussian pyramid of the boundaries mask $B$ (Gaussian pyramid of algorithm~\ref{Algo_B}'s output). \begin{equation} L_3(i,j)=\max(\frac{\sum_{m=1}^{\mid M \mid} {GB_{m,3}(i,j)GI_{m,3}(i,j)}} {\sum_{m=1}^{\mid M \mid} {GB_{m,3}(i,j)}},L_3(i,j)) \end{equation} This step is done on the third layer of the pyramid since there are still two layers before the final output, so piecemeal-stitching artifacts can still be suppressed. The step isn't done in deeper layers, so that we can still preserve contrast. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{pipline4.png}\label{fig:subfig0}} \caption[]{Pipeline for combining two individual compounding methods and boundaries enhancement. We combine the results from different methods by different weights in each layer of the pyramid. The anatomic boundaries are enhanced at the third layer so that the enhancement does not introduce new artifacts. } \label{subfig0} \end{figure*} \section{Experiments} \subsection{Data Acquisition} The data used in these experiments were gathered from three different sources: a Advanced Medical Technologies anthropomorphic Blue Phantom (blue-gel phantom), an ex-vivo lamb heart, as well as a live pig. For our initial blue-gel phantom experiments, a UF-760AG Fukuda Denshi diagnostic ultrasound imaging equipment with a linear transducer (51 mm scanning width) set to 12 MHz, a scanning depth of 3 cm and a gain of 21 db is used to scan the surface of the phantom. A needle is rigidly inserted and embedded within the phantom. When scanning the surface, images from two orthogonal viewpoints are collected. As the phantom square it is easy to ensure co-planar orthogonal views with using free-hand imaging without any tracking equipment. The experiment setup is shown in Fig.~\ref{setup1}. For the ex-vivo experiment, a lamb heart is placed within a water bath in order to insure good acoustic coupling. Using a Diasus High Frequency Ultrasound machine, a 10-22 MHz transducer is rigidly mounted onto a 6 degrees of freedom (dof) Universal Robotics UR3e arm. Using a rough calibration to the ultrasound tip, the 6-dof arm is able to ensure co-planar views of the ex-vivo lamb's heart. The experiment setup is shown in Fig.~\ref{setup2}. For the in-vivo experiment, a live pig is used as the imaging subject. A UF-760AG Fukuda Denshi diagnostic ultrasound imaging equipment with a linear transducer (51 mm scanning width) set to 12 MHz, a scanning depth of 5 cm and a gain of 21 db is mounted on the end-effector of the UR3e arm and is placed on the desired location manually to get a good view of the vessel \cite{tracir_setup}. Some manual alignments are needed for the arm to be in proper contact with the pig's skin. This pose of the robot is the zero degree view of the vessel. After this, the rotational controller rotates the probe along the probe's tip by the specified angle. For this experiment we cover a range from 20 degree to -25 degree at an interval of 5 deg. The input to the uR3e robot is sent through a custom GUI that is designed to help the users during surgery. The GUI has relevant buttons for the finer control of the robot in the end-effector frame. The GUI also has a window that displays the ultrasound image in real-time which helps in guiding the ultrasound probe. \begin{figure}[h] \centering \subfloat[][]{\includegraphics[width=0.5\textwidth]{setup1.jpeg}\label{setup1} } \subfloat[][]{\includegraphics[width=0.44\textwidth]{setup2.jpeg}\label{setup2} } \caption{(a) The experiment setup for the blue-gel phantom experiment. The square phantom and the needle is shown in the image. We perform the experiment with free-hand imaging since it is easy to make sure the orthogonal views on a square phantom. (b) The experiment setup for the lamb heart experiment, where the lamb heart is situated in a water bath to ensure acoustic coupling. The imaging is done by a robot-controlled high frequency probe.} \label{setup} \end{figure} \subsection{Qualitative Evaluation} We visually compare the results of our method against average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, and uncertainty-based fusion \cite{zu2014orientation}. As is shown in Fig.~\ref{comp}, our algorithm has the best result in suppressing artifacts, and at the same time, the brightness of the boundaries (green arrows) from our algorithm is similar to that of taking maximum \cite{lasso2014plus}. Our method also preserves a lot more contrast since other parts of the patch are darker in comparison to our bright boundaries, whereas the boundaries from the other two compounding algorithms are darker and therefor less contrasting with the dark interior. Our algorithm also completely suppresses the reverberation artifacts in the regions that the red and yellow arrows point to, while the results from other algorithms all preserve undesirable aspects of artifacts. \begin{figure}[h] \centering {\includegraphics[width=\textwidth]{_1.png}} \caption[]{Compounded patches left to right: average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm, where the green arrows indicate the vessel walls while the red and yellow arrows indicate the artifacts. As is shown in the figure that our result preserves the brightness of the vessel boundaries and suppresses the artifacts at the same time, while other methods fail to do so.} \label{comp} \end{figure} To compare our results against other existing compounding algorithms (average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, and uncertainty-based fusion \cite{zu2014orientation}), we select 5 examples of results on the anthropomorphic phantom, which is shown in Fig.~\ref{comp2}. In the first row, our algorithm almost completely removes the reverberation artifacts in the synthesized image and at the same time preserves the contrast in the images. In other phantom examples, our algorithm is also the best in removing the reverberation artifacts and shadows the vessel walls cast, while preserving the brightness of the vessel walls, needles and other structures in the images. Our algorithm preserves the``good boundaries" that represent the anatomic boundaries while suppressing boundaries that are not real. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{2.png}} \caption[]{Results on the phantom with a needle inserted in it. The left two columns are the two input images (phantom images were acquired orthogonally within plane, where the imaging direction of the first and second column are from left to right and from top to bottom respectively). The right four columns from left to right are results from average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm. On the phantom examples it is clear that our method best preserves bright and dark anatomy while suppressing artifacts.} \label{comp2} \end{figure*} Besides, we also test our algorithm on real tissue images. The comparison between our algorithm and other existing algorithms on the lamb heart is shown in Fig.~\ref{lamb}, where only maximum \cite{lasso2014plus} and our algorithm preserve the contrast at the red arrows, whereas the results by other algorithms are darker in that patch. However maximum \cite{lasso2014plus} fails to preserve the contrast at the blue arrows, while our method keeps the contrast at both red and blue arrows. It also shows that even on highly noisy data, our algorithm also has decent performance. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{3c.png}} \caption[]{Results on the lamb heart ultrasound images. From left to right: two input images, compounded results (only overlapped regions are shown) by average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm. Maximum\cite{lasso2014plus} and ours the only methods that are able to preserve the bright boundaries at the red arrows, but the maximum is not able to preserve the contrast at the blue arrows like ours does.} \label{lamb} \end{figure*} We further demonstrate that our method is able to handle images from more than two viewpoints, i.e. situation where $|M|>2$. In this experiment, we utilize the live-pig data and instead of using the structural confidence, we use a simple contrast maximization (equivalent to the case when all structural confidence at corresponding pixels are equal), due to the difficulty in getting the reference image for the structural confidence. The result is shown in Fig.~\ref{pig}, where the change in probe position between the first two images only consists of translation, and when moving the probe to the third location it also involves rotation. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{6.png}} \caption[]{The compounding result of three live-pig images. The left three images are the input images while the right image is the result. In the result, the vessel and the structure on the right is successfully preserved while the shadows cast by the vessel become less significant.} \label{pig} \end{figure*} We also would like to show how each component of our algorithm contribute to the final output. Our proposed algorithm mainly consists of three parts: (1) structural confidence based artifact-free contrast maximization, (2) intensity confidence based weighted averaging, (3) edge enhancement. Shown in Fig.~\ref{aba}, structural confidence based artifact-free contrast maximization (1) removes the reverberation artifacts and shadows decently well, and preserves some contrast in the images, but some parts of the vessel boundaries are removed as well and creates some unnatural holes in the image. Intensity confidence based weighted averaging (2) preserves the vessel boundaries but not as bright as before, and it also removes the reverberation artifacts but also not as good as structural confidence based artifact-free contrast maximization (1). Edge enhancement (3) clearly enhances the boundaries but at the same time slightly enhances a small portion of the reverberation artifacts (yellow arrow) as well. Generally, the final output ((1)(2)(3)) leverages the different components of the image, having less reverberation artifacts than the result by using only (2) and (3) (red arrow), while having no irregular holes like the result by only (1) and (3) (blue arrow). Depending on different application, we can adjust the weights $\phi(k)$ and how we utilize the detected good boundaries, to compound the images in the way we want. \begin{figure}[h] \centering {\includegraphics[width=\textwidth]{aba.png}} \caption[]{Results showing the effect of different parts of the algorithm. The left column consists of the original images with arrows indicating the imaging direction. The right 6 images are compounding results where the numbers above or under the images indicate which part(s) of the algorithm is (are) used to constructed the compounded images. Note that the correspondence of the numbers are (1) structural confidence based artifact-free contrast maximization, (2) intensity confidence based weighted averaging, (3) edge enhancement.} \label{aba} \end{figure} \subsection{Quantitative Evaluation} We continue to compare our results with average (avg) \cite{trobaugh1994three}, maximum (max) \cite{lasso2014plus}, and uncertainty-based fusion (UBF) \cite{zu2014orientation}, as well as the original images. The challenges to evaluate the results are (1) there are no ground truth images that show what the compounded images should look like, (2) our algorithm is designed to maximize the contrast near boundaries and suppress the artifacts, so the exact pixel values do not matter, so manually labeled binary masks where anatomic boundaries are 1 and other pixels are 0 would not work as some naive ground truths. Besides, since the majority of the pixels would be 0 in those naively labeled images, the peak signal to noise ratio (PSNR) with such images as ground truth would be a lot larger if the images are dark compared with images with larger visual contrast. To show that our method generates images with better quality, we propose to use our variance-based metric. We separately evaluate image patches containing artifacts, which should have low contrast; and patches containing boundaries, which should have high contrast. For the patches with artifacts, we evaluate the algorithms based on the ratio between the variance of the patch and the variance of the whole image (denoted as variance ratio), as well as the ratio between the mean of the patch and the mean of the whole image (denoted as mean ratio). The patches with the artifacts should have lower variance and a similar mean compared with the whole image, since artifacts are supposed to be suppressed. As for patches with real boundary signals, we only care about the contrast, so our metric is the variance ratio. We want the variance in the patches with boundary signals to be much larger than the variance of the whole image. We compute the average mean ratio (AMR), average variance ratio (AVR) on 27 signal patches and 23 artifact patches. These patches are cropped from the same position in every image to keep the comparison fair, and examples of the patches are shown in Fig.~\ref{ex}, where the green boxes are the anatomic boundary signal patches and the red boxes are the artifact patches. The results are listed in Table ~\ref{tab1}. Our method outperforms other algorithms in suppressing artifacts. As for real boundary signals, our method appears superior to all the other methods. \begin{figure}[h] \centering {\includegraphics[width=\textwidth]{5_.png}} \caption[]{An example of boundary signal and artifact patches. From left to right are two original images that are orthogonal, followed by results from average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm. The green and red boxes are examples of boundary and artifact patches. } \label{ex} \end{figure} \begin{table}[h] \centering \caption{Evaluation by Mean and Variance} \label{tab1} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline {} & {} &view1 &view2&avg & max &UBF & ours\\ \hline {artifacts} & \begin{tabular}{c} AMR\\ AVR\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.434\\ 0.109\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.996\\ 0.224\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.757\\ 0.204\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.433\\ 0.234\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.277\\ 0.134\\ \end{tabular} & \begin{tabular}{@{}l@{}} \bfseries 1.206\\ \bfseries 0.048\\ \end{tabular}\\ \hline {boundaries} & \begin{tabular}{c} AVR\\ \end{tabular} & \begin{tabular}{@{}l@{}} 3.609 \end{tabular} & \begin{tabular}{@{}l@{}} 2.172 \end{tabular} & \begin{tabular}{@{}l@{}} 3.007 \end{tabular} & \begin{tabular}{@{}l@{}} 3.612 \end{tabular} & \begin{tabular}{@{}l@{}} 1.696 \end{tabular} & \begin{tabular}{@{}l@{}} \bfseries 3.876 \end{tabular}\\ \hline \end{tabular} \end{table} Additionally, we also compare our results with the previous ones by performing vessel segmentation on the compounded images. We manually selected 6 patches containing vessels with annotated the vessel boundaries as ground truth. We perform a naive segmentation as the following. We first apply Otsu thresholding \cite{otsu1979threshold} to each compounded patch which contains blood vessels to automatically separate the vessel boundaries from the background. We then fit an ellipse to the separated boundary points by the first step to segment the vessel. Table~\ref{tab2} shows dice coefficients \cite{zou2004statistical} comparing each method against ground truth where ours have the best performance. Fig.~\ref{seg} shows an example of the segmentation result. This simple adaptive segmentation performs the best on our compounding results. Since Otsu thresholding is purely pixel intensity based thresholding without considering other information, it is somewhat sensitive to the intensity of noise in the image. Therefore, better segmentation results show that our method is better than the prior algorithms at preserving the vessel walls while suppressing noise and artifacts. \begin{table}[h] \centering \caption{Evaluation by Segmentation}\label{tab2} \begin{tabular}{|c|c|c|c|c|c|c|} \hline {} &view1 &view2&avg & max &UBF & ours\\ \hline {Dice Coefficient} &0.654&0.575& {0.673} & {0.719} &{0.594} &{\bfseries 0.737}\\ \hline \end{tabular} \end{table} \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{seg.png}} \caption[]{The result of vessel segmentation. First two images: two original images (the algorithm is not able to fit an ellipse on the second image). Following four images: results by average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm. The last image: Segmentation results overlaying on the compounded image synthesized by our algorithm. It can be seen that the segmentation on the first original image is flatter because of the missing top and down boundaries, while the segmentation on the result by maximum is affected by the reverberation artifact at the top. Other segmentation results are clearly off, while the segmentation algorithm fits the vessel boundaries very well on our result. } \label{seg} \end{figure*} \section{Conclusion} In this work, we present a new ultrasound compounding method based on ultrasound per-pixel confidence, contrast, and both Gaussian and Laplacian pyramids, taking into account the direction of ultrasound propagation. Our approach appears better at preserving contrast at anatomic boundaries while suppressing artifacts than any of the other compounding approaches we tested. Our method is especially useful in compounding problems where the images are severely corrupted by noise or artifacts and there is substantial information contained in the dark regions in the images. We hope our method could become a benchmark for ultrasound compounding and inspire others to build upon our work. Potential future work includes 3D volume reconstruction, needle tracking and segmentation, artifact identification and removal, etc. \begin{acknowledgements} We thank our collaborators at the University of Pittsburgh, Triton Microsystems, Inc., Sonivate Medical, URSUS Medical LLC, and Accipiter Systems, Inc. We also thank Evan Harber, Nico Zevallos, Abhimanyu, Wanwen Chen and Prateek Gaddigoudar from Carnegie Mellon University for gathering the data and reviewing the paper. This is a post-peer-review, pre-copyedit version of an article published in [insert journal title]. The final authenticated version is available online at: https://doi.org/10.1007/s11548-021-02464-4. \end{acknowledgements} \section*{Declaration} \textbf{Funding} This work was sponsored in part by a PITA grant from the state of Pennsylvania DCED C000072473, and by US Army Medical contracts W81XWH-19-C0083, W81XWH-19-C0101, W81XWH-19-C-002.\\ \textbf{Conflict of interest} Galeotti serves on the advisory board for Activ Surgical, Inc., and he is a Founder and Director for Elio AI, Inc.\\ \textbf{Ethical approval} All applicable international, national, and/or institutional guidelines for the care and use of animals were followed.\\ \textbf{Informed consent} Informed consent was obtained from all individual participants included in the study. \bibliographystyle{spmpsci} \section{Introduction} \label{intro} Even though ultrasound sonography is a low-cost, safe, and fast imaging technique that has been widely used around the globe in clinical diagnosis, surgical monitoring, medical robots etc., there are still some major drawbacks in ultrasound imaging. Due to the nature of how ultrasound images are captured, it can be hard to see the structures that are deep or underneath some highly reflective surfaces \cite{jensen1999linear}. Certain tissues or structures would bounce back or absorb the sound waves, resulting in dark regions underneath. Such tissues and structures can sometimes produce alterations in ultrasound images which do not represent the actual contents , i.e. artifacts \cite{kremkau1986artifacts}. Moreover, the directionality of ultrasound imaging can make some (parts of) structures difficult to image from certain directions, which may prevent ultrasound images from conveying a complete description of what is going on inside the patient's body. In addition, the directionality may also create confusion for clinicians or medical robots performing downstream tasks. For example, a bullet inside a patient's body would create significant reverberation artifacts that occlude what is underneath. Additionally, when a medical robot inserts a needle into a patient, the reverberation artifacts created by the needle might make the needle tracking algorithm fail or disrupt the identification of the structures of interest \cite{reusz2014needle}. Even though some artifacts have diagnostic significance, which could help clinicians localize certain structures or lesions inside patients' bodies \cite{ahuja1996clinical,baad2017clinical}, the artifacts become less meaningful once the objects of interest are identified. Furthermore, if we preserve the artifacts from different viewpoints, then they could substantially occlude real tissues and the image would be harder to interpret. When there are multiple viewpoints available in ultrasound imaging, we can reconstruct an ultrasound image that represents the underlying structures better while having fewer artifacts. However, no existing method can do the job perfectly. Relatively simple methods such as averaging the overlapping pixel values from different viewpoints \cite{trobaugh1994three} or taking the maximum of such pixels \cite{lasso2014plus} result in lower dynamic range or additional artifacts in the output images. Other more advanced ultrasound compounding algorithms \cite{gobl2018redefining,hennersperger2015computational} reconstruct the 3D volume of ultrasound using a tensor representation, but both of them still combine the overlapping pixels by simply averaging them or taking the maximum. The method proposed by zu Berge et al. \cite{zu2014orientation} utilizes the per-pixel confidence map proposed by Karamalis et al. \cite{karamalis2012ultrasound} as weights in compounding. While this method does not directly take the average or maximum, it does not take the contrast of the image into account. In the survey paper by Mozafari et al. \cite{mozaffari2017freehand}, a large number of 3D compounding methods are covered, but all of the methods deal with the overlapping pixels from different views by taking the average or maximum. Since both bright and dark regions contain useful information in ultrasound images, maximizing the contrast in those regions while lowering the intensity of noise and artifacts in other regions is essential in compounding. In all cases, every existing compounding algorithm tends to introduce new artifacts into the volumes and lower the dynamic range. These algorithms can only preserve either dark or bright regions, but in some clinical settings or in computer vision algorithms to guide downstream medical robots, \textit{both dark and bright regions are useful}. We do not want to naively take the maximum or average when dealing with overlapping pixels from different views, since doing so would lower the contrast or create new artifacts. Our goal is to clearly differentiate all structures, whether dark or bright, while suppressing artifacts and speckle noise to help with downstream computer vision tasks such as vessel segmentation. To better reconstruct the vessels and detect the bones, unlike prior work, we are less concerned with recovering the most ``accurate" individual pixel values but more concerned with enhancing the images by maximizing the contrast. We focus on preserving patches with the largest contrast, suppressing less-certain high frequency information to prevent piecemeal-stitching artifacts and reduce existing artifacts. Our most important contributions are: (1) Use more advanced methods when compounding overlapping pixels between different views instead of directly taking the average or maximum. (2) Keep the pixels and structures with the higher confidence when compounding. (3) Preserve the pixels or patches that have the largest local contrast among the overlapping values from different viewpoints. (4) Identify anatomic boundaries of structures and tissues and treat them differently while compounding. (5) Use Laplacian pyramid blending \cite{burt1983multiresolution} to remove discrepancy in pixel values from ultrasound images captured in different view points. (6) Make use of the advantages of different compounding methods in different frequency scales. \section{Related Work} As for freehand ultrasound compounding, in 1997, Rohling et al. \cite{rohling1997three} proposed to compound the freehand ultrasound images in the same plane iteratively, using an approach that is based on averaging. Later, the same group used interpolation to reconstruct 3D volumes of non-co-planar freehand ultrasound and still averaged the overlapping pixels \cite{rohling1999comparison}. As mentioned by Rohling et al. \cite{rohling19993d} and Mozaffari et al. \cite{mozaffari2017freehand}, the most common method in freehand compounding is to use interpolation to calculate the missing pixels and use averaging to calculate the overlapping pixels while this might not be the best approach. Grau et al. \cite{grau2005adaptive} came up with a compounding method based on phase information in 2005. Although this method is useful, access to Radio Frequency (RF) data is limited, preventing the algorithms from being widely adopted. Around the same time, Behar et al. \cite{behar2006statistical} showed that averaging the different view worked well if the transducer were set up in a certain way by simulation, but in practice, it would be extremely hard to set up the imaging settings that way. In recent years, Karamalis et al. \cite{karamalis2012ultrasound} came forward with a way to calculate physics-inspired confidence values for each ultrasound pixel using a graph representation and random walk \cite{grady2006random}, which Zu Berge et al. \cite{zu2014orientation} used as weights in a weighted average algorithm to compound ultrasound images from different viewpoints. Afterwards, Hung et al. \cite{hung2020ultrasound} proposed a new way to measure the per-pixel confidence based on directed acyclic graphs that can improve the compounding results. Hennersperger et al. \cite{hennersperger2015computational} and Göbl et al. \cite{gobl2018redefining} modeled the 3D reconstruction of ultrasound images based on more complete tensor representations, where they modeled the ultrasound imaging as a sound field. While these two recent papers made great advances in reconstructing ultrasound 3D volumes, they still compound overlapping pixels by averaging or taking the maximum. A review of freehand ultrasound compounding by Mozaffari et al. \cite{mozaffari2017freehand} summarized compounding methods using 2D and 3D transducers. However, few papers talked about how they deal with overlapping pixels, which is what our work mainly focuses on. In the case of robot control instead of freehand ultrasound, Virga et al. \cite{virga2018use} modeled the interpolation/inpainting problem as partial differential equations and solved them with a graph-based method purposed by Hennersperger et al. \cite{hennersperger2014quadratic,virga2018use}. They also did image compounding based on the tensor method by Hennersperger et al. \cite{hennersperger2015computational}. Although ultrasound artifacts have barely been directly considered in previous compounding approaches, it has been widely discussed in literature. Reverberation artifacts and shadowing are useful in diagnosis because those artifacts can help clinicians identify highly reflective surfaces and tissues with attenuation coefficients significantly different from normal tissues \cite{hindi2013artifacts}. Reverberation artifacts are most useful in identifying anomalies in lungs \cite{baad2017clinical,soldati2019role}, while it can also be used in thyroid imaging \cite{ahuja1996clinical}. Shadowing could be used in measuring the width of kidneys \cite{dunmire2016use}. However, artifacts and noise could occlude the view of other objects of interests \cite{mohebali2015acoustic} or hurt the performance of other tasks, such as registration \cite{roche2001rigid}, needle tracking \cite{reusz2014needle}, or segmentation \cite{xu2012ultrasound}. In recent years, several learning-based methods have been focusing on identifying artifacts and shadows and using this information to identify other objects \cite{hung2020weakly,meng2019weakly}, but they all need substantial labeling work and a relatively large dataset. Non-learning-based methods to remove the artifacts, either use RF data \cite{tay2011wavelet} or temporal data \cite{win2010identification} or fill in the artifact regions based on neighboring image content within the same image \cite{tay2006transform}. All of these methods make assumptions about what the missing data probably looks like, whereas our approach utilizes multi-view compounding to obtain actual replacement data for the artifact pixel locations. \section{Methods} \subsection{Identifying Good Boundaries} \label{good} Any sort of averaging between different views in which an object appears either too bright or dark in one view will lower the compounded object's contrast with respect to surrounding pixels. Even though artifacts could be suppressed, the useful structures would also be less differentiated, which is not the optimal approach. Therefore, identifying good anatomic boundaries, and treating them differently than other pixels in compounding, is essential to preserving the dynamic range and contrast of the image. Ultrasound transmits sound waves in the axial (e.g. vertical) direction, so sound waves are more likely to be bounced back by horizontal surfaces. Horizontal edges are also more likely to be artifacts, in particular reverberation artifacts \cite{quien2018ultrasound}. The trait of reverberation artifacts is that the true object is at the top with the brightest appearance compared to the lines beneath which are artificial. The distance between the detected edges of reverberation artifacts are usually shorter than other structures. Also, structures in ultrasound images are usually not a single line of pixels: they usually have thickness. Though reverberation artifact segmentation algorithms like \cite{hung2020weakly} could work well in identifying the bad boundaries, labeling images is a very time-consuming task. Besides, the exact contour of the structures in ultrasound images are ambiguous, which can be hard and time-consuming to label as well, so directly using manual labels would be less efficient and it might introduce new artifacts into the images. Therefore, We propose to refine the detected edges based on the appearances of reverberation artifacts. First we detect the horizontal boundaries through edge detection algorithms. To detect the actual structures in the ultrasound images instead of the edge of the structure, we calculate the gradient at pixel $(x,y)$ by taking the maximum difference between the current pixel and $\alpha$ pixels beneath, \begin{equation} \frac{\partial I(x,y)}{\partial y}=\max_{j=1,2,..,\alpha}|I(x,y)-I(x,y+j)| \end{equation} where in this paper, we set $\alpha$ to 15. We then group the pixels that are connected into clusters, such that pixels belonging to the same boundary are in the same cluster. We remove the clusters containing fewer than 50 pixels. After that, we only keep the clusters that do not have a cluster of pixels above itself in $\beta$ pixels. In this paper, $\beta=20$. A refinement is performed by iterating through the kept clusters and comparing the pixel values against that of the original image. A stack $s$ is maintained, and the pixels in the kept clusters with values greater than $threshold1$ are pushed into it. We pop the pixel $(x,y)$ at the top of the stack and examine the pixels in its 8-neighborhood $(x_n,y_n)$. If $(x_n,y_n)$ has never been examined before and satisfies $I(x_n,y_n)> threshold1$ and at the same time the gradient value is less than $threshold2$, i.e. $|I(x_n,y_n)-I(x,y)|<threshold2$, then we push $(x_n,y_n)$ into the stack $s$. We repeat this procedure until $s$ is empty. We add this step because the boundary detection might not be accurate enough and we can ignore detected boundaries with low pixel values to suppress false positives. In this paper, $threshold1$ and $threshold2$ are set to $30$ and $2$ respectively. The pseudocode for the described algorithm is shown in Algorithm~\ref{Algo_B}. We note that we assigned the values to the parameters based on empirical results. \begin{algorithm}[ht] \KwData{input image $I$} \KwResult{output boundary mask $B$} $edges=clustering(denoising(\frac{dI}{dy}))$;\\ $edges=cleanup(edges)$;\\ $mask=zeros(I.shape)$; $B=zeros(I.shape)$;\\ \For{$edge$ in $edges$}{ \uIf{$\forall e$ in $edges$ that are far enough from edges underneath}{ $mask[edge]=1$; } } \For{$[i,j]$ where $mask[i,j]==1$}{ \uIf{$B[i,j]==0$}{ stack $s$; \# initialize stack\\ \uIf{$I[i,j]>threshold1$}{ $s.push([i,j])$; $B[i,j]=1$; } \While{$s$ is not empty}{ $[x,y]=s.pop$;\\ \For{$[ii,jj]$ in the neighborhood of $[x,y]$}{ \uIf{$I[ii,jj]>threshold1$ and $\mid I[x,y]-I[ii,jj] \mid <threshold2$ and $B[ii,jj]==0$}{ $s.push([ii,jj])$; $B[ii,jj]=1$; } } } } } \caption{Horizontal-edge refinement} \label{Algo_B} \end{algorithm} \subsection{Compounding Algorithm} Attenuation reduces ultrasound image contrast in deeper regions. Simply taking the maximum, median or mean while compounding \cite{lasso2014plus} further undermines the contrast information, where structure information is stored. Taking the maximum also would create artifacts by emphasizing non-existent structures resulting from speckle noise in uncertain regions. Although uncertainty-based compounding approach by \cite{zu2014orientation} suppresses the artifacts and noise to some extent, it produces substantially darker images than the originals and lowers the dynamic ranges. Also, taking the maximum retains the bright regions, but some dark regions are also meaningful, so it would make more sense to preserve the patches with the largest local contrast than to simply select the pixels with maximum values. However, directly taking pixels with the largest contrast would lead to neighboring pixels inconsistently alternating between different source images. Besides, the neighbors of a pixel might all be noise, resulting in instability of the algorithm. Taking the maximum contrast might also emphasize the artifacts. We developed a novel Laplacian-pyramid \cite{burt1983multiresolution} approach to compound the images at different frequency bands and different scales. In this way, we can apply contrast maximization method at certain frequency bands while reconstructing from the pyramid. However, the pixels at extremely large scale in the pyramid represents a patch containing a huge number of pixels in the lower layers, so the contrast in this layer has less anatomical meaning. On the other hand, when the scale is small, the noise in the image would create large local contrast, so maximum weighted contrast might introduce new artifacts into the image. At extremely low and high scales, we thus consider contrast to be less important than intensity confidence measures. Another flaw of directly maximizing the contrast is that the large contrast region might contain artifacts and shadows, so we only maximize the contrast when the overlapping pixels have similar structural confidence values \cite{hung2020ultrasound}, otherwise we use the pixel with the larger structural confidence value in the compounded image, as low structural confidence value indicates that the pixel belongs to artifacts or shadows. Although some anatomic structures would be removed due to the low confidence values, artifacts and noises would also be removed in the compounded image. The anatomic structures are later compensated for in the later stage of the algorithm. Our novel ultrasound compounding method takes ultrasound images from multiple viewpoints and calculates their intensity and structural confidence maps \cite{hung2020ultrasound}, then calculates Laplacian and Gaussian \cite{toet1989image} pyramids of the original images and the Gaussian pyramid of confidence maps. Denote $L_{m,n}$ $GI_{m,n}$ as the n\textsuperscript{th} layer of the Laplacian pyramid and Gaussian pyramid of the m\textsuperscript{th} co-planar ultrasound image respectively, $GC_{m,n}$ $G\Gamma_{m,n}$ as the n\textsuperscript{th} layer of the Gaussian pyramid of the intensity and structural confidence map of m\textsuperscript{th} co-planar ultrasound image respectively, and, $L_k$ as the k\textsuperscript{th} layer of the Laplacian pyramid of the synthetic image. $M$ is the set of viewpoints, with $|M|$ views. Also denote $N(i,j)$ the 8-connected neighborhood of pixel $(i,j)$. Here we combine the weighted maximum contrast and weighted average together. For the k\textsuperscript{th} layer of the pyramid, if the difference across viewpoints between the maximum and minimum structural confidence values $G\Gamma_{m,k}(i,j)$, where $m\in M$, is less than a certain threshold $\gamma$ ($\gamma=0.05$ in this paper), we take the pixel $(i,j)$ with the largest contrast at this scale, since only when there is no artifact at the pixel does taking the largest contrast make sense \begin{equation} \widetilde{m}(i,j) = \argmax_{m \in M} \sum_{(a,b)\in N(i,j)} { \mid GI_{m,k}(a,b)-GI_{m,k}(i,j)\mid} \label{eq} \end{equation} If not, we take the pixel $(i,j)$ with the largest structural confidence at this scale \begin{equation} \widetilde{m}(i,j) = \argmax_{m \in M} G\Gamma_{m,k}(i,j) \end{equation} Denote the intensity-confidence weighted average at the k\textsuperscript{th} layer of the Laplacian pyramid as $La_k$, \begin{equation} La_k(i,j)=\frac{\sum_{m=1}^{\mid M \mid} GC_{m,k}(i,j)L_{m,k}(i,j)}{\sum_{m=1}^{\mid M \mid} GC_{m,k}(i,j)} \end{equation} Then the k\textsuperscript{th} layer of the Laplacian pyramid of the synthetic image can be calculated as, \begin{equation} L_k(i,j)=\phi(k) L_{\widetilde{m}(i,j),k}(i,j)+(1-\phi(k))La_k(i,j) \end{equation} where \begin{equation} \phi(k)=\frac{1}{0.4\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{(2k-K-1)^2}{0.16(K-1)^2})} \end{equation}is a weight function, and K is the number of total layers. This weight function is designed to assign lower weights to contrast maximization and higher weights to intensity-confidence-weighted average in extremely low and high scale. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{pipeline.png}\label{fig:subfig-1}} \caption[]{Compounding with Laplacian and Gaussian pyramid. The compounding is performed in each layer of the pyramid with the confidence map (intensity confidence or structural confidence) used as some sort of weights. The compounding results are reconstructed from the pyramid of the compounded image.} \label{subfig-1} \end{figure*} The compounding algorithm could be further generalized: \begin{equation} L_k(i,j)=\sum_{n=1}^N\phi_n(k)F_n(\{L_{m,k}\}_{m \leq \mid M \mid},\{G_{m,k}\}_{m \leq \mid M \mid}) \end{equation} where \begin{equation} \sum_{n=1}^N\phi_n(k)=1, 0< k \leq K, 0< n \leq N, 0 \leq \phi_n(k) \leq 1 \end{equation} $K$ is the total number of layers, $N$ is the total number of compounding methods, $p$ is the total number of viewpoints, $G_{m,k}$ denote any kind of confidence map at layer $k$ from viewpoint $m$, and $F_n$ denote a compounding method. We can use any weighting scheme to combine any number of compounding schemes in the Laplacian pyramid based on the application and data. The algorithm still takes some sort of confidence-based weighted averaging in some layers of the pyramid. During artifact-free contrast maximization, some anatomic boundaries would be removed incorrectly due to lower structural confidence. Therefore, even though this approach works well in preserving contrast and suppressing artifacts, the actual boundaries of structures still tend to get darker. In addition to what we just proposed above, the algorithm we purposed back in section~\ref{good} can also be incorporated. While reconstructing the image from the new Laplacian pyramid after getting the image from the third layer, the good boundaries are detected and values from the original images are taken. For overlapping pixels here, we take the maximum. We apply the same notation as above, and $GB_{m,k}$ is layer $k$ from viewpoint $m$ of the Gaussian pyramid of the boundaries mask $B$ (Gaussian pyramid of algorithm~\ref{Algo_B}'s output). \begin{equation} L_3(i,j)=\max(\frac{\sum_{m=1}^{\mid M \mid} {GB_{m,3}(i,j)GI_{m,3}(i,j)}} {\sum_{m=1}^{\mid M \mid} {GB_{m,3}(i,j)}},L_3(i,j)) \end{equation} This step is done on the third layer of the pyramid since there are still two layers before the final output, so piecemeal-stitching artifacts can still be suppressed. The step isn't done in deeper layers, so that we can still preserve contrast. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{pipline4.png}\label{fig:subfig0}} \caption[]{Pipeline for combining two individual compounding methods and boundaries enhancement. We combine the results from different methods by different weights in each layer of the pyramid. The anatomic boundaries are enhanced at the third layer so that the enhancement does not introduce new artifacts. } \label{subfig0} \end{figure*} \section{Experiments} \subsection{Data Acquisition} The data used in these experiments were gathered from three different sources: a Advanced Medical Technologies anthropomorphic Blue Phantom (blue-gel phantom), an ex-vivo lamb heart, as well as a live pig. For our initial blue-gel phantom experiments, a UF-760AG Fukuda Denshi diagnostic ultrasound imaging equipment with a linear transducer (51 mm scanning width) set to 12 MHz, a scanning depth of 3 cm and a gain of 21 db is used to scan the surface of the phantom. A needle is rigidly inserted and embedded within the phantom. When scanning the surface, images from two orthogonal viewpoints are collected. As the phantom square it is easy to ensure co-planar orthogonal views with using free-hand imaging without any tracking equipment. The experiment setup is shown in Fig.~\ref{setup1}. For the ex-vivo experiment, a lamb heart is placed within a water bath in order to insure good acoustic coupling. Using a Diasus High Frequency Ultrasound machine, a 10-22 MHz transducer is rigidly mounted onto a 6 degrees of freedom (dof) Universal Robotics UR3e arm. Using a rough calibration to the ultrasound tip, the 6-dof arm is able to ensure co-planar views of the ex-vivo lamb's heart. The experiment setup is shown in Fig.~\ref{setup2}. For the in-vivo experiment, a live pig is used as the imaging subject. A UF-760AG Fukuda Denshi diagnostic ultrasound imaging equipment with a linear transducer (51 mm scanning width) set to 12 MHz, a scanning depth of 5 cm and a gain of 21 db is mounted on the end-effector of the UR3e arm and is placed on the desired location manually to get a good view of the vessel \cite{tracir_setup}. Some manual alignments are needed for the arm to be in proper contact with the pig's skin. This pose of the robot is the zero degree view of the vessel. After this, the rotational controller rotates the probe along the probe's tip by the specified angle. For this experiment we cover a range from 20 degree to -25 degree at an interval of 5 deg. The input to the uR3e robot is sent through a custom GUI that is designed to help the users during surgery. The GUI has relevant buttons for the finer control of the robot in the end-effector frame. The GUI also has a window that displays the ultrasound image in real-time which helps in guiding the ultrasound probe. \begin{figure}[h] \centering \subfloat[][]{\includegraphics[width=0.5\textwidth]{setup1.jpeg}\label{setup1} } \subfloat[][]{\includegraphics[width=0.44\textwidth]{setup2.jpeg}\label{setup2} } \caption{(a) The experiment setup for the blue-gel phantom experiment. The square phantom and the needle is shown in the image. We perform the experiment with free-hand imaging since it is easy to make sure the orthogonal views on a square phantom. (b) The experiment setup for the lamb heart experiment, where the lamb heart is situated in a water bath to ensure acoustic coupling. The imaging is done by a robot-controlled high frequency probe.} \label{setup} \end{figure} \subsection{Qualitative Evaluation} We visually compare the results of our method against average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, and uncertainty-based fusion \cite{zu2014orientation}. As is shown in Fig.~\ref{comp}, our algorithm has the best result in suppressing artifacts, and at the same time, the brightness of the boundaries (green arrows) from our algorithm is similar to that of taking maximum \cite{lasso2014plus}. Our method also preserves a lot more contrast since other parts of the patch are darker in comparison to our bright boundaries, whereas the boundaries from the other two compounding algorithms are darker and therefor less contrasting with the dark interior. Our algorithm also completely suppresses the reverberation artifacts in the regions that the red and yellow arrows point to, while the results from other algorithms all preserve undesirable aspects of artifacts. \begin{figure}[h] \centering {\includegraphics[width=\textwidth]{_1.png}} \caption[]{Compounded patches left to right: average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm, where the green arrows indicate the vessel walls while the red and yellow arrows indicate the artifacts. As is shown in the figure that our result preserves the brightness of the vessel boundaries and suppresses the artifacts at the same time, while other methods fail to do so.} \label{comp} \end{figure} To compare our results against other existing compounding algorithms (average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, and uncertainty-based fusion \cite{zu2014orientation}), we select 5 examples of results on the anthropomorphic phantom, which is shown in Fig.~\ref{comp2}. In the first row, our algorithm almost completely removes the reverberation artifacts in the synthesized image and at the same time preserves the contrast in the images. In other phantom examples, our algorithm is also the best in removing the reverberation artifacts and shadows the vessel walls cast, while preserving the brightness of the vessel walls, needles and other structures in the images. Our algorithm preserves the``good boundaries" that represent the anatomic boundaries while suppressing boundaries that are not real. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{2.png}} \caption[]{Results on the phantom with a needle inserted in it. The left two columns are the two input images (phantom images were acquired orthogonally within plane, where the imaging direction of the first and second column are from left to right and from top to bottom respectively). The right four columns from left to right are results from average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm. On the phantom examples it is clear that our method best preserves bright and dark anatomy while suppressing artifacts.} \label{comp2} \end{figure*} Besides, we also test our algorithm on real tissue images. The comparison between our algorithm and other existing algorithms on the lamb heart is shown in Fig.~\ref{lamb}, where only maximum \cite{lasso2014plus} and our algorithm preserve the contrast at the red arrows, whereas the results by other algorithms are darker in that patch. However maximum \cite{lasso2014plus} fails to preserve the contrast at the blue arrows, while our method keeps the contrast at both red and blue arrows. It also shows that even on highly noisy data, our algorithm also has decent performance. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{3c.png}} \caption[]{Results on the lamb heart ultrasound images. From left to right: two input images, compounded results (only overlapped regions are shown) by average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm. Maximum\cite{lasso2014plus} and ours the only methods that are able to preserve the bright boundaries at the red arrows, but the maximum is not able to preserve the contrast at the blue arrows like ours does.} \label{lamb} \end{figure*} We further demonstrate that our method is able to handle images from more than two viewpoints, i.e. situation where $|M|>2$. In this experiment, we utilize the live-pig data and instead of using the structural confidence, we use a simple contrast maximization (equivalent to the case when all structural confidence at corresponding pixels are equal), due to the difficulty in getting the reference image for the structural confidence. The result is shown in Fig.~\ref{pig}, where the change in probe position between the first two images only consists of translation, and when moving the probe to the third location it also involves rotation. \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{6.png}} \caption[]{The compounding result of three live-pig images. The left three images are the input images while the right image is the result. In the result, the vessel and the structure on the right is successfully preserved while the shadows cast by the vessel become less significant.} \label{pig} \end{figure*} We also would like to show how each component of our algorithm contribute to the final output. Our proposed algorithm mainly consists of three parts: (1) structural confidence based artifact-free contrast maximization, (2) intensity confidence based weighted averaging, (3) edge enhancement. Shown in Fig.~\ref{aba}, structural confidence based artifact-free contrast maximization (1) removes the reverberation artifacts and shadows decently well, and preserves some contrast in the images, but some parts of the vessel boundaries are removed as well and creates some unnatural holes in the image. Intensity confidence based weighted averaging (2) preserves the vessel boundaries but not as bright as before, and it also removes the reverberation artifacts but also not as good as structural confidence based artifact-free contrast maximization (1). Edge enhancement (3) clearly enhances the boundaries but at the same time slightly enhances a small portion of the reverberation artifacts (yellow arrow) as well. Generally, the final output ((1)(2)(3)) leverages the different components of the image, having less reverberation artifacts than the result by using only (2) and (3) (red arrow), while having no irregular holes like the result by only (1) and (3) (blue arrow). Depending on different application, we can adjust the weights $\phi(k)$ and how we utilize the detected good boundaries, to compound the images in the way we want. \begin{figure}[h] \centering {\includegraphics[width=\textwidth]{aba.png}} \caption[]{Results showing the effect of different parts of the algorithm. The left column consists of the original images with arrows indicating the imaging direction. The right 6 images are compounding results where the numbers above or under the images indicate which part(s) of the algorithm is (are) used to constructed the compounded images. Note that the correspondence of the numbers are (1) structural confidence based artifact-free contrast maximization, (2) intensity confidence based weighted averaging, (3) edge enhancement.} \label{aba} \end{figure} \subsection{Quantitative Evaluation} We continue to compare our results with average (avg) \cite{trobaugh1994three}, maximum (max) \cite{lasso2014plus}, and uncertainty-based fusion (UBF) \cite{zu2014orientation}, as well as the original images. The challenges to evaluate the results are (1) there are no ground truth images that show what the compounded images should look like, (2) our algorithm is designed to maximize the contrast near boundaries and suppress the artifacts, so the exact pixel values do not matter, so manually labeled binary masks where anatomic boundaries are 1 and other pixels are 0 would not work as some naive ground truths. Besides, since the majority of the pixels would be 0 in those naively labeled images, the peak signal to noise ratio (PSNR) with such images as ground truth would be a lot larger if the images are dark compared with images with larger visual contrast. To show that our method generates images with better quality, we propose to use our variance-based metric. We separately evaluate image patches containing artifacts, which should have low contrast; and patches containing boundaries, which should have high contrast. For the patches with artifacts, we evaluate the algorithms based on the ratio between the variance of the patch and the variance of the whole image (denoted as variance ratio), as well as the ratio between the mean of the patch and the mean of the whole image (denoted as mean ratio). The patches with the artifacts should have lower variance and a similar mean compared with the whole image, since artifacts are supposed to be suppressed. As for patches with real boundary signals, we only care about the contrast, so our metric is the variance ratio. We want the variance in the patches with boundary signals to be much larger than the variance of the whole image. We compute the average mean ratio (AMR), average variance ratio (AVR) on 27 signal patches and 23 artifact patches. These patches are cropped from the same position in every image to keep the comparison fair, and examples of the patches are shown in Fig.~\ref{ex}, where the green boxes are the anatomic boundary signal patches and the red boxes are the artifact patches. The results are listed in Table ~\ref{tab1}. Our method outperforms other algorithms in suppressing artifacts. As for real boundary signals, our method appears superior to all the other methods. \begin{figure}[h] \centering {\includegraphics[width=\textwidth]{5_.png}} \caption[]{An example of boundary signal and artifact patches. From left to right are two original images that are orthogonal, followed by results from average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm. The green and red boxes are examples of boundary and artifact patches. } \label{ex} \end{figure} \begin{table}[h] \centering \caption{Evaluation by Mean and Variance} \label{tab1} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline {} & {} &view1 &view2&avg & max &UBF & ours\\ \hline {artifacts} & \begin{tabular}{c} AMR\\ AVR\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.434\\ 0.109\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.996\\ 0.224\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.757\\ 0.204\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.433\\ 0.234\\ \end{tabular} & \begin{tabular}{@{}l@{}} 1.277\\ 0.134\\ \end{tabular} & \begin{tabular}{@{}l@{}} \bfseries 1.206\\ \bfseries 0.048\\ \end{tabular}\\ \hline {boundaries} & \begin{tabular}{c} AVR\\ \end{tabular} & \begin{tabular}{@{}l@{}} 3.609 \end{tabular} & \begin{tabular}{@{}l@{}} 2.172 \end{tabular} & \begin{tabular}{@{}l@{}} 3.007 \end{tabular} & \begin{tabular}{@{}l@{}} 3.612 \end{tabular} & \begin{tabular}{@{}l@{}} 1.696 \end{tabular} & \begin{tabular}{@{}l@{}} \bfseries 3.876 \end{tabular}\\ \hline \end{tabular} \end{table} Additionally, we also compare our results with the previous ones by performing vessel segmentation on the compounded images. We manually selected 6 patches containing vessels with annotated the vessel boundaries as ground truth. We perform a naive segmentation as the following. We first apply Otsu thresholding \cite{otsu1979threshold} to each compounded patch which contains blood vessels to automatically separate the vessel boundaries from the background. We then fit an ellipse to the separated boundary points by the first step to segment the vessel. Table~\ref{tab2} shows dice coefficients \cite{zou2004statistical} comparing each method against ground truth where ours have the best performance. Fig.~\ref{seg} shows an example of the segmentation result. This simple adaptive segmentation performs the best on our compounding results. Since Otsu thresholding is purely pixel intensity based thresholding without considering other information, it is somewhat sensitive to the intensity of noise in the image. Therefore, better segmentation results show that our method is better than the prior algorithms at preserving the vessel walls while suppressing noise and artifacts. \begin{table}[h] \centering \caption{Evaluation by Segmentation}\label{tab2} \begin{tabular}{|c|c|c|c|c|c|c|} \hline {} &view1 &view2&avg & max &UBF & ours\\ \hline {Dice Coefficient} &0.654&0.575& {0.673} & {0.719} &{0.594} &{\bfseries 0.737}\\ \hline \end{tabular} \end{table} \begin{figure*}[h] \centering {\includegraphics[width=\textwidth]{seg.png}} \caption[]{The result of vessel segmentation. First two images: two original images (the algorithm is not able to fit an ellipse on the second image). Following four images: results by average \cite{trobaugh1994three}, maximum \cite{lasso2014plus}, uncertainty-based fusion \cite{zu2014orientation}, and our algorithm. The last image: Segmentation results overlaying on the compounded image synthesized by our algorithm. It can be seen that the segmentation on the first original image is flatter because of the missing top and down boundaries, while the segmentation on the result by maximum is affected by the reverberation artifact at the top. Other segmentation results are clearly off, while the segmentation algorithm fits the vessel boundaries very well on our result. } \label{seg} \end{figure*} \section{Conclusion} In this work, we present a new ultrasound compounding method based on ultrasound per-pixel confidence, contrast, and both Gaussian and Laplacian pyramids, taking into account the direction of ultrasound propagation. Our approach appears better at preserving contrast at anatomic boundaries while suppressing artifacts than any of the other compounding approaches we tested. Our method is especially useful in compounding problems where the images are severely corrupted by noise or artifacts and there is substantial information contained in the dark regions in the images. We hope our method could become a benchmark for ultrasound compounding and inspire others to build upon our work. Potential future work includes 3D volume reconstruction, needle tracking and segmentation, artifact identification and removal, etc. \begin{acknowledgements} We thank our collaborators at the University of Pittsburgh, Triton Microsystems, Inc., Sonivate Medical, URSUS Medical LLC, and Accipiter Systems, Inc. We also thank Evan Harber, Nico Zevallos, Abhimanyu, Wanwen Chen and Prateek Gaddigoudar from Carnegie Mellon University for gathering the data and reviewing the paper. This is a post-peer-review, pre-copyedit version of an article published in [insert journal title]. The final authenticated version is available online at: https://doi.org/10.1007/s11548-021-02464-4. \end{acknowledgements} \section*{Declaration} \textbf{Funding} This work was sponsored in part by a PITA grant from the state of Pennsylvania DCED C000072473, and by US Army Medical contracts W81XWH-19-C0083, W81XWH-19-C0101, W81XWH-19-C-002.\\ \textbf{Conflict of interest} Galeotti serves on the advisory board for Activ Surgical, Inc., and he is a Founder and Director for Elio AI, Inc.\\ \textbf{Ethical approval} All applicable international, national, and/or institutional guidelines for the care and use of animals were followed.\\ \textbf{Informed consent} Informed consent was obtained from all individual participants included in the study. \bibliographystyle{spmpsci}
\section{Introduction} Techniques of light microscopy give us a possibility to look inside living cells and have been considered as the standard in living cells observation for decades. However, each of these microscopic techniques suffers from some disadvantages. Unlike transmitted and reflected light microscopy techniques, fluorescence microscopy only allows the observation of specific structures which have been preliminarily labeled for fluorescence. Other limitations of the fluorescence microscopy are connected with photobleaching and enhanced phototoxicity~\cite{Resol}. These aspects are avoided in contrast microscopy, e.g., phase contrast microscopy or differential interference contrast~\cite{Resol}, where, naturally, some intracellular structures are enhanced whereas some of them are suppressed by artificial light interferences. For observation of the biological experiments using all microscopic techniques mentioned above, a grayscale digital camera is sufficient. Classical bright-field transmission microscopy allows the observation of unlabeled living cells and tissues and their internal structures and is of increasing interest due to recent experiments on super-resolution using videoenhancement~\cite{Ryc17} and live cell dynamics~\cite{Ryc17b}. In order to describe the spectral properties of the observed sample, in the proposed microscope arrangement~\cite{Ryc17,Ryc17b}, the object response is captured by a color digital camera chip sensor. As a result of even slight defects of equipment and noises in the optical path, images may suffer from strong distortions which disable an exact and simple data processing and analysis. Therefore, the microscope as any standard equipment should be calibrated in order to achieve more accurate representation of the object for further digital processing or visualization. In the presented article we describe a method for capturing, correction, and visualization of 12-bit raw images of unmodified biological samples obtained using a videoenhanced bright-field wide-field microscope. The image correction is based on the spectral characteristics of light captured by each camera pixel during the calibration process. The possible image visualization utilizes a full 8-bit-per-channel (bpc) range for transformation of 14-bpc corrected images. In this way, we acquired the most informative (in the sense of color and contrast) microscopic images of cells that can be mutually visually compared at the level of the whole image series. The method may be technically improved but, from the conceptual point of view, is complete. \section{Material and Methods} \subsection{High-resolution bright-field wide-field light microscope} \label{microscope} In this paper we present the process of calibration of the optical path and digital camera sensor chip of the high-resolved bright-field wide-field light microscope enabling observation of sub-microscopic objects, further called the nanoscope~\cite{Ryc17,Ryc17b}. This microscope was developed by the Institute of Complex Systems (ICS) of the Faculty of Fisheries and Protection of Waters (FFPW) in collaboration with the Petr Tax - Optax company (Czech Republic). The optical path consisted of two Luminus 360 light emitting diodes charged by the current up to 5000 mA which illuminate by series of light flashes in a gentle mode and enable the videoenhancement~\cite{Irene}. A projective lens magnifies (2$\times$) and projects the image on a JAI camera with a 12-bpc color Kodak KAI-16000 digital camera chip of 4872$\times$3248 resolution. The process of capturing the primary signal is controlled by a custom control software. The optical system of the microscope is facilitated by infrared and ultraviolet filters. This prototype microscope allows to conduct experiments in time-lapse mode (a capturing of the image series at one focus position) and scanning of the interior of the cell along the optical axis in the z-scan mode (capturing in different focus positions). The z-scan can be performed automatically by a programmable mechanics with the step size down to the 100 nm. Fig.~\ref{Fig4} shows an image from a time-lapse experiment on HeLa cells taken at the microscope set-up as follows: camera gain 0, offset 0, and exposure 148 ms; LED current 4500 mA; objective Nikon LWD 20$\times$/0.4, Ph1 ADL, $\infty$/1.2, WD 3.1. Fig.~\ref{Fig6} shows an image from a time-lapse experiment (2685 images) on human fibroblasts taken at the microscope set-up as follows: camera gain 0, offset 300, and exposure 89.2 ms; LED current 4500 mA; objective Nikon LWD 40$\times$/0.55, Ph1 ADL, $\infty$/1.2, WD 2.1. The comparative biological experiments in Fig.~\ref{Fig3} were done using \begin{enumerate} \item a Nikon Eclipse 80i microscope with an objective Nikon Plan Fluor 40$\times$/0.75, Ph2 DLL, $\infty$/0.17 WD 0.66 and a CCD-13QB camera and \item an Olympus IX51 microscope with an objective LUC Plan FLN 40$\times$ 0.60 Ph2, $\infty$/0-2/NF 22 and an Infinity 1 camera. \end{enumerate} \subsection{Microscope system calibration and image correction} \label{calibration} At the same experimental set-up as the biological microscope experiment (Sect. 2.1), the camera calibration and image correction was performed in the following steps: \begin{enumerate} \item Acquisition of the calibration images of gray coatings. \begin{enumerate} \item A Linear Step ND Filter NDL--10S--4 with 10 different opacities coated on a 2.0-mm glass was used to collect the data on camera chip sensor response. Filters 5--10 (counted from the darkest filter) were scanned through the whole depth with the step of 100 nm. In this way, 6 sets of 1309 images each were obtained. The curve L1 in Fig.~\ref{Fig2}b--f corresponds to the filter of the fifth opacity. The image of zero (L0) and the highest (L7) intensity was measured in dark and without any filter, respectively. \item The point information gain entropy (PIE) which gives a total change of the information after iterative removing one pixel of each individual image intensity (chosen R\'{e}nyi parameter $\alpha = 2$)~\cite{Ryc16} was calculated using the Image Info Extractor Professional software (ICS FFPW)~\cite{Ryc16} for each (red, green, blue) camera channel of the gray filter' response images. \item The values PIE allowed to pick an in-focus image taken from the center of the coatings (Fig.~\ref{Fig1}). This reduced the diffraction-induced noise from the bounds of the sample. \end{enumerate} \begin{figure} \centering \includegraphics[width=\textwidth]{Fig1.png} \caption{Dependence of the PIE at $\alpha$ = 2 on the relative position of the objective along the z-axis for the red (R), green (G), and blue (B) channel of a calibration gray filter. The in-focus image selected for the calibration is shown by the arrow.} \label{Fig1} \end{figure} \item Measurement of the light transmittance spectra. \begin{enumerate} \item The microscope objective was replaced for a fibre spectrophotometer Ocean Optics USB 4000 VIS-NIR-ES by which the light transmittance spectra (Fig.~\ref{Fig2}b) of the gray coatings NDL--10S--4 relevant to those in item 1a were measured successively. \end{enumerate} \item Calculation of the photon counts reaching each camera pixel after passing the gray calibration coatings. \begin{enumerate} \item The light spectra reaching each pixel of the camera (Fig.~\ref{Fig2}c--e) were obtained from multiplication of the gray filter's transmittance spectra (item 2) by the respective (red, green, blue) quantum efficiency profile of the Kodak KAI-16000 chip (Fig.~\ref{Fig2}a). \item For each gray NDL--10S--4 filter, the total number of photons (i.e., photon counts in Fig.~\ref{Fig2}f) captured by each pixel was calculated as an integral (trapezoidal rule) of the area below the respective incident spectrum. \end{enumerate} \item Plotting the calibration curves for all pixels and image correction. \begin{enumerate} \item For each pixel of the mean calibration image (see items 1--2), the points of the calibration curve were constructed (e.g., Fig.~\ref{Fig3}) as a dependency of each individual pixel intensity of each individual calibration image (item 1) on the total number of photons reaching this pixel (Fig.~\ref{Fig2}f). Each pair of two consecutive calibration points was fitted by the linear interpolation. \item Using the calibration relation of the relevant section of the calibration curve, the intensity of each pixel of the testing image of the biological experiment (Fig.~\ref{Fig6}) was converted to values of the total number of photons (in double precision numbers). For further image operations, the resulted matrix was transferred into a 14-bpc png format. \end{enumerate} \end{enumerate} The computation part of the microscope system calibration and image correction described in items 3--4 is implemented in the Verca software (ICS FFPW). \begin{figure} \centering \includegraphics[width=\textwidth]{Fig2.png} \caption{a) The declared spectra of the Bayer mask filters for a Kodak KAI-16000 camera chip. b) The light spectra of gray coatings NDL--10S--4 measured by a fiber spectrophotometer. c) The spectra of incoming light captured by the red camera channel. d) The spectra of incoming light captured by the green (G1, G2) camera channel. e) The spectra of incoming light captured by the blue camera channel. f) Integral values for each light level.} \label{Fig2} \end{figure} \begin{figure} \centering \includegraphics[width=.9\textwidth]{Fig3.png} \caption{Examplary calibration curves for the red (R), green (G1, G2), and blue (B) camera channel for image pixels [1, 8], [1, 2428], [1608, 1], and [1608, 2428] of images visualized in Fig.~\ref{Fig6}.} \label{Fig3} \end{figure} \subsection{Least information loss (LIL) image conversion} \label{LIL} The LIL Converter software was developed by the ICS FFPW to visualize $>$8-bpc images more informatively and comparably~\cite{LIL}. This program allows to convert RGB, grayscale, and raw (an appropriate Bayer mask~\cite{bayer} can be selected) images to 8-bpc images. Rescaling over the intensity maximum and minimum can be applied either separately for each color channel or together. If color channels are normalized separately, more information in the image is preserved although information about absolute color is completely lost. In case of multi-image series, the intensities are rescaled between intensity maximum and minimum for the whole series. The empty (unoccupied) levels are always removed in case that they are empty in all channels. Images can be cropped which is useful namely for removing dead pixel rows and columns. The program is optimized to utilize multi-core CPUs. \section{Results and Discussion} The nanoscope described in Sect.~\ref{microscope} was developed mainly for studying unlabeled live and fixed cells to investigate the most native cell structures. It means that any (immuno)fluorescent modifications are not applied to the cells and no additional contributions to the object response influence the detected signals. Investigation of the unmodified samples does not require complicated sample preparation and does not decrease the lifetime of the cells. In exemplary Fig.~\ref{Fig4}, HeLa cells are presented in the bright-field, phase contrast, and a fluorescent mode of a commercial microscope and in videoenhanced high-resolved imaging using the nanoscope at the same total magnification. The staining for fluorescence microscopy (c) visualizes only selected parts of the cell interior and information about the positions, shapes, or behavior of the other organelles is lost. In the phase contrast imaging (a), the cell borders are surrounded by halos of light interference. In the bright-field mode of the commercial microscope (b), the cell structures are hardly observed mainly due to a lower intensity of illumination and a larger size of an object which can be projected on a camera pixel. Moreover, due to the usage of a grayscale camera, the spectral characteristics of the standard bright-field image are hardly analyzable. In contrast, the nanoscope (d) provides primary color images where all cell borders, nucleus, nucleolus, condensing chromosome during mitosis, and some other organelles are nicely visible and the contrast can be further intensify by the method of simultaneous optical path and camera chip calibration applied to the primary, raw, camera signal (Sect.~\ref{calibration}). \begin{sidewaysfigure} \centering \includegraphics[width=.78\textwidth]{Fig4.png} \caption{8-bpc images of HeLa cells pictured by a) a phase contrast microscope Olympus IX51 with a Infinity 1 camera (40$\times$ objective magnification), b) a bright-field microscope Olympus IX51 with a Infinity 1 camera (40$\times$ objective magnification) giving raw image data and c) a fluorescent microscope Nikon Eclipse 80i in its standard arrangement (40$\times$ objective magnification, stained by fluorescein), and d) the nanoscope (20$\times$ objective magnification followed by 2$\times$ projective lens magnification) giving raw image data (without microscope calibration). Panels b and d are visualized by the LIL algorithm.} \label{Fig4} \end{sidewaysfigure} The calibration method (Sect.~\ref{calibration}) allows to obtain more realistic spectral properties of a microobject. Narrow original, uncorrected, image intensity spectra (Fig.~\ref{Fig5}a) resulting from peculiarities of the light source and transformation of light energy into electrical signals in the camera chip show that for further computer processing there is not much information (which can be significant) about color differences between different parts of the image and between essential regions of it and the background. On the contrary, corrected intensity spectra (relevant to the number of photons reaching a camera chip) are much wider and allow computer to use color information more extensively (Fig.~\ref{Fig5}b). Moreover, the calibrated image shows no signs of optical vignetting (which is an important feature mainly for macroscopic camera imaging). \begin{figure} \centering \includegraphics[width=\textwidth]{Fig5.png} \caption{Red, green, and blue intensity histograms. a) Uncorrected raw image, b) corrected raw image, c) uncorrected RGB image after LIL compression of the single image (Fig.~\ref{Fig6}a), d) corrected RGB image after LIL compression through the single image (Fig.~\ref{Fig6}c), e) uncorrected RGB image of the single image (Fig.~\ref{Fig6}b), f) corrected RGB image after LIL compression through the whole image series (Fig.~\ref{Fig6}d).} \label{Fig5} \end{figure} For the correct assessment of experimental results, a correct visualization of the $>$8-bpc images is important as well. For this, we proposed a LIL algorithm (Sect.~\ref{LIL}). Fig.~\ref{Fig6} shows images of human fibroblasts with and without prior calibration converted using the LIL algorithm applied to a single image and a whole time-lapse series. Fig.~\ref{Fig5}c--f further presents intensity histograms relevant to the images in Fig.~\ref{Fig6}. Due to the camera sensor design, a border between a left and right half of the uncalibrated image can be seen which can be crucial for digital processing algorithms. In addition, in the calibrated, corrected, image we suppressed images of artefacts such as grains of dust in the microscope optical path (cf. Fig.~\ref{Fig6}a with c) and intensify color contrast between different cellular structures and the background. All images in Fig.~\ref{Fig6} were obtained by separate normalization of color channels so colors there are not natural. Even though the information is necessarily lost in the transformation from $>$8-bpc into 8-bpc images, the LIL transformation of an individual image preserves the highest proportion of it and the transformation of a whole series preserves most of the information in the series and allows a mutual visual comparison of its images. \begin{sidewaysfigure} \centering \includegraphics[width=.8\textwidth]{Fig6.png} \caption{The LIL 8-bpc visualization of the original 12-bpc uncalibrated (a--b) and 14-bpc calibrated (c--d) image of human fibroblasts captured by the nanoscope (40$\times$ objective magnification). The LIL transformation was performed for a single image (a, c) and through the whole image series (b, d). The yellow and red arrow shows the interface of two chip halves and stains of dirty on the microscope optics, respectively.} \label{Fig6} \end{sidewaysfigure} \section{Conclusions} \label{conclusions} Biological experiments are hardly reproducible and repeatable but also often sensitive to the technical provision. The approach presented in this article allows without any extensive technical treatment to reduce the impact of optical path inhomogeneities and defects of a camera chip on color properties of images in a bright-field microscopy experiment and thus allows to maximize the yield of information obtained in the technically simplest possible experiment. Due to the fact that this experimental technology allows observation of living cells in dynamics and due to the perspectives of the Point Information Gain~\cite{Ryc16} or the Point Divergence Gain~\cite{Ryc18}, in image processing (in particular, in feature and edge detection), the calibration procedure proposed here (if needed, followed by a correct image intensity compression) is promising for more adequate and precise transfer of color information to computer and researcher. The information in the uncalibrated image is obviously dominated by the chip and light path inhomogeneities and for proper evaluation only calibrated images may be used for assessment. This is particularly important for eventual automatic machine analysis which will be unavoidable in case of a large field of sight containing many individual objects that cannot be analyzed manually. The further study will be carried out to utilize the method in observation and comparison of normal and cancerous cells and subcellular structures behavior during mitosis. \subsubsection*{Acknowledgments.} This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic -- projects CENAKVA\linebreak (No. CZ.1.05/2.1.00/01.0024), CENAKVA II (No. LO1205 under the NPU I program), the CENAKVA Centre Development (No. CZ.1.05/2.1.00/19.0380) -- and from the European Regional Development Fund in frame of the project Kompetenzzentrum MechanoBiologie (ATCZ133) in the Interreg V-A Austria--Czech Republic programme. The work was further financed by the project GAJU 017/2016/Z, by the TA CR Gama PoC 02-22-\v{S}tys and 03-24-Rycht\'{a}rikov\'{a} sub-projects of TG03010027.
\section{Introduction} Current deep neural networks require significant quantities of data to train for a new task. When only limited labelled data is available, meta-learning approaches train a network initialisation on other \emph{source} tasks, so it is suitable for fine-tuning to new few-shot \emph{target} tasks~\cite{Finn2017}. Often, training data samples have additional properties, which we collectively refer to as \emph{context}, readily available through metadata. We give as an example the \textit{alphabet} in a few-shot character recognition task (Fig. \ref{fig:split}). This is distinct from multi-label problems as we pursue invariance to the context (i.e. alphabet), so as to generalise to unseen contexts in fine-tuning, rather than predicting its label. In this work, we focus on problems where the target task is not only novel but does not have the same context as tasks seen during training. This is a difficult problem for meta-learners, as they can overfit on context knowledge to generate an initialisation, which affects the suitability for fine-tuning for tasks with novel contexts. Prior works on meta-learning have not sought to exploit context, even when readily available~\cite{Finn2017,Rusu2019,Sun,Antoniou2018,Finn2018,Sun2019,Nichol,Bertinetto2018,Snell2017,Vinyals2016,Ren2018,Requeima2019,Tseng2020}. We propose a meta-learning framework to tackle both task-generalisation and context-agnostic objectives, jointly. As with standard meta-learning, we aim for trained weights that are suitable for few-shot fine-tuning to target. Note that concepts of \textit{context} and \textit{domain} might be incorrectly confused. Domains are typically different datasets with a significant gap, whereas context is one or more distractor signals within one dataset (e.g. font or writer for character classification), and can be either discrete or continuous. \begin{figure}[t] \centering \subfigure[\emph{Character-based} split. \label{fig:split_character}]{\includegraphics[scale=0.19,trim={0 0 30 0}]{images/split_h_1.png}} \subfigure[\emph{Alphabet-based} split. \label{fig:split_alphabet}]{\includegraphics[scale=0.19]{images/split_h_2.png}} \caption{Visualisation of how context (e.g. alphabets, shown as different colours) can contribute to train/target splits. In commonly-used split (a), a classifier could overfit on context with no ill effects. If there is novel context, as in (b), this will prove problematic. In this paper, we show how context-agnostic meta-learning can benefit performance on few-shot target tasks without shared context.} \label{fig:split} \end{figure} \begin{figure}[t] \centering \subfigure[Randomly sample a task from all available training tasks.\label{fig:meta1}]{\includegraphics[scale=0.21]{images/meta_omni1.png}} \hspace{1mm} \subfigure[Two copies are taken of the primary network weights.\label{fig:meta2}]{\includegraphics[scale=0.21]{images/meta_omni2.png}} \hspace{1mm} \subfigure[$k$ rounds of optimisation on the chosen task, without context knowledge, to update $\hat{\phi}$.\label{fig:meta3}]{\includegraphics[scale=0.21,trim={-10 0 0 0}]{images/meta_omni3.png}}\\ \subfigure[$l$ rounds of context-adversarial optimisation, passing the gradients though a gradient reversal layer to update $\bar \phi$.\label{fig:meta4}]{\includegraphics[scale=0.21]{images/meta_omni4.png}} \hspace{1mm} \subfigure[Update primary weights from task-specific and context-agnostic optimisations. \label{fig:meta5}]{\includegraphics[scale=0.21]{images/meta_omni5.png}} \hspace{1mm} \subfigure[After meta-learning, the primary network can be fine-tuned for a new few-shot target task that might not share context with the training set. \label{fig:meta6}]{\includegraphics[scale=0.21, trim={0 0 0 0}]{images/meta_omni6.png}} \caption{{A visualisation of the proposed context-agnostic meta-learning approach through a character classification example (context shown as character colours)} using an alphabet-based split (Fig.~\ref{fig:split_alphabet}). The method is detailed in Algorithm~\ref{alg:meta}, where (a) to (e) corresponds to one outer loop iteration, which is repeated on random training tasks. (f) shows fine-tuning to target.} \label{fig:meta} \end{figure} Figure~\ref{fig:meta} presents an overview of the proposed framework, illustrated on the application of character classification. We assume that both task labels (e.g. character classification) and context labels (e.g. alphabet) are available for the training data. At each iteration of meta-learning, we randomly pick a task~(Fig.~\ref{fig:meta1}), and optimise the model's weights for both task-generalisation~(Fig.~\ref{fig:meta3}) and context-agnosticism~(Fig.~\ref{fig:meta4}) objectives. This is achieved through keeping two copies of the model's weights (Fig.~\ref{fig:meta2}), one for each objective, and then updating the primary weights with a mixture of both results (Fig.~\ref{fig:meta5}). These learnt weights are not only task-generalisable but importantly have been trained in an adversarial manner on context labels. To demonstrate the generality of our framework, and the opportunities in considering context, we show that it is applicable to three commonly used few-shot meta-learning algorithms \cite{Finn2017,Antoniou2018,Nichol}, and {test our context-agnostic meta-learning framework on four diverse problems, showing clear improvements compared to prior work and baselines.} The first problem (Sec~\ref{sec:omniglot}) is Omniglot character classification \cite{Lake2015}. We show that when using an alphabet-based split, our approach improves over non context-aware meta-learning approaches by 4.3\%. The second (Sec~\ref{sec:miniimagenet}) is Mini-ImageNet \cite{Vinyals2016} few-shot classification, where image classification is the task, and broader class group labels are the context. An improvement of 2.8\% is observed when utilising our approach. The third (Sec~\ref{sec:CUB}) is few-shot classification CUB \cite{WahCUB_200_2011}, where the primary colour of each bird (taken from annotations in metadata) is the context. An improvement of 1.9\% is found in this case. The fourth (Sec~\ref{sec:calorie}) is predicting energy expenditure of people performing daily activities from video~\cite{Tao}. For this problem, we consider calorie prediction as the task, and the identities as the context. We show that our approach drops the Mean Square Error (MSE) from 2.0 to 1.4. \section{Related Work} \label{sec:related} \noindent \textbf{Few-shot Learning:} Existing few-shot methods belong to one of three categories: generative approaches \cite{Zhang2018,Dwivedi2019}, embedding-based meta-learners \cite{Snell2017,Vinyals2016,Ren2018} and adaptation-based meta-learners \cite{Finn2017,Rusu2019,Sun,Antoniou2018,Finn2018,Sun2019,Nichol,Bertinetto2018,Requeima2019,Tseng2020}. Adaptation-based meta-learners produce initial models which can be fine-tuned quickly to unseen tasks, using limited labelled data. One widely-used method is Model Agnostic Meta-Learning (MAML)~\cite{Finn2017}, where repeated specialisation on tasks drawn from the training set encourages the ability to adapt to new tasks with little data. Later variations on this approach include promoting training stability~\cite{Antoniou2018} and improving training speed and performance on more realistic problems with deeper architectures~\cite{Nichol}. Some works have learned alternative training curricula~\cite{Sun} or modified the task specialisation~\cite{Rusu2019,Bertinetto2018}. Others have learned alternative fine-tuning mechanisms \cite{Requeima2019,Tseng2020} or pseudo-random labels~\cite{Sun2019} to help with adaptation to unseen tasks. These adaptation-based meta-learners contrast with embedding-based meta-learners, which find a space where the few-shot task can be embedded. A classifier is then constructed in this space, e.g. by comparing distances of target samples to seen source samples \cite{Vinyals2016}. None of the above works have exploited context available from metadata of the training data. Further, they have been evaluated on datasets where additional context knowledge is not available \cite{Oreshkin2018,Dwivedi2019}, where context is shared between the training and target split \cite{Lake2015,Vinyals2016} or combinations of the above \cite{triantafillou2019metadataset,Tseng2020}. We select adaptation-based meta-learning as the most suitable candidate for few-shot tasks with context. This is because there is likely to be insufficient target data for generative approaches, and target samples from a novel context are unlikely to embed well in the space constructed by embedding-based meta-learners. \noindent \textbf{Domain Adaptation/Generalisation:} \hspace{6pt} Different from domains, contexts are additional labels present within the same dataset, can be continuous and one sample could be associated with multiple contexts. However, methods that attempt domain adaptation and generalisation are relevant for achieving context-agnostic learning. Domain adaptation techniques aim to align source and target data. Some works use domain statistics to apply transformations to the feature space~\cite{Busto2017}, minimise alignment errors~\cite{Haeusser2017}, generate synthetic target data \cite{Hoffman2018,Huang2018} or learn from multiple domains concurrently~\cite{Rebuffi2017,Perrett2019,Li2019}. Adversarial domain classifiers have also been used to adapt a single \cite{Ganin2015,Zhang2019,Kang2018} and multiple \cite{Ros2019} source domains to a target domain. The disadvantage of all these approaches is that sufficient target data is required, making them unsuitable for few-shot learning. Domain generalisation works find representations agnostic to the dataset a sample is from. Approaches include regularisation \cite{Balaji2018}, episodic training \cite{Li2019a,Dou2019a} and adversarial learning \cite{Li2018}. In this paper, we build on adversarial training, as in~\cite{Ganin2015,Zhang2019,Kang2018,Ros2019,Li2018} for context-agnostic few-shot learning. \section{Proposed Method} We start Section~\ref{sec:problemFormulation} by formulating the problem, and explaining how it differs from commonly-tackled meta-learning problems. In Section~\ref{sec:methodProposed}, we detail our proposal to introduce context-agnostic training during meta-learning. \subsection{Problem Formulation} \label{sec:problemFormulation} \noindent \textbf{Commonalities to other meta-learning approaches:} The input to our method is labelled training data for a number of tasks, as well as limited (i.e. few-shot) labelled data for target tasks. Adaptation-based meta-learning is distinct from other learning approaches in that the trained model is not directly used for inference. Instead, it is optimised for fine-tuning to a target task. These approaches have two stages: (1) the meta-learning stage - generalisable weights across tasks are learnt, suitable for fine-tuning, and (2) the fine-tuning to target stage - initialisation weights from the meta-learning stage are updated given a limited amount of labelled data from the target task. This fine-tuned model is then used for inference on test data on the target task. Throughout this section, we will focus on stage~(1), i.e. the meta-learning stage, as this is where our contribution lies. \noindent \textbf{Our novelty:} We consider problems where the unseen target task does not share context labels with the training data. We assume each training sample has both a task label and a context label. The context labels are purely auxiliary - they are not the prediction target of the main network. We utilise context labels to achieve context-agnostic meta-learning using tasks drawn from the training set and argue that incorporating context-agnosticism provides better generalisation. This is particularly important when the set of context labels in the training data is small, increasing the potential discrepancy between tasks. \subsection{Context-Agnostic Meta-Learning} \label{sec:methodProposed} {Our contribution is applicable to adaptation-based meta-learning algorithms which are trained in an episodic manner. This means they use an inner update loop to fine-tune the network weights on a single task, and an outer update loop which incorporates changes made by the inner loop into a set of primary network weights \cite{Finn2017,Rusu2019,Antoniou2018,Finn2018,Nichol}. To recap, none of these algorithms exploit context knowledge, and although they differ in the way they specialise to a single task in the inner loop, they all share a common objective:} \begin{equation} \label{eq:obj1} \min_{\phi} \mathbb{E}_{\tau}\left[ L_{\tau} \left( U_{\tau}^k \left( \phi \right) \right) \right] , \end{equation} where $\phi$ are the network weights, $\tau$ is a randomly sampled task and $L_{\tau}$ is the loss for this task. $U_{\tau}$ denotes an update which is applied $k$ times, using data from task $\tau$. {Algorithm~\ref{alg:meta} shows (in black) the core of the method employed by \cite{Finn2017,Antoniou2018,Nichol}, including the inner and outer loop structure common to this class of meta-learning technique. They differ in the way they calculate and backpropogate $\nabla L_{\tau}$ in the inner specialisation loop (where different order gradients are applied, and various other training tricks are used). This step appears in Algorithm \ref{alg:meta} L7-10 and Fig. \ref{fig:meta3}. However, they can all be modified to become context-agnostic in the same way - this is our main contribution (shown in blue in the algorithm), which we discuss next.} To achieve context-agnostic meta-learning, we propose to train a context-adversarial network alongside the task-specialised network. This provides a second objective to our meta-learning. We update the meta-learning objective from Eq.~\ref{eq:obj1} to include this context-adversarial objective, to become \begin{equation} \min_{\phi, \psi} \mathbb{E}_{\tau}\left[ L_{\tau} \left( U_{\tau}^k \left( \phi \right) \right) + \lambda L_C \left( U_C^l \left( \psi, \phi \right) \right) \right] , \label{eq:obj2} \end{equation} where $L_C$ is a context loss, given by an associated context network with weights $\psi$, which acts on the output of the network with weights $\phi$. $U_C \left(\psi, \phi \right)$ is the adversarial update which is performed $l$ times. The relative contribution of $L_C$ is controlled by $\lambda$. Because $L_C$ and $L_{\tau}$ both operate on $\phi$, they are linked and should be optimised jointly. Equation~\ref{eq:obj2} can thus be decomposed into two optimisations: \begin{eqnarray} \label{eq:cond1} \phi\! &= &\!\argmin_{\phi} \left( L_{\tau} \left( U_{\tau}^k \left( \phi \right) \right) - \lambda L_C \left( U_C^l \left(\psi, \phi \right) \right) \right) \\ \label{eq:cond2} \psi\! &= &\! \argmin_{\psi} \left( L_C \left( U_C^l \left(\psi, \phi \right) \right) \right) . \end{eqnarray} \begin{algorithm}[t] \SetAlgoLined{ Initialise primary network with parameters $\phi$.\\ \blue{Initialise adversarial network with parameters $\psi$.}\\ \blue{Link primary and adversarial networks with GRL}\\ \For{Iteration in outer loop}{ Select random task $\tau$.\\ Set $\hat \phi = \phi$ \blue{and $\bar \phi = \phi$}.\\ \For{Iteration in inner specialisation loop}{ Construct batch with samples from task $\tau$.\\ Calculate $L_{\tau}$. \\ Optimise $\hat \phi$ w.r.t. $L_{\tau}$.\\ } \For{\blue{Iteration in inner adversarial loop}}{ \blue{Construct batch with samples from training dataset.}\\ \blue{Add context label noise with probability $\epsilon$.} \\ \blue{Calculate $L_C$.} \\ \blue{Optimise $\psi$ and $\bar \phi$ w.r.t. $L_C$}\\ } Update \blue{$\phi \gets \phi + \alpha (\hat \phi - \phi + \lambda (\bar \phi - \phi))$.} } } \caption{Context-agnostic meta-learning framework. Proposed additions which can be encapsulated by existing adaptation-based meta-learning approaches, such as \cite{Finn2017,Antoniou2018,Nichol}, are in blue. } \label{alg:meta} \end{algorithm} We can observe the adversarial nature of $L_C$ in Eqs. \ref{eq:cond1} and \ref{eq:cond2}, where, {while} $\psi$ attempts to minimise $L_C$, $\phi$ attempts to extract features which are context-agnostic (i.e. maximise $L_C$). To optimise, we proceed with two steps. The first is to update the context predictor $\psi$ using the gradient $ \nabla_{\psi} L_C(\psi, \phi)$. This is {performed} $l$ times, which we write as \begin{equation}\label{eq:grad1} U_C^l \left( \nabla_{\psi} L_C(\psi, \phi) \right). \end{equation} A higher $l$ means the adversarial network trains quicker, when balanced against $k$ to ensure $\psi$ and $\phi$ learn together in an efficient manner. The second step is to update the primary network with weights $\phi$ with the gradient \begin{equation}\label{eq:grad2} \nabla_{\phi} L_{\tau} \left( U_{\tau}^k(\phi) \right) - \lambda \nabla_{\phi} L_C \left( U_C^l (\psi, \phi) \right). \end{equation} The first term corresponds to the contribution of the task-specific inner loop. The method in \cite{Nichol} reduces this quantity to $\left(\phi - U_{\tau}^k(\phi) \right) / \alpha$, where $\alpha$ is the learning rate. $\lambda$~is a weighting factor for the contribution from the adversarial classifier, which can analogously be reduced to $\lambda \left(\phi - U_C^l (\psi, \phi) \right) / \alpha$. It can be incorporated by backpropagating the loss from $\psi$ through a gradient reversal layer (GRL) to~$\phi$. As well as performing Eqs.~\ref{eq:grad1} and \ref{eq:grad2}, we also perform each iteration of the $l$ adversarial updates $U_C$ with respect to $\psi$ and $\phi$ concurrently. In practice, the process above can be simplified by taking two copies of the primary weights at the start of the process as shown in Algorithm~\ref{alg:meta}, which matches the illustration in Fig.~\ref{fig:meta}. At each outer iteration, we first choose a task (Algorithm~\ref{alg:meta} L5) and make two copies of the primary weights $\phi$ (L6): $\hat \phi$ (weights used for the task-specialisation inner loop) and $\bar \phi$ (weights used for the context-adversarial inner loop). The task specialisation loop is then run on~$\hat \phi$~(L7-10). Next, the adversarial loop is run on $\bar \phi$ and $\psi$ (L12-17). The primary weights~$\phi$ are updated using weighted contributions from task-specialisation ($\hat \phi$) and context-generalisation ($\bar \phi$)~(L18). Note that using two separate copies of the weights ensures that the task-specialisation inner loop is as similar as possible to the one fine-tuned for the target task. The optimiser state and weights for the adversarial network with weights $\psi$ are persistent between outer loop iterations so $\psi$ can learn context as training progresses. This contrasts with the optimisers acting on the $\hat \phi$ and $\bar \phi$, which are reset every outer loop iteration for the next randomly selected task to encourage the initialisation to be suitable for fast adaptation to a novel task. Following standard meta-learning approaches, the weight initialisations $\phi$ can be fine-tuned to an unseen target task. After fine-tuning on the few-shot labelled data from target tasks, this updated model can be used for inference on unlabelled data from these target tasks (see Fig. \ref{fig:meta6}). No context labels are required for the target, as the model is trained to be context-agnostic. Our method is thus suitable for fine-tuning to the target task when new context is encountered, as well as when contexts overlap. Next, we explore four problems for evaluation. Recall that our approach assumes both task and context labels are available during training. In all our cases studies, we select datasets where context is available, or can be discovered, from the metadata. \section{Case Study 1: Character Classification}\label{sec:omniglot} \noindent \textbf{Problem Definition.} Our first case study is few-shot image classification benchmark - Omniglot~\cite{Lake2015}. We consider the task as character classification and the context as which alphabet a character is from. We follow the standard setup introduced in \cite{Vinyals2016}, which consists of 1- and 5-shot learning on sets of 5 and 20 characters (5- or 20-way) from 50 alphabets. However, we make one major and important change. Recall, we have suggested that existing meta-learning techniques are not designed to handle context within the training set, or context-discrepancy between training and target. The protocol from \cite{Vinyals2016} uses a \emph{character}-based split, where an alphabet can contribute characters to \textit{both} train and target tasks (Fig. \ref{fig:split_character}). Instead, we eliminate this overlap by ensuring that the characters are from different alphabets, i.e. an \emph{alphabet}-based split (Fig.~\ref{fig:split_alphabet}). \noindent \textbf{Evaluation and Baselines.} {We evaluate the proposed context-agnostic framework using three meta-learners: MAML++ ~\cite{Antoniou2018}, MAML~\cite{Finn2017} and REPTILE~\cite{Nichol}. Note that other adaptation-based meta-learning methods could also be used by substituting in their specific inner-specialisation loops \cite{Rusu2019,Finn2018}. Unmodified versions are used as baselines, and are compared against versions which are modified with our proposed context agnostic (CA) component.} We accordingly refer to our modified algorithms as CA-MAML++, CA-MAML and CA-REPTILE. We report results without transduction, that is batch normalisation statistics are not calculated from the entire target set in advance of individual sample classification. This is more representative of a practical application. As in~\cite{Vinyals2016}, the metric is top-1 character classification accuracy. We run experiments on the full dataset, and also on a reduced number of alphabets. With 5 alphabets, for example, characters from 4 alphabets are used for training, and a few-shot task is chosen from the 5th alphabet only. As the number of alphabets in training decreases, a larger context gap would be expected between training and target. We report averages over 10 random train/target splits, and keep these splits consistent between experiments on the same number of alphabets. \noindent \textbf{Implementation Details.} The widely-used architecture, optimiser and hyperparameters introduced in \cite{Vinyals2016}, are used. We implement the adversarial context predictor in the proposed context-agnostic methods as a single layer which takes the penultimate features layer (256D) as input with a cross-entropy loss applied to the output, predicting the alphabet. Context label randomisation is used in the adversarial classifier, where 20\% of the context labels are changed. This stops the context adversarial loss tending to zero too quickly (similar to label smoothing~\cite{Salimans2016}). {We use $l=3$ (Eq. \ref{eq:obj2}) for all Omniglot experiments. The context-agnostic component increases the training time by 20\% for all methods.} \noindent {\bf Results.} {Table \ref{tab:alphabets_all} shows the results of the proposed framework applied to \cite{Antoniou2018,Finn2017,Nichol} on 5-50 alphabets, using the alphabet-based split shown in Fig.~\ref{fig:split_alphabet}. We report results per method, to show our proposed context-agnostic component improves on average across all methods, tasks and numbers of alphabets. 85\% of individual method/task/alphabet combinations show an improvement, with a further 10\% being comparable (within 1\% accuracy). Overall, the proposed framework gives an average performance increase of 4.3\%. This improvement is most pronounced for smaller numbers of alphabets (e.g. average improvements of $>=$6.2\%, 4.9\% and 4.2\% for 5 and 10 alphabets for \cite{Nichol,Finn2017,Antoniou2018} respectively). This trend is shown in Fig. \ref{fig:diff_ab}, and} supports our earlier hypothesis that the inclusion of a context-agnostic component is most beneficial when the context overlap between the train and target data is smaller. Fig. \ref{fig:diff_task} shows the improvement for each XS YW task, averaged over the number of alphabets. Larger improvements are observed for all methods on the 1-shot versions of 5- and 20-way tasks, with \cite{Nichol} improving the most on 1S 5W and \cite{Finn2017,Antoniou2018} improving the most on 1S 20W. \begin{table}[t] \centering \caption{Character classification accuracy on Omniglot, using an alphabet-based split, with the number of training alphabets varied between 5 and 50. XS YW indicates X-shot fine-tuning at a Y-way classification tasks. Base methods are compared against context-agnostic (CA) versions.} \resizebox{1\textwidth}{!}{% \begin{tabular}{llrrrrr} \toprule & & \multicolumn{5}{c}{Number of Alphabets} \\ \cmidrule(){3-7} Task \hspace{5mm} & Method \hspace{10mm} & \hspace{25pt}5 & \hspace{15pt}10 & \hspace{15pt}15 & \hspace{15pt}20 & \hspace{15pt}50 \\ \midrule \multirow{6}{*}{1S 20W} & MAML++ \cite{Antoniou2018} & 58.7 & 57.2 & 64.7 & \bf{85.6} & 89.6 \\ & CA-MAML++ & \bf{72.3} & \bf{67.6} & \bf{82.4} & 84.8 & \bf{90.9} \\ \cmidrule(){2-7} & MAML \cite{Finn2017} & 61.4 & 78.2 & 81.5 & 83.7 & 87.5 \\ & CA-MAML & \bf{69.8} & \bf{82.8} & \bf{82.1} & \bf{89.8} & \bf{93.8} \\ \cmidrule(){2-7} & REPTILE \cite{Nichol} & 11.9 & 18.1 & 37.6 & 51.6 & 64.9 \\ & CA-REPTILE & \bf{20.7} & \bf{21.8} & \bf{39.5} & \bf{55.5} & \bf{66.5} \\ \midrule \multirow{6}{*}{1S 5W} & MAML++ \cite{Antoniou2018} & 97.4 & 96.2 & \bf{94.9} & 93.4 & 93.7 \\ & CA-MAML++ & \bf{98.1} & \bf{97.1} & 90.1 & \bf{95.8} & \bf{97.1} \\ \cmidrule(){2-7} & MAML \cite{Finn2017} & 86.1 & 87.0 & \bf{96.1} & 94.4 & 90.5 \\ & CA-MAML & \bf{94.5} & \bf{91.3} & 94.7 & \bf{96.0} & \bf{96.2} \\ \cmidrule(){2-7} & REPTILE \cite{Nichol} & 52.2 & 68.8 & 79.4 & 75.5 & 77.5 \\ & CA-REPTILE & \bf{62.2} & \bf{76.9} & \bf{83.4} & \bf{83.2} & \bf{85.5} \\ \bottomrule \end{tabular} \hspace{0.05\linewidth} \begin{tabular}{llrrrrr} \toprule & & \multicolumn{5}{c}{Number of Alphabets} \\ \cmidrule(){3-7} Task \hspace{5mm} & Method \hspace{10mm} & \hspace{25pt}5 & \hspace{15pt}10 & \hspace{15pt}15 & \hspace{15pt}20 & \hspace{15pt}50 \\ \midrule \multirow{6}{*}{5S 20W} & MAML++ \cite{Antoniou2018} & 81.0 & 84.1 & 92.4 & 93.5 & 95.8 \\ & CA-MAML++ & \bf{84.8} & \bf{90.8} & \bf{96.0} & \bf{94.5} & \bf{96.3} \\ \cmidrule(){2-7} & MAML \cite{Finn2017} & 81.7 & 83.8 & 84.0 & 91.2 & \bf{89.0} \\ & CA-MAML & \bf{86.0} & \bf{91.8} & \bf{92.9} & \bf{93.1} & 86.9 \\ \cmidrule(){2-7} & REPTILE \cite{Nichol} & 58.4 & 68.1 & 76.7 & \bf{76.0} & 78.0 \\ & CA-REPTILE & \bf{61.1} & \bf{73.7} & \bf{78.3} & 75.8 & \bf{81.6} \\ \midrule \multirow{6}{*}{5S 5W} & MAML++ \cite{Antoniou2018} & \bf{99.4} & \bf{99.3} & \bf{98.7} & 97.0 & 96.8 \\ & CA-MAML++ & 99.3 & 98.6 & 98.5 & \bf{99.4} & \bf{96.9} \\ \cmidrule(){2-7} & MAML \cite{Finn2017} & 96.6 & 95.8 & 97.2 & 97.9 & 98.9 \\ & CA-MAML & \bf{97.8} & \bf{98.5} & \bf{97.6} & \bf{98.6} & \bf{99.1} \\ \cmidrule(){2-7} & REPTILE \cite{Nichol} & 85.2 & 85.6 & \bf{93.2} & 88.5 & 89.4 \\ & CA-REPTILE & \bf{88.3} & \bf{94.4} & 92.4 & \bf{91.6} & \bf{92.9} \\ \bottomrule \end{tabular} } \label{tab:alphabets_all} \end{table} For the ablation studies, we use \cite{Nichol} as our base meta-learner as it is the least computationally expensive. Based on preliminary studies, we believe the behaviour is consistent, and the conclusions stand, for the other methods. {In the results above, we used $\lambda=1.0$ for the contribution of our adversarial component $\lambda$ (Eq.~\ref{eq:obj1}). Next,} we provide results on how varying $\lambda$ can affect the model's performance. For this, we use 5S 5W, 10 alphabet task. Fig. \ref{fig:lambda} shows training progress with $\lambda = \{10.0, 2.0, 1.0, 0.5, 0.1\}$. We can see that a high weighting ($\lambda = 10.0$) causes a drop in training accuracy around iteration 40K, as the optimisation prioritises becoming context-agnostic over the ability to specialise to a task. However, the figure shows reasonable robustness to the choice of $\lambda$. \begin{figure}[t] \centering \subfigure[Averaged over the 1- and 5-shot, 5- and 20-way tasks, showing the effect of the number of unique context labels (i.e. alphabets). \label{fig:diff_ab}]{\includegraphics[scale=0.43, trim={0 0 0 0}]{images/diff_ab.pdf}} \hspace{1mm} \subfigure[Averaged over number of alphabets (5, 10, 15, 20 and 50), showing how each task is affected.\label{fig:diff_task}]{\includegraphics[scale=0.43, trim={0 -1 0 0}]{images/diff_task.pdf}} \caption{{Accuracy improvements given by our context-agnostic (CA-) versions of \cite{Antoniou2018,Finn2017,Nichol} using the alphabet-based split (shown in Fig. \ref{fig:split_alphabet}).}} \label{fig:diff} \end{figure} \begin{figure}[t] \centering \subfigure[Accuracy on the training set after the inner loops.]{\includegraphics[width=0.49\columnwidth, trim={0 0 0 25}]{images/train_sd3.png}} \subfigure[Accuracy on the target set after fine-tuning to the target task.]{\includegraphics[width=0.49\columnwidth, trim={0 0 0 20}]{images/test_sd3.png}} \caption{These plots show how the weighting ($\lambda$) of the context-adversarial component affects training and target performance during one run of the 5-shot/5-way 10 alphabet task using an alphabet-based split.} \label{fig:lambda} \end{figure} \begin{figure}[t] \centering \subfigure[50 alphabets. \label{fig:ab50}]{\includegraphics[width=0.49\columnwidth, trim={0 0 0 0}]{images/50ab.pdf}} \subfigure[10 alphabets.\label{fig:ab10}]{\includegraphics[width=0.49\columnwidth, trim={0 0 0 0}]{images/10ab.pdf}} \caption{{Comparison of character-based and alphabet-based training/target splits using 50 and 10 alphabets.}} \label{fig:ab} \end{figure} Next, we investigate the differences between character-based and alphabet-based training/target splits (visualised in Fig. \ref{fig:split}). Fig.~\ref{fig:ab} shows the effects of context-agnosticism when evaluating on character-based splits and alphabet-based splits. Fig. \ref{fig:ab50} uses 50 alphabets for comparison, and Fig. \ref{fig:ab10} uses 10 alphabets. While both approaches are comparable on character-based splits (blue vs red), we show a clear improvement in using our context-agnostic meta-learning approach when tested on alphabet-based splits (yellow vs green). This is a sterner test due to the training and target sets being made up from data with different contexts. The context-agnostic version is significantly better for all cases and both alphabet sizes. Finally, as previous approaches only evaluate on the easier character-based split for Omniglot, using all 50 alphabets, we provide comparative results to published works on this setup. We list reported results from \cite{Finn2017,Antoniou2018,Nichol} as well as our replications to ensure a direct comparison (the same codebase and splits can be used with and without the context-agnostic component). For this setup, we use the same data augmentation as~\cite{Finn2017,Antoniou2018,Nichol}. Results are given in Table \ref{tab:omniglot_results_1}, which confirms that context-agnostic versions of the base methods achieve comparable performance, despite there being shared context between source and target. In summary, this section presented experiments on the Omniglot character classification dataset. We show that, on average, our proposed context-agnostic approach gives performance improvements across all {methods and} tasks, particularly for smaller alphabet sizes, which introduce a bigger context gap between training and target. \begin{table}[t] \caption{Comparative results on Omniglot using the standard character-based split. *: results reported in cited papers. Even though both training and target tasks share context, our CA contribution maintains performance on this standard split. } \centering \resizebox{0.53\textwidth}{!}{ \begin{tabular}{lrrrr} \toprule Method & \hspace{2mm}5S 5W & \hspace{2mm}1S 5W & \hspace{2mm}5S 20W & \hspace{2mm}1S 20W \\ \midrule MAML++ \cite{Antoniou2018}* & 99.9 & 99.4 & 99.3 & 97.7 \\ MAML++ \cite{Antoniou2018} & 99.9 & 99.5 & 98.7 & 95.4 \\ CA-MAML++ & 99.8 & 99.5 & 98.8 & 95.6 \\ \midrule MAML \cite{Finn2017}* & 99.8 & 98.6 & 98.9 & 95.8 \\ MAML \cite{Finn2017} & 99.8 & 99.3 & 97.0 & 92.3 \\ CA-MAML & 99.8 & 99.3 & 97.2 & 94.8 \\ \midrule REPTILE \cite{Nichol}* & 98.9 & 95.4 & 96.7 & 88.1 \\ REPTILE \cite{Nichol} & 98.9 & 97.3 & 96.4 & 87.3 \\ CA-REPTILE & 98.6 & 97.6 & 95.9 & 87.8 \\ \bottomrule \end{tabular} } \label{tab:omniglot_results_1} \end{table} \section{Case Study 2: General Image Classification}\label{sec:miniimagenet} \noindent \textbf{Problem Definition.} Our second case study uses the few-shot image classification benchmark - Mini-ImageNet~\cite{Vinyals2016}. We use the experimental setup introduced in \cite{Vinyals2016}, where the task is a 1- or 5-shot 5-way classification problem. Similar to our previous case study, we aim for context labels, and a context-based split. This dataset has no readily-available context labels, and there is a large overlap between the train and target splits (e.g. 3 breeds of dog in target, 12 in train). We address this by manually assigning 12 superclass labels, which we use as context. We then ensure that superclasses used for training and testing are distinct. \noindent \textbf{Evaluation, Baselines and Implementation.} Similar to Section \ref{sec:omniglot}, we evaluate using MAML++ ~\cite{Antoniou2018} and MAML~\cite{Finn2017}. Unmodified versions are used as baselines, and are compared against versions which are modified with our proposed CA component. Transduction is not used, and the metric is top-1 image classification accuracy. The same architecture, hyperparameters etc. as in~\cite{Antoniou2018} are used. We use $k=5$ (Eq. \ref{eq:obj1}) and $l=2$ (Eq. \ref{eq:obj2}). Results are given for the original Mini-ImageNet splits and our superclass-based splits with context labels. \noindent \textbf{Results.} Table \ref{tab:mi_results} shows the results on the original train/target split and the new splits with no shared context. Results show comparable performance for the original split, but importantly improved performance in the context-based split. Our context-agnostic component improves over \cite{Finn2017} and \cite{Antoniou2018} by an average 3.3\% on the most difficult 1S 5W task. An average 2.2\% improvement is also seen on the easier 5S 5W task. Similar to Omniglot, note that few shot classification on Mini-ImageNet is more challenging (by an average of 8.7\% across all methods) when there is no shared context between training and target data. \begin{table}[t] \caption{Results on Mini-ImageNet and CUB using the original splits which have shared context between train and target tasks, and the new context-based splits with no shared context between training and target tasks.} \centering \resizebox{0.8\textwidth}{!}{% \begin{tabular}{lrrrrrrrr} \toprule & \multicolumn{4}{c}{Mini-ImageNet} & \multicolumn{4}{c}{CUB} \\ & \multicolumn{2}{c}{Original split} & \multicolumn{2}{c}{Context Split} & \multicolumn{2}{c}{Original split} & \multicolumn{2}{c}{Context Split}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} Method & \hspace{2mm}1S 5W & \hspace{2mm}5S 5W & \hspace{2mm}1S 5W & \hspace{2mm}5S 5W & \hspace{2mm}1S 5W & \hspace{2mm}5S 5W & \hspace{2mm}1S 5W & \hspace{2mm}5S 5W\\ \midrule MAML++ \cite{Antoniou2018} & \bf{52.0} & \bf{68.1} & 40.1 & 60.1 & \bf{38.7} & 57.2 & 42.2 & 56.7 \\ CA-MAML++ & 51.8 & \bf{68.1} & \bf{44.4} & \bf{61.5} & 38.0 & \bf{58.4} & \bf{43.3} & \bf{57.9} \\ \midrule MAML \cite{Finn2017} & \bf{48.3} & \bf{64.3} & 41.1 & 56.5 & 42.5 & \bf{56.1} & 37.7 & 54.7 \\ CA-MAML & \bf{48.3} & 64.2 & \bf{43.3} & \bf{59.5} & \bf{42.6} & 55.9 & \bf{40.3} & \bf{57.5} \\ \bottomrule \end{tabular} } \label{tab:mi_results} \end{table} \section{Case Study 3: Fine-Grained Bird Classification}\label{sec:CUB} \noindent \textbf{Problem Definition.} For our third case study, we use the few-shot fine-grained bird classification benchmark CUB \cite{WahCUB_200_2011}. CUB contains a large amount of metadata from human annotators. For context labels, we have taken each bird's primary colour, but could have chosen a number of others e.g. bill shape. The CUB dataset has 200 classes, with 9 different primary colours. We ensure splits are distinct with respect to this property. \noindent \textbf{Evaluation, Baselines and Implementation.} We use the same setup as for Mini-ImageNet (Section \ref{sec:miniimagenet}). \noindent \textbf{Results.} Table \ref{tab:mi_results} shows the results on the original train/target splits and the new splits with no shared context (i.e. no shared primary colour). When there is less less shared context between train and target data, our context-agnostic component improves over \cite{Finn2017} and \cite{Antoniou2018} by an average of 1.9\% across all tasks, whilst performance is maintained on the original split. \section{Case Study 4: Calorie Estimation from Video}\label{sec:calorie} \noindent \textbf{Problem Definition.} In this fourth problem, we use the dataset from \cite{Tao2018}, where the task is to {estimate energy expenditure for an input video sequence of an indiviual carrying out a variety of actions.} Different from the first three case studies, this is a regression task, rather than a classification one, as calorie readings are continuous. The target task is to estimate the calorimeter reading for seen, as well as unseen, actions. Importantly, the individual captured forms the context. Alternative context labels could include, for example, age or Body Mass Index (BMI). Our objective is thus to perform meta-learning to generalise across actions, as well as being individual-agnostic, for calorie prediction of a new individual. We use silhouette footage and calorimeter readings from 10 participants performing a number of daily living tasks as derived from the {SPHERE Calorie dataset} of~\cite{Tao}. Using a relatively small amount of data to fine-tune to target is appropriate because collecting data from individuals using a calorimeter is expensive and cumbersome. \noindent \textbf{Evaluation and Baselines.} Ten-fold leave-one-person-out cross-validation is used for evaluation. We report results using MSE across all videos for each subject. For fine-tuning to target, we use labelled calorie measurements from the first 32 seconds (i.e. the first 60 video samples, where each sample is 30 frames subsampled at 1fps) of the target subject. Evaluation is then performed using the remaining data from the target subject, which is 28 minutes on average. We compare the following methods, using cross-fold, leave-one-person-out validation: \begin{itemize} \item Metabolic Equivalent (MET) from \cite{Tao}. This offers a baseline of calorie estimation through a look-up table of actions and their duration. This has been used as a baseline on this dataset previously. \item Method from Tao et al. \cite{Tao2018} that utilises IMU and depth information not used by our method. \item Pre-train - standard training process, trained on 9 subjects and tested on target subject without fine-tuning. \item Pre-train/fine-tune - standard training process on 9 subjects and fine-tuned on the target subject. \item REPTILE - meta-learning from~\cite{Nichol} on 9 subjects and fine-tuned on target. \item CA-REPTILE - our proposed context-agnostic meta-learning approach. \end{itemize} Note that we chose to use \cite{Nichol} as the baseline few-shot method because it is less computationally expensive (important when scaling up the few shot-problem to video) than \cite{Finn2017,Antoniou2018}, as discussed in Section \ref{sec:related}. \noindent \textbf{Implementation Details.} Images are resized to 224x224, and fed to a ResNet-18 architecture \cite{He2016}. No previous works have addressed this individual-agnostic personalisation problem. Following~\cite{Tao}, it is believed that a window of 30s is required as input for energy expenditure prediction. We sample the data at 1fps and use the ResNet CNN's output from the penultimate layer as input to a Temporal Convolutional Network (TCN) \cite{Bai2018} for temporal reasoning. Our model is trained end-to-end using Adam \cite{Kingma2015} and contains 11.2M parameters. {We use $k=10$ (Eq. \ref{eq:obj1}) and $l=1$ (Eq. \ref{eq:obj2}) for all Calorie experiments. A lower value of $l$ is required than for Omniglot, as context information is easier for the adversarial network to learn (i.e. people are easier to distinguish than alphabets).} MSE is used as the regression loss function. Augmentation during training consists of random crops and random rotations up to 30$^\circ$. The same architecture is used for all baselines (except MET and \cite{Tao2018}), making results directly comparable. \begin{table}[t] \caption{MSE for all 10 participants on the Calorie dataset, using leave-one-out cross-validation. A lower MSE indicates better results. Methods with only an average reported are results taken from the referenced publications.} \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lrrrrrrrrrrr} \toprule {Method} & {P1} & {P2} & {P3} & {P4} & {P5} & {P6} & {P7} & {P8} & {P9} & {P10} & {\hspace{2mm}Avg} \\ \midrule MET Lookup \cite{Tao} & - & - & - & - & - & - & - & - & - & - & 2.25 \\ Tao et al. \cite{Tao2018} & - & - & - & - & - & - & - & - & - & - & 1.69 \\ Pre-train only & 1.21 & \bf{0.89} & 0.88 & 1.86 & 1.24 & \bf{2.46} & 7.50 & \bf{0.89} & 1.25 & 3.11 & 2.13 \\ Pre-train/fine-tune & 0.58 & 1.64 & 0.75 & 0.53 & 1.13 & 4.26 & 5.83 & 1.29 & 1.41 & 3.53 & 2.10 \\ REPTILE \cite{Nichol} & 0.48 & 1.65 & 0.52 & 0.90 & 2.12 & 3.28 & 6.48 & 1.26 & \bf{0.83} & 2.58 & 2.01 \\ CA-REPTILE & \bf{0.39} & 1.11 & \bf{0.46} & \bf{0.48} & \bf{0.87} & 2.68 & \bf{3.75} & 1.07 & 0.87 & \bf{2.32} & \bf{1.40} \\ \bottomrule \end{tabular} } \label{tab:cal_main_results} \end{table} \noindent{\bf Results.} Table \ref{tab:cal_main_results} compares the various methods. The context-agnostic meta-learning method obtains a 35\% reduction in MSE over the pre-training only, a 33\% reduction over the pre-train/fine-tune model, and a 30\% improvement over the non context-agnostic version. For 3 out of 10 individuals, pre-training outperforms any fine-tuning. We believe this is due to these participants performing actions at the start of the sequence in a different manner to those later. However, our context-agnostic approach offers the best fine-tuned results. Fig.~\ref{fig:qual} shows qualitative silhouette sequences with calorimeter readings as groundtruth, which are to compared to predictions from our method and baselines. Results demonstrate that the context-agnostic version estimates the ground truth curve better than other methods from participants with low and high energy expenditure variability. \begin{figure}[t] \centering \subfigure{\includegraphics[width=\textwidth, trim={-250 0 -40 0}]{images/Subject4_Record2.png}}\\ \vspace{-4mm} \addtocounter{subfigure}{-1} \subfigure{\includegraphics[width=\textwidth]{images/Subject4_Record2graph.pdf}}\vspace{-10pt} \\ \subfigure{\includegraphics[width=\textwidth, trim={-250 0 -40 0}]{images/Subject1_Record2.png}}\\ \vspace{-4mm} \addtocounter{subfigure}{-1} \subfigure{\includegraphics[width=\textwidth]{images/Subject1_Record2graph.pdf}}\vspace{-12pt} \caption{Example energy expenditure predictions on two sequences from different participants in the Calorie dataset.} \label{fig:qual} \vspace{-10pt} \end{figure} \section{Conclusion} In this paper, we proposed context-agnostic meta-learning that learns a network initialisation which can be fine-tuned quickly to new few-shot target problems. An adversarial context network acts on the initialisation in the meta-learning stage, along with task-specialised weights, to learn context-agnostic features capable of adapting to tasks which do not share context with the training set. This overcomes a significant drawback with current few-shot meta-learning approaches, that do not exploit context which is often readily available. The framework is evaluated on the Omniglot few-shot character classification dataset and the Mini-ImageNet and CUB few-shot image recognition tasks, where it demonstrates consistent improvements when exploiting context information. We also evaluate on a few-shot regression problem, for calorie estimation from video, showing significant improvements. This is the first work to demonstrate the importance and potential of incorporating context into few-shot methods. We hope this would trigger follow-up works on other problems, methods and contexts. \noindent \textbf{Data Statement:} Our work uses publicly available datasets. Proposed context-based splits are available at \url{github.com/tobyperrett/context_splits}. \noindent \textbf{Acknowledgement:} This work was performed under the SPHERE Next Steps Project, funded by EPSRC grant EP/R005273/1. \bibliographystyle{splncs}
\section*{Results} \noindent\textbf{Upper Critical Field $H_{\rm c2}$}. We have measured $H_{\rm c2}$ parallel to the $c$-axis, in a series of high quality single crystal samples of BaFe$_2$(As$_{1-x}$P$_x$)$_2$ spanning the superconducting part of the phase diagram using two different techniques. Close to $T_{\rm c}(H=0)$ we measured the heat capacity of the sample using a micro-calorimeter in fields up to 14\,T (see figure 1a). This gives an unambiguous measurement of $H_{\rm c2}(T)$ and the slope $h^\prime$ which unlike transport measurements is not complicated by contributions from vortex motion \cite{Serafin2010}. At lower temperature, we used micro-cantilever torque measurements in pulsed magnetic fields up to 60\,T. Here, an estimate of $H_{\rm c2}$ was made by observing the field where hysteresis in the torque magnetisation loop closes (see figure 1b). Although, strictly speaking, this marks the irreversibility line $H_{\rm irr}$, this is a lower limit for $H_{\rm c2}(0)$ and in superconductors with negligible thermal fluctuations and low anisotropy such as BaFe$_2$(As$_{1-x}$P$_x$)$_2$ $H_{\rm irr}$ should coincide approximately with $H_{\rm c2}$. Indeed, in Fig.\,2 we show that the extrapolation of the high temperature specific heat results, using the Helfand-Werthamer (HW) formula \cite{Helfand1966}, to zero temperature are in good agreement with the irreversibility field measurements showing both are good estimates of $H_{\rm c2}(0)$. \begin{figure} \includegraphics*[width=0.95\linewidth,clip]{Ba122_Hc2} \caption{{\bf Upper critical field as a function of concentration $x$.} ({\bf a})$H_{\rm c2}(0)$ in BaFe$_2$(As$_{1-x}$P$_x$)$_2$ estimated from the slope of $H_{\rm c2}(T)$ close to $T_{\rm c}$ using $H_{\rm c2}(0) = -0.73T_{\rm c} (dH_{\rm c2}/dT)|_{T_{\rm c}}$ (squares) \cite{Helfand1966}, and also estimates of $H_{\rm c2}(0)$ from the irreversibility field at low temperature ($T=1.5$\,K) measured by torque magnetometry (circles). Error bars on $H_{\rm c2}$ (circles) represent the uncertainties in locating $H_{\rm irr}$ and (squares) in extrapolating the values close to $T_{\rm c}$ to $T$ = 0. Error bars on $x$ represent standard deviations. ({\bf b}) The same data plotted as $(H_{\rm c2}(0))^{0.5}/T_{\rm c}$, which, in conventional theory, is proportional to the mass enhancement $m^*$. The mass renormalization $m^*/m_{\rm b}$ derived from specific heat measurements is shown for comparison (triangles) \cite{Walmsley2013}. The dashed line is a guide to the eye and solid lines in both parts are linear fits to the data.} \end{figure} In the clean-limit we would expect $(H_{\rm c2}(0))^{1/2}/T_{\rm c}$ to be proportional to the renormalized effective mass $m^*$. Surprisingly, we show in figure 2 that this quantity increases by just $\sim 20$\% from $x=0.47$ to $x=0.30$ whereas $m^*$ increases by $\sim$400\% for the same range of $x$. \noindent\textbf{Lower Critical Field $H_{\rm c1}$}. We measured $H_{\rm c1}$ in our BaFe$_2$(As$_{1-x}$P$_x$)$_2$ samples using a micro-Hall probe array. Here the magnetic flux density $B$ is measured at several discrete points a few microns from the surface of the sample. Below $H_{\rm c1}$, $B$ increases linearly with the applied field $H$ due to incomplete shielding of the sensor by the sample. Then, as the applied field passes a certain field $H_{\rm p}$, $B$ increases more rapidly with $H$ indicating that vortices have entered the sample (see figure 1 c,d). Care must be taken in identifying $H_{\rm p}$ with $H_{\rm c1}$ because, in some cases, surface pinning and geometrical barriers can push $H_p$ well above $H_{\rm c1}$. However, in our measurements several different checks, such as the equality of $H_p$ for increasing and decreasing field \cite{Liang2005}, and the independence of $H_{\rm p}$ on the sensor position \cite{Okzakai2009}, rule this out (see Methods). \begin{figure} \includegraphics*[width=0.95\linewidth,clip]{Ba122_Hc1_vsT} \caption{{\bf Temperature dependence of $H_{\rm c1}$ in samples of BaFe$_2$(As$_{1-x}$P$_x$)$_2$.} The lines show the linear extrapolation used to determine the value at $T=0$. Error bars represent the uncertainty in locating $H_{\rm c1}$ from the raw $M(H)$ data.} \end{figure} \begin{figure} \includegraphics*[height=14.5cm,clip]{Ba122_Hc1_results} \caption{{\bf Concentration $x$ dependence of lower critical field and associated energies for BaFe$_2$(As$_{1-x}$P$_x$)$_2$.} ({\bf a}), Lower critical field $H_{\rm c1}$ extrapolated to $T=0$ and $T_{\rm c}$. The location of the QCP is indicated. Error bars on $H_{\rm c1}$ represent the combination of uncertainties in extrapolating $H_{\rm c1}(T)$ to $T$ = 0 and in the demagnetizing factor. Error bars on $x$ are standard deviations. ({\bf b}), Vortex line energy $E_{\rm line} = E_{\rm em}+E_{\rm core}$ at $T=0$ from the $H_{\rm c1}(0)$ data and equations \ref{eq:em} and \ref{eq:hc1gen} shown as squares. The electromagnetic energy calculated using equation \ref{eq:em} and different estimates of $\lambda$ is also shown. The triangles are direct measurements from Ref. \cite{Hashimoto2012}, and the circles are estimates derived by scaling the band-structure value of $\lambda$ by the effective mass enhancement from specific heat \cite{Walmsley2013}. Error bars on $E_{\rm em}$ (circles) are calculated from the uncertainty in jump size in heat capacity at $T_{\rm c}$. ({\bf c}), Vortex core energy $E_{\rm core} = E_{\rm line}-E_{\rm em}$ along with an alternative estimate derived from the specific heat condensation energy ($E_{\rm cond}$) and the effective vortex area ($\pi\xi_e^2$). The uncertainties are calculated from a combination of those in the other panels. The dashed lines in all panels are guides to the eye.} \end{figure} The temperature dependence of $H_{\rm c1}$ is found to be linear in $T$ at low temperature for all $x$ (figure 3), which again is indicative of a lack of surface barriers which tend to become stronger at low temperature causing an upturn in $H_{\rm c1}(T)$ \cite{Burlachkov1992}. Extrapolating this linear behaviour to zero temperature gives us $H_{\rm c1}(0)$ which is plotted versus $x$ in Fig.\,4a. Surprisingly, instead of a dip in $H_{\rm c1}(0)$ at the QCP predicted by equation \ref{eq:hc1conv} in conjunction with the observed behaviour of $\lambda(x)$ \cite{Hashimoto2012}, there is instead a strong peak. To resolve this discrepancy we consider again the arguments leading to equation \ref{eq:hc1conv}. In general $H_{\rm c1}$ is determined from the vortex line energy $E_{\rm line}$ which is composed of two parts \cite{Liang94}, \begin{equation} H_{\rm c1}=(E_{\rm em}+E_{\rm core})/\phi_0. \label{eq:hc1gen} \end{equation} The first, $E_{\rm em}$ is the electromagnetic energy associated with the magnetic field and the screening currents which in the high $\kappa$ approximation is given by \begin{equation} E_{\rm em} = \frac{\phi_0^2}{4\pi\mu_0\lambda^2} \ln \kappa. \label{eq:em} \end{equation} The second contribution arises from the energy associated with creating the normal vortex core $E_{\rm core}$. In high $\kappa$ superconductors, $E_{\rm core}$ is usually almost negligible and is accounted for by the additional constant 0.5 in equation \ref{eq:hc1conv}. However, in superconductors close to a QCP we argue this may not be the case. In Fig.\,4b,c we use equations \ref{eq:hc1gen} and \ref{eq:em} to determine $E_{\rm em}$ and $E_{\rm core}$. Away from the QCP, $E_{\rm core}$ is approximately zero and so the standard theory accounts for $H_{\rm c1}(0)$ well. However as the QCP is approached there is a substantial increase in $E_{\rm core}$ as determined by from the corresponding increase in $H_{\rm c1}$. We can check this interpretation by making an independent estimate of the core energy from the condensation energy $E_{\rm cond}$ which we estimate from the experimentally measured specific heat (see Methods). The core energy is then $E_{\rm cond}\pi\xi^2_{\rm e}$ where $\xi_{\rm e}$ is the effective core radius which may be estimated from the coherence length $\xi_{\rm GL}$ derived from $H_{\rm c2}$ measurements using Eq.\ \ref{eq:hc2}. In Fig.\,4 we see that $E_{\rm cond}\pi\xi^2_{e}$ has a similar dependence on $x$ as $E_{\rm core}$ and is in approximate quantitative agreement if $\xi_{\rm e} \simeq 4.0\xi_{\rm GL}$ for all $x$. Hence, this suggests that the observed anomalous increase in $H_{\rm c1}$ could be caused by the high energy needed to create a vortex core close to the QCP. \section*{Discussion} \noindent In principle, the relative lack of enhancement in $H_{\rm c2}$ close to the QCP could be caused by impurity or multiband effects, although we argue that neither are likely explanations. Impurities decrease $\xi_{\rm GL}$ and in the extreme dirty limit $H_{\rm c2}\propto m^*T_{\rm c}/\ell$, where $\ell$ is the electron mean-free-path \cite{Shulga2002}. Hence, even in this limit we would expect $H_{\rm c2}$ to increase with $m^*$ although not as strongly as in the clean case. Impurities increase $H_{\rm c2}$ and as the residual resistance increases close to $x=0.3$ \cite{Kasahara2010} we would actually expect a larger increase in $H_{\rm c2}$ than expected from clean limit behaviour. dHvA measurements show that $\ell \gg \xi_{\rm GL}$ at least for the electron bands and for $x>0.38$, which suggest that, in fact, our samples are closer to the clean limit. To discuss the effect of multiple Fermi surface sheets on $H_{\rm c2}$ we consider the results of Gurevich \cite{Gurevich2010} for two ellipsoidal Fermi surface sheets with strong interband pairing. This limit is probably the one most appropriate for BaFe$_2$(As$_{1-x}$P$_x$)$_2$ \cite{Hirschfeld2011}. In this case for $H\|c$, $h^\prime \propto T_c/(v_1^2+v_2^2)$ were $v_{1,2}$ are the in-plane Fermi velocities on the two sheets. So if the velocity were strongly renormalized on one sheet only ($v_1\rightarrow 0$) then $H_{\rm c2}$ would be determined mostly by $v_2$ on the second sheet and hence would not increase with $m^*$ in accordance with our results. However, in this case the magnetic penetration depth $\lambda$, which will also be dominated by the Fermi surface sheet with the largest $v$, would not show a peak at the QCP in disagreement with experiment \cite{Hashimoto2012}. In fact, the numerical agreement between the increase in $m^*$ with $x$ as determined by $\lambda$ or specific heat, which in contrast to $\lambda$ is dominated by the low Fermi velocity sections, rather suggests that the renormalization is mostly uniform on all sheets \cite{Walmsley2013}. In the opposite limit, appropriate to the prototypic multiband superconductor MgB$_2$, where intraband pairing dominates over interband, $H_{\rm c2}$ will be determined by the band with the lowest $v$ \cite{Gurevich2010} and again an increase in $m^*$ should be reflected in $H_{\rm c2}$. So these multiband effects cannot easily explain our results. Another effect of multiband superconductivity is that it can modify the temperature dependence of $H_{\rm c2}$ such that it departs from the HW model. For example, in some iron-based superconductors a linear dependence of $H_{\rm c2}(T)$ was found over a wide temperature range \cite{Yeninas2013}. For BaFe$_2$(As$_{1-x}$P$_x$)$_2$ however, the coincidence between the HW extrapolation of the $H_{\rm c2}$ data close to $T_{\rm c}$ and the pulsed field measurement of $H_{\rm irr}$ for $T\ll T_{\rm c}$ for all $x$, would appear to rule out any significant underestimation of $H_{\rm c2}(0)$. In Supplementary Figure 3 we show that $H_{\rm irr}$ for a sample with $x=0.51$ fits the HW theory for $H_{\rm c2}(T)$ over the full temperature range. There is no reason why $H_{\rm irr}$ would underestimate $H_{\rm c2}(0)$ by the same factor as the HW extrapolation. Even in cuprate superconductors where, unlike here, there is evidence for strong thermal fluctuation effects, $H_{\rm irr}$ has been shown to agree closely with $H_{\rm c2}$ in the low temperature limit \cite{Grissonnanche2014}. The magnitude of the discrepancy between the behaviour of $H_{\rm c2}(0)$ and $m^*$ discussed above (see figure 2) also makes an explanation based on an experimental underestimate of $H_{\rm c2}(0)$ implausible. Another possibility is that in heavy fermion superconductors the mass enhancement is often reduced considerably at high fields and so therefore $m^*$ could be reduced at fields comparable to $H_{\rm c2}$. In BaFe$_2$(As$_{1-x}$P$_x$)$_2$ however, a significantly enhanced mass in fields greater than $H_{\rm c2}$ can be inferred from the dHvA measurements \cite{Walmsley2013} and low temperature, high field, resistivity \cite{Analytis2014}. Although very close to the QCP the mass inferred from these measurements is slightly reduced from the values inferred from the zero field specific heat measurements \cite{Walmsley2013} this cannot account for the lack of enhancement of $H_{\rm c2}$ shown in figure 2. Our results are similar to the behaviour observed in another quantum critical superconductor, CeRhIn$_5$. Here the pressure tuned QCP manifests a large increase in the effective mass as measured by the dHvA effect and the low temperature resistivity. $T_{\rm c}$ is maximal at the QCP but $H_{\rm c2}$ displays only a broad peak, inconsistent with the mass enhancement shown by the other probes \cite{Knebel2008}. We should note that in this system $H_{\rm c2}$ at low temperatures is Pauli limited. However, close to $T_{\rm c}$, $H_{\rm c2}$ is always orbitally limited and as neither $h^\prime$ or $H_{\rm c2}(0)$ are enhanced in BaFe$_2$(As$_{1-x}$P$_x$)$_2$ or CeRhIn$_5$ \cite{Knebel2008}, Pauli limiting can be ruled out as the explanation. A comparison to the behaviour observed in cuprates is also interesting. Here two peaks in $H_{\rm c2}(0)$ as a function of doping $p$ in YBa$_2$Cu$_3$O$_{7-\delta}$ have been reported \cite{Grissonnanche2014}, which approximately coincide with critical points where other evidence suggests that the Fermi surface reconstructs. Quantum oscillation measurements indicate that $m^*$ increases close to these points \cite{Sebastian2010}, suggesting a direct link between $H_{\rm c2}(0)$ and $m^*$ in the cuprates in contrast to our finding here for BaFe$_2$(As$_{1-x}$P$_x$)$_2$. However, by analysing the data in the same way as we have done here, it can be seen \cite{Tafti2014} that $H_{\rm c2}(0)^{0.5}/T_{\rm c}$ for YBa$_2$Cu$_3$O$_{7-\delta}$ is independent of $p$ above $p\simeq 0.18$ and falls for $p$ below this value, reaching a minimum at $p\simeq 1/8$. This suggest that at least the peak at higher $p$ is driven by the increasing gap value rather than a peak in $m^*$, in agreement with our results here, and that the minimum in $H_{\rm c2}(0)^{0.5}/T_{\rm c}$ coincides with the doping where charge order is strongest at $p\simeq 1/8$ \cite{Huecker2014}. The lack of enhancement of $H_{\rm c2}(0)$ in all these systems suggests a fundamental failure of theory. One possibility is that this may be driven by microscopic mixing of superconductivity and antiferromagnetism close to the QCP. In the vicinity of the QCP, antiferromagnetic order is expected to emerge near the vortex core region where the superconducting order parameter is suppressed \cite{Demler2004,Zhang2002}. Such a field-induced antiferromagnetic order has been observed experimentally in cuprates \cite{Lake2004,Kakuyanagi03}. When the QCP lies beneath the superconducting dome, as in the case of BaFe$_2$(As$_{1-x}$P$_x$)$_2$ \cite{Hashimoto2012,Shibauchi2014}, antiferromagnetism and superconductivity can coexist on a microscopic level. In such a situation, as pointed out in Ref.\,\cite{Zhang2002}, the field-induced antiferromagnetism can extend outside the effective vortex core region where the superconducting order parameter is finite. Such an extended magnetic order is expected to lead to further suppression of the superconducting order parameter around vortices. This effect will enlarge the vortex core size, which in turn will suppress the upper critical field in agreement with our results. We would expect this effect to be a general feature of superconductivity close to an antiferromagnetic QCP, but perhaps not relevant to the behaviour close to $p=0.18$ in the cuprates. To explain the $H_{\rm c1}$ results we postulate that the vortex core size is around 4 times larger than the estimates from $H_{\rm c2}$. This is in fact expected in cases of multiband superconductivity or superconductors with strong gap anisotropy. In MgB$_2$ \cite{Eskildsen2002,Koshelev2003} and also in the anisotropic gap superconductor 2\textit{H}-NbSe$_2$ \cite{Hartmann1994} the effective core size has been found to be around 3 times $\xi_{\rm GL}$, similar to that needed to explain the behaviour here. BaFe$_2$(As$_{1-x}$P$_x$)$_2$ is known to have a nodal gap structure \cite{Hashimoto2010} which remains relatively constant across the superconducting dome \cite{Hashimoto2012} and so we should expect the core size to be uniformly enhanced for all $x$. The peak in $H_{\rm c1}(x)$ at the QCP is then, primarily caused by the fluctuation driven enhancement in the normal state energy, but the effect is magnified by the nodal gap structure of BaFe$_2$(As$_{1-x}$P$_x$)$_2$. We expect the observed anomalous increase in $H_{\rm c1}$ to be a general feature of quantum critical superconductors as these materials often have nodal or strongly anisotropic superconducting gap structures and the increase in normal state energy is a general property close to a QCP. The relative lack of enhancement in $H_{\rm c2}$ also seems to be a general feature, which may be linked to a microscopic mixing of antiferromagnetism and superconductivity. \section*{Methods} \small \noindent\textbf{Sample growth and characterisation.} BaFe$_2$(As$_{1-x}$P$_x$)$_2$ samples were grown using a self flux technique as described in Ref.\ \cite{Kasahara2010}. Samples for this study were screened using specific heat and only samples with superconducting transition width less than 1\,K were measured (see Supplementary Figure 1). To determine the phosphorous-concentration in the samples we carried out energy-dispersive x-ray analysis (EDX) on several randomly chosen spots on each crystal ($H_{\rm c1}$ samples) or measured the $c$-axis lattice parameter using x-ray diffraction ($H_{\rm c2}$ samples) which scales linearly with $x$. For some of the $H_{\rm c2}$ samples measured using high field torque magnetometry the measured de Haas-van Alphen frequency was also used to determine $x$ as described in Ref.\ \cite{Walmsley2013}. \noindent\textbf{Measurements of $H_{\rm c2}$.} Close to $T_{\rm c}$ the upper critical field was determined using heat capacity. For this a thin film microcalorimeter was used \cite{Walmsley2013}. We measured the superconducting transition at constant magnetic field up to 14\,T (see Supplementary Figure 2). The midpoint of the increase in $C$ at the transition defines $T_{\rm c}(H)$. At low temperatures ($T \ll T_{\rm c}$) we used piezo-resistive microcantilevers to measure magnetic torque in pulsed magnetic field and hence determine the irreversibility field $H_{\rm irr}$. The crystals used in the pulsed field study were the same as those used in Ref. \cite{Walmsley2013} for the de Haas-van Alphen effect (except samples for $x\simeq 0.3$). By taking the difference between the torque in increasing and decreasing field we determined the point at which the superconducting hysteresis closes as $H_{\rm irr}$ (see figure 1(b)). For some compositions we measured $H_{\rm irr}$ in dc field over the full temperature range and found it to agree well with the HW model and also the low temperature measurements in pulsed field on the same sample (Supplementary figure 3). Our heat capacity measurements of $H_{\rm c2}$ close to $T_{\rm c} (H=0)$ are in good agreement with those of Ref.\ \cite{Chaparro2012}. \noindent\textbf{Measurements of $H_{\rm c1}$.} The measurements of the field of first flux penetration $H_{\rm p}$ have been carried out using micro-Hall arrays. The Hall probes were made with either GaAs/AlGaAs heterostructures (carrier density $n_s=3.5\times 10^{11}\rm{cm}^{-2}$) or GaAs with a 1$\mu$m thick silicon doped layer (concentration $n_s=1\times 10^{16}\rm{cm}^{-3}$). The latter had slightly lower sensitivity but proved more reliable at temperatures below 4\,K. The measurements were carried out using a resistive magnet so that the remanent field during zero field cooling was as low as possible. The samples was warmed above $T_{\rm c}$ after each field sweep and then cooled at a constant rate to the desired temperature. When strong surface pinning is present $H_{\rm p}$ may be pushed up significantly beyond $H_{\rm c1}$. In this case there will also be a significant difference between the critical field $H_{\rm p}$ measured at the edge and the centre of the sample (for example see Ref.\ \cite{Okzakai2009}) and also a difference between the field where flux starts to enter the sample and the field at which it leaves. Some of our samples, also showing signs of inhomogeneity, such as wide superconducting transitions, showed this behaviour. An example is shown in supplementary figure 4. In this sample the sensor at the edge shows first flux penetration at $H_{\rm p}\approx 5$\,mT whereas the value is $\sim 3$ times higher at the centre. For decreasing fields, the centre sensor shows a similar value to the edge sensor. All the samples reported in this paper showed insignificant difference between $H_{\rm p}$ at the centre and the edge and also for increasing and decreasing fields. Hence, we conclude that $H_{\rm c1}$ in our samples is not significantly increased by pinning. As our samples are typically thin platelets, demagnetisation effects need to be taken into account for measurement of $H_{\rm c1}$. Although an exact solution to the demagnetisation problem is only possible for ellipsoids and infinite slabs, a good approximation for thin slabs has been obtained by Brandt \cite{Brandt1999}. Here $H_{\rm c1}$ is related to the measured $H_{\rm p}$, determined from $H$ using \begin{eqnarray} H_{\rm c1} = \frac{H_{\rm p}}{\tanh\sqrt{0.36 l_{\rm c}/l_{\rm a}}} \end{eqnarray} where $l_{\rm c}$ is the sample dimension along the field and $l_a$ perpendicular to the field. All samples in this study had $l_{\rm c} \ll l_{\rm a}$. To ensure that the determination of the effective field is independent of the specific dimension we have carried out multiple measurements on a single sample cleaved to give multiple ratios of $l_{\rm c}/l_{\rm a}$. The results of this study (supplementary figure 5) show that $H_{\rm c1}$ determined by this method are independent of the aspect ratio of the sample. Furthermore, the samples used all had similar $l_{\rm c}/l_{\rm a}$ ratios (see Supplementary Table 1), and so any correction would not give any systematic errors as a function of $x$. \noindent\textbf{Calculation of condensation energy.} The condensation energy can be calculated from the specific heat using the relation \begin{equation} E_{\rm cond} = \int_0^\infty \! \left[C_{\rm s}(T)-C_{\rm n}(T)\right] dT. \label{Eq.Cond} \end{equation} To calculate this we first measured a sample of BaFe$_2$(As$_{1-x}$P$_x$)$_2$ with $x=0.47$, using a relaxation technique in zero field and $\mu_0 H=14$\,T which is sufficient at this doping to completely suppress superconductivity and thus reach the normal state. We used this 14\,T data to determine the phonon heat capacity and we then subtract this from the zero field data to give the electron specific heat of the sample. We then fitted this data to a phenomenological nodal gap, alpha model (with variable zero temperature gap) similar to that described in Ref.\ \cite{Taylor2007} (see supplementary figure 6). We then integrated this fit function using Eq.\ \ref{Eq.Cond} to give $E_{\rm cond}$ for this value of $x$. For lower values of $x$ (higher $T_{\rm c}$) the available fields were insufficient to suppress superconductivity over the full range of temperature, so we assumed that the shape of the heat capacity curve does not change appreciably with $x$ but rather just scales with $T_c$ and the jump height at $T_{\rm c}$. This is implicitly assuming that the superconducting gap structure does not change appreciably with $x$, which is supported by magnetic penetration depth $\lambda$ measurements which show that normalised temperature dependence $\lambda(T)/\lambda(0)$ is relatively independent of $x$ \cite{Hashimoto2012}. With this assumption we can then calculate $$E_{\rm cond}(x) = \frac{E_{\rm cond}(x_{\rm ref}) T_{\rm c}(x) \Delta C(x)}{T_{\rm c}(x_{\rm ref}) \Delta C(x_{\rm ref})},$$ where $x_{\rm ref}=0.47$. \def\vskip 6pt \setlength{\parindent}{0pt}{\sffamily\bfseries\selectfont References} \setlength{\parindent}{12pt}{\vskip 6pt \setlength{\parindent}{0pt}{\sffamily\bfseries\selectfont References} \setlength{\parindent}{12pt}}
\section{Introduction} \label{sec:s1} Zinc oxide (ZnO) has been largely investigated in the past years it is a low cost, high electron mobility, it is transparent and can be sinthesized in several nanostructured shapes\,\cite{ZLWang2004}. Doping in ZnO has been widely used to tailor its electronic, magnetic and optical properties. In particular, cobalt doped ZnO nanostructures have been largely investigated in the past years both experimentally\,\cite{Wang:12,Castro2016,Wojnarowicz:2015,Nanomaterials2017,Chanda:17,Geburt:2013} and theoretically \,\cite{Sarsari:13,Dalpian2006,Patterson,SciRep2017} due to the its promising application in optoelectronics and spintronics. In nanostructures, as the surface/volume ratio is large, the influence of surface effects on the incorporation of impurities in nanostructures can play an important role. Effects of surface passivation, morphology of the surface and co-doping can influence the incorporation of dopantes, as has been discussed in Ref.\cite{Erwin2005}. In this letter, we investigate Co incorporation in ZnO nanowires in order to determine changes in the magnetic properties and electronic structure upon surface passivation. Our modeling mimics certain experimental conduitions whjere air or hydrogen atmosphere is present. In general local density functionals lead to wrong description of band gaps. In order to reproduce experimental gap of ZnO and provide a better description of we have performed density functional theory calculations with hydbrid functionals. We show that although there is a strong localization of the Co states with no significant change in its magnetic moment. However there is a site preference depending on the wire surface termination. \section{Computational details} We have used density-functional theory (DFT)\,\cite{Kohn:65} together with the projected augmented wave (PAW) method, as implemented in the Vienna Ab initio Simulation Package (VASP)\,\cite{Kresse:99}. The Heyd-Scuseria-Ernzenhof (HSE06) form of the exchange-correlation potential was used to obtain geometries, formation energies and magnetic moments. To model Co impurities in ZnO we built up a 96 atom supercell using our calculated PBE lattice parameters of ZnO, $a$=3.25{\AA} and $c$=5.25{\AA}. To ensure convergence of structural, electronic and magnetic properties, a cutoff of 400 eV was used for the plane-wave expansion of the wave function. Atomic forces were converged up to 0.01\,eV/{\AA}. For Brillouin zone integrations, a $(1\times 1 \times 4)$ Monkhorst-Pack {\bf k}-point sampling was used. The HSE results should provide a reliable method to determine the electronic structure. Previous results have demonstrated that although 25\% of Hartree-Fock can be justified\,\cite{Lany:10}, to obtain the experimental band gap of ZnO, 36\% HF is needed\,\cite{Janotti:09,Lany:10}. Therefore, we have used this admixture to reproduce the experimental gap of bulk ZnO\,\cite{CRC}. The predicted energy position of the minority spin Co-t$2$ states is 3.0-3.6 eV above the ZnO conduction band minimum which is closer to GW calculations\,\cite{Sarsari:13,ALRosaunpublished,Lorke:16}. \section{Results} In Fig. \ref{fig:benchmark} we show the variation of the band gap of ZnO bulk with the amount of Hartree-Fock added to the exchange-correlation functional for both PBE0 and HSE. As reported previously, due to the lack of screening, the PBE0 functional requires a large amount of HF (around 40\,$\%$) to reach the experimental gap\cite{Lany:10}. On the other hand the HSE functional can reproduce the experimental gap with 25\,$\%$ admixture. \begin{figure}[ht!] \includegraphics*[width=8cm]{./benchmark_ZnObulkHF.eps} \caption{\label{fig:benchmark} HSE and PBE0 energy gaps for ZnO as a function of the Hartree-Fock exchange admixture.} \end{figure} The geometry of the structures we have investigated is shown in Figs.\,\ref{fig:geometries}. First we discuss the non-passivated wires. A single Co occupying a substitutional Zn site in the middle of the wire does not produce strong distortion in the ZnO wire lattice (around 0.1\,{AA}). For Co sitting at subsurface/surface sites, Co moves (outwards inwards), as the covalent radius is similar to Zn. The Co-O bond lengths remain very close to the values in pure ZnO, rangind from 1.8-2.1 \,{\AA} in bare wires and from 2.0-2.2 \,{\AA} in hydrogenated wires. The formation energy of an isolated defect is calculated setting the energy zero to the minimal energy configuration. Next we discuss the thermodynamic stability of Co in these small diameter wires. In the case of hydrogen adsorption, the most stable configuration is a fully hydrogenated wire. The incorporation of a Co atom in the wires has a dependence with the site position. For bare wires, the preferred position is the surface position. This effect is called self healing because nanostructures have a small volume compared to their surface, leading to the migration to the surface\,\cite{Dalpian2006,darosa2010,Erwin2005}. It has been further suggested size is not the only effect responsible for impurity incorporation. Erwin et al.\,\cite{Erwin2005} suggested that impurity incorporation depends on three main factors, surface morphology, nanostructure shape, and surfactants. Indeed, in our previous work we have shown that adsorption of hydrogen and water on the surface of ZnO wires is an exothermic reaction \cite{XuAPL2007,Fan:07,XuPRB2009}. Later on we have investigated the incorporation of N in ZnO nanowires and showed the effect of hydrogen passivation\,\cite{darosa2010} under N incorporation which has been confirmed experimentally that N should sit close to surface sites and at oxygen positions\,\cite{Buyanova2015}. We show here that a different behavior is found for Co in ZnO ultrathin wires. By passivating the nanowire surfaces with hydrogen, Co have a lower formation energy than when it is incorporated in the bulk (for both PBE and HSE functionals), as shown in Table\,\ref{table:formation}. Specially for these ultrathin wires, this effect is dramatic for passivation with hydrogen, since the energy difference between bulk and surface position is 1.0 eV with HSE. Main factors for this behavior is that Co has a similar size as ZnO. \begin{figure}[t]% \includegraphics*[width=8cm]{./Co_ZnO_bulk_HSE.eps} \caption{Total and projected density-of-states for Co doped ZnO bulk with HSE functional. a) with 36$\%$ admixture and b) with 25$\%$ admixture. The Fermi level is represented as a dashed line).} \label{fig:dos_bulk} \end{figure} \begin{figure}[ht!] \includegraphics*[width=8cm]{./codopedzno_wire.eps} \caption{\label{fig:geometries} Atomic configurations for ${\rm Co_{Zn}}$ in bare ZnO nanowires. O, Zn and Co atoms are shown in red, gray and blue color, respectively.} \end{figure} \begin{table}[h] \caption{\label{table:formation} Total magnetic moments $\mu_{\rm tot}$ (in $\mu_{\rm B})$ and relative formation energies $\rm E_f$ (in eV) of neutral Co impurities in ZnO calculated with PBE and HSE functionals. In brackets the magnetic moment projected on the Co atoms is shown.} \begin{tabular}{lcccc} \hline \hline config. & \multicolumn{2}{c}{$\mu_{\rm tot}$} & \multicolumn{2}{c}{$\rm E_f$}\\ \hline & PBE & HSE & PBE & HSE\\ bare inner & 3.10(2.50) & 3.00(2.69) & 0.25 & 0.24\\ bare sub & 3.15(2.53) & 3.00 (2.70) & 0.20 & 0.78\\ bare surf & 3.14(2.50) & 3.00 (2.70) & 0.00 & 0.00\\ \hline hydro inner & 3.00(2.45) & 3.00 (2.70) & 0.00 & 0.00\\ hydro sub & 3.00(2.46) & 3.00 (2.70) & 0.01 & 0.21\\ hydro surf & 3.00(2.34) & 3.00 (2.65) & 0.22 & 1.00\\ \hline \hline \end{tabular} \end{table} \begin{figure}[ht!] \includegraphics*[width=8cm]{./dos_cozno_nw_bare.eps} \includegraphics*[width=8cm]{./dos_cozno_nw_hydro.eps} \caption{\label{fig:dos_doped} Density of states of Co doped ZnO nanowires. (a) - (c) bare wires and (d)-(f) hydrogenated wires with HSE and 0.36 HF exchange. The vertical dashed line represents the Fermi level which is set to the highest occupied state.} \end{figure} Doping of ZnO bulk with Co splits the Co-3$d$ states into $e$ and $t_{2}$ states. Co assumes a high-spin configuration with local magnetic moment of 2.7 $\mu_{\rm B}$ with a small hybridization with neighboring O atoms. The majority spin states $e$ are close to the valence band maximum of ZnO and therefore hybridize with the O-2$p$ states. The $e$ minority spin states lie right above the ZnO valence band maximum. The amount of Hartre-Fock exchange changes slightly the position of the $e$ minority spin states. Including 0.25 of HF exchange in the DFT exchange term places these states 0.5 eV above the top of the valence band. The distance between the $e$ minority spin states $e$ and the t$_{2}$ states is shown in Fig.\,\ref{fig:dos_bulk} (a). We obtain a value of 5.3\,eV, in agreement with Refs.\,\cite{Sarsari:13,Patterson}. By increasing the amount of HF exchange to 0.36 the $e$ minority states are pushed towards the valence band top. The $e$-t$_{2}$ energy difference increases to 7\,eV as shown in Fig.\,\ref{fig:dos_bulk} (b). In Fig.\ref{fig:dos_doped} the DOS for the doped wires is shown. Figs.\ref{fig:dos_doped} (a)-(c) shows the results for bare wire. When the Co incorporation is at inner site (Fig.\ref{fig:dos_doped} (a)) the location of the $e$ minority states is similar to the one in ZnO bulk. As the Co moves towards the surface, relaxation and symmetry breaking effects yield to Co $e$ states located slightly at higher energies inside the band gap, but still similar to the DOS in Co doped ZnO bulk, as it can be seen in Fig.\ref{fig:dos_doped} (b) and (c). In bare wires, the distance between the $e$ and $t_2$ states for different sites is 6.7 eV (inner), 6.5 eV (subsurface) and 6.6 eV (surface). The DOS in hydrogenated wires shows some noticeable differences. Now the $e$ minority states lie inside the valence band and are shifted to lower energies as the Co atom is positioned towards the surface. This means that hybridization of cobalt with ZnO may be tuned by hydrogenation or incorporation of other adsorbing species in the sample during the doping process. One of the reasons is that strain at surface sites may change due to adsorption, facilitating Co incorporation. We have previously shown that strain indeed can induce diffusion of vacancies towards the surface\,\cite{Deng2014,Kou2017}. Obviously this is a kinetic process and barriers for Co diffusion under this conditions need to be investigated to confirm this idea. However, the strain does not affect considerably the distance between the $e$ and $t_2$ states minority spin states, which are 7.0 eV (inner), 6.3 eV (subsurface) and 6.3 eV (surface). \section{Conclusions} We have investigated ZnO nanowires doped with Co using hybrid functionals. We show that the impurity prefers to sit at bulk positions when the surface is passivated. On the other hand, bare wires suffer from self-purification problems leading to segregation of the dopant of Co towards surface sites. This indicates that the impurity can be more easily incorporated depending on the atmosphere it is prepared and may be facilitated by external atoms and relaxation of the surface. A route to investigate the diffusion of such impurities in nanostructures upon different enviroment conditions would provide further insights on these complex systems. \section{Acknowledgements} We are thankful for the financial support from the Brazilian agencies CNPq and FAPEG. A.L.R and T.F. would like to thank also German Science Foundation (DFG) under the program FOR1616. \providecommand{\WileyBibTextsc}{} \let\textsc\WileyBibTextsc \providecommand{\othercit}{} \providecommand{\jr}[1]{#1} \providecommand{\etal}{~et~al.}
\section{Introduction} Let $x = (x_1,x_2,\ldots,x_N)$ be a sequence of symbols over some alphabet $\mathbb{X}$, where each symbol is sampled from one of $k$ sources, with distributions $\mu_1,\ldots,\mu_k$. Given the sequence $x$, consider the problem of inferring the distribution of the sources, and of the classification of the samples, in the sense of determining for each sample $x_i$ which source produced it. A well known toy instance of this problem is the ``dishonest casino'', where the sources are biased coins, see \cite{Durbin98biologicalsequence}. A classical real world application is in speech recognition, see \cite{hmmspeach}. In general, applications appear in virtually any field involving time series or sequential data. For instance, in financial times series \cite{hmmfinance}, biological sequence analysis \cite{hmmbiology1},\cite{hmmbiology},\cite{Durbin98biologicalsequence}, computer vision \cite{hmmvision}, and climate modelling \cite{changepoint_climate}. See also \cite{hmmactivity1},\cite{hmmactivity2}, \cite{changepoint_security}, \cite{changepoint_deform} for a variety of other applications. Further, the above problem setting can also be viewed as a change-point detection problem, \cite{changepointbook} (see also \cite{hmm_changepoint} for an HMM based framework), and in particular the applications in \cite{hmmfinance},\cite{hmmbiology1},\cite{changepoint_climate},\cite{changepoint_deform}, \cite{regshifts}, and \cite{hmmactivity2} are of this type. When modelling the data $x = (x_1,x_2,\ldots,x_N)$, in order to be able to distinguish between the sources, one clearly needs some conditions on how the sources change as time progresses. Indeed, if the source is chosen independently at each time $i$, it is easy to see that the sources are indistinguishable and one effectively sees a single source with distribution equal to the empirical distribution of the data. A natural assumption on the underlying sequence of sources (also referred to as sequence of states) $s =(s_1,\ldots,s_N)$ is that it forms a Markov chain, and the resulting model is a Hidden Markov Model (HMM). In particular this assumption is made in all of the above mentioned work. With the HMM model, given a sequence of data $x$ one can find a maximum likelihood HMM, and then, for instance, use the Viterbi sequence (the most likely state sequence $s$ given the data) for classification. However, while the Markov chain assumption on the state sequence $s$ is convenient, and there exists a variety of effective inference methods for the problem, the data source itself rarely satisfies the Markov condition on the sequence of the states. Consider for instance the financial time series applications, as considered in \cite{hmmfinance}, \cite{changepoint_deform}, \cite{regshifts}. The data is a time series of stock prices or commodity value indices, and the underling hidden states reflect the general conditions of the market, such as bull or bear markets. If this data is modelled by an HMM, the model will imply that every day there is a certain probability that the market will enter a ``bear'' state, and that the expected time the system will spend in this state will be inversely proportional to this probability. Moreover, the model will imply that this probability does not change from day to day, due to stationarity of the Markov chain. Such properties clearly do not hold for real data as the stock markets are notoriously non-stationary. As another example, consider the task of monitoring human physical activity during a day (see, for instance, \cite{hmmactivity2}). Suppose that different states of the system correspond to different activities, such as walking, climbing stairs, running, driving, riding a bicycle. Assume that time steps are seconds, that at each time instance $i$, $x_i$ corresponds to some set of features produced by the current activity, and that the activities can be distinguished based on the distribution of the features. In this situation, it clearly makes little sense to assign probabilities to transitions between, say, walking and climbing stairs states, since such probability will depend strongly on the environment, will change on different days and during the day, and in any case is likely to be too small to be meaningful. More generally, similar considerations apply in many problem instances where HMMs are used as a change-point detection tool. In this paper we show that surprisingly, if one wants to learn the distributions of the sources, one can largely ignore the issue of modelling the environment, or modelling the transition mechanism between the states, under the assumption that the states do not change too often. Specifically, we define an Interval Model $I$ of the data $x = (x_1,x_2,\ldots,x_N, \ldots)$ to be a finite or infinite sequence of consecutive intervals in $\mathbb{N}$, $I_1,I_2,\ldots \subset \mathbb{N}$ with a mapping $\tau:\mathbb{N} \rightarrow \Set{1, \ldots,k}$ such that for any $i\in I_l$, $x_i$ has distribution $\mu_{\tau(l)}$ and all $x_i$ are independent. As mentioned above, in order to be able to differentiate between the sources, one must make \textit{some} assumption on how the source to be sampled is chosen at each time instance. The assumption that we make in this paper is that for every $l$, $\Abs{I_l} \geq m$ for some $m > 0$. This means that once the system enters a certain state, it stays at least $m$ time units in that state. To the best of our knowledge, this assumption did not appear in the literature before. With this assumption, our main result, Theorem \ref{thm:full_statement}, states that if $x$ is a sample from the Interval Model, then a maximum likelihood HMM estimator for the sequence $x$ will produce an HMM with source distributions that approximate the distributions $\mu_1,\ldots,\mu_k$. In other words, we show that an HMM estimator learns the correct source distributions despite the fact that the sequence was not generated by an HMM. We refer to this phenomenon as the \textit{resilience} of the HMM. Our result can be viewed as an extension of the classical HMM consistency results, \cite{baum1966}, \cite{petrie} as well as an extension of the more recent consistency under misspecification results, \cite{mevel}. On the application side, our results provide a better theoretical understanding of the methods that are already widely used. The Interval Model with the minimal $m$ duration assumption is a fairly general model. Indeed, except for the minimal duration $m$ for each state, we \textit{make no other assumptions about the transitions} between the intervals. The transitions between different states \textit{need not follow any deterministic or probabilistic pattern}, and in particular the process $x$ \textit{does not need to be stationary} and moreover, it \textit{does not need to be ergodic}. The intervals themselves also can be of arbitrary lengths, provided it is larger than $m$. Therefore the Interval Model setting can be best described as partly stochastic and partly adversarial (see \cite{adversarial}). The values $x_i$ are obtained by sampling from the sources, but the choice of changes of the sources can be arbitrary and hence adversarial. In addition, we do not require $m$ to be known apriori but the precision of approximation of Theorem \ref{thm:full_statement} will grow with $m$, with explicit bounds. While in this paper we are concerned with estimating the source distributions, we note that once the sources are known, the problem of classifying the data according to the source is relatively easy. For instance one could use a sliding window over the data, $w_i = (x_i,\ldots,x_{i+l})$, and for each $i$ decide which source is the most likely to produce $w_i$. Note that if the sources are known, one can easily compute the length $l$ of the window that is required to distinguish between the sources with high probability. Clearly, the more distinct the sources are, the smaller $l$ is required. In cases where $l$ is small compared to $m$, it is straightforward obtain guarantees on the accuracy of this method. We note that in contrast, the standard HMM decoding approach, the Viterbi sequence, does not in general have any guarantees and for the non-stationary Interval Model type data can be significantly inaccurate. We now proceed to discuss our results in more detail. Consider a sample $x = (x_1,x_2, \ldots,x_N)$ generated from an Interval Model $I$ as discussed above. We describe the behaviour of a maximum likelihood HMM estimator on such a sequence in two stages. First, we show that with high probability, there exists an HMM $H_0$ which assigns a high likelihood to the sequence $x$. Specifically, we show that there is an HMM that assigns log-likelihood \begin{equation} \label{eq:intro_right_likelihood} L(H_0,x) = \frac{1}{N} \log \Probu{H_0}{x} \geq -\frac{\log \Brack{ 2k \cdot m}}{m} - \sum_{j\leq k} w_j H(\mu_j) \end{equation} to $x$, where $w_j$ is the proportion of indices $i\leq N$ sampled from $\mu_j$ and $H(\mu_j)$ are the entropies of the sources. As detailed in the proofs, the term $-\sum_{i\leq k} w_i H(\mu_i)$ is the normalized log-likelihood that the model $I$ itself assigns to a typical sample $x$, and it represents the true likelihood of the data. Therefore $L(H_0,x)$ is a sum of a true likelihood, and an error term which decreases with increasing $m$. The log-likelihood (\ref{eq:intro_right_likelihood}) is achieved on an HMM that has emission distributions $\mu_i$ identical to those of $I$, and the probability of a state change in this HMM is of order $\frac{1}{m}$. In view of this, the main difficulty resolved in this paper, and the main technical contribution, consists in showing that if a fixed HMM $H$ has emission distributions that significantly differ from $\mu_1, \ldots, \mu_k$, then the log-likelihood it assigns to $x$ is lower then (\ref{eq:intro_right_likelihood}). We remark that due to the the hidden states, the likelihood function of an HMM is a complicated quantity which is usually controlled implicitly, see the discussion in \cite{douc2011}. On the other hand, in this paper we show that by appropriate use of type theories (for both the model $I$ and for a Markov chain) we can give explicit bounds on the likelihood for finite $N$. While type theory is a well known information theoretic tool, the particular combination of arguments that allows us to control the likelihood of an HMM is new. To use type theory we will introduce the second moments of the model $I$ and the HMM. Roughly speaking, for each $a,b \in \mathbb{X}$ and a random vector $X$, the second moment $M_X(a,b)$ is the probability that $x_i = a$ and $x_{i+1} = b$ averaged over all $i$. The second moment captures a basic temporal structure of the process. The main technical result of the paper, Theorem \ref{thm:main_thm}, shows that if the second moment of an HMM $H$, denoted $M_H$, differs from the moment for the model $I$, $M_I$, then for most samples $x$ from $I$, the likelihood $L(H,x)$ will be low. Combined with additional arguments, this will imply that the maximum likelihood HMM will have the correct second moment. It is now natural to ask how much information does the second moment $H$ contain about the emission distributions $\nu_i$ of $H$? In particular, is it true that if $M_I = M_H$ then the model $I$ and $H$ have the same set of emission distributions? In general, the answer to this question is negative. Elegant counterexamples can be found in \cite{chung1996} (see also \cite{AHK12}). However, it is also well known and easy to see that the column space of the second moment matrices is spanned by the emission distributions. We will see that a similar statement holds for our definitions of moments, which somewhat differ from the classical ones. Therefore, if $M_I$ and $M_H$ are known, we can reconstruct the $k$-dimensional subspaces $span\Set{\mu_j} \subset \mathbb{R} ^{| \mathbb{X}|}$ and $span\Set{\nu_j} \subset \mathbb{R} ^{|\mathbb{X}|}$ spanned by emissions of $I$ and $H$ respectively. Note that in order to specify a measure on $|\mathbb{X}|$ points one needs $|\mathbb{X}|-1$ parameters, but if one knows that the measure belongs to a given $k$-dimensional subspace, then only $k-1$ parameters are required. Since $k$ is typically much smaller then $|\mathbb{X}|$, this means that the second moment contains most of the information about the emissions (consider the case $k=2$ and $\Abs{\mathbb{X}} = 100$ for the sake of illustration). Finally, we note that our approach can extended to moments higher then two. Indeed, the main combinatorial tool used in this paper is type theory for second moments of Markov chains as developed in \cite{ccc87}, where higher moments analog is also presented. However, all the ideas necessary for such an extension are present already in the second moment case and in this paper we restrict our attention only to the second moments. The rest of this paper is organized as follows: In Section \ref{sec:literaure} we review the literature. Section \ref{sec:defins} contains the definitions and the statements of the results, as well as a sketch of our main technical argument. We conclude by a discussion in Section \ref{sec:discuss}. For clarity of presentation, the full proofs are deferred to Section \ref{sec:sup_mat_proofs}. \section{Related Work} \label{sec:literaure} As noted in the Introduction, real data often does not behave as a sequence generated by an HMM. Some aspects of this problem may be addressed is via the notion of Hidden \textit{semi-}Markov Models (HSMMs, see the survey \cite{hsmm_surv}). HSMM is an extension of an HMM which was developed in recent years to overcome a particular issue of state duration. In a Markov process, and hence in an HMM, the time the system stays in a given state is always a geometric, memoryless random variable, with an expectation that may depend on the state. In a semi-Markov model, the duration of a stay in a given state is allowed to be an arbitrary random variable depending on the current state. While HSMMs were shown to be more suitable than the HMMs in a large variety of cases, this comes at a cost. Since one can not realistically model arbitrary duration times, one can either resort to parametric families of distributions that might be better suited to a particular application than the geometric variable, or one may consider arbitrarily distributed but bounded duration times. The first option requires expert knowledge of the application domain, while the second introduces a huge space of parameters and is still limited in what it can model (due to boundedness). See \cite{hsmm_surv} for a detailed account of the advantages and the issues with HSMMs. The approach of this paper provides a different perspective on the issue of duration times. Indeed, while an HSMM provides a more general model of transitions between the states of the system than HMM, we show that if we want to estimate the source distributions, then under Interval Model assumptions we \textit{do not need} to model the transitions between the sates at all, and the simple HMM estimator suffices. This has run time and sample complexity advantages, but more importantly -- we are guaranteed an approximation to the true sources without the need to guess and to model the transitions between the states. It is worth emphasizing that in some situations modelling the transitions is important. For instance, in speech recognition certain phonemes are much more likely to occur after certain other phonemes, and this transition information is important for the applications. However, in other situations, such as the financial time series and human activity series described in the Introduction, it is unlikely that there exists any stationary probabilistic model of the transitions. Hence it is important to know that an estimation procedure works for any, possibly non-stationary or non-probabilistic transition mechanism, as expressed by the Interval Model and guaranteed by our results. A problem setup somewhat similar to the Interval Model was recently investigated in \cite{khaleghi}, in the context of change point detection methods. Similarly to the Interval Model, in the model of \cite{khaleghi} the data is composed from intervals, and each interval is generated by one of $k$ sources. Moreover, the sources there can be arbitrary stationary processes, which is significantly more general than the independent processes which we consider in this paper. However, the results of \cite{khaleghi} hold only in the asymptotic regime where the number of intervals is fixed and the number of samples $N$ goes to infinity. This means that the length of each individual interval is required to go to infinity with $N$. This makes the problem simpler, since in this regime one can essentially learn the source from a single interval. In contrast, in the Interval Model we require the intervals to be of minimal length $m$, but we do not require the lengths to go to infinity with $N$, and our approximation results hold for any fixed $m \geq 2$. This regime requires the estimator to combine the information from \textit{all} the intervals in the sample to estimate the sources, and our approach uses methods completely different from those of \cite{khaleghi}. We now turn to a discussion of the literature related to the more technical aspects of this paper. The classical consistency result for HMMs, \cite{baum1966}, \cite{petrie}, states that if $x = (x_i)_{i=1}^{\infty}$ is an infinite sequence generated by an HMM $H$, and $H_n$ is a sequence of maximum likelihood estimators for the growing sequences $(x_i)_{i=1}^{n}$, then $H_n$ converges to $H$ with probability 1 (over $x$). A key technical component of these results is an extension of the Shannon-McMillan-Breiman Theorem. This extension deals with the asymptotic behaviour of the likelihood assigned by a given HMM to a sequence generated by a different HMM. The original results were formulated and proved for finite state HMMs with a finite value alphabet $\mathbb{X}$, and with additional light restrictions on $H$. More recently, several results have appeared which extend the consistency theorem to HMMs with more general state and value spaces, and investigate the conditions under which such extensions are possible. See for instance \cite{legland_mevel}, \cite{douc2011}. Our result can also be viewed as an extension of the consistency theorem, but in a different direction. We consider only finite state HMMs and finite value spaces $\mathbb{X}$, but we do not assume that $x$ is generated by an HMM. The study of such questions, known as \textit{misspecification} results, started only recently. The results in \cite{mevel} characterize the behaviour of a maximum likelihood estimator for HMMs when $x$ is generated by general ergodic processes satisfying some mixing-type conditions. In particular it is shown that the sequence $H_n$ of maximum likelihood estimators converges to an HMM $H$ such that the limiting Kullback-Leibler divergence between $H$ and the process generating $x$ is minimal (see \cite{mevel}, Section IV). However, neither the result nor its proof supply any information about what the minimizing $H$ actually is, and are therefore of a limited practical value. We note, however, that such a limitation is in fact unavoidable, due to the generality of the setup. If all we know about $x$ is that it is generated by a general ergodic process, it is unlikely that anything concrete can be said about $H$. On the other hand, in this paper we assume a specific structure of the process $x$, namely that it is generated by an interval model, and we show that in this case, the source distributions of $H$ approximate those of $I$. Therefore, our result can also be viewed as a statement about the properties of the minimizer $H$ for the case when $x$ is generated by $I$. In all of the above mentioned work on consistency and misspecification, the assumption of ergodicity of the process generating $x$ plays a crucial role and the underlying proof methods rely heavily on this assumption. It is therefore interesting to note that in this work we do not require the Interval Model to be ergodic. The details on the relation between the Interval Model and ergodicity are given in Section \ref{sec:IM_ergodicity}. Here we mention that in contrast to the existing methods, our approach \textit{provides inequalities that are valid for finite $N$} rather than asymptotic results, which allows us to avoid the global ergodicity assumption and to work in the more general adversarial setting. Moments of the data play an important role in our approach. In recent years, moments of the data have been used for parameter estimation in various mixture models. For instance, in \cite{AroraBSVD}, \cite{AroraGHMMSWZ13}, it was shown that for several types of mixture models, the underlying distributions $\mu_j$ can be inferred from the second moment of the data under an ``anchor words'' assumption on $\mu_j$s. In \cite{AHK12} it was shown that for a sufficiently large number of samples and under lighter assumptions on $\mu_j$, the third moment of the data can be used to reconstruct $\mu_j$ for a variety of mixtures, including the HMM. Note that the use of moments in this paper is different. Our estimator is the classical maximum likelihood estimator rather than an estimator based on moments. We use moments only as a tool to show that properties of the estimator approximate the properties of the true model. Finally, we make essential use of type theory for Markov chains. The results we use were obtained in \cite{ccc87}, where second order and higher order type theory is developed. \section{Definitions and Results} \label{sec:defins} In Sections \ref{sec:def_res_prelim}, \ref{sec:models} and \ref{sec:moments} we introduce the notions necessary to state the results. Section \ref{sec:results} contains the statements and outlines of the proofs. \subsection{Preliminaries} \label{sec:def_res_prelim} For a finite set $S$, denote by $\Delta_{S}$ the set of all probability measures on $S$. For any two probability distributions $\mu,\nu \in \Delta_{\mathbb{X}}$, define the entropy and the Kullback-Leibler divergence by \begin{equation} H(\mu) = -\sum_{a \in \mathbb{X}} \mu(a) \log \mu(a) \\ \end{equation} and \begin{equation} D(\nu|\mu) = \sum_{a \in \mathbb{X}} \nu(a) \log \frac{\nu(a)}{\mu(a)}. \end{equation} The total variation distance between $\mu,\nu \in \Delta_{\mathbb{X}}$ is given by \begin{equation} \norm{\mu - \nu}_{TV} = \sum_{s \in S} \Abs{\mu(s)- \nu(s)}. \end{equation} \subsection{Models} \label{sec:models} An Interval Model is a tuple $I = I\left(\Set{I_l}_{l \in \mathbb{N}},\Set{\mu_i}_{i=1}^k,\tau, m \right)$, where $I_l$ is a sequence of consecutive intervals, $I_l = [b_l,e_l] \subset \mathbb{N}$, such that $b_1 = 1$, and $b_{l+1} = e_l + 1$ for all $l$, $\mu_i$ are probability measures on a fixed finite ground set $\mathbb{X}$, $\tau: \mathbb{N} \rightarrow \Set{1,\ldots,k}$ is an assignment of distributions to intervals, and $m>0$ is such that $|I_l|\geq m$ for all $l \in \mathbb{N}$. We say that a sequence of random variables with values in $\mathbb{X}$, $X = X_1,X_2, \ldots$ , is distributed according to interval model $I$, denoted $X \sim I$, if $X_i$ are independent and for every $l \in \mathbb{N}$ and $i \in I_l$, $X_i$ has distribution $\mu_{\tau(l)}$. For any finite $N$, the let the weights $\Set{w_j}$ be the proportions of each of the states $\mu_j$ in the data. Specifically, define \begin{equation} \label{eq:k_j_def} K_j(N) = \Set{i \leq N \spaceo | \spaceo i \in I_l \mbox{\hspace{2 mm} and \hspace{2 mm}} \tau(l) = j } \end{equation} to be the set of indices $i \leq N$ such that $X_i \sim \mu_j$ and set \begin{equation} \label{eq:weights_def} w_j = w_j(N) = \frac{1}{N}\Abs{K_j(N)}. \end{equation} Note that $w_i$ depends on $N$. For brevity of the notation this dependence is always assumed but not explicitly written. Throughout the paper we assume for convenience that $m>2$. For each time $i \in \mathbb{N}$ we define $\kappa(i)$ to be the index of the distribution of $X_i$, meaning $\kappa(i) = \tau(l)$ where $l$ is such that $i \in I_l$. A Hidden Markov Model, HMM, is a tuple $H = H\left(S,\{\nu_i\}_{1}^k, \{p_{ij}\}_{i,j=1}^k\right)$ where $S = \Set{1,\ldots,k}$ is a state space, $\nu_i$ are corresponding emission probabilities, and $p_{ij} = \Prob{S_{t+1} = j | S_t = i}$ for the Markov chain $S_1,S_2,S_3, \ldots$ of the states. For a sequence $x = (x_1,\ldots,x_{N+1})$, the log-likelihood of $x$ under the HMM $H$ with initial distribution $\pi$ is defined by \begin{align} \label{eq:HMM_x_likelihood} &L(x,H,\pi) = \\ & =\frac{1}{N+1} \log \Brack{\sum_{s = s_1,\ldots,s_{N+1}} \pi(s_1) \cdot \prod_{i=1}^N p_{s_{i},s_{i+1}} \prod_{i=1}^{N+1} \nu_{s_i}(x_i)}, \nonumber \end{align} where the sum is over all possible paths of length $N+1$ of the underlying Markov chain. \subsection{Moments} \label{sec:moments} For a sequence $x = (x_1,\ldots,x_{N+1})$, the second moment is a probability distribution $M(x) \in \Delta_{\groundX \times \groundX}$, defined by \begin{equation} \label{eq:data_moment_def} M(x)(a,b) = \frac{1}{N}\Abs{\Set{i\leq N \spaceo | \spaceo x_i = a \wedge x_{i+1} =b}} \end{equation} for all $a,b \in \mathbb{X}$. The second moment describes the frequencies of observing each pair of symbols $a,b$ consecutively. For a random vector $X=(X_1,\ldots,X_{N+1})$, the second moment is the expectation of moments over all realizations of $X$, \begin{equation} M_X = \Exp_{x \sim X} M(x). \end{equation} For instance, if $X_i$ are independent and have the same distribution $\mu$, then $M_X(a,b) = \mu(a) \cdot \mu(b)$. To obtain an expression for the second moments for interval model $I$, define for fixed $N$ \begin{equation} \label{eq:I_transition_counts} c_{rl} = \Abs{\Set{ i < N+1 \spaceo | \spaceo \kappa(i) = r \wedge \kappa(i+1) = l }}. \end{equation} $c_{rl}$ counts the transitions from state $r$ to state $l$ in the model, up to time $N+1$. Then, if $X =(X_1, \ldots, X_{N+1}) \sim I$, we have \begin{equation} \label{eq:I_moment_def} M_X(a,b) = \frac{1}{N} \sum_{r,l \leq k} c_{rl} \mu_r(a) \cdot \mu_l(b). \end{equation} Next, to state our technical result, Theorem \ref{thm:main_thm}, we will require a definition of a \textit{generalized second moment} of an HMM. To motivate this definition, let us first write (\ref{eq:I_moment_def}) in a slightly different form. Denote for every $r,l \leq k$, $u_{rl} = \frac{c_{rl}}{N}$ and set $u_{r} = \sum_{l\leq k} u_{rl}$. Then one can write (\ref{eq:I_moment_def}) as \begin{align} \label{eq:moment_expanded} &M_{X}(a,b) = \\ &\left(\begin{array}{lll} u_1 \mu_1(a), \ldots, u_k \mu_k(a) \end{array}\right) \left(\begin{array}{lll} \hdots & \hdots & \hdots \\ \hdots & u_{ij} & \hdots \\ \hdots & \hdots & \hdots \end{array}\right) \left(\begin{array}{l} \mu_1(b) \\ \vdots \\ \mu_k(b) \end{array}\right). \nonumber \end{align} Equivalently, we have \begin{equation} \label{eq:generalized_moment_motive} M_{X}(a,b) = \phi_a \cdot U \cdot \chi_b, \end{equation} where $\phi_a = (u_1 \mu_1(a), \ldots, u_k \mu_k(a))^{T} \in \mathbb{R} ^k$, $\chi_b = (\mu_1(b), \ldots, \mu_k(b)) \in \mathbb{R} ^k $ and $U$ is the $k \times k$ matrix $U = (u_{ij})$. Now, given an HMM $H = H(S,\{\nu_i\}_{1}^k, \{p_{ij}\}_{i,j=1}^k)$, and a set of \textit{arbitrary} vectors $\phi = \{\phi_a\}_{a \in \mathbb{X}} \in \mathbb{R} ^k$, we define the \textit{generalized second moment} of $H$ as a matrix $M_{\phi,H} \in \mathbb{R} ^{\Abs{\mathbb{X}} \times \Abs{\mathbb{X}}}$ given by \begin{equation} \label{eq:generalized_moment_def} M_{\phi,H}(a,b) = \phi_a \cdot p \cdot \chi_b, \end{equation} where analogously to (\ref{eq:generalized_moment_motive}) we have $\chi_b = (\mu_1(b), \ldots, \mu_k(b)) \in \mathbb{R} ^k $, but $\phi_a$ are arbitrary. The reasons for requiring this definition will become apparent during the proof of Theorem \ref{thm:main_thm}. We call a set of vectors $\phi = \{\phi_a\}_{a \in \mathbb{X}}$ as above \textit{proper} if all the entries of all $\phi_a$ are non-negative, and \begin{equation} \sum_{a \in \mathbb{X}} \sum_{j\leq k} \phi_a(j) = 1. \end{equation} If $\phi$ is a proper system, define a probability measure $d_{\phi}$ on $\mathbb{X}$ by \begin{equation} d_{\phi}(a) = \sum_{j\leq k} \phi_a(j). \end{equation} We conclude this section by stating the connection between column spaces of $M_X$ , $M_{\phi,H}$, and spaces spanned by $\{\mu_j\}_{j\leq k}$ and $\{\nu_j\}_{j\leq k}$ respectively. Note that for any matrix $M$, the column space of $M$ coincides with the image of $M$, $Im(M)$ as an operator $ \mathbb{R} ^{\mathbb{X}} \rightarrow \mathbb{R} ^{\mathbb{X}}$. \begin{lem} \label{lem:measures_span_moments} \begin{enumerate} \item If $X \sim I$ for an interval model $I$. Then $Im(M_X) \subset span\{\mu_j\}_{j\leq k}$. \item For an HMM $H$ and an arbitrary set $\{\phi_a\}_{a \in \mathbb{X}}$, $Im(M_{\phi,H}) \subset span\{\nu_j\}_{j\leq k}$. \end{enumerate} \end{lem} The proof is given in Section \ref{sec:proof_lem_meas_span_moments}. Finally, for any $M \in \Delta_{\groundX \times \groundX}$, define the left and right marginalizations, $\bar{M},\dbar{M} \in \Delta_{\groundX}$ by \begin{equation} \bar{M}(a) = \sum_{b \in \mathbb{X}} M(a,b) , \hspace{2 mm} \dbar{M}(a) = \sum_{b \in \mathbb{X}} M(b,a). \end{equation} \subsection{Results} \label{sec:results} As discussed in the Introduction, the first part of argument consists in showing that there is an HMM $H$ that assigns high likelihood to most of the samples from $I$. This is formalized in the following Lemma. \newcommand{N_{min}}{N_{min}} Given an Interval Model $I = I\left(\Set{I_l}_{l \in \mathbb{N}},\Set{\mu_i}_{i=1}^k,\tau, m \right)$, for any $N>0$ we define \begin{equation} N_{min} = \min_{j \leq k} w_j \cdot N, \end{equation} with $w_j$ as defined in (\ref{eq:weights_def}). \begin{lem} \label{lem:right_path_full} For any set of probability distributions $\Set{\mu_j}_{j=1}^k$, there exists a function $\varepsilon : \mathbb{N} \rightarrow \mathbb{R} $ such that $\lim_{N \rightarrow \infty} \varepsilon(N) \rightarrow 0$ and such that the following holds: \newline For any Interval Model $I = I\left(\Set{I_l}_{l \in \mathbb{N}},\Set{\mu_i}_{i=1}^k,\tau, m \right)$, there is an HMM $H$ and an initial distribution $\pi$, such that for every $N >0$, if $X=(X_1,\ldots,X_N) \sim I$ then with probability at least $1 - \varepsilon(N_{min})$, \begin{equation} \label{eq:lem_high_likelihood} L(X,H,\pi) \geq - \frac{\log 2km}{m} - \sum_{j} w_j H(\mu_j) - \varepsilon(N_{min}). \end{equation} \end{lem} The proof is given in Section \ref{sec:proof_lem_right_path_full}. We take a moment to discuss the particular dependence on $N$ exhibited in the above Lemma. The fact that the error term $\varepsilon(N_{min})$ in (\ref{eq:lem_high_likelihood}) depends on $N_{min}$ rather than $N$ means that in order for $\varepsilon(N_{min})$ to be small, the interval $[1,\ldots,N]$ needs to contain a sufficient number of samples from every one of the distributions $\mu_1,\ldots, \mu_k$. As will be evident from the proof, this assumption is necessary to obtain (\ref{eq:lem_high_likelihood}). On the other hand, the function $\varepsilon$ is completely determined by the distributions $\Set{\mu_j}_{j=1}^k$. In other words, in order to control the error in (\ref{eq:lem_high_likelihood}) for a model $I$, we only need to know its distributions, and in particular $\varepsilon$ does not depend on the particular interval structure of the model. We are now ready to state the main result of this paper. Let $X=(X_1,\ldots,X_N)$ be generated by an Interval Model $I$. For an HMM $H$ define \begin{equation} \label{eq:main_condition_on_phi_a} D = D(H) = \inf_{\phi \in P} \norm{M_X - M_{\phi,H}}_{TV} - \frac{3}{m}, \end{equation} where \begin{equation} \label{eq:P_defin} P = \Set{\phi \spaceo | \spaceo \mbox{$\phi$ is proper and } \norm{d_{\phi} - \bar{M}_X}_{TV} \leq \frac{3}{m}}. \end{equation} In other words, $D$ measures how well $M_X$ can be approximated by a generalized moment $M_{\phi,H}$ where $\phi$ can be any proper system with $d_{\phi}$ close to the marginal $\bar{M}_X$. To gain some intuition into this quantity, consider the case where $D$ is small, and $M_X$ has the maximal rank, $k$. Then, standard matrix perturbation theory results imply that $Im(M_X)$ is close to $Im(M_{\phi,H})$ and hence $span\{\mu_j\}_{j\leq k}$ is close to $span\{\nu_j\}_{j\leq k}$ by Lemma \ref{lem:measures_span_moments}. Note also that the set $P$ in (\ref{eq:P_defin}) is non-empty. Indeed, the proper system $\phi$ defined in (\ref{eq:generalized_moment_motive}) satisfies $d_{\phi} = \bar{M}_X$. We assume throughout the paper that $D \geq 0$, which amounts to considering only the cases where $M_{\phi,H}$ at least somewhat differs from $M_X$. \begin{thm} \label{thm:main_thm} There is a constant $c>0$ such that for any set of probability distributions $\Set{\mu_j}_{j=1}^k$, there exists functions $\varepsilon$ and $r$ such that $\lim_{N \rightarrow \infty} \varepsilon(N) \rightarrow 0$, $\lim_{N \rightarrow \infty} r(N) \rightarrow c$ and the following holds: \newline For any Interval Model $I = I(\Set{I_l}_{l \in \mathbb{N}},\Set{\mu_i}_{i=1}^k,\tau, m )$, and HMM $H = H(S,\{\nu_i\}_{1}^k, \{p_{ij}\}_{i,j=1}^k)$, if $X=(X_1,\ldots,X_N)$ is a sample from $I$ then with probability at least $1 - \varepsilon_{N_{min}}$ over $X$, for every initial distribution $\pi$, \begin{equation} \label{eq:likelihood_in_main_thm} L(x,H,\pi) \leq -D^2 - \sum_{j} w_j H(\mu_j) + \varepsilon(N_{min}), \end{equation} where $D$ is as defined in (\ref{eq:main_condition_on_phi_a}). \end{thm} The proof is given in Section \ref{sec:proofs}. Here we briefly describe the main idea of the proof. Fix an Interval Model $I$, an HMM $H$, and $N>0$. Define a neighbourhood $U \subset \Delta_{\groundX \times \groundX}$ of $M_X$ by \begin{equation} U = \Set{ M \in \Delta_{\groundX \times \groundX} \spaceo | \spaceo \norm{M - M_X}_{TV} \leq \frac{3}{m}}, \end{equation} and denote by $O$ the set of all sequences $x = (x_1, \ldots, x_N)$ such that $M(x) \in U$. Roughly speaking, the proof of (\ref{eq:likelihood_in_main_thm}) can be seen as a combination of two different uses of type theory. First, the type theory of Markov chains can be used to show that if an HMM $H$ satisfies (\ref{eq:main_condition_on_phi_a}), then the likelihood assigned to the set $O$ by $H$ is at most $2^{-N D^2}$, \begin{equation} \Probu{H}{O} = \sum_{x \in O} 2^{N L(x,H,\pi)} \leq 2^{-N D^2}. \end{equation} On the other hand, type theory for independent sequences together with additional concentration results can be used to show that $O$ contains a subset $X^l \subset O$ of size at least $2^{N \cdot\Brack{\sum_{j} w_j H(\mu_j)}}$ such that all $x \in X^l$ are equiprobable with respect to $X$ and $X^l$ is of nearly full measure, $\Probu{X}{X^l} \geq 1 - \varepsilon$. Combing these two statements, one obtains \begin{equation} \label{eq:proof_sketch1} \frac{1}{|X^l|} \sum_{x \in X^l} \Probu{H}{x} \leq \frac{1}{|X^l|} \Probu{H}{O} \leq 2^{-N \Brack{ D^2 + \sum_{j} w_j H(\mu_j)} }. \end{equation} Note that (\ref{eq:proof_sketch1}) is in fact an averaged version of (\ref{eq:likelihood_in_main_thm}). The corresponding high probability formulation can be easily obtained via Markov's inequality. \iffalse Finally, we state our result about the behaviour of the maximum likelihood HMM estimator on samples of model $I$. \begin{thm} \label{thm:full_statement} Let $I$ be an infinite ergodic interval model. Let $H_N$ be a maximal likelihood HMM estimator for the sample $(X_1,\ldots,X_N)$. Then with probability $1$, the sequence $H_N$ converges to an HMM $H$, satisfying \begin{equation} \label{eq:lim_estimator_d} D(H) \leq \sqrt{\frac{\log 3km}{m}}. \end{equation} \end{thm} The existence of the limit estimator $H$ follows from previous work, \cite{mevel}, \cite{douc2012}. The moment bound (\ref{eq:lim_estimator_d}) for this estimator is obtained by a combination of Lemma \ref{lem:right_path_full} and Theorem \ref{thm:main_thm}. The proof is given in Section \ref{sec:proof_of_full_statement}. \fi Finally, we state our main result about the behaviour of the maximum likelihood HMM estimator on samples of model $I$. For $\delta >0$, let $\mathcal{H}_{\delta}$ be the set of HMMs for which transition and emission probabilities are bounded below by $\delta$, \begin{equation} \mathcal{H}_{\delta} = \Set{ H(S,\{\nu_i\}_{1}^k, \{p_{ij}\}_{i,j=1}^k)} \end{equation} where $\nu_i$ and $p_{ij}$ satisfy \begin{equation} \nu_i(x) \geq \delta \hspace{2 mm} \forall i\leq k, x \in \mathbb{X} \mbox{ and } p_{ij} \geq \delta \hspace{2 mm} \forall i,j\leq k. \end{equation} In what follows we assume that the HMM guaranteed by Lemma \ref{lem:right_path_full} is in $\mathcal{H}_{\delta}$. This is equivalent to the following: \begin{equation} \label{eq:delta_bound} \delta \leq \frac{1}{m} \mbox{ and } \delta \leq \mu_j(x) \hspace{2 mm} \forall j\leq k, x \in \mathbb{X}. \end{equation} \begin{thm} \label{thm:full_statement} Fix the distributions $\Set{\mu_j}_{j=1}^k$ and $m>0$. For any $\delta$ satisfying (\ref{eq:delta_bound}) there is a constant $c>0$, a function $r$ such that $\lim_{N \rightarrow \infty} r(N) \rightarrow c$ and a sequence $\varepsilon_N$ with $\lim_{N \rightarrow \infty} \varepsilon_N \rightarrow 0$ such that the following holds: \newline Let $X=(X_1,\ldots,X_N)$ be a sample from Interval Model $I = I(\Set{I_l}_{l \in \mathbb{N}},\Set{\mu_i}_{i=1}^k,\tau, m )$. Let $H$ be a maximum likelihood estimator in $\mathcal{H}_{\delta}$ for the sequence $X$. Then with probability at least $1 - \varepsilon_{N_{min}}$ over $X$, \begin{equation} \label{eq:full_statement_d_h} D(H) \leq \sqrt{\frac{\log 3km}{m}}. \end{equation} \end{thm} \begin{cor} \label{cor:limsup} Let $X=(X_N)_{N=1}^{\infty}$ be an infinite sample from an infinite Interval Model $I = I(\Set{I_l}_{l \in \mathbb{N}},\Set{\mu_i}_{i=1}^k,\tau, m )$. Let $H_N$ be a sequence of maximum likelihood estimators for the sequences $(X_1,\ldots,X_N)$. Then with probability $1$, \begin{equation} \label{eq:full_statement_d_h_infty} \limsup_{N \rightarrow \infty} D(H_N) \leq \sqrt{\frac{\log 3km}{m}}. \end{equation} \end{cor} We now make a few remarks about the proof. The proof of Theorem \ref{thm:full_statement} is obtained by an application of Theorem \ref{thm:main_thm} to an appropriate (multiplicative) $\varepsilon$-net inside the set $\mathcal{H}_{\delta}$ and by the union bound. For this approach to work the log-likelihood needs to be a Lipschitz function of the HMM $H$. This is guaranteed by the assumption $H \in \mathcal{H}_{\delta}$, with the Lipschitz constant depending on $\delta$. This assumption is common in the literature and is used for similar purposes, although it is usually used somewhat differently. Moreover, this assumption can be easily removed if we let $N \rightarrow \infty$ in the Theorem statement, as formalized in Corollary \ref{cor:limsup}. Note that in contrast to existing consistency results, since we do not assume ergodicity of $I$, the sequence $H_N$ in the statement of Corollary \ref{cor:limsup} does not necessarily converge to a limit. Nevertheless, inequality (\ref{eq:full_statement_d_h_infty}) holds. Another remark concerns the magnitude of $N$ required for (\ref{eq:full_statement_d_h}) to hold with high probability. While the size of $\varepsilon$-net in $\mathcal{H}_{\delta}$ is exponential in $k$ and $\Abs{\mathbb{X}}$, the probability of error in Theorem \ref{thm:main_thm} is essentially exponentially small in $N$. Therefore for the union bound to hold, it suffices for $N$ to be polynomial in $k$ and $\Abs{\mathbb{X}}$. The complete proof is given in Section \ref{sec:proof_of_full_statement}. \section{Discussion} \label{sec:discuss} In this work we considered time series generated by a finite state system, with an assumption that the states are somewhat \textit{persistent}, in the sense that the system stays at a state at least $m$ time units before the state changes. The advantage of such an assumption is that for a variety of systems it may be fairly realistic, and that it is minimal in the sense that we do not attempt to model the transition mechanism of the system between different states. Indeed, we show that for \textit{any} such mechanism, the distributions of the sources can still be approximated. An apparent paradox of this result is that we show that the approximation can be done using an HMM estimator, and HMM estimator \textit{does} assume a particular transition mechanism between the states. The resolution of this paradox, and the reason that the approach works, is that when the states have the persistence property, the transition mechanism provably has little influence on certain statistics (the likelihood and the moments, for this paper) of the system. From a purely technical perspective, one possible extension of this work would be to relax the assumption that the system should spend at least $m$ time units at each state. It should be enough for a system to satisfy this assumption most of the time rather than strictly every time it enters a new state. We believe such an extension can be proved using the approach developed in this paper. Another possible extension is to replace the discrete emission distributions used in this paper by some continuous class, such as the Gaussians. From a more general perspective, we believe that the idea of replacing fully generative models by more adversarial settings may be extended to other estimation problems. Consider for instance topic modelling, where topics are distributions and documents are samples from mixtures of topics. By far the most popular generative model used to learn topics is the Latent Dirchlet Allocation, (LDA, \cite{LDAorig}). LDA proposes a particular mechanism by which the documents are generated from the topics. LDA often performs extremely well in practice, despite the fact that real documents are clearly not generated by the LDA mechanism. This indicates that, analogously to our HMM results, there might exist some persistence properties of the topics which would guarantee that the topics are recovered by maximum likelihood LDA despite the fact that the documents were not generated by LDA. Identification of such persistence properties could contribute, for instance, to model independent (or less model-dependent) definitions of topics. \section{Proofs} \label{sec:sup_mat_proofs} \subsection{Interval Model and Ergodicity} \label{sec:IM_ergodicity} In this section we discuss the relation between the Interval Model and ergodicity. For the purposes of this discussion, a process $X = (X_1, X_2,\ldots, X_N, \ldots)$ is ergodic if there exists a distribution $\mu$ on $\mathbb{X}$ such that for every $f : \mathbb{X} \rightarrow \mathbb{R} $, \begin{equation} \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{i=1}^N f(X_i) = \int f(x) d\mu(x) \end{equation} with probability $1$ over $X$. Ergodicity means that space integrals with respect to $\mu$ can be recovered by a time average over a single trajectory of the process. If the process $X$ is generated by an interval model, then for every $N$ we can write \begin{equation} \label{eq:I_ergodicity} \frac{1}{N} \sum_{i=1}^N f(X_i) = \sum_{j \leq k} w_j \frac{1}{\Abs{K_j}} \sum_{i \in K_j}^N f(X_i), \end{equation} with the sets $K_j$ as defined in (\ref{eq:k_j_def}). Since $X_i$ with $i \in K_j$ are independent, if $\Abs{K_j} \rightarrow \infty$, then we have \begin{equation} \label{eq:I_ergodicity_single_comp} \frac{1}{\Abs{K_j}} \sum_{i \in K_j}^N f(X_i) \rightarrow \int f(x) d\mu_j(x) \end{equation} by the law of large numbers. Therefore (\ref{eq:I_ergodicity}) converges if and only if for each $j\leq k$, $w_j(N)$ converges to some limiting value $\hat{w}_j$ with $N \rightarrow \infty$, in which case the limiting measure is $\mu = \sum_{j \leq k} \hat{w_j} \mu_j$. This situation demonstrates well the general line of reasoning used in this paper. We will assume that $\Abs{K_j}$ is large enough for integrals with respect to each component to converge (this corresponds to large $N_{min}$ in the statements of the results), as in (\ref{eq:I_ergodicity_single_comp}), but we will \textit{not} require the more global condition that the weights $w_j(N)$ converge. \subsection{Preliminaries} \label{sec:proof_lem_meas_span_moments} In what follows it will be convenient to use the tensor notation for operators -- for any two vectors $v,w \in \mathbb{R} ^{\mathbb{X}}$, $v \otimes w$ is a rank 1 linear operator $ \mathbb{R} ^{\mathbb{X}} \rightarrow \mathbb{R} ^{\mathbb{X}}$, which acts by $(v \otimes w)(u) = \inner{u}{v} \cdot w$ for all $u \in \mathbb{R} ^{\mathbb{X}}$. In particular, for $a,b \in \mathbb{X}$ we have $\inner{(v \otimes w) \delta_a}{\delta_b} = v(a) \cdot w(b)$. For instance, if $X= (X_1, \ldots, X_{N+1})$ all $X_i$ are independent with distribution $\mu$ then \begin{equation} M_X = \mu \otimes \mu. \end{equation} If $X = (X_1, \ldots, X_{N+1}) \sim I$ is a sample from the interval model, we can write \ref{eq:I_moment_def} equivalently as \begin{equation} \label{eq:I_moment_def_tensor} M_X = \frac{1}{N} \sum_{r,l \leq k} c_{rl} \mu_r \otimes \mu_l. \end{equation} An important property of the interval model is that the number of transitions between \textit{different} states in the model is small. We formalize it in the following Lemma. \begin{lem} For an interval model $I$ and $N>0$, for every $r,l \leq k$, let $c_{rl}$ be the state transition counts, as defined in (\ref{eq:I_transition_counts}). Then \begin{equation} \label{eq:small_mixed_moments} \frac{1}{N} \sum_{r \neq l} c_{rl} \leq \frac{1}{m}. \end{equation} \end{lem} \begin{proof} Indeed, since interval length is at least $m$, the set $\Set{1,\ldots , N+1}$ contains at most $\ceil{ (N+1)/m}$ different intervals and hence at most $\ceil{ (N+1)/m} - 1$ transitions. \end{proof} It follows from (\ref{eq:small_mixed_moments}) that \begin{equation} \label{eq:matrix_small_mixed_moments} \norm{\frac{1}{N} \sum_{r \neq l } c_{rl} \mu_r \otimes \mu_l}_{TV} = \norm{M_X - \sum_{i \leq k} w_i \mu_i \otimes \mu_i}_{TV} \leq \frac{1}{m}. \end{equation} We refer to the expression \begin{equation} \label{eq:pure_moment_def} M_{X,pure} = \sum_{i \leq k} w_i \mu_i \otimes \mu_i \end{equation} as a \textit{pure} moment and to the expression \begin{equation} \label{eq:mixed_moment_def} M_{X,mixed} = \frac{1}{N} \sum_{r \neq l } c_{rl} \mu_r \otimes \mu_l \end{equation} as a \textit{mixed} moment. The pure moment captures the contribution to the moment inside each interval, while the mixed moment captures the contribution from transitions between the intervals. Finally, we prove Lemma \ref{lem:measures_span_moments}. \begin{proof}[Proof of Lemma \ref{lem:measures_span_moments}] The statement for $M_X$ follows directly from (\ref{eq:I_moment_def}). To show the statement for $M_{\phi,H}$, for any $\phi_a \in \mathbb{R} ^k$ and $i \leq k$ let $\phi_a(i)$ be the $i$-th coordinate of $\phi_a$. For every $i \leq k$, define $\hat{\phi}_i \in \mathbb{R} ^{\mathbb{X}}$ by $\hat{\phi}_i (a) = \phi_a(i)$. Then by the definition, (\ref{eq:generalized_moment_def}), \begin{equation} M_{H,\pi}(a,b) = \sum_{i \leq k} \sum_{j \leq k} p_{ij} \phi_a(i) \nu_j(b), \end{equation} and hence \begin{equation} M_{H,\pi} = \sum_{i,j \leq k} p_{ij} \hat{\phi}_i \otimes \nu_j. \end{equation} Since the image of $\hat{\phi}_i \otimes \nu_j$ is spanned by $\nu_j$, it follows that $Im(M_{H,\pi}) \subset span\{\nu_j\}_{j=1}^k$. \end{proof} \subsection{Proof of Lemma \ref{lem:right_path_full}} \label{sec:proof_lem_right_path_full} \begin{proof}[Proof of Lemma \ref{lem:right_path_full}] Consider an HMM $H$ with $k$ states, $S = \Set{1,\ldots,k}$, with emission probabilities equal to those of the model $I$, $\mu_i$, and some transition matrix, $p_{ij}$. In order to show the lower bound on $L$, it suffices to consider a single path of the HMM. Let $s=s_1, \ldots,s_N$ be a sequence of states of $H$ that follows precisely the sequence of states in $I$, so that $s_i = \kappa(i)$. Let the initial distribution $\pi$ be a delta measure concentrated on the first state of $I$, $\kappa(1)$. Recall that the likelihood $L(x,H,\pi) = \frac{1}{N} \log \Probu{H,\pi}{x}$ is given by a sum (\ref{eq:HMM_x_likelihood}). The contribution of a single path $s$ in this sum is \begin{equation} \label{eq:true_path_likelihood} \frac{1}{N} \sum_{i,j \leq k} c_{ij} \log p_{ij} + \frac{1}{N} \sum_{j=1}^k \sum_{i} \log \mu_j(x_i^j). \end{equation} where $c_{ij}$ are the transition counts of the model $I$, as in (\ref{eq:I_transition_counts}), and $x_i^j$ are the entries of $X$, rearranged so that for all $i$, $x_i^j$ are entries sampled from $\mu_j$. Consider the second term first, \begin{equation} \frac{1}{N} \sum_{j=1}^k \sum_{i} \log \mu_j(x_i^j) = \sum_{j=1}^k \frac{c_{jj}}{N} \frac{1}{c_{jj}} \sum_{i} \log \mu_j(x_i^j). \end{equation} Clearly, by the law of large numbers, $\frac{1}{c_{jj}} \sum_{i} \log \mu_j(x_i^j) \rightarrow -H(\mu_j)$ with $\Abs{c_{jj}} \rightarrow \infty$. Next, the first term in (\ref{eq:true_path_likelihood}), \begin{equation} \frac{1}{N} \sum_{i,k \leq k} c_{ij} \log p_{ij} \end{equation} controls the underlying Markov chain probability of the path $s$. Since the total number of transitions between different states in the model is small, see (\ref{eq:small_mixed_moments}), this probability is large when $p_{ii}$ are close to $1$. In particular, by choosing $p_{ii} = 1 - \frac{1}{m}$ and $p_{ij} = \frac{1}{(k-1)m}$ for all $i$ and $j \neq i$, we obtain \begin{eqnarray} \frac{1}{N} \sum_{i,k \leq k} c_{ij} \log p_{ij} = \\ \frac{1}{N} \sum_{i \leq k} c_{ii} \log (1 - \frac{1}{m}) + \frac{1}{N} \sum_{i \neq j } c_{ij} \log \frac{1}{(k-1)m} \geq \\ -\frac{1}{m} - \frac{1}{m^2} - \frac{1}{m} \log (k-1)m \label{eq:log_passage_lem_right_path} \geq \\ - \frac{\log 2km}{m} \end{eqnarray} where in line (\ref{eq:log_passage_lem_right_path}) we have used (\ref{eq:small_mixed_moments}) and the fact that \begin{equation} \log ( 1- \frac{1}{m}) \geq -\frac{1}{m} - \frac{1}{m^2} \end{equation} for $m\geq 2$. \end{proof} \subsection{Proof of Theorem \ref{thm:main_thm}} \label{sec:proofs} Let $x = (x_1, \ldots, x_N)$ be distributed according to an interval model $I$. We first show that the empirical second moment of a sample $x$ is close to its expected second moment, $M_X$. \begin{lem} \label{lem:empirical_mx_bound} Let $X = (X_1, \ldots, X_N)$ be distributed according to the interval model $I = I(\Set{I_l}_{l \in \mathbb{N}},\Set{\mu_i}_{i=1}^k,\tau, m )$. Denote \begin{equation} \label{eq:tilde_w_def} \tilde{w} = \min_{i \leq k} w_i. \end{equation} Then for every $\varepsilon \geq 0$, \begin{equation} \label{eq:lem_empitrical_mx_bound} \Probu{X}{ \norm{M(x) - M_X}_{TV} \geq \varepsilon + \frac{2}{m}} \leq 2^{-c_1 \tilde{w}N \cdot \Brack{\frac{\varepsilon^2}{|\mathbb{X}|^4} - \frac{ c_2 \log k}{\tilde{w}N} }}, \end{equation} where $c_1,c_2>0$ are absolute constants. \end{lem} Before we proceed with the proof, note the particular form of the error, $\varepsilon + \frac{2}{m}$, in (\ref{eq:lem_empitrical_mx_bound}). As we show in what follows, the pure component of $M_X$, (\ref{eq:pure_moment_def}), can be approximated by the pure component of $M(x)$ to arbitrary precision, giving rise to the $\varepsilon$ term in the error. However, the mixed component of $M_X$ will not necessarily be approximated well by the mixed part of $M(x)$, but has a small norm, (\ref{eq:matrix_small_mixed_moments}) and gives rise to the $\frac{1}{m}$ term in the error. To see why the mixed moment of $M_X$ might not be well approximated by the mixed part of $M(x)$, recall that we do not place any assumptions on the transitions between different states in the interval model. In particular, for $r \neq l$, $c_{rl}$ need not be large and consequently there may not be enough samples to recover $\mu_r \otimes \mu_l$. \begin{proof}[Proof Of Lemma \ref{lem:empirical_mx_bound}] We first consider samples from each state $j$ separately and show that they approximate $\mu_j \otimes \mu_j$ well. This can be achieved by standard methods if the samples are independent. We therefore divide the indices into independent pairs. Let \begin{equation} A_j = \Set{ i < N+1 \spaceo | \spaceo \kappa(i) = j \mbox{ \hspace{2 mm} and \hspace{2 mm}} \kappa(i+1) = j} \end{equation} be the set of indices $i$ such that $X_i \sim \mu_j$ and set \begin{equation} B = \Set{1, \ldots, N } \setminus \Brack{\cup_{j \leq k} A_j} \end{equation} to be the set of indices where the transitions between intervals occur. Divide $A_j$ into a set of odd pairs and even pairs as follows: \begin{eqnarray} A_j^1 = \Set{ (i,i+1) \spaceo | \spaceo i \in A_j \mbox{,\hspace{2 mm} i is odd} }, \\ A_j^2 = \Set{ (i,i+1) \spaceo | \spaceo i \in A_j \mbox{,\hspace{2 mm} i is even} }. \end{eqnarray} For instance, if $(1,2,3,4,5,6) \subset A_j$, then $(1,2),(3,4),(5,6)$ are odd pairs, and $(2,3),(4,5)$ are even. Then the pairs in each $A_j^t$ are mutually independent. Hence they can be considered i.i.d samples from the measure $\mu_j \times \mu_j$. To estimate how well independent empirical samples of $\mu_j \times \mu_j$ approximate $\mu_j \times \mu_j$, we use the Dvoretzky Kiefer Wolfowitz inequality, \cite{dkw}, which bounds the $\sup$ distance between the empirical and true distribution. Specifically, if $\mu$ is a probability distribution on a set $S$ and $Y_1,\ldots,Y_N$ are independent samples from $\mu$, it follows from the Dvoretzky Kiefer Wolfowitz inequality that \begin{equation} \Prob{ \sup_{s \in S} \Abs{ \mu(s) - \Brack{\frac{1}{N} \sum_{i \leq N} \delta_{Y_i}}(s) } \geq \varepsilon} \leq 2 \exponent{ -2 N \varepsilon^2} \end{equation} for every $\varepsilon \geq 0$. We apply this with $S = \mathbb{X} \times \mathbb{X}$, $\mu = \mu_j \times \mu_j$ and $Y_i = (X_i,X_{i+1})$. For $a,b \in \mathbb{X}$, $j \leq k$ and $t \in \Set{1,2}$, let \begin{equation} R_{j,t}(a,b) = \mu_j(a)\mu_j(b) -\frac{1}{|A_j^t|} \sum_{(i,i+1) \in A_j^t} \delta(X_i = a)\cdot\delta(X_{i+1} = b) \end{equation} be the difference between the empirical and the true measures. Then \begin{equation} \label{eq:dkw_use} \Prob{ \max_{a,b \in \mathbb{X}}\Abs{ R_{j,t}(a,b) } \geq \varepsilon/|\mathbb{X}|^2 } \leq 2^{-c |A_j^t| \frac{\varepsilon^2}{|\mathbb{X}|^4}}. \end{equation} Since $\norm{R_{j,t}}_{TV} \leq \Abs{\mathbb{X}}^2 \sup_{a,b} \Abs{R_{{j,t}}(a,b)}$, we obtain the total variation bound \begin{equation} \label{eq:dkw_tv} \Prob{ \norm{R_{j,t}}_{TV} \geq \varepsilon } \leq 2^{-c |A_j^t| \frac{\varepsilon^2}{|\mathbb{X}|^4}}. \end{equation} Since $|A_j^t| \approx \frac{1}{2} w_j N$, the union bound over all $j,t$ implies that \begin{align} \label{eq:dkw_union} \Prob{ \max_{j,t} \norm{R_{j,t}}_{TV} \geq \varepsilon } &\leq 2k \cdot 2^{-c \tilde{w}N \cdot\frac{\varepsilon^2}{|\mathbb{X}|^4}} \nonumber \\ &\leq 2^{-c \tilde{w}N \cdot \Brack{\frac{\varepsilon^2}{|\mathbb{X}|^4} - \frac{ c_1 \log k}{\tilde{w}N} }} \end{align} for an appropriate absolute constant $c_1 > 0$. Finally, note that \begin{align} M(x)(a,b) = & \frac{1}{N} \sum_{j \leq k} \sum_{t \in \Set{1,2}} \sum_{i \in A_j^t} \delta_{x_i}(a) \delta_{x_{i+1}}(b) \\ & + \frac{1}{N} \sum_{i \in B} \delta_{x_i}(a) \delta_{x_{i+1}}(b) \nonumber \\ = & \nonumber \\ & \sum_{j \leq k} \sum_{t \in \Set{1,2}} \frac{|A_j^t|}{N} \frac{1}{|A_j^t|} \sum_{i \in A_j^t} \delta_{x_i}(a) \delta_{x_{i+1}}(b) \nonumber \\ & + \frac{|B|}{N} \frac{1}{|B|} \sum_{i \in B} \delta_{x_i}(a) \delta_{x_{i+1}}(b) .\nonumber \end{align} Therefore \begin{align} M(x) - M_X = & \\ & \sum_{j \leq k} \sum_{t \in \Set{1,2}} \frac{|A_j^t|}{N} R_{j,t} \nonumber \\ & - M_{X,mixed} + \frac{|B|}{N} \frac{1}{|B|} \sum_{i \in B} \delta_{x_i}(a). \nonumber \end{align} By (\ref{eq:small_mixed_moments}) and (\ref{eq:matrix_small_mixed_moments}), the total variation norm of the last two terms is bounded by $\frac{1}{m}$ each. The first term is a convex combination, and therefore the claim of the lemma follows from (\ref{eq:dkw_union}). \end{proof} We proceed with the proof of Theorem \ref{thm:main_thm}. Define a set \begin{equation} \label{eq:u_eps_def} U = \Set{ M \in \Delta_{\mathbb{X} \times \mathbb{X}} \spaceo | \spaceo \norm{M - M_X}_{TV} \leq \frac{3}{m} }. \end{equation} Lemma \ref{lem:empirical_mx_bound} (with $\varepsilon = 1/m$) states that $M(x) \in U$ with high probability over $I$. Next, fix an HMM $H$ and let $M_H = M_{\phi,H}$ be the generalized second moment attaining the infimum in (\ref{eq:main_condition_on_phi_a}), \begin{equation} \label{eq:m_x_m_h_dist} D = \norm{M_X - M_H}_{TV} = \inf_{\phi \in P} \norm{M_X - M_{\phi,H}}_{TV} - 3/m. \end{equation} Let \begin{equation} O = \Set{x=(x_1, \ldots, x_{N+1}) \spaceo | \spaceo M(x) \in U } \end{equation} be the set of all data sequences $x$ with $M(x) \in U$. Using the type theory for second moments of Markov chains, we will show that \begin{equation} \label{eq:entropy_hmm_bound} \Probu{H,\pi}{ O } \leq 2^{-N D^2}, \end{equation} for every initial distribution $\pi$, where $D$ is given by (\ref{eq:m_x_m_h_dist}). Equivalently, under $H$, probability of observing a sequence $x$ with $M(x) \in U$ is at most $2^{-N D^2}$. We first prove Theorem \ref{thm:main_thm} assuming (\ref{eq:entropy_hmm_bound}), and then prove (\ref{eq:entropy_hmm_bound}). The inequality (\ref{eq:entropy_hmm_bound}) bounds the likelihood under $H$ of \textit{all} $x$ such that $M(x) \in U_{\varepsilon}$. To prove Theorem \ref{thm:main_thm}, we need to bound the likelihood $L(x,H,\pi)$ of \textit{individual} sequence $x$ produced by $I$. Let \begin{equation} H(X) = \frac{1}{N} H(X_1, \ldots, X_N) = \sum_{i \leq k} w_i H(\mu_i) \end{equation} be the entropy of a sample $X = (X_1, \ldots, X_N)$ from $I$. The following lemma describes the type theory for for samples from model $I$. Recall that $\tilde{w}$ was defined in (\ref{eq:tilde_w_def}) and has the property that in a sample $X = (X_1,\ldots,X_N)$ from $I$ every source appears at least $\tilde{w} N$ times. \begin{lem} \label{lem:iid_aep} Let $X = (X_1,\ldots,X_N) \sim I$. Then there exist a subset $G \subset \mathbb{X}^N$ of sequences such that \begin{enumerate} \item For every $x^l= (x_1,\ldots,x_N) \in G$, \begin{equation} \label{eq:xl_close_to_exp} \Abs{-\frac{1}{N} \log P_X(x^l) - H(X)} \leq \varepsilon_N, \end{equation} \item \begin{equation} \label{eq:xl_full_prob} \sum_{x^l \in G} P_X(x^l) \geq 1 - \varepsilon_N, \end{equation} \end{enumerate} where $\varepsilon_N \rightarrow 0$ with $\tilde{w}N \rightarrow \infty$. \end{lem} This Lemma is a version of an Asymptotic Equipartition Property (AEP) for independent variables (see \cite{coverthomas}). Statements (\ref{eq:xl_close_to_exp}) and (\ref{eq:xl_full_prob}) follow from a weak law of large numbers. The details of the proof are identical to the standard AEP and are omitted. Note that on one hand sequences $x^l$ are a set of almost full probability by (\ref{eq:xl_full_prob}), and on the other hand, by Lemma \ref{lem:empirical_mx_bound}, $\Probu{X}{M(x) \in U}$ is also close to $1$. Denote \begin{equation} X^l = G \cap O. \end{equation} It follows that \begin{equation} \label{eq:xl_xl_prob} \Probu{X}{X^l} \geq 1 - \varepsilon_N, \end{equation} perhaps with a slightly different $\varepsilon_N$. In addition, similarly to the standard AEP, we can obtain cardinality estimates on $\Abs{X^l}$. Indeed, combining (\ref{eq:xl_xl_prob}) and (\ref{eq:xl_close_to_exp}) we get \begin{equation} \label{eq:xl_cardinality} 2^{N (H(X) + \varepsilon_N)} \geq \Abs{X^l} \geq 2^{N (H(X) - \varepsilon_N)}. \end{equation} Next, using (\ref{eq:entropy_hmm_bound}) we can write \begin{equation} \sum_{x^l \in X^l} \Probu{H}{x^l} \leq \Probu{H}{ O} \leq 2^{-N D^2}, \end{equation} or equivalently, \begin{equation} \label{eq:average_statement_intermediate} \frac{1}{\Abs{X^l}} \sum_{l} \Probu{H}{x^l} \leq 2^{-N \Brack{D^2 + \frac{1}{N}\log |X^l|}}. \end{equation} Using (\ref{eq:xl_cardinality}) we therefore obtain \begin{equation} \label{eq:average_statement} \frac{1}{\Abs{X^l}} \sum_{l} \Probu{H}{x^l} \leq 2^{-N \Brack{D^2 + \sum_{i \leq k} w_i H(\mu_i) - \varepsilon_N }}. \end{equation} Note that (\ref{eq:average_statement}) is essentially the statement of Theorem \ref{thm:main_thm} on average over $x^l$. By applying Markov inequality to this average we get that the proportion of $x \in X_l$ which satisfy \begin{equation} \label{eq:x_l_likelihood} L(x,H) \geq -D^2/2 - \sum_j w_j H(\mu_j) - \varepsilon_N \end{equation} is at most $2^{-N \frac{D^2}{2}}$. Formally, \begin{equation} \label{eq:x_l_proportion_statement} \frac{\Abs{\Set{x \in X^l \spaceo | \spaceo x \mbox{ satisfies (\ref{eq:x_l_likelihood})}}} }{ \Abs{X^l} } \leq 2^{-N \frac{D^2}{2}}. \end{equation} Using (\ref{eq:xl_close_to_exp}) and (\ref{eq:xl_cardinality}) again, this estimate implies a probability estimate over $X$, \begin{align} \label{eq:proof_final_markov} &\Probu{X}{ L(x,H) \geq -D^2/2 - \sum_j w_j H(\mu_j) + \varepsilon_N} \\ &\leq 2^{-N \Brack{\frac{D^2}{2} - 2\varepsilon_N}} + \varepsilon_N, \nonumber \end{align} therefore concluding the proof of Theorem \ref{thm:main_thm}. It remains to prove the bound (\ref{eq:entropy_hmm_bound}). We begin with a standard construction for transforming an HMM into a Markov chain in a special form. This converts the problem of bounding the likelihood of data under an HMM to a problem of bounding a likelihood of a certain set of paths in the chain. Given an HMM $H = H(S,\{\nu_i)\}_{1}^k, \{p_{ij}\}_{i,j=1}^k)$, construct a Markov chain $H' = (S',p')$ with state space $S' = S\times \mathbb{X}$, and transition probabilities \begin{equation} p'_{(i,a),(j,b)} = p_{ij} \nu_j(b). \end{equation} For a state $(i,a) \in S'$, we refer to $a$ as the data component of the state. Clearly, by observing a random walk of $H'$ and looking only at the data component, we get a distribution over the data that is identical to that of the HMM. Note that for a single data vector $x = (x_1,\ldots,x_{N+1})$, there are exactly $k^{N+1}$ paths of the chain $H'$ yielding the data $x$. Next, we use type theory for Markov chains to obtain deviation bounds on the empirical second moment of a random walk. Similarly to second moment of the data, for a Markov chain $H' = (S',p')$, and a path $s = s_1,s_2, \ldots,s_{N+1}$, where $s_i \in S'$, define the second moment $M(s) \in \Delta_{S' \times S'}$ by \begin{align} &M(s)(u,v) = \\ &\frac{1}{N} \Abs{ \Set{ i \leq N \spaceo | \spaceo s_{i} = u \wedge s_{i+1} = v} }, \end{align} for all $u,v \in S'$. For a subset $\Pi \subset \Delta_{S' \times S'}$, the second order type theory provides bounds of the form \begin{equation} \label{eq:type_demo} \Probu{H'}{ M(s) \in \Pi } \leq 2^{-N\cdot D}, \end{equation} where $D$ is a suitably defined distance between the set $\Pi$ and the transition matrix $p'$. Statement (\ref{eq:type_demo}) is a Markov chain analog of Sanov's theorem for i.i.d sequences (\cite{sanov}, \cite{coverthomas}). We use a second moment deviation inequality due to \cite{ccc87}, stated as Lemma \ref{lem:sanov}. Note that type theory provides estimates on moments of paths of the chain $H'$, which take values in $\Delta_{S' \times S'}$, while our assumptions are about moments of the data, $M(x) \in \Delta_{\mathbb{X} \times \mathbb{X}}$. We now describe the connection between the two types of moments. Consider a Markov chain $H'=(S',p')$ corresponding to an HMM $H$. Define a linear map $T : \Delta_{S' \times S'} \rightarrow \Delta_{\mathbb{X} \times \mathbb{X}}$ by \begin{equation} T(M')(a,b) = \sum_{i,j \leq k} M'((i,a),(j,b)). \end{equation} If $M'$ is the second moment of a path of the chain, then $T(M')$ is the second moment of the data. The map $T$ satisfies the following inequality, which is crucial for our analysis. \begin{lem} For any $M_1,M_2 \in \Delta_{S' \times S'}$, \begin{equation} \label{eq:T_contraction} D(T(M_1)|T(M_2)) \leq D(M_1 | M_2). \end{equation} \end{lem} \begin{proof} This result is a consequence of the chain rule for relative entropies. To see this, represent an element $v = ((i,a),(j,b)) \in S' \times S'$ as a pair $v = (u,w)$ where $u = (i,j)$ and $w=(a,b)$ are the state and data parts of $v$. Then $M \in \Delta_{S' \times S'}$ is a distribution over all $(u,w)$. Denote by $V = (U,W)$ the random vector with values in $S' \times S'$ and distribution $M$. Then, by definition, $T(M)$ is the marginal distribution of component $W$ of $V$. By the chain rule for relative entropies (\cite{coverthomas}, equation (2.67) ), \begin{align} \label{eq:T_contraction_proof} &D(M_1,M_2) = \\ &D( (U_1,W_1) | (U_2,W_2)) = \nonumber \\ & D( W_1 | W_2) + \nonumber \\ & \hspace{2 mm} +\sum_{a,b \in \mathbb{X}} M_1(a,b) \cdot D\Brack{ \BrackSq{ U_1 | W_1 = (a,b)} | \BrackSq{ U_2 | W_2 = (a,b)} } \nonumber \\ & \geq D( W_1 | W_2) \nonumber \\ & = D(T(M_1)|T(M_2)). \nonumber \end{align} where the inequality in (\ref{eq:T_contraction_proof}) is due to the non-negativity of relative entropy. \end{proof} To state the deviation result, Lemma \ref{lem:sanov}, we require some additional notation. For any measure $M \in \Delta_{S' \times S'}$, define the left and right marginalizations $\bar{M}, \dbar{M} \in \Delta_{S'}$ by \begin{equation} \bar{M}(u) = \sum_{v\in S'} M(u,v) ,\hspace{2 mm} \dbar{M}(u) = \sum_{v\in S'} M(v,u). \end{equation} Moreover, given $M \in \Delta_{S' \times S'}$, define the related transition matrix to be \begin{equation} M(v|u) = \frac{M(u,v)}{\bar{M}(u)}. \end{equation} A measure $M \in \Delta_{S' \times S'}$ is called stationary, if $\bar{M} = \dbar{M}$. Such measure is a stationary measure of a random walk given by the transition matrix $M(v|u)$. We denote by $\Delta_{S' \times S'}^0$ the set of all stationary measures. Finally, we introduce a quantity that will control the deviations of moments. Given a transition matrix $p'=p'_{uv}$ and a measure $M \in \Delta_{S' \times S'}$, define \begin{eqnarray} \label{eq:dmp_def_1} D(M | p' ) = & \sum_{u,v \in S'} M(u,v) \log \frac{M(v|u)}{p'_{uv}} = \\ \label{eq:dmp_def_2} &\sum_{u,v \in S'} M(u,v) \log \frac{M(u,v)}{\bar{M}(u)p'_{uv}}. \end{eqnarray} The quantity $D(M|p')$ differs from the standard Kullback- Leibler divergence since $p'$ is not a measure. However, as follows from (\ref{eq:dmp_def_2}), we can write $D(M|p') = D(M|z)$, where $D(t|z)$ is the standard KL divergence and $z\in \Delta_{S' \times S'}$ is defined by $z(u,v) = \bar{M}(u) \cdot p'_{uv}$. For any closed set $\Pi \subset \Delta_{S' \times S'}$, denote $\Pi_0 = \Delta_{S' \times S'}^0 \cap \Pi$. \begin{lem}[\cite{ccc87}] \label{lem:sanov} Let $\Pi \subset \Delta_{S' \times S'}$ be a closed convex set. For any $D' > 0$ there is a sequence $\varepsilon_N$ with $\lim_{N \rightarrow \infty} \varepsilon_N = 0$ such that for any Markov chain $C = (S',p')$ satisfying (\ref{eq:ccc_condition}), \begin{equation} \label{eq:ccc_condition} D' = \min_{M \in \Pi_0} D(M|p'), \end{equation} if $X = X_1,\ldots,X_{N+1}$ is a random walk generated by $C$ then \begin{equation} \Probu{C}{ M(X) \in \Pi} \leq 2^{-N \Brack{D' - \varepsilon_N}}. \end{equation} \end{lem} Lemma \ref{lem:sanov} provides us with likelihood estimates that depend on the parameters of the unfolded Markov chain $H'$. To obtain the bound (\ref{eq:entropy_hmm_bound}) for an HMM $H$ we apply Lemma \ref{lem:sanov} to the Markov chain $H'$ with the set $\Pi \subset \Delta_{S' \times S'}$ given by \begin{equation} \Pi = T^{-1}(U) = \Set{M' \spaceo | \spaceo T(M') \in U}, \end{equation} where $U_{\varepsilon}$ was defined in (\ref{eq:u_eps_def}). Choose some $M' \in \Pi$ and denote $M = T(M')$. In what follows we show that if $D$ is given by (\ref{eq:m_x_m_h_dist}), then \begin{equation} \label{eq:deviation_application_base} D(T(M')|T(\bar{M'} p' )) \geq 2 D^2. \end{equation} Note that by (\ref{eq:T_contraction}) this implies \begin{equation} D(M'|\bar{M'} p' ) \geq 2 D^2, \end{equation} and hence $D' \geq 2 D^2$ in Lemma \ref{lem:sanov}, therefore proving (\ref{eq:entropy_hmm_bound}). Next, to obtain (\ref{eq:deviation_application_base}), observe that by Pinsker's Inequality (see \cite{coverthomas} \footnote{Pinsker's Inequality: $2\norm{\mu - \nu}_{TV}^2 \leq D(\mu|\nu)$ for all measures $\mu,\nu$. } ), it is sufficient to show that \begin{equation} \label{eq:deviation_application_tv} \norm{T(M') - T(\bar{M'} p') }_{TV} \geq D. \end{equation} Recall that by definition $T(M') \in U$, and hence \begin{equation} \norm{T(M') - M_X }_{TV} \leq \frac{3}{m}. \end{equation} Thus to obtain (\ref{eq:deviation_application_tv}) it is sufficient to show that \begin{equation} \label{eq:deviation_application_tv2} \norm{M_X - T(\bar{M'} p') }_{TV} \geq D + \frac{3}{m}. \end{equation} Let us now write the explicit expression for $T(\bar{M'} p')$. \begin{eqnarray} T(\bar{M'} p' )(a,b) = \sum_{i,j \leq k} \bar{M'}((i,a)) p'_{(i,a),(j,b)} = \\ \sum_{i,j \leq k} \bar{M'}((i,a)) p_{ij} \mu_j(b). \label{eq:tmfull} \end{eqnarray} In addition, observe that by definition, with the notation $M = T(M')$, \begin{eqnarray} \sum_{i \leq k} \bar{M'}((i,a)) = \\ \sum_{i \leq k} \sum_{j \leq k} \sum_{b \in \mathbb{X}} M'((i,a),(j,b)) = \\ \sum_{b \in \mathbb{X}} M(a,b) = \bar{M}(a). \label{eq:mprimembarmarg} \end{eqnarray} For every $a,b \in \mathbb{X}$, denote $\phi_a = (M'(1,a),\ldots,M'(k,a)) \in \mathbb{R} ^k$, and $\chi_b = (\nu_1(b), \ldots,\nu_k(b))$. Then we can rewrite (\ref{eq:tmfull}) in the generalized second moment form (\ref{eq:generalized_moment_def}) as \begin{equation} T(\bar{M'} p' )(a,b) = \phi_a \cdot p \cdot \chi_b. \end{equation} Moreover, since $M = T(M') \in U$, the marginals satisfy \begin{equation} \label{eq:marginals_ph_a_condition} \norm{\bar{M_X} - \bar{M}}_{TV} \leq \norm{M_X - M}_{TV} \leq \frac{3}{m}. \end{equation} Therefore, by (\ref{eq:mprimembarmarg}) and (\ref{eq:marginals_ph_a_condition}) the condition (\ref{eq:main_condition_on_phi_a}) for $\phi_a$ in the main Theorem holds. \subsection{Proof of Theorem \ref{thm:full_statement}} \label{sec:proof_of_full_statement} \newcommand{\mathcal{H}_{\delta}}{\mathcal{H}_{\delta}} \begin{proof}[Proof of Theorem \ref{thm:full_statement}] Set $D_0 = \sqrt{\frac{\log 3km}{m}}$. To obtain Theorem \ref{thm:full_statement} it suffices to show that with high probability over $X$, \begin{align} \label{eq:full_statement_proof_likelihood} L(x,H,\pi) \leq & -D_0^2 - \sum_{j} w_j H(\mu_j) + \varepsilon(N_{min}) \\ = & -\frac{\log 3km}{m} - \sum_{j} w_j H(\mu_j) + \varepsilon(N_{min}) \nonumber \end{align} jointly for all HMMs $H \in \mathcal{H}_{\delta}$ which satisfy \begin{equation} \label{eq:full_statement_big_d} D(H) > \sqrt{\frac{\log 3km}{m}}. \end{equation} Indeed, assume that (\ref{eq:full_statement_proof_likelihood}) holds for all $H$ which satisfy (\ref{eq:full_statement_big_d}). Then, since by Lemma \ref{lem:right_path_full} we know that there exists an HMM $H_0 \in \mathcal{H}_{\delta}$ such that \begin{equation} \label{eq:full_statement_proof_H_0_likelihood} L(x,H_0,\pi) > -\frac{\log 3km}{m} - \sum_{j} w_j H(\mu_j) - \varepsilon(N_{min}), \end{equation} it follows that the maximum likelihood estimator $H$ must satisfy \begin{equation} D(H) \leq \sqrt{\frac{\log 3km}{m}}. \end{equation} Note that for a single fixed HMM $H$ satisfying (\ref{eq:full_statement_big_d}), the statement (\ref{eq:full_statement_proof_likelihood}) holds by Theorem \ref{thm:main_thm} with high probability. However, since we would like to have explicit exponential probability bounds, we work directly with estimate \ref{eq:x_l_proportion_statement} in the proof of Theorem \ref{thm:main_thm} rather than with the final statement of that theorem. Then, for a fixed $H$, the probability over $X^l$ of $x$ not satisfying (\ref{eq:full_statement_proof_likelihood}) is at most $2^{-N \frac{D_0^2}{2}}$. The uniform statement for all $H \in \mathcal{H}_{\delta}$ satisfying (\ref{eq:full_statement_big_d}) can be obtained by approximation and union bound. We first define an appropriate metric on $\mathcal{H}_{\delta}$. Consider the set $\mathcal{H}_{\delta}$ as a subset of the Euclidean space $ \mathbb{R} ^F$, where $F = k^2 + k\cdot \Abs{\mathbb{X}}$ and we simply consider the parameters of an HMM as coordinates. For any $H \in \mathcal{H}_{\delta}$ let $v(H) = (v_t(H))_{t=1}^F \in \mathbb{R} ^F$ be the vector corresponding to $H$. In what follows we identify $\mathcal{H}_{\delta}$ with a subset of $ \mathbb{R} ^F$, $\Set{v(H) \spaceo | \spaceo H \in \mathcal{H}_{\delta}} \subset \mathbb{R} ^F$. By definition we have for every $H \in \mathcal{H}_{\delta}$, \begin{equation} \label{eq:full_statement_v_t_delta} v_t(H) \geq \delta \hspace{2 mm} \forall t \leq F. \end{equation} Define a map $R : \mathbb{R} ^F \mapsto \mathbb{R} ^F$ by \begin{equation} R(v) = (\log v_1, \ldots, \log v_F) \end{equation} and define a metric on $\mathcal{H}_{\delta}$ by \begin{align} d_{*}(H_1,H_2) =& \norm{R(v(H_1)) - R(v(H_2))}_{\infty} \\ =& \max_{t\leq F} \Abs{\log \frac{v_t(H_1)}{v_t(H_2)} }. \nonumber \end{align} Next, for $\gamma >0$ let $\Gamma_{\gamma}$ be the minimal cardinality of a $\gamma$-net of $\mathcal{H}_{\delta}$ with respect to the metric $d_{*}$. Since $\mathcal{H}_{\delta}$ is bounded in $ \mathbb{R} ^F$, the map $R$ is coordinate-wise at most $\frac{1}{\delta}$-Lipschitz on $\mathcal{H}_{\delta}$ (by (\ref{eq:full_statement_v_t_delta})), and $d_{*}$ is an $\ell_{\infty}$ norm on the image of $R$, standard volumetric arguments imply that \begin{equation} \Gamma_{\gamma} \leq c \Brack{\frac{1}{\gamma \cdot \delta} }^F. \end{equation} It is also easy to check that the normalized log-likelihood is $1$-Lipschitz with respect to $d_{*}$: \begin{equation} \Abs{L(x,H_1,\pi) - L(x,H_2,\pi)} \leq d_{*}(H_1,H_2), \end{equation} for every sequence $x$ and distribution $\pi$. Consider an $1/N$-net in $\mathcal{H}_{\delta}$. As noted above, for an individual HMM $H$ satisfying (\ref{eq:full_statement_big_d}), the statement (\ref{eq:full_statement_proof_likelihood}) holds with probability at least $1 - 2^{-N \frac{D_0^2}{2}}$ over $X^l$. Therefore with probability at least \begin{equation} \label{eq:full_statement_prob_final} 1 - 2^{-N \frac{D_0^2}{2}} \cdot 2^ {F \log \frac{1}{N \cdot \delta}}, \end{equation} we have \begin{equation} \label{eq:full_statement_likelihood_after_lip} L(x,H,\pi) \leq -D_0^2 - \sum_{j} w_j H(\mu_j) + \varepsilon(N_{min}) + \frac{1}{N}. \end{equation} It remains to observe that for any $N$ such that \begin{equation} N \geq 2\frac{F \log \frac{1}{N \cdot \delta}}{D_0^2}, \end{equation} the probability in (\ref{eq:full_statement_prob_final}) is positive, and approaches 1 for larger $N$, thereby completing the proof. \end{proof} \iffalse \begin{proof}[Proof of Theorem \ref{thm:full_statement}] The existence of an almost sure limit $H$ of $H_N$ is implied by a general consistency result in \cite{douc2012}, Section 4.2. With slight restricting assumptions this also follows from the result in \cite{mevel}. Next, by Lemma \ref{lem:right_path_full} there exists a sequence of HMMs $H'_N$ such that with probability 1, for any initial distribution $\pi$, \begin{equation} \liminf_{N \rightarrow \infty} L((x_1,\ldots,x_N),H'_N,\pi) \geq -\frac{\log 3km}{m} - \sum_{j} w_j H(\mu_j). \end{equation} Since $H$ is the limit of maximal likelihood estimators, it follows that \begin{equation} \liminf_{N \rightarrow \infty} L((x_1,\ldots,x_N),H,\pi) \geq -\frac{\log 3km}{m} - \sum_{j} w_j H(\mu_j). \end{equation} Combining this with Theorem \ref{thm:main_thm} implies that \begin{equation} \label{eq:full_statement_big_d} D^2(H) \leq \frac{\log 3km}{m}, \end{equation} therefore concluding the proof. \end{proof} \fi \bibliographystyle{apalike}
\section{Introduction} Let $\mathfrak{C}$ be the convex hull of a compact subset $Q$ in the Euclidean space $\R^m$. By Carathéodory's theorem \cite{Handbook}, $\mathfrak{C}$ is the set of all convex combinations of at most $(m+1)$-tuples of points on $Q$. Thus, $\mathfrak{C}$ is a compact convex subset. Any point in $\mathfrak{C}\setminus Q$ is an inner point of a line segment contained in $\mathfrak{C}$; that is, the complement $\mathfrak{C} \setminus Q$ does not contain extreme points of $\mathfrak{C}$. The compactness of the convex hull and, therefore, the existence of a huge variety of convex subsets with many non-extreme points on the boundary, admits a straightforward generalization to the sphere and the Lobachevsky space; moreover, it holds \emph{locally} in any two-dimensional Riemannian manifold. Recall that a set $\mathfrak{C}$ in a Riemannian manifold $(M,g)$ is called \emph{convex} if for any pair of points $x,y\in \mathfrak{C}$ any minimizing geodesic $[x,y]$ lies in $\mathfrak{C}$. A point in $\mathfrak{C}$ is called \emph{extreme} if it does not lie in an interior of a geodesic in~$\mathfrak{C}$. The \emph{convex hull} of a set $Q\subset M$ is the minimal convex subset of $M$ that contains $Q$. It seems to be a folklore belief that a version of the statement above should hold true in all Riemannian manifolds; see the discussion at mathoverflow \cite{petrunin-2009}. In the present note we prove that the somewhat counter-intuitive opposite is the case for \emph{generic} Riemannian manifolds. It agrees with the pattern: \emph{a typical object in your favorite theory looks like nothing you have ever seen before}. Further Riemannian manifolds will be assumed to be connected and $\mathcal C^\infty$-smooth. Given a positive integer $k$, we say that a property $\mathcal P$ holds for \emph{$\mathcal C^k$-generic} Riemannian metric $g$ on a manifold $M$ if the property $\mathcal P$ holds for a dense \emph{G-delta set} (that is, a countable intersection of open subsets) of metric tensors in the $\mathcal C^k$-topology. \begin{thm}{Main theorem}\label{thm:main} Let $\mathfrak C$ be an arbitrary convex subset of a $\mathcal C^2$-generic Riemannian manifold $(M,g)$. Then the set of non-extreme points in $\mathfrak C$ is the union of an open set and at most countable family of geodesics in $(M,g)$. In particular, if $\dim M\ge 3$ and no connected component of $\mathfrak C$ is a geodesic, then the set of extreme points of $\mathfrak{C}$ is dense in~$\partial\mathfrak{C}$. \end{thm} Note that our definition of convexity does not require connectedness. However, any convex subset $\mathfrak C$ is locally connected, and $\mathfrak C$ is connected if the manifold $M$ is complete or $\mathfrak C$ is contained in some compact convex subset of $M$. If $\dim M =2$, the statement is rather trivial and holds true for \emph{all} Riemannian metrics not only the generic ones. As a consequence of the main theorem for $\dim M \geq 3$, we obtain the following: \begin{thm}{Corollary}\label{cor:caratheodory} Let $Q$ be a closed subset of a $\mathcal C^2$-generic Riemannian manifold $(M,g)$ of dimension at least $3$. If $Q$ does not lie in a geodesic and the convex hull $\mathfrak C$ of $Q$ is closed and connected, then $\partial \mathfrak C \subset Q$. \end{thm} The corollary gives a positive resolution of a conjecture formulated by Marcel Berger \cite[Note 6.1.3.1]{berger-2003}, stating that convex hulls of $3$ points in most Riemannian manifolds do not need to be compact. Probably the following more exact form of Berger's conjecture might be squeezed out from our key lemma.% \footnote{More open questions are listed in Appendix \ref{app:remarks}.} \begin{thm}{Conjecture} Let $(M,g)$ be an arbitrary Riemannian manifold of dimension at least 3. If the convex hull of any 3-point subset is compact, then $(M,g)$ has constant curvature. \end{thm} The following corollary is essentially known, for $\dim M=3$ its proof has been sketched by Robert Bryant \cite{Bryant} and, for $\dim M\geq 4$, it was proved by Thomas Murphy and Frederick Wilhelm \cite{Wilhelm}. \begin{thm}{Corollary}\label{cor:main} Let $(M,g)$ be a $\mathcal C^2$-generic Riemannian manifold. Then any connected convex subset $\mathfrak C$ of $(M,g)$ is either contained in a geodesic or \emph{full-dimensional}; that is, the interior of $\mathfrak C$ is nonempty. \end{thm} The proofs are built on the following proposition. Its formulation uses the notion of \emph{rank} of a point $p$ in a closed convex set $\mathfrak{C}$; we define it as the dimension of the maximal linear subspace in the tangent cone to $\mathfrak{C}$ at $p$. \begin{thm}{Main proposition}\label{prom:rank} Suppose $\mathfrak{C}$ is a closed convex set in a $\mathcal C^2$-generic $m$-dimensional Riemannian manifold $(M,g)$. Then all non-extreme points of $\mathfrak{C}$ have rank either $1$ or $m$. In particular, if $\dim M\ge 3$ and $\mathfrak{C}$ is bounded by a $\mathcal{C}^1$-smooth hypersurface, then $\mathfrak{C}$ is \emph{strictly convex}; that is, all boundary points of $\mathfrak{C}$ are extreme. \end{thm} The proof relies on the key lemma stated in the following section; it describes a necessary condition on a geodesic in a Riemannian manifold that stays in convex subset $\mathfrak C$. If the geodesic lies in $\partial \mathfrak C$ and contains a point of rank at least 2, then this condition implies a non-trivial property of the curvature tensor. Then we show that the curvature tensor of a generic Riemannian manifold does not meet this property. The latter part is technical but straightforward; it is done by applying the Thom transversality theorem; see Appendix \ref{sec:normalization}. \parbf{Acknowledgments.} We thank Mohammad Ghomi and Frederick Wilhelm for their interest in our result, the anonymous referee for helpful criticism. Alexander Lytchak was partially supported by the DFG grant, no. 281071066, TRR 191. Anton Petrunin was partially supported by the NSF grant, DMS-2005279. \section{Key lemma}\label{sec:key} Let $\mathfrak{C}$ be a closed convex set in an $m$-dimensional Riemannian manifold $(M,g)$. Recall that $\T_x=\T_xM$ denotes the \emph{tangent space} of $M$ at $x$. The \emph{tangent cone} $\K_x=\K_x\mathfrak{C}\subset \T_x$ at $x\in\mathfrak{C}$ is defined as the closure of the set of all velocity vectors of geodesics that start at $x$ and run in $\mathfrak{C}$. Given $x\in \mathfrak{C}$, denote by $\L_x=\L_x\mathfrak{C}$ the \emph{maximal linear subspace} of $\K_x$. We define the \emph{rank} of $x$ in $\mathfrak{C}$ as the dimension of $\L_x$. Note that $\K_x$ is a convex cone in $\T_x$; in particular, $\L_x=\K_x\cap (-\K_x)$. Further $\K_x$ coincides with $\T_x$ if and only if a neighborhood of $x$ lies in the interior of $\mathfrak{C}$. In other words, $x$ has rank $m$ if and only if $\mathfrak{C}$ contains a neighborhood of $x$. Given a tangent vector $\vec x\in\T_pM$, consider the \emph{Jacobi operators} of order $k$ \[R^k_\vec x\:\vec{v}\mapsto \nabla^{k-2}_\vec x\Rm(\vec{v},\vec x)\vec x,\] where $\Rm$ denotes the curvature tensor of $g$; we set $R^1=0$. Note that (i) $R^k_\vec x\:\T_p\z\to \T_p$ is a self-adjoint operator, (ii) $\vec x\mapsto R^k_\vec x$ is a homogeneous polynomial of degree $k$, and (iii) \[R^k_\vec x\cdot\vec x=0 \eqlbl{eq:RXX=0}\] for any $k$ and $\vec x\in\T_p$. The Jacobi equation along a geodesic $\gamma$ takes the form \[\nabla^2_{\gamma'}\cdot\vec i+R^2_{\gamma'}\cdot \vec i=0.\] \begin{thm}{Key lemma}\label{lem:key} Let $(M,g)$ be a Riemannian manifold and $\gamma\:(a_0,b_0)\to M$ be a geodesic that runs in a closed convex set $\mathfrak{C}\subset (M,g)$. Then the tangent cones of $\mathfrak{C}$ are parallel along $\gamma$; that is, the parallel translation along $\gamma$ defines a bijection between the tangent cones $\K_{\gamma(a)}\mathfrak{C}$ and $\K_{\gamma(b)}\mathfrak{C}$ for any $a,b \in (a_0,b_0)$. Moreover, for any $a\in (a_0,b_0)$ the following conditions hold: \begin{subthm}{lem:key:a} For any $\vec{v}\in \K_{\gamma(a)}\mathfrak{C}$ we have \[R^2_{\gamma'(a)}\cdot \vec{v}\in \K_\vec{v}[\K_{\gamma(a)}\mathfrak{C}].\] \end{subthm} \begin{subthm}{lem:key:b} $\L_{\gamma(a)}\mathfrak{C}$ is an invariant subspace of $R^2_{\gamma'(a)}\:\T_{\gamma(a)}\to\T_{\gamma(a)}$. \end{subthm} \end{thm} The proof uses the fact that the parallel translation can be defined via geodesics. In a similar way, this observation was used in \cite[Section 13]{Ber-Nik} and~\cite{Petruninpar}. In fact the main part of the key lemma follows from \cite{Petruninpar}. \parit{Proof of \ref{lem:key}.} Since all statements are local, we may replace $(M,g)$ by its small open convex subset. By doing so we may assume that any pair of points of $(M,g)$ is connected by a unique geodesic and there are no conjugate points. In particular, for any subinterval $[a,b]\subset (a_0,b_0)$ and any tangent vectors $\vec{v} \in \T_{\gamma (a)}$ and $\vec{w} \in \T_{\gamma (b)}$ there exists a unique Jacobi field $\vec i$ along $\gamma$ such that $\vec i(a)=\vec{v}$ and~$\vec i(b)=\vec{w}$. Since Jacobi fields are variational fields of geodesic variations, the convexity of $\mathfrak{C}$ implies the following: \begin{thm}{Observation} Suppose $\vec i$ is a Jacobi field along $\gamma$ and $a_0<a<t<b<b_0$. If $\vec i(a)\in \K_{\gamma(a)}\mathfrak{C}$ and $\vec i(b)\in \K_{\gamma(b)}\mathfrak{C}$, then $\vec i(t)\in \K_{\gamma(t)}\mathfrak{C}$. \end{thm} Choose a subinterval $[a,b] \subset (a_0,b_0)$. Given a large positive integer $k$, consider the arithmetic progression $t_0,\dots,t_{k+1}$ such that $t_0=a$ and $t_k=b$. Choose a tangent vector $\vec{v}_0\in\T_{\gamma(a)}$. Consider the sequence of vectors $\vec{v}_i\z\in\T_{\gamma(t_i)}$ defined recursively by $\vec{v}_{i+1}=2\cdot \vec i_i(t_{i+1})$, where $t\mapsto \vec i_i(t)$ denotes the Jacobi field along $\gamma$ such that $\vec i_i(t_i)=\vec{v}_i$ and $\vec i_i(t_{i+2})=0$. \begin{figure}[ht!]\vskip-0mm\centering\includegraphics{mppics/pic-1}\end{figure} Define $\iota_k\:\T_{\gamma(a)}\to \T_{\gamma(b)}$ by setting $\iota_k(\vec v_0)\df \vec v_k$. According to the observation, if $\vec v_0\in \K_{\gamma(a)}\mathfrak{C}$, then $\iota_k(\vec v_0)\in \K_{\gamma(b)}\mathfrak{C}$. As observed in \cite{Ber-Nik} and~\cite{Petruninpar}, $\iota_k(\vec v_0)$ converges to the parallel translation of $\vec v_0$ along $\gamma$ as $k\to \infty$. Since $\K_{\gamma(b)}\mathfrak{C}$ is closed, the parallel translation along $\gamma$ maps $\K_{\gamma(a)}\mathfrak{C}$ in $\K_{\gamma(b)}\mathfrak{C}$. Switching the direction of $\gamma$, we get the opposite inclusion. That is, the tangent cones $\K_{\gamma(t)}\mathfrak{C}$ are parallel along $\gamma$ --- the main part is proved. Let us use the parallel translation along $\gamma$ to identify the tangent spaces at points on $\gamma$. This way we identify the tangent cones $\K_{\gamma(t)}\mathfrak{C}$ for all $t$; denote the obtained cone by $\K$. For $\vec{v}\in \K$ and small $\epsilon>0$, consider the unique Jacobi field $\vec i_\epsilon$ along $\gamma$ with $\vec i_\epsilon (a+\eps)\z=\vec i_\epsilon(a-\eps)=\vec{v}$. Due to the Jacobi equation, \[\vec i_\epsilon (a)=\vec{v} +\tfrac12\cdot\eps^2\cdot R^2_{\gamma'(a)}\cdot \vec{v} +o(\eps^2).\] According to the observation, $\vec i_\epsilon(a)\in \K$ for any $\eps>0$. Since $\K$ is a closed convex cone, we get $R^2_{\gamma'}\cdot \vec{v}\z\in \K_\vec{v}\K$ --- \ref{SHORT.lem:key:a} is proved. Finally, $\vec{v}\in \L_{\gamma(a)}\mathfrak{C}$ $ \iff$ $\vec{v}, -\vec{v}\in \K$ $\iff$ $\K_\vec{v}\K=\K$. Therefore, if $\vec{v}\in \L_{\gamma(a)}\mathfrak{C}$, then $\pm R^2_{\gamma'(a)}\cdot \vec{v}\in \K$, and hence $R^2_{\gamma'(a)}\cdot \vec{v}\in \L_{\gamma(a)}\mathfrak{C}$. That is, $\L_{\gamma(a)}\mathfrak{C}$ is an invariant subspace of $R^2_{\gamma'(a)}$ --- \ref{SHORT.lem:key:b} is proved. \qeds \section{Main proposition} In this section we will prove the main proposition \ref{prom:rank} modulo one claim; let us introduce notations to state it. Let $M$ be a smooth $m$-dimensional manifold with a Riemannian metric $g$. Suppose $\vec x$ is a nonzero tangent vector at a point $p\in M$. Recall that $R^k_\vec x\:\T_p\z\to\T_p$ denotes the Jacobi operators of $g$ of order $k$ for a tangent vector $\vec x\in\T_p$. An invariant subspace $V\subset \T_p$ of $R^k_\vec x$ will be called \emph{exceptional} if $V\ni \vec x$ and $1< \dim V<m$. (Recall that $R^k_\vec x\cdot \vec x=0$ for any $k$ and $\vec x\in \T_p$. Therefore, the subspace spanned by $\vec x$ is always an invariant subspace of $R^k_\vec x$ for any $k$.) We will say that a metric $g$ on a manifold $M$ is $k$-exceptional if there exists a point $p\in M$ and a non-zero vector $\vec x\in T_p M$, such that the operators $R^2_\vec x,\dots, R^k _\vec x$ have a common exceptional invariant subspace. \begin{thm}{Claim}\label{clm:codim-sigma} For any smooth manifold $M$, there exists an integer $k$ such that the $\mathcal C^k$-generic Riemannian metric is not $k$-exceptional. \end{thm} For $k=2$ (and, probably, also for $k=3$) every Riemannian metric is $k$-exceptional. However, for larger $k$, the $k$-exceptionality defines more and more restrictions on the curvature tensor. Therefore, it is not surprising that \emph{most} Riemannian metrics are not $k$-exceptional, for sufficiently large $k$. A formal proof of this claim is built on Thom transversality theorem; it will be derived in Appendix~\ref{sec:normalization}. \parit{Proof of \ref{prom:rank} modulo \ref{clm:codim-sigma}.} Suppose that $p$ is a nonextreme point of $\mathfrak{C}$; that is, $p$ lies on a nonconstant geodesic $\gamma\:(a,b)\to\mathfrak{C}$. According to the key lemma (\ref{lem:key}), the family of \emph{maximal linear subspaces} $\L_{\gamma(t)}\mathfrak{C}$ of $\K_{\gamma(t)}\mathfrak{C}$ is parallel along $\gamma$ and invariant for $R^2_{\gamma(t)}$. Note that $\L_p$ is exceptional if and only if the rank of $p$ is neither $1$, nor $m$. Further, if a nontrivial geodesic $\gamma$ admits a parallel family $\L_t\subset \T_{\gamma(t)}$ of exceptional invariant subspaces for all $R^2_{\gamma(t)}$, then we say that $\gamma$ is \emph{exceptional}. So, it is sufficient to show that $\mathcal C^2$-generic Riemannian manifolds $(M,g)$ do not have exceptional geodesics. Choose a compact subset $K\subset M$ and $\eps>0$. Consider the set $Z(K,\eps)$ of all Riemannian metrics $g$ on $M$ such that there exists an exceptional geodesic $\gamma$ in $(M,g)$ that starts at a point in $K$ and has length $\eps$. Observe that the geodesics and the curvature tensor depend continuously on the Riemannian metric in $\mathcal C^2$-topology. Therefore, the set $Z(K,\eps)$ is closed with respect to the $\mathcal C^2$-topology. Suppose $\gamma$ is an exceptional geodesic that passes thru $p$ in the direction $\vec x$. By taking covariant derivatives along $\gamma$, we get that the Jacobi operators $R^k_\vec x$ have a common exceptional invariant subspace $\L_p$, for all $k \geq 2$. In other words, for any integer $k \geq 2$ we have \[Z^k(K)\supset Z(K,\eps),\eqlbl{eq:Zk<Z(K,eps)}\] where $Z^k(K)$ denotes the set of all smooth Riemannian metrics on $M$ such that for some $p\in K$ and $\vec x\in\T_p\setminus \{0 \}$ the operators $R^2_\vec x,\dots, R^k _\vec x$ have a common exceptional invariant subspace. By the very definition of $Z^k(K)$, it is closed with respect to $\mathcal C^{k}$-topology on the space of all Riemannian metrics on $M$. By Claim~\ref{clm:codim-sigma}, we can choose $k$ so that $Z^k(K)$ is $\mathcal{C}^k$-meager for any $K$; that is, its complement is a dense G-delta set in the space of all Riemannian metrics on $M$ with $\mathcal{C}^k$-topology. Since $Z(K,\eps)$ is closed with respect to the $\mathcal C^2$-topology, \ref{eq:Zk<Z(K,eps)} implies that $Z(K,\eps)$ is $\mathcal{C}^2$-meager in the space of all Riemannian metrics on $M$. Choose a nested sequence of compact sets $K_1\z\subset K_2\subset \dots$ that cover $M$ and set $\eps_n=\tfrac1n$. Set \[Z(M)=\bigcup_n Z(K_n,\eps_n);\] since $Z(K_n,\eps_n)$ is $\mathcal{C}^2$-meager for every $n$, so is $Z(M)$. It remains to note that $g\in Z(M)$ if and only if $(M,g)$ has an exceptional geodesic. \qeds \section{Main theorem} The following proposition is a special case of a result of Nan Li and Aaron Naber \cite[Theorem 1.6]{li-naber}. It also can be deduced from the result of Luděk Zajíček~\cite{zajicek}. \begin{thm}{Proposition}\label{prop:rectifiable} Let $\mathfrak{C}$ be a closed convex set in a Riemannian manifold $(M,g)$. Then the set of points in $\mathfrak{C}$ with rank at most $k$ is countably \emph{$k$-rectifiable}; that is, this set can be covered by images of a countable set of Lipschitz maps $\RR^k\to (M,g)$. In particular, this set contains at most countably many disjoint Borel sets with positive $k$-dimensional Hausdorff measure. \end{thm} \parit{Proof of \ref{cor:main} and \ref{thm:main}.} We may assume that $\mathfrak {C}$ is connected. Assume, in addition, that $\mathfrak{C}$ is closed. According to \cite[Theorem 1.6]{cheeger-gromoll}, a connected closed convex set $\mathfrak{C}$ in a Riemannian manifold $(M,g)$ is homeomorphic to a manifold with boundary, say~$\mathfrak{B}$. Moreover, the complement $\mathfrak{C}\backslash \mathfrak{B}$ is a totally geodesic submanifold of $(M,g)$; denote its dimension by $d$. The tangent cone $\K_p \mathfrak C$ at any $p \in \mathfrak C\setminus \mathfrak B$ is a $d$-dimensional linear space. By the main proposition (\ref{prom:rank}), $d=0, 1$, or $m$. If $d=0$, then $\mathfrak{C}$ is a single point. If $d=1$, then $\mathfrak C\setminus \mathfrak B$ is a geodesic in $(M,g)$; hence $\mathfrak C$ is contained in a geodesic as well. If $d=m$, then by the invariance of domain we have $\mathfrak C\setminus \mathfrak B$ is open in $M$; that is, $\mathfrak C$ is full-dimensional --- \ref{cor:main} is proved. By the main proposition (\ref{prom:rank}) any non-extreme point $x\in \partial \mathfrak C$ has rank 1. Thus, there is a unique line in $\K_x\mathfrak C$ and it is the tangent line of a geodesic $\gamma\subset\mathfrak C$ that has $p$ as an inner point. Let us extend $\gamma$ to a maximal open interval so that $\gamma$ stays in $\mathfrak C$; note that $p$ uniquely defines $\gamma$. By the main statement of the key lemma, all points on $\gamma$ lie on $\partial \mathfrak C$. By definition, all such geodesics consist of non-extreme points. It gives a subdivision of non-extreme points of $\partial\mathfrak C$ into geodesics with positive lengths. By \ref{prop:rectifiable}, there are only countably many such geodesics. If $\mathfrak C$ is not closed, consider its closure $\bar {\mathfrak C}$; denote by $\bar{\mathfrak{B}}$ its boundary. Note that $\bar{\mathfrak C}$ is locally convex and the above arguments apply to closed locally convex subsets without changes. Observe that any nonextreme point of $\mathfrak{C}$ is a nonextreme point of $\bar{\mathfrak{C}}$ and $\bar{\mathfrak{C}}\backslash\bar{\mathfrak{B}}\subset\mathfrak{C}$ \cite[Lemma 1.5]{cheeger-gromoll}. Hence, the statement follows. \qeds \parbf{Few words before the proof of \ref{cor:caratheodory}.} Let $Q$ be a subset of a Riemannian manifold $M$. Set $Q=Q_0$ and let inductively $Q_{i+1}$ to be the union of all minimizing geodesics between pairs of points of $Q_i$. By definition, the increasing countable union $\mathfrak{C}= \bigcup_iQ_i$ is the convex hull of $Q$. By this description, any point in $\mathfrak{C} \setminus Q$ is a non-extreme point of the convex set $\mathfrak {C}$. Note, that if $M$ is complete and $Q$ is compact, then each $Q_i$ is compact. In the Euclidean space $M=\R^m$ (as well as in the round sphere or in the Lobachevsky space) Carath\'eodory's theorem \cite{Handbook} implies $\mathfrak{C} =Q_m$. As a consequence of Corollary \ref{cor:caratheodory}, we will know that in a generic Riemannian manifold the convex hull $\mathfrak{C}$ of $Q$ is strictly larger than $Q_i$, for all $i$. \parit{Proof of \ref{cor:caratheodory}.} Without loss of generality we can assume that $\mathfrak{C}$ is a proper subset of $M$; in particular $\partial\mathfrak{C}\ne \varnothing$. Since $Q$ is not contained in a geodesic, by the main theorem, $\mathfrak{C}$ has a non-trivial interior. By the construction of $\mathfrak{C}$ above, any point $x\in \mathfrak{C} \setminus Q$ is not an extreme point of~$\mathfrak{C}$. Assume $\partial \mathfrak{C} \not\subset Q$. By the main theorem, the topological manifold $\partial \mathfrak{C}$ is the union of the closed subset $Q\cap \partial \mathfrak{C}$ and a countable union of geodesics. But $\partial \mathfrak{C} \setminus Q$ is an $(m-1)$-dimensional topological manifold. By dimensional reasons, it is not a union of countably many rectifiable curves --- a contradiction. \qeds
\section{Introduction Let $P(z)$ be a polynomial in $\mathbb{C}[z]$ that is not identically zero. We assume to begin with that $P$ has degree $N$, and that $P$ factors into linear factors in $\mathbb{C}[z]$ as \begin{equation}\label{intro1} P(z) = c_0 + c_1z + c_2 z^2 + \cdots + c_N z^N = c_N \prod_{n = 1}^N (z - \alpha_n). \end{equation} If $e : \mathbb{R}/\mathbb{Z} \rightarrow \mathbb{T}$ is the continuous isomorphism given by $e(t) = e^{2 \pi i t}$, then the Mahler measure of $P$ is the positive real number \begin{equation}\label{intro5} \mathfrak M(P) = \exp\biggl(\int_{\mathbb{R}/\mathbb{Z}} \log \bigl|P\bigl(e(t)\bigr)\bigr|\ \text{\rm d}t\biggr) = |c_N| \prod_{n = 1}^N \max\{1, |\alpha_n|\}. \end{equation} The equality on the right of (\ref{intro5}) follows from Jensen's formula. If $P_1(z)$ and $P_2(z)$ are both nonzero polynomials in $\mathbb{C}[z]$, then it is immediate from (\ref{intro5}) that \begin{equation*}\label{intro7} \mathfrak M\bigl(P_1 P_2\bigr) = \mathfrak M\big(P_1\bigr) \mathfrak M\bigl(P_2\bigr). \end{equation*} Mahler measure plays an important role in number theory and in algebraic dynamics, as discussed in \cite{everest1999}, \cite{pritsker2007}, \cite[Chapter 5]{schmidt1995}, and \cite{smyth2007}. Here we restrict our attention to the problem of proving a lower bound for $\mathfrak M(P)$ when the polynomial $P(z)$ has complex coefficients. We establish an analogous result for polynomials in several variables. For $P(z)$ of degree $N$ and given by (\ref{intro1}), there is a well known lower bound due to Mahler which asserts that \begin{equation}\label{intro10} |c_n| \le \binom{N}{n} \mathfrak M(P),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} The inequality (\ref{intro10}) is implicit in \cite{mahler1960}, and is stated explicitly in \cite[section 2]{mahler1962}, (see also the proof in \cite[Theorem 1.6.7]{bombieri2006}). If \begin{equation*}\label{intro15} P(z) = (z \pm 1)^N, \end{equation*} then there is equality in (\ref{intro10}) for each $n = 0, 1, 2, \dots , N$. We now assume that $P(z)$ is a polynomial in $\mathbb{C}[z]$ that is not identically zero, and we assume that $P(z)$ is given by \begin{equation}\label{intro20} P(z) = c_0 z^{m_0} + c_1 z^{m_1} + c_2 z^{m_2} + \cdots + c_N z^{m_N}, \end{equation} where $N$ is a nonnegative integer, and $m_0, m_1, m_2, \dots , m_N$, are nonnegative integers such that \begin{equation}\label{intro25} m_0 < m_1 < m_2 < \cdots < m_N. \end{equation} We wish to establish a lower bound for $\mathfrak M(P)$ which depends on the coefficients and on the number of monomials, but which does {\it not} depend on the degree of $P$. Such a result was recently proved by Dobrowolski and Smyth \cite{smyth2017}. We use a similar argument, but we obtain a sharper result that includes Mahler's inequality (\ref{intro10}) as a special case. \begin{theorem}\label{thmintro1} Let $P(z)$ be a polynomial in $\mathbb{C}[z]$ that is not identically zero, and is given by {\rm (\ref{intro20})}. Then we have \begin{equation}\label{intro30} |c_n| \le \binom{N}{n} \mathfrak M(P),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} \end{theorem} Let $f : \mathbb{R}/\mathbb{Z} \rightarrow \mathbb{C}$ be a trigonometric polynomial, not identically zero, and a sum of at most $N + 1$ distinct characters. Then we can write $f$ as \begin{equation}\label{intro33} f(t) = \sum_{n = 0}^N c_n e(m_n t), \end{equation} where $c_0, c_1, c_2, \dots , c_N$, are complex coefficients, and $m_0, m_1, m_2, \dots , m_N$, are integers such that \begin{equation*}\label{intro35} m_0 < m_1 < m_2 < \cdots < m_N. \end{equation*} As $f$ is not identically zero, the Mahler measure of $f$ is the positive number \begin{equation*}\label{intro38} \mathfrak M(f) = \exp\biggl(\int_{\mathbb{R}/\mathbb{Z}} \log |f(t)|\ \text{\rm d}t\biggr). \end{equation*} It is trivial that $f(t)$ and $e(-m_0 t)f(t)$ have the same Mahler measure. Thus we get the following alternative formulation of Theorem \ref{thmintro1}. \begin{corollary}\label{corintro1} Let $f(t)$ be a trigonometric polynomial with complex coefficients that is not identically zero, and is given by {\rm (\ref{intro33})}. Then we have \begin{equation}\label{intro40} |c_n| \le \binom{N}{n} \mathfrak M(f),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} \end{corollary} For positive integers $M$ we will prove an extension of Corollary \ref{corintro1} to trigonometric polynomials \begin{equation}\label{intro50} F : (\mathbb{R}/\mathbb{Z})^M \rightarrow \mathbb{C}, \end{equation} that are not identically zero. The Fourier transform of $F$ is the function \begin{equation*}\label{intro53} \widehat F : \mathbb{Z}^M \rightarrow \mathbb{C}, \end{equation*} defined at each lattice point $\boldsymbol k$ in $\mathbb{Z}^M$ by \begin{equation}\label{intro55} \widehat F(\boldsymbol k) = \int_{(\mathbb{R}/\mathbb{Z})^M} F(\boldsymbol x) e\bigl(-\boldsymbol k^T \boldsymbol x\bigr)\ \text{\rm d}\bx. \end{equation} In the integral on the right of (\ref{intro55}) we write $\text{\rm d}\bx$ for integration with respect to a Haar measure on the Borel subsets of $(\mathbb{R}/\mathbb{Z})^M$ normalized so that $(\mathbb{R}/\mathbb{Z})^M$ has measure $1$. We write $\boldsymbol k$ for a (column) vector in $\mathbb{Z}^M$, $\boldsymbol k^T$ for the transpose of $\boldsymbol k$, $\boldsymbol x$ for a (column) vector in $(\mathbb{R}/\mathbb{Z})^M$, and therefore \begin{equation*}\label{intro60} \boldsymbol k^T \boldsymbol x = k_1 x_1 + k_2 x_2 + \cdots + k_N x_N. \end{equation*} As $F$ is not identically zero, the Mahler measure of $F$ is the positive real number \begin{equation*}\label{intro75} \mathfrak M(F) = \exp\biggl(\int_{(\mathbb{R}/\mathbb{Z})^M} \log \bigl|F(\boldsymbol x)\bigr|\ \text{\rm d}\bx\biggr). \end{equation*} We assume that $\mathfrak S \subseteq \mathbb{Z}^M$ is a nonempty, finite set that contains the support of $\widehat F$. That is, we assume that \begin{equation}\label{intro80} \{\boldsymbol k \in \mathbb{Z}^M : \widehat F(\boldsymbol k) \not= 0\} \subseteq \mathfrak S, \end{equation} and therefore $F$ has the representation \begin{equation}\label{intro70} F(\boldsymbol x) = \sum_{\boldsymbol k \in \mathfrak S} \widehat F(\boldsymbol k) e\bigl(\boldsymbol k^T \boldsymbol x\bigr). \end{equation} Basic results in this setting can be found in Rudin \cite[Sections 8.3 and 8.4]{rudin1962}. If $\boldsymbol \alpha = (\alpha_m)$ is a (column) vector in $\mathbb{R}^M$, we write \begin{equation*}\label{intro100} \varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R} \end{equation*} for the homomorphism given by \begin{equation}\label{intro105} \varphi_{\boldsymbol \alpha}(\boldsymbol k) = \boldsymbol k^T \boldsymbol \alpha = k_1 \alpha_1 + k_2 \alpha_2 + \cdots + k_M \alpha_M. \end{equation} It is easy to verify that $\varphi_{\boldsymbol \alpha}$ is an injective homomorphism if and only if the coordinates $\alpha_1, \alpha_2, \dots , \alpha_M$, are $\mathbb{Q}$-linearly independent real numbers. Let the nonempty, finite set $\mathfrak S \subseteq \mathbb{Z}^M$ have cardinality $N+1$, where $0 \le N$. If $\varphi_{\boldsymbol \alpha}$ is an injective homomorphism, then the set \begin{equation*}\label{intro110} \big\{\varphi_{\boldsymbol \alpha}(\boldsymbol k) : \boldsymbol k \in \mathfrak S\big\} \end{equation*} consists of exactly $N+1$ real numbers. It follows that the set $\mathfrak S$ can be indexed so that \begin{equation}\label{intro115} \mathfrak S = \big\{\boldsymbol k_0, \boldsymbol k_1, \boldsymbol k_2, \dots , \boldsymbol k_N\big\}, \end{equation} and \begin{equation}\label{intro120} \varphi_{\boldsymbol \alpha}\bigl(\boldsymbol k_0\bigr) < \varphi_{\boldsymbol \alpha}\bigl(\boldsymbol k_1\bigr) < \varphi_{\boldsymbol \alpha}\bigl(\boldsymbol k_2\bigr) < \cdots < \varphi_{\boldsymbol \alpha}\bigl(\boldsymbol k_N\bigr). \end{equation} By using a limiting argument introduced in a paper of Boyd \cite{boyd1981}, we will prove the following generalization of (\ref{intro40}). \begin{theorem}\label{thmintro2} Let $F : (\mathbb{R}/\mathbb{Z})^M \rightarrow \mathbb{C}$ be a trigonometric polynomial that is not identically zero, and is given by {\rm (\ref{intro70})}. Let $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be an injective homomorphism, and assume that the finite set $\mathfrak S$, which contains the support of $\widehat F$, is indexed so that {\rm (\ref{intro115})} and {\rm (\ref{intro120})} hold. Then we have \begin{equation}\label{intro125} \bigl|\widehat F\bigl(\boldsymbol k_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} \end{theorem} Let $F$ and $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be as in the statement of Theorem \ref{thmintro2}, and then let $\varphi_{\boldsymbol \beta} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be a second injective homomorphism. It follows that $\mathfrak S$ can be indexed so that (\ref{intro115}) and (\ref{intro120}) hold, and $\mathfrak S$ can also be indexed so that \begin{equation}\label{intro145} \mathfrak S = \big\{\boldsymbol \ell_0, \boldsymbol \ell_1, \boldsymbol \ell_2, \dots , \boldsymbol \ell_N\big\}, \end{equation} and \begin{equation}\label{intro150} \varphi_{\boldsymbol \beta}\bigl(\boldsymbol \ell_0\bigr) < \varphi_{\boldsymbol \beta}\bigl(\boldsymbol \ell_1\bigr) < \varphi_{\boldsymbol \beta}\bigl(\boldsymbol \ell_2\bigr) < \cdots < \varphi_{\boldsymbol \beta}\bigl(\boldsymbol \ell_N\bigr). \end{equation} In general the indexing (\ref{intro115}) is distinct from the indexing (\ref{intro145}). Therefore the system of inequalities \begin{equation}\label{intro155} \bigl|\widehat F\bigl(\boldsymbol k_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F),\quad\text{for each $n = 0, 1, 2, \dots , N$}, \end{equation} and \begin{equation}\label{intro160} \bigl|\widehat F\bigl(\boldsymbol \ell_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F),\quad\text{for each $n = 0, 1, 2, \dots , N$}, \end{equation} which follow from Theorem \ref{thmintro2}, are different, and in general neither system of inequalities implies the other. \section{Proof of Theorem \ref{thmintro1}} It follows from (\ref{intro5}) that the polynomial $P(z)$, and the polynomial $z^{-m_0} P(z)$, have the same Mahler measure. Hence we may assume without loss of generality that the exponents $m_0, m_1, m_2, \dots , m_N$, in the representation (\ref{intro20}) satisfy the more restrictive condition \begin{equation}\label{mahler250} 0 = m_0 < m_1 < m_2 < \cdots < m_N. \end{equation} If $N = 0$ then (\ref{intro30}) is trivial. If $N = 1$, then \begin{equation*}\label{mahler252} \binom{1}{0} = \binom{1}{1} = 1, \end{equation*} and using Jensen's formula we find that \begin{equation*}\label{mahler254} \mathfrak M\bigl(c_0 + c_1 z^{m_1}\bigr) = \max\{|c_0|, |c_1|\}. \end{equation*} Therefore the inequality (\ref{intro30}) holds if $N = 1$. Throughout the remainder of the proof we assume that $2 \le N$, and we argue by induction on $N$. Thus we assume that the inequality (\ref{intro30}) holds for polynomials that can be expressed as a sum of strictly less than $N + 1$ monomials. Besides the polynomial \begin{equation}\label{mahler256} P(z) = c_0 z^{m_0} + c_1 z^{m_1} + c_2 z^{m_2} + \cdots + c_N z^{m_N}, \end{equation} we will work with the polynomial \begin{equation}\label{mahler258} Q(z) = z^{m_N} P\bigl(z^{-1}\bigr) = c_0 z^{m_N - m_0} + c_1 z^{m_N - m_1} + c_2 z^{m_N- m_2} + \cdots + c_N. \end{equation} It follows from (\ref{intro5}) that \begin{equation}\label{mahler260} \mathfrak M(Q) = \exp\biggl(\int_{\mathbb{R}/\mathbb{Z}} \log \bigl|e(m_N t) P\bigl(e(- t)\bigr)\bigr|\ \text{\rm d}t\biggr) = \mathfrak M(P). \end{equation} Next we apply an inequality of Mahler \cite{mahler1961} to conclude that both \begin{equation}\label{mahler262} \mathfrak M\bigl(P^{\prime}\bigr) \le m_N \mathfrak M(P),\quad\text{and}\quad \mathfrak M\bigl(Q^{\prime}\bigr) \le m_N \mathfrak M(Q). \end{equation} Because \begin{equation*}\label{mahler266} P^{\prime}(z) = \sum_{n = 1}^N c_n m_n z^{m_n - 1} \end{equation*} is a sum of strictly less than $N + 1$ monomials, we can apply the inductive hypothesis to $P^{\prime}$. It follows that \begin{equation}\label{mahler271} |c_n| m_n \le \binom{N-1}{n-1} \mathfrak M\bigl(P^{\prime}\bigr) \le m_N \binom{N-1}{n-1} \mathfrak M(P) \end{equation} for each $n = 1, 2, \dots , N$. As \begin{equation*}\label{276} m_0 = 0,\quad\text{and}\quad \binom{N-1}{-1} = 0, \end{equation*} it is trivial that (\ref{mahler271}) also holds at $n = 0$. In a similar manner, \begin{equation*}\label{mahler281} Q^{\prime}(z) = \sum_{n = 0}^{N-1} c_n (m_N - m_n) z^{m_N - m_n - 1} \end{equation*} is a sum of strictly less that $N + 1$ monomials. We apply the inductive hypothesis to $Q^{\prime}$, and get the inequality \begin{equation}\label{mahler286} |c_n| (m_N - m_n) \le \binom{N-1}{N - 1 - n} \mathfrak M\bigl(Q^{\prime}\bigr) \le m_N \binom{N-1}{n} \mathfrak M(Q) \end{equation} for each $n = 0, 1, 2, \dots , N-1$. In this case we have \begin{equation*}\label{mahler291} (m_N - m_N) = 0,\quad\text{and}\quad \binom{N-1}{N} = 0, \end{equation*} and therefore (\ref{mahler286}) also holds at $n = N$. To complete the proof we use the identity (\ref{mahler260}), and we apply the inequality (\ref{mahler271}), and the inequality (\ref{mahler286}). In this way we obtain the bound \begin{equation}\label{mahler296} \begin{split} |c_n| m_N &= |c_n| m_n + |c_n| (m_N - m_n)\\ &\le m_N \binom{N-1}{n-1} \mathfrak M(P) + m_N \binom{N-1}{n} \mathfrak M(P)\\ &= m_N \binom{N}{n} \mathfrak M(P). \end{split} \end{equation} This verifies (\ref{intro30}). \section{Archimedean orderings in the group $\mathbb{Z}^M$} In this section we consider $\mathbb{Z}^M$ as an ordered group. To avoid degenerate situations, we assume throughout this section that $2 \le M$. Let $\boldsymbol \alpha$ belong to $\mathbb{R}^M$, and let $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be the homomorphism defined by (\ref{intro105}). We assume that the coordinates $\alpha_1, \alpha_2, \dots , \alpha_M$, are $\mathbb{Q}$-linearly independent so that $\varphi_{\boldsymbol \alpha}$ is an injective homomorphism. It follows, as in \cite[Theorem 8.1.2 (c)]{rudin1962}, that $\varphi_{\boldsymbol \alpha}$ induces an archimedean ordering in the group $\mathbb{Z}^M$. That is, if $\boldsymbol k$ and $\boldsymbol \ell$ are distinct points in $\mathbb{Z}^M$ we write $\boldsymbol k < \boldsymbol \ell$ if and only if \begin{equation*}\label{order-5} \varphi_{\boldsymbol \alpha}(\boldsymbol k) = \boldsymbol k^T \boldsymbol \alpha < \varphi_{\boldsymbol \alpha}(\boldsymbol \ell) = \boldsymbol \ell^T \boldsymbol \alpha \end{equation*} in $\mathbb{R}$. Therefore $\bigl(\mathbb{Z}^M, <\bigr)$ is an ordered group, and the order is archimedean. If $\mathfrak S \subseteq \mathbb{Z}^M$ is a nonempty, finite subset of cardinality $N + 1$, then the elements of $\mathfrak S$ can be indexed so that \begin{equation}\label{order1} \mathfrak S = \big\{\boldsymbol k_0, \boldsymbol k_1, \boldsymbol k_2, \dots , \boldsymbol k_N\big\} \end{equation} and \begin{equation}\label{order5} \boldsymbol k_0^T \boldsymbol \alpha < \boldsymbol k_1^T \boldsymbol \alpha < \boldsymbol k_2^T \boldsymbol \alpha < \cdots < \boldsymbol k_N^T \boldsymbol \alpha. \end{equation} A more general discussion of ordered groups is given in \cite[Chapter 8]{rudin1962}. Here we require only the indexing (\ref{order1}) that is induced in the finite subset $\mathfrak S$ by the injective homomorphism $\varphi_{\boldsymbol \alpha}$. If $\boldsymbol b = (b_m)$ is a (column) vector in $\mathbb{Z}^M$, we define the norm \begin{equation}\label{order7} \|\boldsymbol b\|_{\infty} = \max\big\{|b_m| : 1 \le m \le M\big\}. \end{equation} And if $\mathfrak S \subseteq \mathbb{Z}^M$ is a nonempty, finite subset we write \begin{equation*}\label{order10} \|\mathfrak S\|_{\infty} = \max\big\{\|\boldsymbol k\|_{\infty} : \boldsymbol k \in \mathfrak S\big\}. \end{equation*} Following Boyd \cite{boyd1981}, we define the function \begin{equation*}\label{order20} \nu : \mathbb{Z}^M \setminus \{\boldsymbol 0\} \rightarrow \{1, 2, 3, \dots \} \end{equation*} by \begin{equation}\label{order25} \nu(\boldsymbol a) = \min\big\{\|\boldsymbol b\|_{\infty} : \text{$\boldsymbol b \in \mathbb{Z}^M$, $\boldsymbol b \not= \boldsymbol 0$, and $\boldsymbol b^T \boldsymbol a = 0$}\big\}. \end{equation} It is known (see \cite{boyd1981}) that the function $\boldsymbol a \mapsto \nu(\boldsymbol a)$ is unbounded, and a stronger conclusion follows from our Lemma \ref{lemorder2}. Moreover, if $\nu(\boldsymbol a)$ is sufficiently large, then the map $\boldsymbol k \mapsto \boldsymbol k^T \boldsymbol a$ restricted to points $\boldsymbol k$ in the finite subset $\mathfrak S$ takes distinct integer values, and therefore induces an ordering in $\mathfrak S$. This follows immediately from the triangle inequality for the norm (\ref{order7}), and was noted in \cite{boyd1981}. As this result will be important in our proof of Theorem \ref{thmintro2}, we prove it here as a separate lemma. \begin{lemma}{\sc [D.~Boyd]}\label{lemorder1} Let $\mathfrak S \subseteq \mathbb{Z}^M$ be a nonempty, finite subset with cardinality $|\mathfrak S| = N + 1$, and let $\boldsymbol a \not= \boldsymbol 0$ be a point in $\mathbb{Z}^M$ such that \begin{equation}\label{order28} 2 \|\mathfrak S\|_{\infty} < \nu(\boldsymbol a). \end{equation} Then \begin{equation}\label{order30} \big\{\boldsymbol k^T \boldsymbol a : \boldsymbol k \in \mathfrak S\big\} \end{equation} is a collection of $N + 1$ distinct integers. \end{lemma} \begin{proof} If $N = 0$ the result is trivial. Assume that $1 \le N$, and let $\boldsymbol k$ and $\boldsymbol \ell$ be distinct points in $\mathfrak S$. If \begin{equation*}\label{order35} \boldsymbol k^T \boldsymbol a = \boldsymbol \ell^T \boldsymbol a, \end{equation*} then \begin{equation*}\label{order40} (\boldsymbol k - \boldsymbol \ell)^T \boldsymbol a = \boldsymbol 0. \end{equation*} It follows that \begin{equation*}\label{order45} \nu(\boldsymbol a) \le \|\boldsymbol k - \boldsymbol \ell\|_{\infty} \le \|\boldsymbol k\|_{\infty} + \|\boldsymbol \ell\|_{\infty} \le 2 \|\mathfrak S\|_{\infty}, \end{equation*} and this contradicts the hypothesis (\ref{order28}). We conclude that (\ref{order30}) contains $N + 1$ distinct integers. \end{proof} Let $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be an injective homomorphism, and let $\mathfrak S \subseteq \mathbb{Z}^M$ be a nonempty, finite subset of cardinality $N + 1$. We assume that the elements of $\mathfrak S$ are indexed so that both (\ref{order1}) and (\ref{order5}) hold. If $\boldsymbol a \not= \boldsymbol 0$ in $\mathbb{Z}^M$ satisfies (\ref{order28}), then it may happen that the indexing (\ref{order1}) also satisfies the system of inequalities \begin{equation*}\label{order50} \boldsymbol k_0^T \boldsymbol a < \boldsymbol k_1^T \boldsymbol a < \boldsymbol k_2^T \boldsymbol a < \cdots < \boldsymbol k_N^T \boldsymbol a. \end{equation*} We write $\mc B(\boldsymbol \alpha, \mathfrak S)$ for the collection of such lattice points $\boldsymbol a$. That is, we define \begin{equation}\label{order55} \begin{split} \mc B(\boldsymbol \alpha, \mathfrak S) &= \big\{\boldsymbol a \in \mathbb{Z}^M : \text{$2 \|\mathfrak S\|_{\infty} < \nu(\boldsymbol a)$}\\ &\qquad\qquad\text{and $\boldsymbol k_0^T \boldsymbol a < \boldsymbol k_1^T \boldsymbol a < \boldsymbol k_2^T \boldsymbol a < \cdots < \boldsymbol k_N^T \boldsymbol a$}\big\}. \end{split} \end{equation} The following lemma establishes a crucial property of $\mc B(\boldsymbol \alpha, \mathfrak S)$. \begin{lemma}\label{lemorder2} Let the subset $\mc B(\boldsymbol \alpha, \mathfrak S)$ be defined by {\rm (\ref{order55})}. Then $\mc B(\boldsymbol \alpha, \mathfrak S)$ is an infinite set, and the function $\nu$ restricted to $\mc B(\boldsymbol \alpha, \mathfrak S)$, is unbounded on $\mc B(\boldsymbol \alpha, \mathfrak S)$. \end{lemma} \begin{proof} By hypothesis \begin{equation}\label{order265} \eta = \eta(\boldsymbol \alpha, \mathfrak S) = \min\big\{\boldsymbol k_n^T \boldsymbol \alpha - \boldsymbol k_{n-1}^T \boldsymbol \alpha : 1 \le n \le N\big\} \end{equation} is a positive constant that depends on $\boldsymbol \alpha$ and $\mathfrak S$. By Dirichlet's theorem in Diophantine approximation (see \cite{cassels1965} or \cite{schmidt1980}), for each positive integer $Q$ there exists an integer $q$ such that $1 \le q \le Q$, and \begin{equation}\label{order270} \max\big\{\|q \alpha_m\| : m = 1, 2, \dots , M\big\} \le (Q + 1)^{-\frac{1}{M}} \le (q + 1)^{-\frac{1}{M}}, \end{equation} where $\|\ \|$ on the left of (\ref{order270}) is the distance to the nearest integer function. Let $\mc Q$ be the collection of positive integers $q$ such that \begin{equation}\label{order272} \max\big\{\|q \alpha_m\| : m = 1, 2, \dots , M\big\} \le (q + 1)^{-\frac{1}{M}}. \end{equation} Because $2 \le M$, at least one of the coordinates $\alpha_m$ is irrational, and it follows from (\ref{order270}) that $\mc Q$ is an infinite set. For each positive integer $q$ in $\mc Q$, we select integers $b_{1 q}, b_{2 q}, \dots , b_{M q}$, so that \begin{equation}\label{order275} \|q \alpha_m\| = |q \alpha_m - b_{m q}|,\quad\text{for $m = 1, 2, \dots , M$}. \end{equation} Then (\ref{order272}) can be written as \begin{equation}\label{order277} \max\big\{|q \alpha_m - b_{m q}| : m = 1, 2, \dots , M\big\} \le (q + 1)^{-\frac{1}{M}}. \end{equation} Let $\boldsymbol b_q = \bigl(b_{m q}\bigr)$ be the corresponding lattice point in $\mathbb{Z}^M$, so that $q \mapsto \boldsymbol b_q$ is a map from $\mc Q$ into $\mathbb{Z}^M$. It follows using (\ref{order265}) and (\ref{order277}), that for each index $n$ we have \begin{equation*}\label{order295} \begin{split} q \eta &\le q \boldsymbol k_n^T \boldsymbol \alpha - q \boldsymbol k_{n-1}^T \boldsymbol \alpha\\ &= \boldsymbol k_n^T \boldsymbol b_q - \boldsymbol k_{n-1}^T \boldsymbol b_q + \bigl(\boldsymbol k_n - \boldsymbol k_{n-1}\bigr)^T (q \boldsymbol \alpha - \boldsymbol b_q)\\ &\le \boldsymbol k_n^T \boldsymbol b_q - \boldsymbol k_{n-1}^T \boldsymbol b_q + 2 \|\mathfrak S\|_{\infty} \biggl(\sum_{m = 1}^M |q\alpha_m - b_{m q}|\biggr)\\ &\le \boldsymbol k_n^T \boldsymbol b_q - \boldsymbol k_{n-1}^T \boldsymbol b_q + 2 \|\mathfrak S\|_{\infty} M (q + 1)^{-\frac{1}{M}}. \end{split} \end{equation*} Therefore for each sufficiently large integer $q$ in $\mc Q$, the lattice point $\boldsymbol b_q$ satisfies the system of inequalities \begin{equation*}\label{order296} \boldsymbol k_0^T \boldsymbol b_q < \boldsymbol k_1^T \boldsymbol b_q < \boldsymbol k_2^T \boldsymbol b_q < \cdots < \boldsymbol k_N^T \boldsymbol b_q. \end{equation*} We conclude that for a sufficiently large integer $L$ we have \begin{equation}\label{order298} \big\{\boldsymbol b_q : \text{$L \le q$ and $q \in \mc Q$}\big\} \subseteq \mc B(\boldsymbol \alpha, \mathfrak S). \end{equation} This shows that $\mc B(\boldsymbol \alpha, \mathfrak S)$ is an infinite set. To complete the proof we will show that the function $\nu$ is unbounded on the infinite collection of lattice points \begin{equation}\label{order302} \big\{\boldsymbol b_q : \text{$L \le q$ and $q \in \mc Q$}\big\}. \end{equation} If $\nu$ is bounded on (\ref{order302}), then there exists a positive integer $B$ such that \begin{equation}\label{order305} \nu(\boldsymbol b_q) \le B \end{equation} for all points $\boldsymbol b_q$ in the set (\ref{order302}). Let $\mc C_B$ be the finite set \begin{equation*}\label{order310} \mc C_B = \big\{\boldsymbol c \in \mathbb{Z}^M : 1 \le \|\boldsymbol c\|_{\infty} \le B\big\}. \end{equation*} Because $\alpha_1, \alpha_2, \dots , \alpha_M$, are $\mathbb{Q}$-linearly independent, and $\mc C_B$ is a finite set of nonzero lattice points, we have \begin{equation}\label{order320} 0 < \delta_B = \min\bigg\{\biggl|\sum_{m = 1}^M c_m \alpha_m\biggr| : \boldsymbol c \in \mc C_B\bigg\}. \end{equation} By our assumption (\ref{order305}), for each point $\boldsymbol b_q$ in (\ref{order302}) there exists a point $\boldsymbol c_q = (c_{m q})$ in $\mc C_B$, such that \begin{equation}\label{order325} \boldsymbol c_q^T \boldsymbol b_q = \sum_{m = 1}^M c_{m q} b_{m q} = 0. \end{equation} Using (\ref{order277}) and (\ref{order325}), we find that \begin{equation}\label{order330} \begin{split} q \delta_B &\le q \biggl|\sum_{m = 1}^M c_{m q} \alpha_m \biggr|\\ &= \biggl|\sum_{m = 1}^M c_{m q}\bigl(q \alpha_m - b_{m q}\bigr)\biggr|\\ &\le \biggl(\sum_{m = 1}^M |c_{m q}|\biggr) \max\big\{|q \alpha_m - b_{m q}| : m = 1, 2, \dots , M\big\}\\ &\le M B (q + 1)^{-\frac{1}{M}}. \end{split} \end{equation} But (\ref{order330}) is impossible when $q$ is sufficiently large, and the contradiction implies that the assumption (\ref{order305}) is false. We have shown that $\nu$ is unbounded on the set (\ref{order302}). In view of (\ref{order298}), the function $\nu$ is unbounded on $\mc B(\boldsymbol \alpha, \mathfrak S)$. \end{proof} \section{Proof of Theorem \ref{thmintro2}} If $M = 1$ then the inequality (\ref{intro125}) follows from Corollary \ref{corintro1}. Therefore we assume that $2 \le M$. Let $\varphi_{\boldsymbol \alpha} : \mathbb{Z}^M \rightarrow \mathbb{R}$ be an injective homomorphism, and let the set $\mathfrak S$ be indexed so that {\rm (\ref{intro115})} and {\rm (\ref{intro120})} hold. It follows from Lemma \ref{lemorder2} that the collection of lattice points $\mc B(\boldsymbol \alpha, \mathfrak S)$ defined by (\ref{order55}), is an infinite set, and the function $\nu$ defined by (\ref{order25}) is unbounded on $\mc B(\boldsymbol \alpha, \mathfrak S)$. Let $\boldsymbol a$ be a lattice point in $\mc B(\boldsymbol \alpha, \mathfrak S)$. If $F : (\mathbb{R}/\mathbb{Z})^M \rightarrow \mathbb{C}$ is given by (\ref{intro70}), we define an associated trigonometric polynomial $F_{\boldsymbol a} : \mathbb{R}/\mathbb{Z} \rightarrow \mathbb{C}$ in one variable by \begin{equation}\label{order360} F_{\boldsymbol a}(t) = \sum_{\boldsymbol k \in \mathfrak S} \widehat F(\boldsymbol k) e\bigl(\boldsymbol k^T \boldsymbol a t\bigr) = \sum_{n = 0}^N \widehat F(\boldsymbol k_n) e\bigl(\boldsymbol k_n^T \boldsymbol a t\bigr), \end{equation} where the equality on the right of (\ref{order360}) uses the indexing (\ref{intro115}) induced by $\varphi_{\boldsymbol \alpha}$. The hypothesis (\ref{intro120}) implies that the integer exponents on the right of (\ref{order360}) satisfy the system of inequalities \begin{equation}\label{order365} \boldsymbol k_0^T \boldsymbol a < \boldsymbol k_1^T \boldsymbol a < \boldsymbol k_2^T \boldsymbol a < \cdots < \boldsymbol k_N^T \boldsymbol a. \end{equation} Then it follows from (\ref{intro40}), (\ref{order360}), and (\ref{order365}), that \begin{equation}\label{order370} \bigl|\widehat F\bigl(\boldsymbol k_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F_{\boldsymbol a}),\quad\text{for each $n = 0, 1, 2, \dots , N$}. \end{equation} We have proved that the system of inequalities (\ref{order370}) holds for each lattice point $\boldsymbol a$ in $\mc B(\boldsymbol \alpha, \mathfrak S)$. To complete the proof we appeal to an inequality of Boyd \cite[Lemma 2]{boyd1981}, which asserts that if $\boldsymbol b$ is a parameter in $\mathbb{Z}^M$ then \begin{equation}\label{order375} \limsup_{\nu(\boldsymbol b) \rightarrow \infty} \mathfrak M\bigl(F_{\boldsymbol b}\bigr) \le \mathfrak M(F). \end{equation} More precisely, if $\boldsymbol b_1, \boldsymbol b_2, \boldsymbol b_3, \dots $, is a sequence of points in $\mathbb{Z}^M$ such that \begin{equation}\label{order380} \lim_{j \rightarrow \infty} \nu(\boldsymbol b_j) = \infty, \end{equation} then \begin{equation}\label{order385} \limsup_{j \rightarrow \infty} \mathfrak M\bigl(F_{\boldsymbol b_j}\bigr) \le \mathfrak M(F). \end{equation} Because $\nu$ is unbounded on $\mc B(\boldsymbol \alpha, \mathfrak S)$, there exists a sequence $\boldsymbol b_1, \boldsymbol b_2, \boldsymbol b_3, \dots $, contained in $\mc B(\boldsymbol \alpha, \mathfrak S)$ that satisfies (\ref{order380}). Hence the sequence $\boldsymbol b_1, \boldsymbol b_2, \boldsymbol b_3, \dots $, in $\mc B(\boldsymbol \alpha, \mathfrak S)$ also satisfies (\ref{order385}). From (\ref{order370}) we have \begin{equation}\label{order390} \bigl|\widehat F\bigl(\boldsymbol k_n\bigr)\bigr| \le \binom{N}{n} \mathfrak M(F_{\boldsymbol b_j}), \end{equation} for each $n = 0, 1, 2, \dots , N$, and for each $j = 1, 2, 3, \dots $. The inequality (\ref{intro125}) plainly follows from (\ref{order385}) and (\ref{order390}). This completes the proof of Theorem \ref{thmintro2}. \medskip Boyd conjectured in \cite{boyd1981} that (\ref{order375}) could be improved to \begin{equation}\label{order400} \lim_{\nu(\boldsymbol b) \rightarrow \infty} \mathfrak M\bigl(F_{\boldsymbol b}\bigr) = \mathfrak M(F). \end{equation} The proposed identity (\ref{order400}) was later verified by Lawton \cite{lawton1983} (see also \cite{dobrowolski2017} and \cite{lalin2013}). Here we have used Boyd's inequality (\ref{order375}) because it is simpler to prove than (\ref{order400}), and the more precise result (\ref{order400}) does not effect the inequality (\ref{intro125}).
\section{Introduction} \label{intro} In principle, the solid theoretical justification of the QCD applicability to heavy-flavor production requires a detailed analysis of the convergence of the perturbative series for corresponding production cross sections. Presently, such analysis is below the horizon because the basic spin-averaged characteristics of heavy flavor photo- \cite{Ellis-Nason,Smith-Neerven}, electro- \cite{LRSN}, and hadro-production \cite{Nason-D-E-1,Nason-D-E-2,BKNS} are known exactly only up to the next-to-leading order (NLO) in $\alpha_s$.\footnote{Recently, some 25 years after the NLO results \cite{BKNS}, first complete next-to-next-to-leading order (NNLO) predictions for the heavy-quark pair hadroproduction were obtained \cite{Czakon-Mitov-1,Czakon-Mitov-2}.} The problem is that these NLO corrections are large; they increase the leading-order (LO) predictions for both charm and bottom production cross sections by approximately a factor of two. Moreover, soft-gluon resummation of the threshold Sudakov logarithms indicates that higher-order contributions can also be substantial. (For details, see Refs.~\cite{Laenen-Moch,kid2}.) Perturbative instability leads to a high sensitivity of the theoretical calculations to standard uncertainties in the input QCD parameters. The total uncertainties associated with the unknown values of these parameters are so large that one can only estimate the order of magnitude of the perturbative QCD (pQCD) predictions for charm production cross sections in wide energy range \cite{Mangano-N-R,Frixione-M-N-R,R-Vogt,Moch2}. Since the charm and bottom production cross sections are not perturbatively stable, it is of special interest to study those observables that are well-defined in pQCD. Measurements of such observables will provide, in particular, direct test of the conventional parton model based on pQCD. Moreover, as discussed below, some of the perturbatively stable quantities are sensitive to resummation of the mass logarithms and thus will be good probes of the heavy-quark densities in the proton. Experimental information about the heavy-quark content of the proton is necessary for construction of the appropriate variable-flavor-number factorization scheme (VFNS) which may improve the convergence of the perturbative series \cite{ACOT,Collins}. Nontrivial examples of the perturbatively stable observables were proposed in Refs.~\cite{we1,we2,we3,we4,we5,we6,we7,we8}, where the azimuthal $\cos(2\varphi)$ asymmetry, $A(x,Q^2)$, and Callan-Gross ratio, $R(x,Q^2)=F_L/F_T$, in heavy-quark leptoproduction were analyzed.\footnote{Well-known examples include the shapes of differential cross sections of heavy flavor production, which are sufficiently stable under radiative corrections. Note also the perturbative stability of the charge asymmetry in top-quark hadroproduction \cite{Almeida-S-V}.} In particular, radiative corrections to the azimuthal $\cos(2\varphi)$ asymmetry were considered in Refs.~\cite{we1,we2,we3,we4}. It was shown that, contrary to the production cross sections, the asymmetry is quantitatively well defined in pQCD: the contribution of the dominant photon-gluon fusion mechanism to $A(x,Q^2)$ is stable, both parametrically and perturbatively. The perturbative and parametric stability of the ratio $R(x,Q^2)=F_L/F_T$ was discussed in Refs.~\cite{we7,we8}. It was shown that large perturbative contributions to the structure functions $F_T(x,Q^2)$ and $F_L(x,Q^2)$ cancel each other in their ratio $R(x,Q^2)$ with good accuracy. As a result, the NLO corrections to the LO photon-gluon fusion predictions for the Callan-Gross ratio are less than $10\%$ in a wide region of the variables $x$ and $Q^2$. In the present paper, we continue the studies of perturbatively stable observables in heavy-quark leptoproduction, \begin{equation} \ell(l )+N(p)\rightarrow \ell(l -q)+Q(p_{Q})+X[\bar{Q}](p_{X}). \label{1} \end{equation} Neglecting the contribution of $Z$-boson exchange, the azimuth-dependent cross section of the reaction (\ref{1}) can be written as \begin{eqnarray} \frac{\mathrm{d}^{3}\sigma_{lN}}{\mathrm{d}x\mathrm{d}Q^{2}\mathrm{d}\varphi }&=&\frac{2\alpha^{2}_{em}}{Q^4} \frac{y^2}{1-\varepsilon}\Bigl[ F_{T}( x,Q^{2})+ \varepsilon F_{L}(x,Q^{2}) \Bigr. \nonumber \\ &&+\Bigl. \varepsilon F_{A}( x,Q^{2})\cos 2\varphi \nonumber \\ &&+2\sqrt{\varepsilon(1+\varepsilon)} F_{I}( x,Q^{2})\cos \varphi\Bigr], \label{2} \end{eqnarray} where $\alpha_{\mathrm{em}}$ is Sommerfeld's fine-structure constant, \\ $F_{2}(x,Q^2)=2x(F_{T}+F_{L})$, the quantity $\varepsilon$ measures the degree of the longitudinal polarization of the virtual photon in the Breit frame \cite{dombey}, $\varepsilon=\frac{2(1-y)}{1+(1-y)^2}$, and the kinematic variables are defined by \begin{eqnarray} \bar{S}=2\left( \ell\cdot p\right),\, \qquad &Q^{2}=-q^{2},\qquad \quad &x=\frac{Q^{2}} {2p\cdot q}, \nonumber \\ y=\frac{p\cdot q}{p\cdot \ell },\qquad \quad ~ &Q^{2}=xy\bar{S},\qquad \quad &\xi= \frac{Q^2}{m^2}. \label{3} \end{eqnarray} In the nucleon rest frame, the azimuth $\varphi$ is the angle between the lepton scattering plane and the heavy quark production plane, defined by the exchanged photon and the detected quark $Q$ (see Fig.~\ref{Fg.1}). The covariant definition of $\varphi $ is \begin{eqnarray} \cos \varphi &=&\frac{r\cdot n}{\sqrt{-r^{2}}\sqrt{-n^{2}}},\, \sin \varphi =\frac{Q^{2}\sqrt{1/x^{2}+4m_{N}^{2}/Q^{2}}}{2\sqrt{-r^{2}}% \sqrt{-n^{2}}}n\cdot \ell , \nonumber \\ r^{\mu } &=&\varepsilon ^{\mu \nu \alpha \beta }p_{\nu }q_{\alpha }\ell _{\beta },\quad \, n^{\mu }=\varepsilon ^{\mu \nu \alpha \beta }q_{\nu }p_{\alpha }p_{Q\beta }. \label{5} \end{eqnarray} \begin{figure} \begin{center} \mbox{\epsfig{file=graph.eps,width=160pt}} \caption{\label{Fg.1}\small Definition of the azimuthal angle $\varphi$ in the nucleon rest frame.} \end{center} \end{figure} In Eqs.~(\ref{3}) and (\ref{5}), $m$ and $m_{N}$ are the masses of the heavy quark and the target, respectively. The Callan-Gross ratio, $R(x,Q^{2})$, and azimuthal $\cos(2\varphi)$ asymmetry, $A(x,Q^{2})$, are defined as \begin{equation}\label{6} R(x,Q^{2})=\frac{F_{L}}{F_{T}}(x,Q^{2}),\,\,\,\,\, A(x,Q^{2})=2x\frac{F_{A}}{F_{2}}(x,Q^{2}). \end{equation} In this paper, we first review the available theoretical results for the quantities $R(x,Q^{2})$ and $A(x,Q^{2})$ adding for completeness the ingredients missed in previous analyses. In particular, in Refs.~\cite{we7,we8}, only the contributions of the photon-gluon fusion mechanism to $R(x,Q^2)$ were considered at both LO and NLO. Now, using the explicit NLO results \cite{LRSN,Blumlein}, we provide the complete NLO predictions which include the contributions of both the photon-gluon, $\gamma ^{*}g\to Q\bar{Q}(g)$, and photon-(anti)quark, $\gamma ^{*}q\to Q\bar{Q}q$, fusion components. The complete ${\cal O}(\alpha_{s}^2)$ corrections to $R(x,Q^2)$ do not exceed 10--15$\%$ in the energy range $x>10^{-4}$. Presently, the exact NLO predictions for the azimuth dependent structure function $F_{A}(x,Q^{2})$ are not available. For this reason, we use the soft-gluon approximation to estimate the radiative corrections to $F_{A}(x,Q^{2})$. Our analysis shows that the NLO soft-gluon predictions for $A(x,Q^{2})$ affect the LO results by less than a few percent at $Q^2 \lesssim m^2$ and $x\gtrsim 10^{-2}$. Note also that both the LO and NLO predictions for the Callan-Gross ratio and azimuthal asymmetry are sufficiently insensitive, to within ten percent, to standard uncertainties in the QCD input parameters $\mu_{F}$, $\mu_{R}$, $\Lambda_{\mathrm{QCD}}$, and the parton distribution functions (PDFs). Then we consider some experimental and phenomenological applications of the observed perturbative stability. We derive the compact analytic formulae for the hadron-level azimuthal asymmetry and Callan-Gross ratio in the limit of low $x\ll 1$. It is shown that our analytic LO results for $A(x\to 0,Q^{2})$ and $R(x\to 0,Q^2)$ are stable not only under the NLO corrections to the partonic cross sections, but also under the DGLAP \cite{DGLAP1,DGLAP2,DGLAP3} evolution of the gluon PDF. As to the experimental applications, our compact LO formula for $R(x\to 0,Q^2)$ conveniently reproduce the last HERA results for $F_2^c(x,Q^2)$ and $F_2^b(x,Q^2)$ obtained by H1 Collaboration \cite{H1HERA1} with the help of more cumbersome NLO estimations of $F_{L}(x,Q^2)$. Analytic predictions for $A(x\to 0,Q^{2})$ will be useful in extraction of the azimuthal asymmetries from the incoming COMPASS results as well as from future data on heavy-quark leptoproduction at the proposed EIC \cite{EIC} and LHeC \cite{LHeC,LHeC2} colliders at BNL/JLab and CERN, correspondingly. Finally, we analyze the properties of $R(x,Q^2)$ and $A(x,Q^2)$ within the variable- flavor- number scheme (VFNS) of QCD. These quantities seems to be very promising probes of the heavy-quark densities in the proton. This is because the Callan-Gross ratio and azimuthal asymmetry are perturbatively stable but sensitive to resummation of the mass logarithms of the type $\alpha_{s}\ln\left( Q^{2}/m^{2}\right)$. Our analysis shows that resummation of the mass logarithms leads to reduction of the ${\cal O}(\alpha_{s})$ predictions for $A(x,Q^2)$ and $R(x,Q^2)$ by $(30$--$50)\%$ at $x\sim 10^{-2}$--$10^{-1}$ and $Q^2\gg m^2$.\footnote{At ${\cal O}(\alpha_{s}^2)$, the corresponding reduction of the finite-flavor-number scheme predictions for $R(x,Q^2)$ is estimated to be about 20$\%$.} We conclude that the ratios $R(x,Q^2)$ and $A(x,Q^2)$ will be good probes of the heavy-quark content of the proton. This paper is organized as follows. In Section~\ref{NLO}, we analyze the exact NLO results for the Callan-Gross ratio. The soft-gluon contributions to $A(x,Q^{2})$ are investigated in Section~\ref{SGR}. The analytic LO results for the ratios $R(x,Q^{2})$ and $A(x,Q^{2})$ at low $x$ are discussed in Section~\ref{analytic}. In Secton~\ref{resum}, we consider the resummation of the mass logarithms of the type $\alpha_{s}\ln\left( Q^{2}/m^{2}\right)$ for the Callan-Gross ratio and azimuthal $\cos(2\varphi)$ asymmetry. \section{\label{NLO} Exact NLO predictions for the Callan-Gross ratio $R(x,Q^2)$} At leading order, ${\cal O}(\alpha_{s})$, leptoproduction of heavy flavors proceeds through the photon-gluon fusion (GF) mechanism, \begin{equation} \label{7} \gamma ^{*}(q)+g(k_{g})\rightarrow Q(p_{Q})+\bar{Q}(p_{\bar{Q}}). \end{equation} The relevant Feynman diagrams are depicted in Fig.~\ref{Fg.2}\emph{a}. \begin{figure*} \begin{center} \mbox{\epsfig{file=GFQSgraph.eps,width=450pt}} \end{center} \caption{\label{Fg.2}\small LO Feynman diagrams of the photon-gluon fusion (a) and photon-quark scattering (b).} \end{figure*} The corresponding $\gamma ^{*}g$ cross sections, $\hat{\sigma}_{k,g}^{(0)}(z,\lambda)$ ($k=2,L,A,I$), have the form \cite{LW1}: \begin{eqnarray} \hat{\sigma}_{2,g}^{(0)}(z,\lambda)&=&\frac{\alpha_{s}}{2\pi}\hat{\sigma}_{B}(z) \Bigl\{\bigl[(1-z)^{2}+z^{2}+4\lambda z(1-3z) \nonumber\\ &&-8\lambda^{2}z^{2}\bigr] \ln\frac{1+\beta_{z}}{1-\beta_{z}} \nonumber\\ &&-\left[1+4z(1-z)(\lambda-2)\right]\beta_{z}\Bigr\}, \label{8} \\ \hat{\sigma}_{L,g}^{(0)}(z,\lambda)&=&\frac{2\alpha_{s}}{\pi}\hat{\sigma}_{B}(z)z \Bigl\{-2\lambda z\ln\frac{1+\beta_{z}}{1-\beta_{z}}+\left(1-z\right)\beta_{z}\Bigr\},\nonumber\\ \hat{\sigma}_{A,g}^{(0)}(z,\lambda)&=&\frac{\alpha_{s}}{\pi}\hat{\sigma}_{B}(z)z \Bigl\{2\lambda\left[1-2z(1+\lambda)\right]\ln\frac{1+\beta_{z}}{1-\beta_{z}} \nonumber \\ &&+(1-2\lambda)(1-z)\beta_{z}\Bigr\}, \nonumber \\ \hat{\sigma}_{I,g}^{(0)}(z,\lambda)&=&0, \nonumber \end{eqnarray} with $\hat{\sigma}_{B}(z)=(2\pi)^2e_{Q}^{2}\alpha_{\mathrm{em}}\,z/Q^{2}$, where $e_{Q}$ is the electric charge of quark $Q$ in units of the positron charge and $\alpha_{s}\equiv\alpha_{s}(\mu_R^2)$ is the strong-coupling constant. In Eqs.~(\ref{8}), we use the following definition of partonic kinematic variables: \begin{equation}\label{9} z=\frac{Q^{2}}{2q\cdot k_{g}},\qquad\lambda =\frac{m^{2}}{Q^{2}}, \qquad \beta_{z}=\sqrt{1-\frac{4\lambda z}{1-z}}. \end{equation} The hadron-level cross sections, $\sigma_{k,GF}(x,Q^2)$ ($k=2,L,A,I$), corresponding to the GF subprocess, have the form \begin{equation}\label{10} \sigma_{k,GF}(x,Q^2)=\int_{x(1+4\lambda)}^{1}\mathrm{d}z\,g(z,\mu_{F}) \hat{\sigma}_{k,g}\left(x/z,\lambda,\mu_{F}\right), \end{equation} where $g(z,\mu_{F})$ is the gluon PDF of the proton. The leptoproduction cross sections $\sigma_{k}(x,Q^2)$ are related to the structure functions $F_{k}(x,Q^2)$ as follows: \begin{eqnarray} F_{k}(x,Q^2) &=&\frac{Q^{2}}{8\pi^{2}\alpha_{\mathrm{em}}x}\sigma_{k}(x,Q^2) \qquad (k=T,L,A,I), \nonumber \\ F_{2}(x,Q^2) &=&\frac{Q^{2}}{4\pi^{2}\alpha_{\mathrm{em}}}\sigma_{2}(x,Q^2), \label{11} \end{eqnarray} where $\sigma_{2}(x,Q^2)=\sigma_{T}(x,Q^2)+\sigma_{L}(x,Q^2)$. At NLO, ${\cal O}(\alpha_{s}^2)$, the contributions of both the photon-gluon, $\gamma ^{*}g\to Q\bar{Q}(g)$, and photon-(anti)quark, $\gamma ^{*}q\to Q\bar{Q}q$, fusion components are usually presented in terms of the dimensionless coefficient functions $c_{k}^{(n,l)}(z,\lambda)$ as \begin{eqnarray} \hat{\sigma}_{k}(z,\lambda,\mu^{2})&=&\frac{e_{Q}^{2}\alpha_{\mathrm{em}}\alpha_{s}}{m^{2}} \Bigl\{ c_{k}^{(0,0)}(z,\lambda)+4\pi\alpha_{s}\Bigl[c_{k}^{(1,0)}(z,\lambda) \nonumber \\ &&+c_{k}^{(1,1)}(z,\lambda)\ln\frac{\mu^{2}}{m^{2}} \Bigr]\Bigr\}+{\cal O}(\alpha_{s}^3), \label{12} \end{eqnarray} where we identify $\mu=\mu_{F}=\mu_{R}$. The coefficients $c_{k,g}^{(1,1)}(z,\lambda)$ and $c_{k,q}^{(1,1)}(z,\lambda)$ ($k=T,L,\\A,I$) of the $\mu$-dependent logarithms can be evaluated explicitly using renormalization group arguments \cite{Ellis-Nason,LRSN}. The results of direct calculations of the coefficient functions $c_{k,g}^{(1,0)}(z,\lambda)$ and $c_{k,q}^{(1,0)}(z,\lambda)$ for $k=T,L$ are presented in Refs.~ \cite{LRSN,Blumlein}. Using these NLO predictions, we analyze the $Q^2$ dependence of the ratio $R(x,Q^2)=F_L/F_T$ at fixed values of $x$. The panels $(a)$, $(b)$ and $(c)$ of Fig.~\ref{Fg.3} show the NLO predictions for Callan-Gross ratio $R(x,Q^2)$ in charm leptoproduction as a function of $\xi=Q^2 /m^2$ at $x=10^{-1}$, $10^{-2}$ and $10^{-3}$, correspondingly. In our calculations, we use the CTEQ6M parametrization of the PDFs together with the values $m_c=1.3$~GeV and $\Lambda=326$~MeV \cite{CTEQ6}.\footnote{Note that we convolute the NLO CTEQ6M distribution functions with both the LO and NLO partonic cross sections that makes it possible to estimate directly the degree of stability of the pQCD predictions under radiative corrections.} Unless otherwise stated, we use $\mu=\sqrt{4m_c^{2}+Q^{2}}$ throughout this paper. \begin{figure*} \begin{center} \begin{tabular}{cc} \mbox{\epsfig{file=R_Q_1_stab_2.eps,width=230pt}} & \mbox{\epsfig{file=R_Q_2_stab_2.eps,width=230pt}}\\ \mbox{\epsfig{file=R_Q_3_stab_2.eps,width=230pt}} & \mbox{\epsfig{file=K_Q_stab_2.eps,width=230pt}}\\ \end{tabular} \caption{\label{Fg.3}\small $(a)$, $(b)$ and $(c)$ \emph{panels:} $Q^2$ dependence of the LO (solid curves) and NLO (dashed curves) predictions for the Callan-Gross ratio, $R(x,Q^2)=F_L/F_T$, in charm leptoproduction at $x=10^{-1}$, $10^{-2}$ and $10^{-3}$. \emph{$(d)$~panel:} $Q^2$ dependence of the $K$ factor for the transverse structure function, $K(x,Q^2)=F_T^{\mathrm{NLO}}/F_T^{\mathrm{LO}}$, at the same values of $x$.} \end{center} \end{figure*} For comparison, the panel $(d)$ of Fig.~\ref{Fg.3} shows the $Q^2$ dependence of the QCD correction factor for the transverse structure function, $K(x,Q^2)=F_T^{\mathrm{NLO}}/F_T^{\mathrm{LO}}$. One can see that sizable radiative corrections to the structure functions $F_T(x,Q^2)$ and $F_L(x,Q^2)$ cancel each other in their ratio $R(x,Q^2)=F_L/F_T$ with good accuracy. As a result, the NLO contributions to the ratio $R(x,Q^2)$ are of the order of $10\%$ for $x > 10^{-4}$. Another remarkable property of the Callan-Gross ratio closely related to fast perturbative convergence is its parametric stability.\footnote{Of course, parametric stability of the fixed-order results does not imply a fast convergence of the corresponding series. However, a fast convergent series must be parametrically stable. In particular, it must exhibit feeble $\mu_{F}$ and $\mu_{R}$ dependences.} Our analysis shows that the fixed-order predictions for the ratio $R(x,Q^2)$ are less sensitive to standard uncertainties in the QCD input parameters than the corresponding ones for the production cross sections. For instance, sufficiently above the production threshold, changes of $\mu$ in the range $(1/2)\sqrt{4m_{c}^{2}+Q^{2}}<\mu <2 \sqrt{4m_{c}^{2}+Q^{2}}$ only lead to $10\%$ variations of $R(x,Q^{2})$ at NLO. For comparison, at $x=0.1$ and $\xi = 4.4$, such changes of $\mu$ affect the NLO predictions for the quantities $F_{T}(x,Q^2)$ and $R(x,Q^{2})$ in charm leptoproduction by more than $100\%$ and less than $10\%$, respectively. Keeping the value of the variable $Q^{2}$ fixed, we analyze the dependence of the pQCD predictions on the uncertainties in the heavy-quark mass. We observe that changes of the charm-quark mass in the interval 1.3 $<m_{c}<1.7$ GeV affect the Callan-Gross ratio by (2--3)\% at $Q^{2}=10$ GeV$^2$ and $x<10^{-1}$. The corresponding variations of the structure functions $F_T(x,Q^2)$ and $F_L(x,Q^2)$ are about 20\%. We also verified that the recent CTEQ versions \cite{CTEQ6,CT10,CT14} \footnote{For a review of the present status of all currently available PDF sets, see Ref.~\cite{PDF-LHC}.} of the PDFs lead to NLO predictions for $R(x,Q^{2})$ that coincide with each other with an accuracy of about $5\%$ at $10^{-3}\leq x< 10^{-1}$. \section{\label{SGR} Soft-gluon corrections to the azimuthal asymmetry $A(x,Q^{2})$ at NLO} Presently, the exact NLO predictions for the azimuth dependent structure function $F_{A}(x,Q^{2})$ are not available. For this reason, we consider the NLO predictions for the azimuthal $\cos(2\varphi)$ asymmetry within the soft-gluon approximation. For the reader's convenience, we collect the final results for the parton-level GF cross sections to the next-to-leading logarithmic (NLL) accuracy. More details may be found in Refs.~\cite{Laenen-Moch,we2,we4,we7}. At NLO, photon-gluon fusion receives contributions from the virtual ${\cal O}(\alpha_{\mathrm{em}}\alpha_{s}^{2})$ corrections to the Born process~(\ref{7}) and from real-gluon emission, \begin{equation} \label{13} \gamma ^{*}(q)+g(k_{g})\rightarrow Q(p_{Q})+\bar{Q}(p_{\bar{Q}})+g(p_{g}). \end{equation} The partonic invariants describing the single-particle inclusive (1PI) kinematics are \begin{eqnarray} s^{\prime }&=&2q\cdot k_{g}=\zeta S^{\prime }, \quad \, t_{1}=\left( k_{g}-p_{Q}\right) ^{2}-m^{2}=\zeta T_{1}, \nonumber \\ s_{4}&=&s^{\prime }+t_{1}+u_{1},\quad \, ~ u_{1}=\left( q-p_{Q}\right) ^{2}-m^{2}=U_{1}, \label{14} \end{eqnarray} where $\zeta$ is defined through $\vec{k}_{g}= \zeta\vec{p}$, $s^{\prime}=s+Q^{2}$, and $s_{4}$ measures the inelasticity of the reaction (\ref{13}). The corresponding 1PI hadron-level variables describing the reaction (\ref{1}) are \begin{eqnarray} S^{\prime }&=&2q\cdot p=S+Q^{2},\qquad T_{1}=\left( p-p_{Q}\right) ^{2}-m^{2}, \nonumber \\ S_{4}&=&S^{\prime }+T_{1}+U_{1},\qquad \quad U_{1}=\left( q-p_{Q}\right) ^{2}-m^{2}. \label{15} \end{eqnarray} The exact NLO calculations of unpolarized heavy-quark production \cite{Ellis-Nason,Smith-Neerven,LRSN,Nason-D-E-1} show that, near the partonic threshold, a strong logarithmic enhancement of the cross sections takes place in the collinear, $|\vec{p}_{g,T}|\to 0$, and soft, $|\vec{p}_{g}|\to 0$, limits. This threshold (or soft-gluon) enhancement is of universal nature in perturbation theory and originates from an incomplete cancellation of the soft and collinear singularities between the loop and the bremsstrahlung contributions. Large leading and next-to-leading threshold logarithms can be resummed to all orders of the perturbative expansion using the appropriate evolution equations \cite{Contopanagos-L-S}. The analytic results for the resummed cross sections are ill-defined due to the Landau pole in the coupling constant $\alpha_{s}$. However, if one considers the obtained expressions as generating functionals and re-expands them at fixed order in $\alpha_{s}$, no divergences associated with the Landau pole are encountered. Soft-gluon resummation for the photon-gluon fusion was performed in Ref.~\cite{Laenen-Moch} and confirmed in Refs.~\cite{we2,we4}. To NLL accuracy, the perturbative expansion for the partonic cross sections, $\mathrm{d}^{2}\hat{\sigma}_{k}(s^{\prime},t_{1},u_{1})/(\mathrm{d}t_{1}\, \mathrm{d}u_{1})$ ($k=T,L,A,I$), can be written in factorized form as \begin{eqnarray} s^{\prime 2}\frac{\mathrm{d}^{2}\hat{\sigma}_{k}}{\mathrm{d}t_{1}\mathrm{d}u_{1}}( s^{\prime },&&\!\! t_{1},u_{1})=B_{k}^{\mathrm{ Born}}( s^{\prime },t_{1},u_{1})\Biggl[\delta (s^{\prime }+t_{1}+u_{1}) \nonumber \\ &&+\sum_{n=1}^{\infty } \left( \frac{\alpha _{s}C_{A}}{\pi}\right)^{n}K^{(n)}( s^{\prime },t_{1},u_{1})\Biggr]. \label{16} \end{eqnarray} The functions $K^{(n)}( s^{\prime },t_{1},u_{1}) $ in Eq.~(\ref{16}) originate from the collinear and soft limits. Since the azimuthal angle $\varphi $ is the same for both $\gamma ^{*}g$ and $Q\bar{Q}$ center-of-mass systems in these limits, the functions $K^{(n)}( s^{\prime },t_{1},u_{1}) $ are also the same for all $\hat{\sigma}_{k}$, ($k=T,L,A,I$). At NLO, the soft-gluon corrections to NLL accuracy in the $\overline{\mathrm{MS}}$ scheme read \cite{Laenen-Moch} \begin{eqnarray} K^{(1)}( s^{\prime },t_{1},u_{1}) &=& 2\left[ \frac{\ln \left( s_{4}/m^{2}\right) }{s_{4}}\right]_{+}-\left[\frac{1}{s_{4}}\right]_{+}\Biggl[1+\ln \frac{u_{1}}{t_{1}} \nonumber \\ &&-\left( 1-\frac{2C_{F}}{ C_{A}}\right) \left( 1+\mathrm{Re}L_{\beta }\right) +\ln \frac{\mu ^{2}}{m^{2}} \Biggr] \nonumber \\ &&{}+\delta ( s_{4}) \ln \frac{-u_{1}}{m^{2}} \ln \frac{\mu ^{2}}{m^{2}}. \label{17} \end{eqnarray} In Eq.~(\ref{17}), $C_{A}=N_{c}$, $ C_{F}=(N_{c}^{2}-1)/(2N_{c})$, $N_{c}$ is the number of quark colors, and $ L_{\beta }=(1-2m^{2}/s)\{\ln[(1-\beta_{z})/(1+\beta_{z})]+$i$\pi\}$ with $\beta_{z}=\sqrt{1-4m^{2}/s}$. The single-particle inclusive ``plus'' distributions are defined by \begin{eqnarray} \left[\frac{\ln^{l}\left( s_{4}/m^{2}\right) }{s_{4}}\right]_{+}&=&\lim_{\epsilon \rightarrow 0}\Biggl[\frac{\ln^{l}\left(s_{4}/m^{2}\right) }{s_{4}}\theta ( s_{4}-\epsilon) \nonumber \\ &&+\frac{1}{l+1}\ln ^{l+1}\frac{\epsilon }{m^{2}}\delta (s_{4})\Biggr].\label{18} \end{eqnarray} For any sufficiently regular test function $h(s_{4})$, Eq.~(\ref{18}) implies that \begin{eqnarray} &&\int_{0}^{s_{4}^{\max }}\mathrm{d}s_{4}\,h(s_{4})\left[ \frac{\ln ^{l}\left( s_{4}/m^{2}\right) }{s_{4}}\right]_{+} \nonumber \\ &&=\int_{0}^{s_{4}^{\max}}\mathrm{d}s_{4}\left[ h(s_{4})-h(0)\right] \frac{\ln ^{l}\left( s_{4}/m^{2}\right)}{s_{4}} \nonumber \\ &&~~+\frac{1}{l+1}h(0)\ln ^{l+1}\frac{s_{4}^{\max }}{m^{2}}. \label{19} \end{eqnarray} Standard NLL soft-gluon approximation allows us to determine unambiguously only the singular $s_{4}$ behavior of the cross sections defined by Eq.~(\ref{18}). To fix the $s_{4}$ dependence of the Born-level distributions $B_{k}^{\mathrm{Born}}(s^{\prime},t_{1},u_{1})$ in Eq.~(\ref{16}), we use the method proposed in \cite{we7} and based on comparison of the soft-gluon predictions with the exact NLO results. According to \cite{we7}, \begin{eqnarray} B_{k}^{\mathrm{ Born}}( s^{\prime },t_{1},u_{1})&\equiv& s^{\prime 2}\frac{\mathrm{d}\hat{\sigma}^{(0)}_{k,g}}{\mathrm{d}t_{1}}(x_{4}s^{\prime},x_{4}t_{1}), \nonumber \\ x_{4}&=&-\frac{u_{1}}{s^{\prime}+t_{1}}=1-\frac{s_{4}}{s^{\prime}+t_{1}}, \label{20} \end{eqnarray} where the leading order GF differential distributions, $\frac{\mathrm{d}\hat{\sigma}^{(0)}_{k,g}}{\mathrm{d}t_{1}}(s^{\prime},t_{1})$, are: \begin{eqnarray} \frac{\mathrm{d}\hat{\sigma}^{(0)}_{T,g}}{\mathrm{d}t_{1}}(s^{\prime},t_{1})=\pi e_{Q}^{2}\alpha_{\mathrm{em}}\alpha_{s}&&\!\!\!\frac{1}{s^{\prime 2}}\Biggl\{-\frac{t_{1}}{s^{\prime}+t_{1}}-\frac{s^{\prime}+t_{1}}{t_{1}} \nonumber \\ +4\left( \frac{s}{s^{\prime}}+\frac{m^{2}s^{\prime }}{t_{1}(s^{\prime}+t_{1})}\right)&&\!\!\!\left[ \frac{Q^{2}}{s^{\prime}}-\frac{s^{\prime}(m^{2}-Q^{2}/2)}{t_{1}(s^{\prime}+t_{1})}\right] \Biggr\}, \nonumber \\ \frac{\mathrm{d}\hat{\sigma}^{(0)}_{L,g}}{\mathrm{d}t_{1}}(s^{\prime},t_{1})=\pi e_{Q}^{2}\alpha_{\mathrm{em}}\alpha_{s}&&\!\!\!\frac{8Q^{2}}{s^{\prime 3}}\left( \frac{s}{s^{\prime }}+\frac{m^{2}s^{\prime }}{t_{1}(s^{\prime}+t_{1})}\right), \label{21} \\ \frac{\mathrm{d}\hat{\sigma}^{(0)}_{A,g}}{\mathrm{d}t_{1}}(s^{\prime},t_{1})=\pi e_{Q}^{2}\alpha_{\mathrm{em}}\alpha_{s}&&\!\!\!\frac{4}{s^{\prime 2}}\left( \frac{s}{s^{\prime}}+\frac{ m^{2}s^{\prime }}{t_{1}(s^{\prime}+t_{1})}\right) \nonumber \\ &&\!\!\!\times\left(\frac{Q^{2}}{s^{\prime }}-\frac{m^{2}s^{\prime}}{t_{1}(s^{\prime}+t_{1})}\right), \nonumber \\ \frac{\mathrm{d}\hat{\sigma}^{(0)}_{I,g}}{\mathrm{d}t_{1}}(s^{\prime},t_{1})=\pi e_{Q}^{2}\alpha_{\mathrm{em}}\alpha_{s}&&\!\!\!\frac{4\sqrt{Q^{2}}}{s^{\prime 2}}\!\!\left(\!\! \frac{-t_{1}s(s^{\prime}+t_{1}) }{s^{\prime 2}}-m^{2}\!\!\right)^{\!\!1/2} \nonumber \\ \times\frac{s^{\prime}+2t_{1}}{-t_{1}(s^{\prime}+t_{1})}&&\!\!\!\left( 1-\frac{2Q^{2}}{s^{\prime }}+\frac{2m^{2}s^{\prime}}{t_{1}(s^{\prime}+t_{1})}\right). \nonumber \end{eqnarray} Comparison with the exact NLO results given by Eqs.~(4.7) and (4.8) in Ref.~\cite{LRSN} indicates that the usage of the distributions $B_{k}^{\mathrm{ Born}}(s^{\prime},t_{1},u_{1})$ defined by Eqs.~(\ref{20}) and (\ref{21}) in present paper provides an accurate account of the logarithmic contributions originating from the collinear gluon emission. Numerical analysis shows that Eqs. (\ref{20}) and (\ref{21}) render it possible to describe with good accuracy the exact NLO predictions for the functions $\hat{\sigma}^{(1)}_{T}(s^{\prime})$ and $\hat{\sigma}^{(1)}_{L}(s^{\prime})$ near the threshold at relatively low virtualities $Q^{2}\sim m^{2}$ \cite{we7}.\footnote{Note that soft-gluon approximation is unreliable for high $Q^{2} \gg m^{2}$.} \begin{figure*} \begin{center} \begin{tabular}{cc} \mbox{\epsfig{file=A_x_SGR.eps,width=230pt}} & \mbox{\epsfig{file=K_x_SGR.eps,width=230pt}}\\ \end{tabular} \caption{\label{Fg.4}\small \emph{Left panel:} LO (solid lines) and NLO (dashed lines) soft-gluon predictions for the $x$ dependence of the azimuthal $\cos(2\varphi)$ asymmetry, $A(x,Q^{2})=2xF_{A}/F_{2}$, in charm leptoproduction at $\xi=1$ and 5. \emph{Right panel:} $x$ dependence of the $K$ factor, $K(x,Q^2)=F_2^{\mathrm{NLO}}/F_2^{\mathrm{LO}}$, at the same values of $\xi$.} \end{center} \end{figure*} Our results for the $x$ distribution of the azimuthal $\cos(2\varphi)$ asymmetry, $A(x,Q^{2})=2xF_{A}/F_{2}$, in charm leptoproduction at fixed values of $\xi$ are presented in the left panel of Fig.~\ref{Fg.4}. For comparison, the $K$ factor, $K(x,Q^2)=F_2^{\mathrm{NLO}}/F_2^{\mathrm{LO}}$, for the structure function $F_2$ at the same values of $\xi$ is shown in the right panel of Fig.~\ref{Fg.4}. One can see that the sizable soft-gluon corrections to the production cross sections affect the Born predictions for $A(x,Q^2)$ at NLO very little, by a few percent only. \section{\label{analytic} Analytic LO results for $R(x,Q^2)$ and $A(x,Q^2)$ at low $x$} Since the ratios $R(x,Q^2)$ and $A(x,Q^2)$ are perturbatively stable, it makes sense to provide the LO hadron-level predictions for these quantities in analytic form. In this Section, we derive compact low-$x$ approximation formulae for the azimuthal $\cos(2\varphi)$ asymmetry and quantity $R_2(x,Q^2)$ closely related to the Callan-Gross ratio $R(x,Q^2)$, \begin{equation} \label{22} R_{2}(x,Q^{2})=2x\frac{F_{L}}{F_{2}}(x,Q^2)=\frac{R(x,Q^{2})}{1+R(x,Q^{2})}. \end{equation} We will see below that our obtained results may be useful in the extraction of the structure functions $F_k$ ($k=2,L,A,I$) from measurements of the reduced cross sections. To obtain the hadron-level predictions, we convolute the LO partonic cross sections given by Eqs.~(\ref{8}) with the low-$x$ asymptotics of the gluon PDF: \begin{equation} \label{23} g(x,Q^2)\stackrel{x\to 0}{\longrightarrow}\frac{1}{x^{1+\delta}}. \end{equation} The value of $\delta$ in Eq.~(\ref{23}) is a matter of discussion. The simplest choice, $\delta =0$, leads to a non-singular behavior of the structure functions for $x\to 0$.\footnote{The LO predictions for the Callan-Gross ratio in the case of $\delta =0$ were studied in Ref.~\cite{kotikov}.} Another extreme value, $\delta =1/2$, historically originates from the BFKL resummation of the leading powers of $\ln(1/x)$ \cite{BFKL1,BFKL2,BFKL3}. In reality, $\delta$ is a function of $Q^2$. Theoretically, the $Q^2$ dependence of $\delta$ is calculated using the DGLAP evolution equations \cite{DGLAP1,DGLAP2,DGLAP3}. We have derived the analytic low-$x$ formulae for the ratios $A^{(\delta )}(Q^2)\equiv A^{(\delta)}(x\to 0,Q^2)$ and $R_{2}^{(\delta )}(Q^2)\equiv R^{(\delta)}_2(x\to 0,Q^2)$ with arbitrary values of $\delta$ in terms of the Gauss hypergeometric function. Our results have the following form:\\ \begin{strip} \begin{picture}(240,10) \put(0,10){\line(1,0){240}} \end{picture} \begin{equation} \label{29} \qquad \qquad \qquad A^{(\delta )}(Q^2)=2\frac{\frac{2+\delta +2\lambda}{3+\delta }\mathrm {\Phi} \left( 1+\delta ,\frac{1}{1+4\lambda }\right) -\left( 1+4\lambda \right) \mathrm{\Phi} \left( 2+\delta ,\frac{1}{1+4\lambda }\right) }{\left[ 1+\frac{% \delta \left( 1-\delta ^{2}\right) }{\left( 2+\delta \right) \left( 3+\delta \right) }\right] \mathrm{\Phi} \left( \delta ,\frac{1}{1+4\lambda }\right) -\left( 1+4\lambda \right) \left( 4-\delta -\frac{10}{3+\delta }\right) \mathrm{\Phi} \left( 1+\delta ,\frac{1}{1+4\lambda }\right) }, \end{equation} \begin{equation} \label{24} \qquad \qquad \qquad R_{2}^{(\delta )}(Q^2)=4\frac{\frac{2+\delta }{3+\delta }\mathrm{\Phi} \left( 1+\delta ,\frac{1}{1+4\lambda }\right) -\left( 1+4\lambda \right) \mathrm{\Phi} \left( 2+\delta ,\frac{1}{1+4\lambda }\right) }{\left[ 1+\frac{% \delta \left( 1-\delta ^{2}\right) }{\left( 2+\delta \right) \left( 3+\delta \right) }\right] \mathrm{\Phi} \left( \delta ,\frac{1}{1+4\lambda }\right) -\left( 1+4\lambda \right) \left( 4-\delta -\frac{10}{3+\delta }\right) \mathrm{\Phi} \left( 1+\delta ,\frac{1}{1+4\lambda }\right) }, \end{equation} \begin{flushright} \begin{picture}(240,10) \put(0,-10){\line(1,0){240}} \end{picture} \end{flushright} \end{strip} \noindent where $\lambda=m^2/Q^2$ and the function $\mathrm{\Phi}\left(r,z\right)$ is \begin{eqnarray} \mathrm{\Phi} \left( r,z\right)&=&\frac{z^{1+r}}{1+r}\,\frac{\mathrm{\Gamma} \left( 1/2\right) \mathrm{\Gamma} \left(1+r\right) }{\mathrm{\Gamma} \left( 3/2+r\right)} \nonumber \\ &&\times\,{}_{2}F_{1}\left(\frac{1}{2},1+r,\frac{3}{2}+r;z\right).\label{25} \end{eqnarray} The hypergeometric function ${}_{2}F_{1}(a,b;c;z)$ has the following series expansion: \begin{eqnarray} {}_{2}F_{1}\left( a,b,c;z\right)&=&\frac{\mathrm{\Gamma} \left( c\right) }{\mathrm{\Gamma} \left( a\right)\mathrm{\Gamma} \left( b\right)} \nonumber \\ &&\times\sum\limits_{n=0}^{\infty }\frac{\mathrm{\Gamma} \left( a+n\right) \mathrm{\Gamma} \left( b+n\right) }{\mathrm{\Gamma} \left( c+n\right) }\frac{% z^{n}}{n!}. \label{26} \end{eqnarray} In Fig.~\ref{Fg.5}, we investigate the result (\ref{24}) for $R_{2}^{(\delta )}(Q^2)$. The left panel of Fig.~\ref{Fg.5} shows the ratio $R_{2}^{(\delta )}(Q^2)$ as functions of $\xi$ for two extreme cases, $\delta =0$ and $1/2$. One can see that the difference between these quantities varies slowly from $20\%$ at low $Q^2$ to $10\%$ at high $Q^2$. For comparison, the LO results for $R_2(x,Q^2)$ are also shown at several values of $x$. In calculations, the CTEQ6L gluon PDF \cite{CTEQ6} was used. We observe that, for $x\to 0$, the CTEQ6L predictions converge to the function $R^{(1/2)}_2(Q^2)$ practically in the entire region of $Q^2$. We have verified that the similar situation takes also place for other CTEQ PDF versions \cite{CT10,CT14}. \begin{figure*} \begin{center} \begin{tabular}{cc} \mbox{\epsfig{file=Ranalytic_6.eps,width=230pt}} & \mbox{\epsfig{file=Ranalytic2_6.eps,width=230pt}}\\ \end{tabular} \caption{\label{Fg.5}\small LO low-$x$ predictions for the ratio $R_2(x,Q^2)=2xF_L/F_2$ in charm leptoproduction. \emph{Left panel:} Asymptotic ratios $R^{(0)}_2(Q^2)$ (gray points) and $R^{(1/2)}_2(Q^2)$ (black points), as well as CTEQ6L predictions for $R_2(x,Q^2)$ at $x=10^{-2}$, $10^{-3}$ and $10^{-4}$. \emph{Right panel:} Asymptotic ratio $R^{(\delta )}_2(Q^2)$ at $\delta=0$, 0.2, 0.3, 0.4 and 0.5.} \end{center} \end{figure*} In the right panel of Fig.~\ref{Fg.5}, the $\delta$ dependence of the asymptotic ratio $R^{(\delta)}_2(Q^2)$ is investigated. One can see that the ratio $R^{(\delta)}_2(Q^2)$ rapidly converges to the function $R^{(1/2)}_2(Q^2)$ for $\delta > 0.2$. In particular, the relative difference between $R^{(0.5)}_2(Q^2)$ and $R^{(0.3)}_2(Q^2)$ varies slowly from $6\%$ at low $Q^2$ to $2\%$ at high $Q^2$. Our analysis presented in Fig.~\ref{Fg.6} shows that the quantity $A^{(\delta )}(Q^2)$ defined by Eq.~(\ref{29}) has the properties very similar to the ones demonstrated by the ratio $R^{(\delta)}_2(Q^2)$. In particular, one can see from Fig.~\ref{Fg.6} that the hadron-level predictions for $A^{(\delta )}(Q^2)$ depend weakly on $\delta$ practically in the entire region of $Q^2$ for $\delta > 0.2$. As mentioned above, the $Q^2$ dependence of the parameter $\delta$ is determined with the help of the DGLAP evolution. However, our analysis shows that hadron-level predictions for both $A^{(\delta)}(x\to 0,Q^2)$ and $R^{(\delta)}_2(x\to 0,Q^2)$ depend weakly on $\delta$ practically in the entire region of $Q^2$ for $0.2< \delta < 0.9$. For this reason, it makes sense to consider the ratios $A^{(\delta )}(Q^2)$ and $R^{(\delta)}_2(Q^2)$ in particular case of $\delta = 1/2$. The results are:\\ \begin{strip} \begin{picture}(240,10) \put(0,10){\line(1,0){240}} \end{picture} \begin{equation} \label{27a} \qquad \qquad \qquad A^{(1/2)}(Q^2)=12\frac{(1+ 8\lambda) E(1/(1 + 4\lambda)) - 8\lambda K(1/(1 + 4\lambda))}{ \left( -37 + 72\lambda \right)E(1/(1 + 4\lambda)) + 2\left( 23 - 36\lambda \right)K(1/(1 + 4\lambda)) }, \end{equation} \begin{equation} \label{27} \qquad \qquad \qquad R^{(1/2)}_2(Q^2)=\frac{8}{1 + 4\lambda}\,\frac{ \left[ 3 + 4\lambda \left( 13 + 32\lambda \right) \right] E(1/(1 + 4\lambda)) - 4\lambda \left( 9 + 32\lambda \right) K(1/(1 + 4\lambda)) }{ \left( -37 + 72\lambda \right)E(1/(1 + 4\lambda)) + 2\left( 23 - 36\lambda \right)K(1/(1 + 4\lambda)) }, \end{equation} \begin{flushright} \begin{picture}(240,10) \put(0,-10){\line(1,0){240}} \end{picture} \end{flushright} \end{strip} \noindent where the functions $K(y)$ and $E(y)$ are the complete elliptic integrals of the first and second kinds defined as \begin{equation} \label{28} K(y)=\int\limits_0^1\!\!\!\frac{{\mathrm d}t}{\sqrt{(1-t^2)(1-yt^2)}},\; E(y)=\int\limits_0^1\!{\mathrm d}t \sqrt{\frac{1-yt^2}{1-t^2}}. \end{equation} One can see from Figs.~\ref{Fg.5} and \ref{Fg.6} that our simple formulae (\ref{27}) and (\ref{27a}) with $\delta =1/2$ (i.e., without any evolution) describes with good accuracy the low-$x$ CTEQ results for $R_2(x,Q^2)$ and $A(x,Q^2)$. We conclude that the hadron-level predictions for both $R_2(x\to 0,Q^2)$ and $A(x\to 0,Q^2)$ are stable not only under the NLO corrections to the partonic cross sections, but also under the DGLAP evolution of the gluon PDF. \begin{figure*} \begin{center} \begin{tabular}{cc} \mbox{\epsfig{file=Aanalytic_6.eps,width=230pt}} & \mbox{\epsfig{file=Aanalytic2_6.eps,width=230pt}}\\ \end{tabular} \caption{\label{Fg.6}\small LO low-$x$ predictions for the ratio $A(x,Q^2)=2xF_A/F_2$ in charm leptoproduction. \emph{Left panel:} Asymptotic ratios $A^{(0)}(Q^2)$ (gray points) and $A^{(1/2)}(Q^2)$ (black points), as well as CTEQ6L predictions for $A(x,Q^2)$ at $x=10^{-2}$, $10^{-3}$ and $10^{-4}$. \emph{Right panel:} Asymptotic ratio $A^{(\delta )}(Q^2)$ at $\delta=0$, 0.2, 0.3, 0.4 and 0.5.} \end{center} \end{figure*} Let us now discuss how the obtained analytic results may be used in the extraction of the structure functions $F_k$ ($k=2,L,A,I$) from experimental data. Usually, it is the so-called "reduced cross section", $\tilde{\sigma}(x,Q^{2})$, that can directly be measured in DIS experiments: \begin{eqnarray} \tilde{\sigma}(x,Q^{2})&=&\frac{1}{1+(1-y)^2}\frac{xQ^4}{2\pi\alpha^{2}_{\mathrm{em}}}\frac{\mathrm{d}^{2}\sigma_{lN}}{\mathrm{d}x\mathrm{d}Q^{2}} \nonumber \\ &=&F_{2}(x,Q^{2})-\frac{2xy^{2}}{1+(1-y)^2}F_{L}(x,Q^{2}) \label{30} \\ &=&F_{2}(x,Q^{2})\left[1-\frac{y^{2}}{1+(1-y)^2}R_{2}(x,Q^{2})\right]\!\!. \label{31} \end{eqnarray} In earlier HERA analyses of charm and bottom electroproduction, the corresponding longitudinal structure functions were taken to be zero for simplicity. In this case, $\tilde{\sigma}(x,Q^{2})=F_{2}(x,Q^{2})$. In later papers, the structure function $F_{2}(x,Q^2)$ is evaluated from the reduced cross section (\ref{30}) where the longitudinal structure function $F_{L}(x,Q^2)$ is estimated from the NLO QCD expectations. Instead of this rather cumbersome procedure, we propose to use the expression (\ref{31}) with the quantity $R_{2}(x,Q^2)$ defined by the analytic LO expressions (\ref{24}) or (\ref{27}). This simplifies the extraction of $F_{2}(x,Q^2)$ from measurements of $\tilde{\sigma}(x,Q^{2})$ but does not affect the accuracy of the result in practice because of perturbative stability of the ratio $R_{2}(x,Q^2)$. In Table~\ref{tab1} and \ref{tab2}, we compare the results of our analysis of the last HERA data on the charm and bottom electroproduction with the NLO values, $F_2(\mathrm{NLO})$, obtained by the H1 collabotation \cite{H1HERA1}. One can see that the LO Eq.~(\ref{27}) reproduce the NLO H1 results for $F_2^c(x,Q^2)$ and $F_2^b(x,Q^2)$ with an accuracy better than 1\%. \begin{table*} \caption{\label{tab1} Values of $F_2^c(x,Q^2)$ extracted from the HERA measurements of $\tilde{\sigma}^c(x,Q^{2})$ for various values of $Q^2$ and $x$. The NLO H1 results \cite{H1HERA1} are compared with the LO predictions corresponding to the case of $\delta =0.5$.} \begin{center} \begin{tabular}{||ccc||cc||cc||} \hline $\quad Q^2 \quad$ & $x$ & ~$\quad y\quad$~ & ~$\quad \tilde{\sigma}^c \quad$~ & ~Error~ & $\qquad F_2^c$(NLO)$\qquad$ & $ F_2^c$(LO) \\ (GeV$^2$) & & & & (\%) & H1 & $\delta=0.5$ \\ \hline \hline 5.0 & 0.00020 & 0.246 & 0.148 & 17.6 & $0.149\pm0.026$ & $0.149\pm0.026$ \\ 8.5 & 0.00050 & 0.167 & 0.176 & 14.8 & $0.176\pm0.026$ & $0.176\pm0.026$ \\ 8.5 & 0.00032 & 0.262 & 0.186 & 15.5 & $0.187\pm0.029$ & $0.187\pm0.029$ \\ 12.0 & 0.00130 & 0.091 & 0.150 & 18.7 & $0.150\pm0.028$ & $0.150\pm0.028$ \\ 12.0 & 0.00080 & 0.148 & 0.177 & 15.9 & $0.177\pm0.028$ & $0.177\pm0.028$ \\ 12.0 & 0.00050 & 0.236 & 0.240 & 11.2 & $0.242\pm0.027$ & $0.241\pm0.027$ \\ 12.0 & 0.00032 & 0.369 & 0.273 & 13.8 & $0.277\pm0.038$ & $0.277\pm0.038$ \\ 20.0 & 0.00200 & 0.098 & 0.187 & 12.7 & $0.188\pm0.023$ & $0.187\pm0.024$ \\ 20.0 & 0.00130 & 0.151 & 0.219 & 11.9 & $0.219\pm0.026$ & $0.220\pm0.026$ \\ 20.0 & 0.00080 & 0.246 & 0.274 & 10.2 & $0.276\pm0.028$ & $0.276\pm0.028$ \\ 20.0 & 0.00050 & 0.394 & 0.281 & 13.8 & $0.287\pm0.040$ & $0.287\pm0.040$ \\ 35.0 & 0.00320 & 0.108 & 0.200 & 12.7 & $0.200\pm0.025$ & $0.200\pm0.025$ \\ 35.0 & 0.00200 & 0.172 & 0.220 & 11.8 & $0.220\pm0.026$ & $0.221\pm0.026$ \\ 35.0 & 0.00130 & 0.265 & 0.295 & 9.7 & $0.297\pm0.029$ & $0.298\pm0.029$ \\ 35.0 & 0.00080 & 0.431 & 0.349 & 12.7 & $0.360\pm0.046$ & $0.359\pm0.046$ \\ 60.0 & 0.00500 & 0.118 & 0.198 & 10.8 & $0.199\pm0.021$ & $0.198\pm0.021$ \\ 60.0 & 0.00320 & 0.185 & 0.263 & 8.4 & $0.264\pm0.022$ & $0.264\pm0.022$ \\ 60.0 & 0.00200 & 0.295 & 0.335 & 8.8 & $0.339\pm0.030$ & $0.339\pm0.030$ \\ 60.0 & 0.00130 & 0.454 & 0.296 & 15.1 & $0.307\pm0.046$ & $0.306\pm0.046$ \\ 120.0 & 0.01300 & 0.091 & 0.133 & 14.1 & $0.133\pm0.019$ & $0.133\pm0.019$ \\ 120.0 & 0.00500 & 0.236 & 0.218 & 11.1 & $0.220\pm0.024$ & $0.220\pm0.024$ \\ 120.0 & 0.00200 & 0.591 & 0.351 & 12.8 & $0.375\pm0.048$ & $0.374\pm0.048$ \\ 200.0 & 0.01300 & 0.151 & 0.160 & 11.9 & $0.160\pm0.019$ & $0.160\pm0.019$ \\ 200.0 & 0.00500 & 0.394 & 0.237 & 13.5 & $0.243\pm0.033$ & $0.242\pm0.033$ \\ 300.0 & 0.02000 & 0.148 & 0.117 & 18.5 & $0.117\pm0.022$ & $0.117\pm0.022$ \\ 300.0 & 0.00800 & 0.369 & 0.273 & 12.7 & $0.278\pm0.035$ & $0.278\pm0.035$ \\ 650.0 & 0.03200 & 0.200 & 0.084 & 30.9 & $0.085\pm0.026$ & $0.084\pm0.026$ \\ 650.0 & 0.01300 & 0.492 & 0.195 & 16.2 & $0.203\pm0.033$ & $0.202\pm0.033$ \\ 2000.0 & 0.05000 & 0.394 & 0.059 & 36.4 & $0.060\pm0.022$ & $0.060\pm0.022$ \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*} \caption{\label{tab2} Values of $F_2^b(x,Q^2)$ extracted from the HERA measurements of $\tilde{\sigma}^b(x,Q^{2})$ for various values of $Q^2$ and $x$. The NLO H1 results \cite{H1HERA1} are compared with the LO predictions corresponding to the case of $\delta =0.5$.} \begin{center} \begin{tabular}{||ccc||cc||cc||} \hline $\quad Q^2 \quad$ & $x$ & ~$\quad y\quad$~ & ~$\quad \tilde{\sigma}^b \quad$~ & ~Error~ & $\qquad F_2^b$(NLO)$\qquad$ & $ F_2^b$(LO) \\ (GeV$^2$) & & & & (\%) & H1 & $\delta=0.5$ \\ \hline \hline 5. & 0.00020 & 0.246 & 0.00244 & 46.1 & $0.00244\pm0.00112$ & $0.00244\pm0.00113$ \\ 12. & 0.00032 & 0.369 & 0.00487 & 31.8 & $0.00490\pm0.00156$ & $0.00489\pm0.00156$ \\ 12. & 0.00080 & 0.148 & 0.00247 & 43.5 & $0.00248\pm0.00108$ & $0.00247\pm0.00108$ \\ 25. & 0.00050 & 0.492 & 0.01189 & 25.1 & $0.01206\pm0.00303$ & $0.01203\pm0.00302$ \\ 25. & 0.00130 & 0.189 & 0.00586 & 34.1 & $0.00587\pm0.00200$ & $0.00587\pm0.00200$ \\ 60. & 0.00130 & 0.454 & 0.01928 & 25. & $0.01969\pm0.00492$ & $0.01962\pm0.00490$ \\ 60. & 0.00500 & 0.118 & 0.00964 & 32.6 & $0.00965\pm0.00315$ & $0.00965\pm0.00315$ \\ 200. & 0.00500 & 0.394 & 0.02365 & 23.2 & $0.02422\pm0.00562$ & $0.02415\pm0.00560$ \\ 200. & 0.01300 & 0.151 & 0.01139 & 34.4 & $0.01142\pm0.00393$ & $0.01142\pm0.00393$ \\ 650. & 0.01300 & 0.492 & 0.01331 & 34.7 & $0.01394\pm0.00484$ & $0.01388\pm0.00481$ \\ 650. & 0.03200 & 0.200 & 0.01018 & 30.1 & $0.01024\pm0.00308$ & $0.01023\pm0.00308$ \\ 2000. & 0.05000 & 0.394 & 0.00499 & 61.1 & $0.00511\pm0.0031$ & $0.00511\pm0.00312$ \\ \hline \end{tabular} \end{center} \end{table*} High accuracy of our LO approach is explained as follows. One can see from Eq.(32) that the LO corrections to the extracted function $F_2(x,Q^2)$ due to the non-zero value of $R_2(x,Q^2)$ cannot exceed $30\%$ because the ratio $R_2(x,Q^2)$ is itself less than 0.3 practically in the entire region of the variables $x$ and $Q^2$. For this reason, the NLO corrections to $R_2(x,Q^2)$, having a relative size of the order of $10\%$, cannot affect the value of $F_2(x,Q^2)$ by more than $3\%$. In reality, the effect of radiative corrections to $R_2(x,Q^2)$ on the extracted values of $F_2(x,Q^2)$ is less than $1\%$ since $y\ll 1$ in most of the experimentally accessible kinematic range. Taking into account that typical experimental errors are of about (10--20)$\%$, we conclude that our analytic predictions for $R_2(x,Q^2)$ and $A(x,Q^2)$ will be useful in extraction of the structure functions from presently available and future data. The structure functions $F_A$ and $F_I$ can be extracted from the $\varphi$-dependent DIS cross section, \begin{eqnarray} \frac{\mathrm{d}^{3}\sigma_{lN}}{\mathrm{d}x\mathrm{d}Q^{2}\mathrm{d}\varphi} &=&\frac{2\alpha^{2}_{em}y^2}{Q^4(1-\varepsilon)}\Biggl[\frac{1}{2x} F_{2}( x,Q^{2})- (1-\varepsilon) F_{L}(x,Q^{2}) \Bigr. \nonumber \\ &&+\varepsilon F_{A}( x,Q^{2})\cos 2\varphi \nonumber \\ &&+\Bigl. 2\sqrt{\varepsilon(1+\varepsilon)} F_{I}( x,Q^{2})\cos \varphi\Biggr], \label{32} \end{eqnarray} where $\varepsilon=\frac{2(1-y)}{1+(1-y)^2}$. For this purpose, one should measure the first moments of the $\cos(\varphi)$ and $\cos(2\varphi)$ distributions defined as \begin{equation} \label{33} \langle \cos n\varphi \rangle (x,Q^{2})= \frac{\int_{0}^{2\pi }\mathrm{d}\varphi \cos n\varphi \frac{\mathrm{d}^{3}\sigma _{lN}} {\mathrm{d}x\mathrm{d}Q^{2}\mathrm{d}\varphi } (x,Q^{2},\varphi ) }{\int_{0}^{2\pi }\mathrm{d}\varphi \frac{\mathrm{d}^{3}\sigma_{lN}} {\mathrm{d}x\mathrm{d}Q^{2}\mathrm{d}\varphi } (x,Q^{2},\varphi ) }. \end{equation} Using Eq.~(\ref{32}), we obtain: \begin{eqnarray} \langle \cos 2\varphi \rangle(x,Q^{2})&=&\frac{1}{2}\frac{\varepsilon A(x,Q^{2})}{1-(1-\varepsilon)R_2(x,Q^{2})}, \nonumber \\ A(x,Q^{2})&=&2x\frac{F_{A}}{F_{2}}(x,Q^{2}),\label{34} \end{eqnarray} and \begin{eqnarray} \langle \cos \varphi \rangle(x,Q^{2})&=&\frac{\sqrt{\varepsilon (1+\varepsilon)} A_I(x,Q^{2})}{1-(1-\varepsilon)R_2(x,Q^{2})}, \nonumber \\ A_I(x,Q^{2})&=&2x\frac{F_{I}}{F_{2}}(x,Q^{2}).\label{35} \end{eqnarray} One can see from Eqs.~(\ref{34}) and (\ref{35}) that, using the perturbatively stable predictions (\ref{24}) for $R_2(x,Q^{2})$, we will be able to determine the structure functions $F_A(x,Q^{2})$ and $F_I(x,Q^{2})$ from future data on the moments $\langle\cos 2\varphi\rangle$ and $\langle\cos \varphi\rangle$. On the other hand, according to Eq.~(\ref{34}), the analytic results (\ref{24}) and (\ref{29}) for the quantities $R_2(x,Q^{2})$ and $A(x,Q^{2})$ provide us with the perturbatively stable predictions for $\langle\cos 2\varphi\rangle$ which may be directly tested in experiment. So, our obtained analytic and perturbatively stable predictions for the ratios $R_2(x,Q^{2})$ and $A(x,Q^{2})$ will simplify both the extraction of structure functions from measurements of the $\varphi$-dependent cross section (\ref{32}) and test of self-consistency of the extraction procedure. \section{\label{resum} Resummation of the Mass Logarithms} In this Section, we discuss the properties of the quantities $R(x,Q^2)$ and $A(x,Q^2)$ within the variable-flavor-number scheme (VFNS) \cite{ACOT,Collins}. The VFNS is an approach alternative to the traditional fixed-flavor-number scheme (FFNS) where only light degrees of freedom ($u,d,s$ and $g$) are considered as active. Within the VFNS, the mass logarithms of the type $\alpha_{s}\ln\left( Q^{2}/m^{2}\right)$ are resummed through the all orders into a heavy quark density which evolves with $Q^{2}$ according to the standard DGLAP \cite{DGLAP1,DGLAP2,DGLAP3} evolution equations. Hence this approach introduces the parton distribution functions (PDF) for the heavy quarks and changes the number of active flavors by one unit when a heavy quark threshold is crossed. At leading order, ${\cal O}(\alpha_{s}^0)$, the only photon-quark scattering (QS) subprocess within the VFNS is \begin{equation} \gamma ^{*}(q)+Q(k_{Q})\rightarrow Q(p_{Q}). \label{36} \end{equation} Corresponding Feynman diagram is depicted in Fig.~\ref{Fg.2}\emph{b}. The ${\cal O}(\alpha_{s}^0)$ $\gamma ^{*}Q$ cross sections, $\hat{\sigma}_{k,\mathrm{Q}}^{(0)}(z,\lambda)$, are: \begin{eqnarray} \hat{\sigma}_{2,\mathrm{Q}}^{(0)}(z,\lambda)&=&\hat{\sigma}_{B}(z)\sqrt{1+4\lambda z^{2}}\, \delta(1-z), \nonumber \\ \hat{\sigma}_{L,\mathrm{Q}}^{(0)}(z,\lambda)&=&\hat{\sigma}_{B}(z)\frac{4\lambda z^{2}} {\sqrt{1+4\lambda z^{2}}}\,\delta(1-z), \label{37} \\ \hat{\sigma}_{A,\mathrm{Q}}^{(0)}(z,\lambda)&=&\hat{\sigma}_{I,\mathrm{Q}}^{(0)}(z,\lambda)=0, \nonumber \end{eqnarray} with $z=Q^{2}/(2q\cdot k_{Q})$ and $\hat{\sigma}_{B}(z)=(2\pi)^2e_{Q}^{2}\alpha_{\mathrm{em}}\,z/Q^{2}$. Within the VFNS, the mass logarithms of the type $\alpha_s^n\ln^n(Q^{2}/m^{2})$, which dominate the production cross sections at high energies, $Q^{2}\rightarrow \infty$, are resummed via the renormalization group equations. In practice, the resummation procedure consists of two steps. First, the mass logarithms have to be subtracted from the fixed order predictions for the partonic cross sections in such a way that, in the limit $Q^{2}\rightarrow \infty$, the well known massless $\overline{\text{MS}}$ coefficient functions are recovered. Instead, a heavy-quark density in the hadron, $h(x,Q^{2})$, has to be introduced. This density obeys the usual massless NLO DGLAP evolution equation with the boundary condition $h(x,Q^{2}=Q_{0}^2)=0$ where $Q_{0}^2\sim m^{2}$. Within the VFNS, the treatment of heavy quarks depends on the values chosen for $Q^{2}$. At low $Q^{2}<Q_{0}^2$, the production cross sections are described by the light parton contributions ($u,d,s$ and $g$). The heavy-flavor production is dominated by the GF process and its higher order QCD corrections. At high $Q^{2}\gg m^{2}$, the heavy quark is treated in the same way as the other light quarks and it is represented by a heavy-quark parton density in the hadron. In the intermediate scale region one has to make a smooth connection between the two different prescriptions. Strictly speaking, the perturbative heavy-flavor density is well defined at high $Q^2\gg m^2$ but does not have a clean interpretation at low $Q^2$. Since the heavy-quark distribution originates from resummation of the mass logarithms of the type $\alpha_s^n\ln^n (Q^{2}/m^{2})$, it is usually assumed that the corresponding PDF vanishes with these logarithms, i.e. for $Q^{2}<Q_{0}^2\sim m^{2}$. On the other hand, the threshold constraint $W^2=(q+p)^2=Q^2(1/x-1)>4m^2$ implies that $Q_0$ is not a constant but "live" function of $x$. To avoid this problem, several solutions have been proposed. (For a review, see Ref.~\cite{sacot12-2}.) In our analysis, the so-called ACOT($\chi$) scheme \cite{chi} is used. According to the ACOT($\chi$) prescription, the lowest order, ${\cal O}(\alpha_{s})$, hadron-level cross section for charm production is \begin{eqnarray} &&\sigma^{(\mathrm{ACOT})}_{2}(x,\lambda)=\hat{\sigma}_{B}(x)c_{+}(\chi,\mu_{F})+\int\limits_{\chi}^{1}\text{d}z\,g(z,\mu_{F}) \nonumber \\ &&\times\!\!\left[\hat{\sigma}_{2,\mathrm{g}}^{(0)} \!\left(x/z,\lambda\right)-\frac{\alpha_{s}}{\pi}\ln\frac{\mu_{F}^{2}}{m^{2}}\;\hat{\sigma}_{B}\left(x/z\right)P^{(0)}_{g\rightarrow c}\left(\chi/z\right)\right]\!\!. \label{38} \end{eqnarray} In Eq.~(\ref{38}), \begin{equation} \chi=x(1+4\lambda), \label{38a} \end{equation} $P^{(0)}_{g\rightarrow c}$ is the LO gluon-quark splitting function, $P^{(0)}_{g\rightarrow c}(\zeta)=\left.\left[(1-\zeta)^{2}+\zeta^{2}\right]\right/2$, $c_{+}(\zeta,\mu_{F})=c(\zeta,\mu_{F})+\bar{c}(\zeta,\mu_{F})$, and the ${\cal O}(\alpha_{s})$ photon-gluon fusion cross section $\hat{\sigma}_{2,\mathrm{g}}^{(0)}$ is given by Eq.~(\ref{8}). One can see from Eqs.~(\ref{8}) that the longitudinal and azimuth-dependent cross sections, $\hat{\sigma}_{L,\mathrm{g}}^{(0)}$ and $\hat{\sigma}_{A,\mathrm{g}}^{(0)}$, are infra-red safe; the contributions of the potentially large logarithms of the type $\ln (Q^{2}/m^{2})$ to these quantities vanish for $\lambda \to 0$. For this reason, the ${\cal O}(\alpha_{s})$ hadron-level longitudinal and azimuth-dependent cross sections within the VFNS have the same form as in the FFNS: \begin{equation}\label{39} \sigma^{(\mathrm{ACOT})}_{k}(x,\lambda)=\int\limits_{\chi}^{1}\text{d}z\,g(z,\mu_{F})\, \hat{\sigma}_{k,\mathrm{g}}^{(0)}\!\left(x/z,\lambda\right) \quad (k=L,A). \end{equation} In Figs.~\ref{Fg7} and \ref{Fg8}, we present the ${\cal O}(\alpha_{s})$ and ${\cal O}(\alpha_{s}^2)$ FFNS predictions for the structure function $F_2(x,Q^2)$ and Callan-Gross ratio $R(x,Q^2)=F_L/F_T$ in charm leptoproduction, and compare them with the corresponding ${\cal O}(\alpha_{s})$ ACOT($\chi$) results \cite{chi}. In our calculations, the CTEQ6M parameterization for PDFs and $m_c=1.3$~GeV for c-quark mass are used \cite{CTEQ6}. \begin{figure*} \begin{center} \begin{tabular}{ll} \mbox{\epsfig{file=F2Q1.eps,width=230pt}} & \mbox{\epsfig{file=F2Q2.eps,width=230pt}} \end{tabular} \caption{\label{Fg7}\small ${\cal O}(\alpha_{s})$ (solid lines), ${\cal O}(\alpha_{s}^2)$ (dashed lines) FFNS results and ${\cal O}(\alpha_{s})$ ACOT($\chi$) (dotted curves) predictions for $F_2(x,Q^2)$ in charm leptoproduction at $x=10^{-1}$ and $10^{-2}$.} \end{center} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{ll} \mbox{\epsfig{file=RQ1.eps,width=230pt}} & \mbox{\epsfig{file=RQ2.eps,width=230pt}} \end{tabular} \caption{\label{Fg8}\small ${\cal O}(\alpha_{s})$ (solid lines), ${\cal O}(\alpha_{s}^2)$ (dashed lines) FFNS results and ${\cal O}(\alpha_{s})$ ACOT($\chi$) (dotted curves) predictions for $R(x,Q^2)$ in charm leptoproduction at $x=10^{-1}$ and $10^{-2}$.} \end{center} \end{figure*} One can see from Fig.~\ref{Fg7} that both the radiative corrections and charm-initiated contributions to $F_{2}(x,Q^{2})$ are large: they increase the ${\cal O}(\alpha_{s})$ FFNS results by approximately a factor of two at $x\sim 10^{-1}$ for all $Q^2$. At the same time, the relative difference between the dashed and dotted lines is not large: it does not exceed $25\%$ for $\xi=Q^2/m^2<10^{3}$. We conclude that it will be very difficult to determine the charm content of the proton using only data on $F_{2}(x,Q^{2})$ due to large radiative corrections (with corresponding theoretical uncertainties) to this quantity. Considering the corresponding predictions for the quantity $R(x,Q^2)$ presented in Fig.~\ref{Fg8}, we see that, in this case, the ${\cal O}(\alpha_{s}^2)$ FFNS and charm-initiated ${\cal O}(\alpha_{s})$ ACOT($\chi$) contributions are strongly different. In particular, the ${\cal O}(\alpha_{s}^2)$ FFNS corrections to $R(x,Q^2)$ are small, less than $15\%$, for $x\sim 10^{-2}$--$10^{-1}$ and $\xi<10^{4}$. At the same time, the ${\cal O}(\alpha_{s})$ charm-initiated contributions to $R(x,Q^2)$ are large: they decrease the ${\cal O}(\alpha_{s})$ FFNS predictions by about $50\%$ practically for all values of $\xi>10$. This is due to the fact that resummation of the mass logarithms has different effects on the structure functions $F_{T}(x,Q^{2})$ and $F_{L}(x,Q^{2})$. In particular, contrary to the transverse structure function, the longitudinal one does not contain leading mass logarithms of the type $\alpha_s\ln (Q^{2}/m^{2})$ at both ${\cal O}(\alpha_{s})$ and ${\cal O}(\alpha_{s}^2)$ \cite{BMSMN}. For this reason, resummation of these logarithms within the VFNS leads to increasing of the quantity $F_{T}$ but does not affect the function $F_{L}$. We conclude that the Callan-Gross ratio $R(x,Q^2)=F_L/F_T$ could be good probe of the charm density in the proton at $x\sim 10^{-2}$--$10^{-1}$. \begin{figure*} \begin{center} \begin{tabular}{ll} \mbox{\epsfig{file=AQ1.eps,width=230pt}} & \mbox{\epsfig{file=AQ2.eps,width=230pt}} \end{tabular} \caption{\label{Fg9}\small ${\cal O}(\alpha_{s})$ FFNS (solid lines) and ACOT($\chi$) (dotted curves) predictions for $A(x,Q^2)$ in charm leptoproduction at $x=10^{-1}$ and $10^{-2}$.} \end{center} \end{figure*} Fig.~\ref{Fg9} shows the ${\cal O}(\alpha_{s})$ FFNS and ACOT($\chi$) predictions for the azimuthal asymmetry $A(x,Q^2)=2xF_{A}/F_{2}$ at $x=10^{-1}$ and $10^{-2}$.\footnote{We do not provide the radiative corrections for $A(x,Q^2)$ because the corresponding exact ${\cal O}(\alpha_{s}^2)$ predictions are not presently available while the soft-gluon approximation is unreliable for high $Q^{2}\gg m^{2}$.} One can see from Fig.~\ref{Fg9} that the mass logarithms resummation leads to a sizeable decreasing of the ${\cal O}(\alpha_{s})$ FFNS predictions for the $\cos2\varphi$ asymmetry. In the ACOT($\chi$) scheme, the charm-initiated contribution reduces the FFNS results for $A(x,Q^{2})$ by about $(30$--$40)\%$. The origin of this reduction is the same as in the case of $R(x,Q^{2})$: in contrast to $F_{2}(x,Q^{2})$, the azimuth-dependent structure function $F_{A}(x,Q^{2})$ is safe in the limit $m^2\to 0$. We see that the impact of the mass logarithms resummation on the $\cos2\varphi$ asymmetry is essential at $x\sim 10^{-2}$--$10^{-1}$ and therefore can be tested experimentally. \begin{figure*} \begin{center} \begin{tabular}{ll} \mbox{\epsfig{file=cg_10nn.eps,width=230pt}} & \mbox{\epsfig{file=cg_100nn.eps,width=230pt}} \end{tabular} \caption{\label{Fg10}\small Predictions of the CT14n (solid line), CT10n (dotted line), CTEQ6M (dashed line), CT14nn (dash-dotted line) and CT10nn (long-dashed line) versions of the PDFs for the quantity $(c+\bar{c})/g~(x,Q^2)$ at $Q^2/m^2=10$ and $10^2$.} \end{center} \end{figure*} Note that our conclusions depend weakly on the PDFs in use since we analyze the ratios of the hadron-level cross sections. Moreover, one can see from Fig.~\ref{Fg10} that all the last "nlo" and "nnlo" sets of the CTEQ PDFs \cite{CTEQ6,CT10,CT14} predict practically the same values for the charm content of the proton: $(c+\bar{c})/g~(x,Q^2)\approx$ (6--7)$\%$ in wide region of $x$ and $Q^{2}$. In Figs.~\ref{Fg7}--\ref{Fg9}, we present the ${\cal O}(\alpha_{s})$ ACOT results for $R(x,Q^{2})$ and $A(x,Q^{2})$ as simplest illustrative examples. Our main conclusions about the resummation of mass logarithms are valid at both ${\cal O}(\alpha_{s})$ and ${\cal O}(\alpha_{s}^2)$. Indeed, a simple consideration of the FFNS LO and NLO results for the photon-gluon fusion \cite{LRSN,BMSMN} shows that the contribution of the leading mass logarithms to $F_2(x,Q^{2})$ has the form $\alpha_{s}^{n}\ln^{n}(Q^{2}/m^{2})$. The contribution of the leading mass logarithms to $F_L(x,Q^{2})$ is suppressed and has the form $(m^{2}/Q^{2})~\alpha_{s}^{n}\ln^{n}(Q^{2}/m^{2})$.\footnote{As to the subleading logarithms, $\alpha_{s}^{n}\ln^{n-1}(Q^{2}/m^{2})$, their resummation is expected to be suppressed by $\alpha_{s}$.} Thus we can conclude that, contrary to $F_2(x,Q^{2})$, the resummation of mass logarithms for $F_L(x,Q^{2})$ is of subleading twist to all orders in $\alpha_{s}$. The same situation takes also place for $F_A(x,Q^{2})$. We have verified this statement in first two orders of perturbation theory. One can see from Eqs.~(\ref{37}) that the lowest order QS subprocess is $\varphi$-independent, $\hat{\sigma}_{A,\mathrm{Q}}^{(0)}(z,\lambda)=0$, and has suppressed longitudinal component, $\hat{\sigma}_{L,\mathrm{Q}}^{(0)}(z,\lambda)\sim m^{2}/Q^{2}$. In Ref.~\cite{we5}, the radiative corrections to the QS subprocess have been calculated. Our analysis shows that the ${\cal O}(\alpha_{s})$ predictions for both $\hat{\sigma}_{A,\mathrm{Q}}^{(1)}(z,\lambda)$ and $\hat{\sigma}_{L,\mathrm{Q}}^{(1)}(z,\lambda)$ are negligible for $Q^{2}/m^{2}\gg 1$. So, we conclude that, contrary to the transverse component of the QS contribution, the longitudinal and azimuthal ones are of subleading twist to all orders in $\alpha_{s}$. This fact implies that resummation of the mass logarithms for the longitudinal and azimuth-dependent cross sections is, in principle, not necessary. For this reason, the VFNS predictions for $R(x,Q^{2})$ and $A(x,Q^{2})$ are smaller than the FFNS ones in both ${\cal O}(\alpha_{s})$ and ${\cal O}(\alpha_{s}^2)$. The difference between the FFNS and VFNS predictions for $R^c(x,Q^{2})$ is determined by the relative value of the charm density contribution to $F_2^c(x,Q^{2})$. The smaller the relative size of the charm-initiated contribution to $F_2^c(x,Q^{2})$, the less the difference between the FFNS and VFNS predictions for $R^c(x,Q^{2})$. The same situation takes also place for $A^c(x,Q^{2})$. The ${\cal O}(\alpha_{s}^2)$ S-ACOT results presented in Refs.~\cite{sacot12-2,sacot12} clearly support our expectations. In particular, one can see from Fig.5 in Ref.~\cite{sacot12} that the difference between the VFNS and FFNS curves for $F_L^c(x,Q^{2})$ is of about 5$\%$ at $x=10^{-2}$ and $\xi\lesssim 10^2$.\footnote{At very high $Q^2$, there is ambiguity in separation of the heavy- and light-quark components for the structure functions, $F_{2,L}^{l,c}(x,Q^{2})$ within the ${\cal O}(\alpha_{s}^2)$ VFNS. For this reason, the definition (42) in Ref.~\cite{sacot12} for heavy-quark components may be inappropriate at $\xi > 10^2$.} The corresponding difference for $F_2^c(x,Q^{2})$ at ${\cal O}(\alpha_{s}^2)$ is of about (15--20)$\%$. Using $F_L^{\mathrm{VFNS}}(x,Q^{2})=F_L^{\mathrm{FFNS}}(x,Q^{2})$, one can obtain from ~\cite{sacot12} that resummation of the mass logarithms reduces the FFNS results for $R^c$ within the ${\cal O}(\alpha_{s}^2)$ S-ACOT scheme by about 20$\%$. Remember, that the corresponding reduction within the ${\cal O}(\alpha_{s})$ ACOT($\chi$) approach is of about 50$\%$. We see that the ${\cal O}(\alpha_{s}^2)$ FFNS and VFNS predictions for $R^c(x,Q^{2})$ are closer to each other than the ${\cal O}(\alpha_{s})$ ones. This fact is in accordance with our expectations because the relative contribution of the charm-initiated component to $F_2^c(x,Q^{2})$ at ${\cal O}(\alpha_{s}^2)$ is less than at ${\cal O}(\alpha_{s})$. Indeed, while the ratio $(c+\bar{c})/g~(x,Q^2)$ is practically the same in both "nlo" and "nnlo" sets of available PDFs, the ${\cal O}(\alpha_{s}^2)$ predictions for $F_2^c(x,Q^{2})$ contain sizable light-quark initiated contributions which are absent at ${\cal O}(\alpha_{s})$. \section{Conclusion} We conclude by summarizing our main observations. In the present paper, we first review the available theoretical results for the Callan-Gross ratio, $R(x,Q^{2})$, and azimuthal $\cos(2\varphi)$ asymmetry, $A(x,Q^{2})$, in heavy-quark leptoproduction. It turned out that large (especially, at non-small $x$) radiative corrections to the structure functions cancel each other in their ratios $R(x,Q^2)=F_L/F_T$ and $A(x,Q^{2})=2xF_A/F_2$ with good accuracy. As a result, the ${\cal O}(\alpha_{s}^2)$ contributions to the ratios $R(x,Q^{2})$ and $A(x,Q^{2})$ do not exceed $10$--$15\%$ in a wide region of the variables $x$ and $Q^2$. Our analysis shows that, sufficiently above the production threshold, the pQCD predictions for $R(x,Q^2)$ and $A(x,Q^{2})$ are insensitive (to within ten percent) to standard uncertainties in the QCD input parameters and to the DGLAP evolution of PDFs. We conclude that, unlike the production cross sections, the Callan-Gross ratio and $\cos(2\varphi)$ asymmetry in heavy-quark leptoproduction are quantitatively well defined in pQCD. Measurements of the quantities $R(x,Q^2)$ and $A(x,Q^{2})$ in charm and bottom leptoproduction would provide a good test of the conventional parton model based on pQCD. Then we discuss some experimental and phenomenological applications of the observed perturbative stability. Our main conclusion is that the quantities $R(x,Q^{2})$ and $A(x,Q^{2})$ will be good probes of the heavy-quark densities in the proton. The VFN schemes have been proposed to resum the mass logarithms of the form $\alpha_{s}^{n}\ln^{n}(Q^{2}/m^{2})$ which dominate the production cross sections at high energies, $Q^2\to \infty$. Evidently, were the calculation done to all orders in $\alpha_{s}$, the VFNS and FFNS would be exactly equivalent. There is a point of view advocated in Refs.~\cite{ACOT,Collins} that, at high energies, the perturbative series converges better within the VFNS than in the FFNS. There is also another opinion \cite{Stratmann,BMSN,Neerven} that the above logarithms do not vitiate the convergence of the perturbation expansion so that a resummation is, in principle, not necessary. Our analysis indicates two promising experimental ways to resolve this problem: using the Callan-Gross ratio and/or azimuthal $\cos(2\varphi)$ asymmetry in DIS. The quantities $R(x,Q^2)$ and $A(x,Q^2)$ are perturbatively stable in the FFNS but sensitive to resummation of the mass logarithms of the type $\alpha_{s}\ln\left( Q^{2}/m^{2}\right)$ within the VFNS. Our analysis shows that resummation of the mass logarithms leads to reduction of the ${\cal O}(\alpha_{s})$ FFNS predictions for $A(x,Q^2)$ and $R(x,Q^2)$ by $(30$--$50)\%$ at $x\sim 10^{-2}$--$10^{-1}$ and $Q^2\gg m^2$.\footnote{Within the ${\cal O}(\alpha_{s}^2)$ S-ACOT VFNS \cite{sacot12}, the corresponding reduction of the FFNS predictions for $R^c(x,Q^2)$ is estimated to be (15--20)$\%$.} Therefore measurements of the ratios $R(x,Q^2)$ and $A(x,Q^2)$ in heavy-qaurk leptoproduction would make it possible to clarify the question whether the VFNS perturbative series is more reliable than the FFNS one. As to the experimental aspects, the Callan-Gross ratio and azimuthal $\cos(2\varphi)$ asymmetry in heavy-flavor leptoproduction can be measured in the current COMPASS and proposed EIC \cite{EIC} and LHeC \cite{LHeC} experiments. \begin{acknowledgements} The author is grateful to S.~J.~Brodsky, A.~V.~Efremov, A.~V.~Kotikov, A.~B.~Kniehl, E.~Leader, S.~O.~Moch, A.~G.~Oganesian O.~V.~Teryaev and C.~Weiss for useful discussions. We thank S.~I.~Alekhin and J.~Bl\"umlein for providing us with fast code \cite{Blumlein} for numerical calculations of the NLO partonic cross sections. This work is supported in part by the State Committee of Science of RA, grant 15T-1C223. \end{acknowledgements}
\section{Introduction} Thanks to astronomical observations the modern cosmology has elaborated the concept of the standard cosmological model called the cold dark matter model with the cosmological constant ($\Lambda$CDM model). From a methodological point of view cosmology has achieved the similar status as the particle physics with its standard model of particles. The $\Lambda$CDM model acts as an effective theory describing the Universe from redshift $z = 0$ (today) to $z = 10^9$ (the epoch of primordial nuclesynthesis). In the very structure of the $\Lambda$CDM model there are essentially two components: matter (dark matter and baryonic matter) and dark energy. It is assumed that the universe is spatially homogeneous and isotropic, and that its evolution is governed by Einstein field equations with the energy momentum tensor for the ideal fluid satisfying the barotropic equation of state $p = p(\rho)$, where $\rho$ is the energy density of fluid. From the observational point of view it is convenient to use dimensionless density parameters $\Omega_i$, defined as the fractions of critical density $3 H_0^2$ which corresponds to a flat model. These parameters are observables which can be determined from astronomical observations \cite{Ade:2015xua}. In the $\Lambda$CDM model it is assumed that all fluids are non-interacting. This implies that the densities of baryonic matter and dark matter are scaling with respect to redshift as $(1 + z)^3$ and dark energy has constant density. The natural interpretation of the cosmological constant is to treat it as the energy of the quantum vacuum \cite{Weinberg:1988cp}. The cosmological model with the cosmological constant term and pressureless matter fits well to the observation data of measurements of SNIa luminosity distance as a function of redshift \cite{Perlmutter:1997zf} and observations microwave relic radiation (WMAP, Planck). Measurements of large-scale structures also remain consistent with the $\Lambda$CDM model. Although the $\Lambda$CDM model describes well the present Universe the nature of its basic constituents (dark energy and dark energy) remains unknown. So dark energy and dark matter are like some useful fictions in the terminology of Nancy Cartwright \cite{Cartwright:1983hl}. Comparing the value of the cosmological constant required to explain the effect of the accelerated expansion of the Universe observations of distant supernovae type SNIa with the value of the cosmological constant interpreted as the energy of the quantum vacuum, we get the most incredible gap in history of physics $\rho_{vac} / \rho_{\Lambda} = 10^{143}$. In this context, the question is born why the value of the cosmological constant is the small, why not simply a zero? This is the known problem of the cosmological constant. In the model under the consideration, the comparison of $\rho_{vac} / \rho_{\Lambda}$ still gives 83 order of magnitude from the measurement value because $\rho_{vac}/\rho_{\Lambda}\simeq10^{60}$. Another problem related closely with the problem of the cosmological constant is the coincidence problem \cite{Peebles:1987ek}. This problem is caused by the lack of explanation of why in today's era of the density of dark matter and dark energy are comparable although it is assumed that they have different periods of recombination. In this paper we construct a cosmological model in which it is assumed that the process of interaction between sectors of dark matter and dark energy is continuous. Relativistic diffusion describes the transfer of energy to the sector of dark matter. As a result, we go beyond the standard model assuming from the outset that dark matter and dark energy interact. This effect is described by the running cosmological constant and the modification of the standard scaling law of the dark matter density. If we assume that general relativity is an effective theory which can be extrapolated to the Planck epoch, then the interpretation of the cosmological constant parameter appeared in the $\Lambda$CDM model as a vacuum energy seems to be natural. By equating this density to the energy density of the zero point energy that is left in a volume after removing all particles, then we obtain that its value is about 120 orders of magnitude higher than the corresponding value required for explanation of acceleration of the Universe in current epoch. In the Universe with such a high value of cosmological constant (dark energy) we have a rapid inflation and galaxies would have no time to form. The lack of explanation of this difference is called the cosmological constant problem. Its solution can be possible if we can find some physical mechanism lowering dramatically this value during the cosmic evolution. Of course this process should be defined in a covariant way following general relativity principles. Our hypothesis is that diffusion cosmology can offer the possibility of obtaining a low value of the cosmological constant today because the effects of diffusion effectively produce the running cosmological constant. We study how a value of effective running cosmological constant parameter changes during the cosmic evolution and for late time is going to be a small constant value. From the astronomical observations of distant supernovae SNIa and measurements of CMB by Planck, measurement of BAO and other astronomical observations we obtain that the present value of the energy densities of both dark energy and dark matter are of the same order of magnitude \cite{Velten:2014nra}. If we assume that standard cosmological model ($\Lambda$CDM model) is an adequate way to describe the cosmic evolution, then the value $\rho_{de}/\rho_{dm}$ will depend on the cosmological time or redshift and the question arises: Why are two quantities with different time recombination comparable at the present epoch? It is called the cosmic coincidence problem. We are looking for some physical relativistic mechanism which gives rise to this coincidence observed for the current Universe. In the opposite case very special initial conditions are required for its realization (fine tunning problem). In the framework of diffusion cosmology our investigation of this problem shows that while the values of dark matter and dark energy densities are comparable today they were significantly different in the past history of the Universe. Because the diffusion effects act effectively as fluids which interact each other during the cosmic evolution. In the consequence dark energy is running and a canonical rule of scaling dark matter $\rho_m$ proportional to $a^{-3}$ is adjusted. The main aim of our paper is to demonstrate how the coincidence problem can be naturally solved in the framework of diffusion cosmology. The interacting dark energy models have been considered by many authors in the context of this problem. One of reasons to study these models is to solve the cosmic coincidence problem \cite{Cai:2004dk,Berger:2007iw,Copeland:2006wr,Pavon:2005yx,Steinhardt:1997ccp}. To this aim different ad-hoc proposed models of an interacting term were postulated {\'a} prori. In these models the covariance of general relativity is usually violated and therefore they have limited application to cosmology. In the present work we consider a unique relativistic diffusion model where an interaction mechanism is motivated physically. In the study of evolutional scenarios of the model under consideration we apply the dynamical systems methods \cite{Perko:2001de}. Our model belongs to a general class of jungle type of cosmologies represented by coupled cosmological models in a Lotka-Volterra framework \cite{Perez:2013zya}. The crucial role in the organization of the phase space plays the critical point located inside the physical region. The possible bifurcation of this point are studied in details for extracting variability of DE and DM density as the function of the cosmological time. It is interesting that at this critical point $\rho_{dm} \propto \rho_{de}$ (scaling type solution). \section{Friedmann equation for diffusion interacting of dark matter with dark energy} Haba et al. postulated a particular model of an energy-momentum exchange between DM and DE sectors, while a baryonic sector was preserved \cite{Haba:2016swv}. In this paper we reconsider this model in the light of aforementioned cosmological problems. We assume Einstein equations in the form \begin{equation} R^{\mu \nu}-\frac{1}{2}g^{\mu\nu} R = T^{\mu\nu}, \end{equation} where $g^{\mu\nu}$ is the metric, $R^{\mu\nu}$ is the Ricci tensor. In this paper we use the natural units $8\pi G=c=1$. Because of a cosmological application we assume that the universe has topology $R\times M^3$, where $M^3$ is homogeneous and isotropic space. Then the spacetime metric depends only on one function of the cosmic time $t$--the scale factor $a(t)$. Additionally for simplicity we also assume flatness ($k=0$) of sections $t=\text{const}$. We decompose the energy-momentum tensor on two parts \begin{equation} T^{\mu \nu} = T_{de}^{\mu \nu} + T_{dm}^{\mu\nu}. \end{equation} We assume the conservation of the total energy momentum, which gives \begin{equation} -\nabla_\mu T_{de}^{\mu \nu} = \nabla_\mu T_{dm}^{\mu \nu}\equiv 3\kappa^2 J^{\nu},\label{con} \end{equation} where $\kappa^2$ is the diffusion constant and $J^{\nu}$ is the current, which represents a flow of stream of particles. We also assume that energy density of the dark matter consisting of particles of mass $m$ is transfered by a diffusion mechanism in an environment described by a perfect fluid. There is only unique diffusion which is relativistic invariant and preserves the particle-mass $m$ \cite{Dudley:1965lm}. The corresponding energy-momentum satisfies the conservation law (\ref{con}). The Friedman equation in the FRW metric with baryonic matter, dark matter and dark energy reads \begin{equation} 3H^2 =\rho_b+\rho_{dm}+ \rho_{de}, \end{equation} where $\rho_{dm}$ and $\rho_{\text{de}}$ are determined by relations \begin{gather} \rho_{\text{dm}} = \rho_{b,0}a^{-3}+\rho_{dm,0}a^{-3}+\gamma (t-t_0)a^{-3}, \\ \rho_{\text{de}} = \rho_{de}(0)-\gamma\int^{t}a^{-3}dt. \end{gather} The current $J^\mu$ in equation (\ref{con}) is conserved \cite{Haba:2010qj,Calogero:2011re,Calogero:2012kd} \begin{equation} \nabla_\mu J^\mu = 0. \end{equation} The above conservation condition for the FRW metric reduces to \begin{equation} J^0=\gamma/3\kappa^2 a^{-3} \end{equation} with a positive constant $\gamma$ which can be computed from the phase space distribution $\Omega(p, x)$ of diffusing particles \cite{Haba:2016swv}. The condition (\ref{con}) after calculation of divergence reduces to the continuity conditions for energy density of both matter and dark energy \begin{align} \dot\rho_{m}&=-3H\rho_{m}+\gamma a^{-3},\label{darkmatter}\\ \dot\rho_{de}&=-\gamma a^{-3},\label{darkenergy} \end{align} where we assume the equation of state for dark energy as $p_{de}=-\rho_{de}$ and for matter as $p_{m}=0$; a dot denotes differentiation with respect to the cosmological time $t$. Equation (\ref{darkmatter}) can be rewritten as \begin{equation} a^{-3}\frac{d}{dt}(\rho_{m} a^3)=\gamma a^{-3}\Leftrightarrow \frac{d}{dt}(E)=\gamma,\label{energy} \end{equation} where $E$ is the total energy of matter in the comoving volume $V\sim a^3$. From relation (\ref{energy}), we can obtain that $E=\gamma(t-t_0)$. In our paper \cite{Haba:2016swv} we considered one unique model of an energy transfer from dark energy (DE) to dark matter (DM) with the diffusive interaction in the dark sector where DE and DM can be treated as ideal fluids. Particles are scattering in an environment of other particles. If we assume that the subsequent scattering events are independent, the particle motion is described by a Markov process. In order, the assumption that the energy of the particle remains finite leads to the conclusion that the Markov process must be a diffusion. Therefore diffusion is in some sense unique because there is only one diffusion which is relativistic invariant and preserves the mass \cite{Dudley:1965lm,Franchi:2007rd,Haba:2008uy}. In consequence the interaction between DM and DE fluids is defined in a unique way. \section{Diffusion cosmology} In the investigation, the dark energy and dark matter interaction plays role in continuity equation. This equation are special case of jungle cosmological models \cite{Perez:2013zya}. We assume that $\rho_m=\rho_b+\rho_{dm}$, where $\rho_{b}$ is density of baryonic matter and $\rho_{dm}$ is density of dark matter. The equation of state for dark energy is expressed by $p_{de}=-\rho_{de}$ in our model, where $p_{de}$ is pressure of dark energy and the equation of state for matter is given by $p_{m}=0$, where $p_m$ is pressure of matter. The Friedmann equation is expressed in the following form \begin{equation} 3H^2=\rho_{b,0}a^{-3}+\rho_{dm,0}a^{-3}+\gamma (t-t_0)a^{-3}+\rho_{de}(0)-\gamma\int^{t}a^{-3}dt,\label{friedmann} \end{equation} where $\rho_{b,0}a^{-3}\equiv\rho_b$, $\rho_{dm,0}a^{-3}+\gamma (t-t_0)a^{-3}\equiv\rho_{dm}$, $\rho_{de}(0)-\gamma\int^{t}a^{-3}dt\equiv\rho_{de}$. From the Friedmann formula we get a condition \begin{equation} 1=\Omega_{m}+\Omega_{de},\label{condition} \end{equation} where $\Omega_{m}=\frac{\rho_{m}}{3H^2}$ and $\Omega_{de}=\frac{\rho_{de}}{3H^2}$ are dimensionless density parameters. We can obtain equations (\ref{darkmatter})-(\ref{friedmann}) in the form of the dynamical system $x'=f_{x}(x,y,\delta)$, $y'=f_{y}(x,y,\delta)$ and $\delta'=f_{\delta}(x,y,\delta)$, where $x=\Omega_{m}$, $y=\Omega_{de}$, $\delta=\frac{\gamma a^{-3}}{H\rho_{m}}$ and $'\equiv\frac{d}{d\ln{a}}$ is a differentiation with respect to the reparametrized time $\ln a(t)$. For these variables, the dynamical system is in the following form \begin{align} x'&=x(-3+\delta+3x),\label{dyn1}\\ y'&=x(-\delta+3y),\\ \delta'&=\delta(-\delta+\frac{3}{2}x).\label{dyn3} \end{align} From equation (\ref{condition}), we have the following relation \begin{equation} x+y=1. \end{equation} Then dynamical system (\ref{dyn1})-(\ref{dyn3}) is reduced to the two-dimension dynamical system. For analysis of the critical points in the infinity, we use the Poincar{\'e} sphere. Let us introduce new variables in which one can study dynamical behavior at infinity \begin{equation} X=\frac{x}{\sqrt{x^2+\delta^2}}, \quad \Delta=\frac{\delta}{\sqrt{x^2+\delta^2}}.\label{eq:var-xd} \end{equation} For these variables we get the dynamical system \begin{align} X'&=X\left[-\Delta^2(\frac{3}{2}X-\Delta)+(1-X^2) (3X+\Delta-3\sqrt{1-X^2-\Delta^2})\right],\label{poincare1} \\ \Delta'&=\Delta\left[(1-\Delta^2)(\frac{3}{2}X-\Delta)-X^2 (3X+\Delta-3\sqrt{1-X^2-\Delta^2})\right],\label{poincare2} \end{align} where $'\equiv \sqrt{1-X^2-\Delta^2}\frac{d}{d\tau}$. Critical points, for equation (\ref{poincare1})-(\ref{poincare2}), are presented in Table~\ref{table:1}. The phase portrait for the dynamical system (\ref{poincare1})-(\ref{poincare2}) is presented in Figure~\ref{fig:fig11}. In the phase portrait there is an interesting class of trajectories labeled as ``I'', starting from the critical point 6 and approaching the de Sitter state. Because the diffusion has the physical sense only for interval $t>t_0$ the corresponding cosmological solution should be cut off for any $t<t_0$ from the de-Sitter solution. Hence, we obtain that $a(t_0)$ is positive number, i.e. solution which represent critical point (6) is non singular. All trajectories starting from the de-Sitter state can be treated as a models of extended idea of emergent cosmology \cite{Guendelman:2014bva} \cite{Bag:2014tta}. While in the standard emergent cosmology universe is starting from the static Einstein model, trajectories of type I are simple realization of extended idea of emergent Universe in which Universe is starting rather from the stationary state. The results of our previous paper \cite{Haba:2016swv} show that the density parameter for total (dark and visible) $x$ and dimensionless parameter $\delta$ are constrained to $x\in(0.2724,\text{ }0.3624)$, $\delta \in (0.0000,\text{ }0.0364)$ at the $95\%$ confidence level. This domain is represented in the phase space by a shaded rectangle. Only these trajectories which intersect this domain are in good agreement with the observation at a 2$\sigma$ confidence level. Therefore observation favored the cosmological models starting from the Einstein-de Sitter solution and going toward the de Sitter attractor (trajectory II on the phase portrait). Note that on the phase portrait there are trajectories labeled as ``I'' starting from the de Sitter state and approaching the de Sitter state at late times. They are going toward a saddle point--representing a non-singular solution. However all these trajectories do not intersect the rectangle and therefore they are not favored by observation. The saddle point in the phase space is representing the Milne universe (see Table~\ref{table:1}). Therefore, the interacting term is proportional to $t^{-3}$ and consequently is of the form $\rho_{\text{dm}} = \Lambda_{\text{bare}} + \alpha^2 t^{-2}$. The cosmological model with such a parametrization of dark energy was studied by Szydlowski and Stachowski \cite{Szydlowski:2015bwa,Szydlowski:2015rga}. Figures \ref{fig:fig2a} and \ref{fig:fig2b} present the evolution of dark matter $\rho_{dm}$ as a function of the cosmological time $t$ for trajectories of type II. The evolution of the cosmological time for matter is determined by the following formula \begin{equation} \rho_{m}(t)=\rho_{m,0}a(t-t_0)^{-3}+\gamma (t-t_0) a(t-t_0)^{-3}.\label{matter} \end{equation} The addictive form of the scaling relation for dark matter (\ref{matter}) suggests that dark matter consists of two components: the first term scaling like $\rho_{m,0} a(t)^{-3}$ and the seconf term scaling like $\gamma t a(t)^{-3}$. The latter describes an amount of dark energy density which is transfered to dark energy sector by the diffusion process. In the unit of $\Omega_{\text{total}}$ the canonically scaling dark matter is $25.23\%$ while transfered dark energy is about $1.35\%$. The amount of transfered dark energy today is of the order $\gamma T$ where $T$ is the age of the Universe. In consequence of equation (\ref{matter}), $\delta(t)$ can be rewritten as \begin{equation} \delta(t)=\frac{1}{H\left(\frac{\rho_{m,0}}{\gamma}+t-t_0\right)}. \end{equation} Therefore at the present epoch we have \begin{equation} \delta(T)=\frac{1}{H_0 \left(\frac{\rho_{m,0}}{\gamma}+T\right)}, \end{equation} where $t_0=0$ and $t=T$ is the present age of the Universe. Note that while at late time $\delta(t)=\sqrt{\frac{3}{\Lambda}}\frac{1}{t}$ for small time $\delta(t)=\frac{3\gamma}{2\rho_{m,0}}(t-t_0)$. If we give $\gamma=0$ in equation (\ref{matter}) then $\rho_m$ is scaling in the canonical way. From equation (\ref{matter}) one can simply obtain that the density of dark matter is \begin{equation} \rho_{dm}=(\rho_{m,0}-\rho_{b,0})a(t-t_0)^{-3}+\gamma (t-t_0) a(t-t_0)^{-3}. \end{equation} Note that the interval of the values of $\rho_{dm}$ is $(0,+\infty)$ or $(0,\rho_{dm}^{\text{max}})$, which depends on a type of trajectories. The evolution of the scale factor $a(t)$ with respect to the cosmological time, for trajectories of type II, is demonstrated in Figure~\ref{fig:fig3}. The function $\delta(t)$, for trajectories of type II, is presented in Figure~\ref{fig:fig4} and the consideration for the maximum of this function is in the form \begin{equation} \left(\frac{\rho_{m,0}}{\gamma}+t_{max}\right)\rho_{m}(t_{\text{max}})=2 H(t_{\text{max}}), \end{equation} where $t_{\text{max}}$ is corresponding the value of the cosmological time at the maximum. The Hubble function $H(t)$, for trajectories of type II, is presented in Figure~\ref{fig:fig5}. Note that the Hubble function in the late time is constant. The evolution of $\rho_{de}$, for trajectories of type II, is shown in Figure~\ref{fig:fig10} and for late time $\rho_{de}$ is going toward constant value. The evolution of $\Omega_{m}/\Omega_{de}$, for trajectories of type II, is demonstrated in Figures \ref{fig:fig8} and \ref{fig:fig9}. For the trajectories of type I, the functions $H(t)$, $a(t)$, $\rho_{dm}$, $\rho_{de}$, $\Omega_{m}/\Omega_{de}$ and $\delta(t)$ are presented in Figures \ref{fig:fig6}, \ref{fig:fig7}, \ref{fig:fig12}, \ref{fig:fig13}, \ref{fig:fig14}, \ref{fig:fig15}. These figures show that there are two distinct behavior of trajectories of type I and II. While the trajectory of type II represents matter dominating model with a singularity the trajectories of type I represents the model without an initial singularity. \begin{table} \caption{Critical points for dynamical system (\ref{poincare1})-(\ref{poincare2}), their type and cosmological interpretation.} \label{table:1} \begin{center} \begin{tabular}{lllllll} \hline \scriptsize{No.} & \scriptsize{critical point} &\scriptsize{type of critical}&\scriptsize{type of universe} &\scriptsize{dominating part in} &\scriptsize{$H(t)$} & \scriptsize{$a(t)$}\\& & \scriptsize{point} & &\scriptsize{the Friedmann equation}& &\\ \hline \hline 1 & \scriptsize{$X_0=0$, $\Delta_0=0$} &\scriptsize{saddle} & \scriptsize{de Sitter universe} &\scriptsize{cosmological}&\scriptsize{$H(t)=\sqrt{\frac{\Lambda_{bare}}{3}}$}&\scriptsize{$a(t)\propto e^{\sqrt{\frac{\Lambda_{bare}}{3}}t}$}\\ & & &\scriptsize{without diffusion effect} &\scriptsize{constant}& &\\ 2 & \scriptsize{$X_0=\sqrt{2/11}$, $\Delta_0=3/\sqrt{22}$ }&\scriptsize{saddle} & \scriptsize{scaling universe} &\scriptsize{matter and}&\scriptsize{$H(t)=(t-t_0)^{-1}$}& \scriptsize{$a(t)\propto(t-t_0)$}\\ &&&\scriptsize{($\rho_{m}\propto\rho_{de}$)}&\scriptsize{dark energy}&&\\ 3 & \scriptsize{$X_0=1/\sqrt{2}$, $\Delta_0=0$ }&\scriptsize{unstable node} & \scriptsize{Einstein-de Sitter}&\scriptsize{matter}&\scriptsize{$H(t)=\frac{2}{3}(t-t_0)^{-1}$}&\scriptsize{$a(t)\propto(t-t_0)^{2/3}$}\\ & & & \scriptsize{universe} & & &\\ 4 & \scriptsize{$X_0=1$, $\Delta_0=0$ }&\scriptsize{stable node} & \scriptsize{static} &\scriptsize{matter and}&\scriptsize{$H(t)=0$}&\scriptsize{$a(t)=const$}\\ &&&\scriptsize{universe}&\scriptsize{running dark energy}&&\\ 5 & \scriptsize{$X_0=4/5$, $\Delta_0=-3/5$ }&\scriptsize{saddle} & \scriptsize{static} &\scriptsize{matter and}&\scriptsize{$H(t)=0$}&\scriptsize{$a(t)=const$}\\ &&&\scriptsize{universe}&\scriptsize{running dark energy}&&\\ 6 & \scriptsize{$X_0=0$, $\Delta_0=1$ }&\scriptsize{unstable node} & \scriptsize{de Sitter universe}&\scriptsize{running dark energy}&\scriptsize{$H(t)=\sqrt{\frac{\rho_{de}(0)}{3}}$}&\scriptsize{$a(t)\propto e^{\sqrt{\frac{\rho_{de}(0)}{3}}t}$}\\ & & & \scriptsize{with diffusion effect} & & &\\ 7 & \scriptsize{$X_0=0$, $\Delta_0=-1$} &\scriptsize{stable node} & \scriptsize{de Sitter universe}&\scriptsize{running dark energy}&\scriptsize{$H(t)=-\sqrt{\frac{\Lambda_{bare}}{3}}$}&\scriptsize{$a(t)\propto e^{-\sqrt{\frac{\Lambda_{bare}}{3}}t}$}\\ & & & \scriptsize{with diffusion effect} & & &\\ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig11} \caption{Phase portrait of dynamical system $x'=x(\delta+3x-3)$, $\delta'=\delta(-\delta+\frac{3}{2}x)$, where $x=\Omega_{m}=\frac{\rho_{m}}{3 H^2}$, $\delta=\frac{\gamma a^{-3}}{H \rho_{m}}$ and $'\equiv\frac{d}{d\ln a}$ on the Poincar{\'e} sphere coordinates are $X=\frac{x}{\sqrt{x^2+y^2+z^2}}$, $\Delta=\frac{\delta}{\sqrt{x^2+y^2+z^2}}$. Critical point (1) represents the de Sitter universe--a global attractor for all physical trajectories. Critical point (2) represents the scaling universe. Critical point (3) represents the Einstein-de Sitter universe. Critical points (4) and (5) represent the static universe. Critical point (6) represents the de Sitter universe. The gray region represents the domain of the present value of $X$ and $\Delta$, which is distinguished by astronomical data. Let us note trajectories lie in the domain with $\Delta<0$ represent the contracting model but there is no symmetry with respect to the $\Delta$-axis. At critical point (6), energy density of baryonic matter is negligible as well as density of dark matter and only effects of the relativistic diffusion are important.} \label{fig:fig11} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig2a} \caption{The evolution of dark matter energy density for trajectories of type II (for the best fitted values of model parameter together with confidence level at $95\%$). Dark matter $\rho_{dm}$ is expressed in [100$\times$km/(s Mpc)]$^2$. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$. } \label{fig:fig2a} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig2b} \caption{The evolution of dark matter energy density for trajectories of type II (for the best fitted values of model parameter together with confidence level at $95\%$ for the present epoch). Dark matter $\rho_{dm}$ is expressed in [100$\times$km/(s Mpc)]$^2$. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$. The value of the age of the Universe for the best fit with errors are presented by the dashed lines.} \label{fig:fig2b} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig3} \caption{Diagram of scale factor as $a$ function of cosmological time $t$ for trajectories of type II (for the best fitted values of model parameter together with confidence level at $95\%$). For the present epoch $T$ $a(T)=1$. A universe is starting from the initial singularity toward a de Sitter universe. This type of behavior is favored by the observational data. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig3} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig4} \caption{Evolution of dimensionless parameter $\delta$ of cosmological time $t$ for trajectories of type II (for the best fitted values of model parameter together with confidence level at $95\%$). Note that as trajectory in the phase space achieved the state of the pericentrum located in the saddle point, this state is corresponding on the diagram the maximum. Note that the existence of a maximum value of $\delta$ parameter $(\frac{\rho_{m,0}}{\gamma}+t_{max}-t_0)\rho_{m}(t_{max})=2 H(t_{max})$. For the late time $\delta(t)$ function is decreasing function of $t$ and $\rho(\infty)$=0. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig4} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig5} \caption{Dependence of Hubble function of trajectories of type II (for the best fitted values of model parameter together with confidence level at $95\%$). For late times $H(t)$ goes to constant values (deS$_+$). The Hubble function $H(t)$ is expressed in [100$\times$km/(s Mpc)]. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig5} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig8} \caption{Diagram of relation $\Omega_{m}/\Omega_{de}$ for trajectories of type II (for the best fitted values of model parameter together with confidence level at $95\%$). We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig8} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig9} \caption{Diagram of relation $\Omega_{m}/\Omega_{de}$ for trajectories of type II for the present epoch. Note that at the present epoch $\rho_{m,0}\propto\rho_{de,0}$ (therefore coincidence problem is solved). We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$. The value of the age of the Universe for the best fit with errors are presented by the dashed lines.} \label{fig:fig9} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig10} \caption{The evolution of dark energy density for the best fitted values of model parameter for trajectories of type II. Dark energy $\rho_{de}$ is expressed in [100$\times$km/(s Mpc)]$^2$. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig10} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig6} \caption{The relation of $H(t)$ for typical trajectory of type I. The $H(t)$ function is expressed in [100$\times$km/(s Mpc)]. Note that $H(0)$ is finite therefore it is not a singularity. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig6} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig7} \caption{Diagram of $a(t)$ for typical trajectory of type I. Note that $a(0)$ is finite therefore it is not a singularity. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig7} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig12} \caption{Diagram of $\rho_m (t)$ for typical trajectory of type I. Note that $\rho_{m}(0)$ is equal zero therefore it is not a singularity. Matter $\rho_{m}$ is expressed in [100$\times$km/(s Mpc)]$^2$. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig12} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig13} \caption{Diagram of $\rho_{de} (t)$ for typical trajectory of type I. Note that $\rho_{de}(0)$ is finite therefore it is not a singularity. Dark energy $\rho_{de}$ is expressed in [100$\times$km/(s Mpc)]$^2$. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig13} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig14} \caption{Diagram of $\Omega_m (t)/\Omega_{de} (t)$ for typical trajectory of type I. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig14} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig15} \caption{Diagram of $\delta (t)$ for typical trajectory of type I. We choose (s Mpc)/(100$\times$km) as a unit of the cosmological time $t$.} \label{fig:fig15} \end{figure} \section{Generalized diffusion cosmology} Dynamical system methods are especially suitable in investigation of dynamics of both fluids, dark energy and dark matter. Presented here dynamical system approach to the study DM-DE interaction in diffusion cosmology can be simply generalized to the case when both dark energy and dark matter satisfy a general form of the equation of state \begin{align} p_{de}&=w \rho_{de}, \\ p_{dm}&=\tilde{w} \rho_{dm}, \\ p_b&=0, \end{align} where $w$ and $\tilde{w}$ are constant coefficients equation of state for dark energy and matter respectively. Then the continuity equations for baryonic and dark matter and dark energy are presented by \begin{align} \dot\rho_{dm}&=-3(1+\tilde{w})H\rho_{dm}+\gamma a^{-3},\label{darkmatter3} \\ \dot\rho_{de}&=-3(1+w)H\rho_{de}-\gamma a^{-3},\label{darkenergy3} \\ \dot\rho_{b}&=-3H\rho_{b}.\label{baryon} \\ \end{align} The corresponding dynamical system assumes the form of a 3-dimensional autonomous dynamical system \begin{align} \frac{dx}{d\ln a}&=3x\left[(1+\tilde{w})(x-1)+(1+w)y+\frac{z}{3}\right],\\ \frac{dy}{d\ln a}&=3y[(1+w)(y-1)+(1+\tilde{w})x]-xz,\\ \frac{dz}{d\ln a}&=z\left[3\tilde{w}-z+\frac{3}{2}[(1+\tilde{w})x+(1+w)y]\right], \end{align} where we choose state variables $x =\Omega_m$, and $y=\Omega_{de}$ and $z =\delta$ like in previous considered case. Because $1=x+y$ the above dynamical system reduces to \begin{align} \frac{dx}{d\ln a}&=3x\left[(\tilde{w}-w)(x-1)+\frac{z}{3}\right],\label{dyn4}\\ \frac{dz}{d\ln a}&=z\left[3\tilde{w}-z+\frac{3}{2}[(1+\tilde{w})x+(1+w)(1-x)]\right].\label{dyn5} \end{align} Critical points of dynamical system (\ref{dyn4})-(\ref{dyn5}) are completed in Table \ref{table:2}. Especially there is an interesting critical point inside the admissible region $D=\{(x,z)\colon x\geq 0, z\geq 0\}$ representing scaling solution: $\rho_{dm}\propto\rho_{de}$. It is a saddle fixed point in the phase space $D$. This critical point is important in the context of solution of cosmic coincidence problem as well as scaling solution in the context of quintessence idea. \begin{table} \caption{Critical points for dynamical system (\ref{dyn4})-(\ref{dyn5}), their positions, types and cosmological interpretation.} \label{table:2} \begin{center} \begin{tabular}{lll} \hline No. & Critical point & type of the universe \\ \hline \hline 1& $x_0=0$, $z_0=0$ & de Sitter universe \\ && without diffusion\\ 2& $x_0=1$, $z_0=0$ & Einstein-de Sitter \\ 3& $x_0=-\frac{1+3w}{3(\tilde{w}-w)}$, $z_0=1 + 3 \tilde{w}$ & scaling universe $(\rho_m\propto\rho_{de})$ \\ 4& $x_0=0$, $z_0=3/2 (1 + 2 \tilde{w} + w)$ & de Sitter universe \\ && with diffusion\\ \\ \hline \end{tabular} \end{center} \end{table} Above system possesses critical points on the planes of coordinate system or inside the phase space $D=\{(x,z)\colon x,z\geq 0\}$. Of course system under consideration is restricted to the submanifold $x+y=1$, because the constrain condition $\Omega_m+\Omega_{de}=1$. The behavior of trajectories of dynamical system (\ref{dyn4})-(\ref{dyn5}) depends on the values of parameters $w$, $\tilde{w}$. By choosing different values of these parameters one can study how phase space structure changes under change of values of parameters. The equivalence of the phase portraits is establish following homeomorphism preserving direction of time along the trajectories. If there exist value of parameter for which phase is not topologically equivalent, then such value is bifurcation value. The stability of critical points depends on the linearization matrix. At the critical point (3), the linearization matrix has the following form \begin{equation} A=\left(\begin{array}{cc} \frac{\partial f_x(x,z)}{\partial x}|_{x_0,z_0} & \frac{\partial f_x(x,z)}{\partial z}|_{x_0,z_0}\\ \frac{\partial f_z(x,z)}{\partial x}|_{x_0,z_0} & \frac{\partial f_z(x,z)}{\partial z}|_{x_0,z_0} \end{array}\right)=\left(\begin{array}{cc} -1-3w & \frac{1+3w}{-3\tilde{w}+3w}\\ \frac{3}{2}(1+3\tilde{w})(\tilde{w}-w) & -1-3\tilde{w} \end{array}\right),\label{matrix} \end{equation} where $f_x(x,z)$ and $f_z(x,z)$ are the right sides of equations (\ref{dyn4}) and (\ref{dyn5}) and $x_0$ and $z_0$ are coordinates of critical point (3) (see table \ref{table:2}). The determinant of matrix (\ref{matrix}) can be expressed by the formula \begin{equation} \det A=\frac{3}{2}(1+3\tilde{w})(1+3w) \label{det} \end{equation} and the trace of matrix (\ref{matrix}) is described by \begin{equation} \text{tr } A=-2-3(\tilde{w}+w).\label{trace} \end{equation} Therefore the critical point (3) is stable when $w+\tilde{w}>-2/3$. The characteristic equation for matrix $A$ at critical point (3) is in the following form \begin{equation} \lambda^2-\text{tr }A\lambda+\det{A}=\lambda^2+(2+3\tilde{w}+3w)\lambda+\frac{3}{2}(1+3\tilde{w})(1+3w)=0.\label{chara} \end{equation} From the characteristic equation (\ref{chara}), we can obtain the eigenvalues for critical point (3) (see Table \ref{table:2}). In Figure \ref{fig:fig16} we demonstrate the stability of critical point (3), depending on $w$ and $\tilde{w}$. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig16} \caption{Diagram of stability of critical point (3), depending on $w$ and $\tilde{w}$. In the gray domains there is the focus type of critical point and the boundaries of this domains is given by the lines $w=\frac{1}{3}(1+6\tilde{w}+\sqrt{-1+9\tilde{w}(2+3\tilde{w})})$ and $w=\frac{1}{3}(1+6\tilde{w}-\sqrt{-1+9\tilde{w}(2+3\tilde{w})})$. In the blue regions there are the saddle type of critical point and is limited by lines $w=-1/3$ and $\tilde{w}=-1/3$. In the white top and bottom regions there are the stable and unstable nodes, respectively.} \label{fig:fig16} \end{figure} The linearized equation (\ref{dyn4})-(\ref{dyn5}) at critical point (3) is given by the following formulas \begin{equation} \begin{array}{c} (x-x_0)'=A_{11}(x-x_0)+A_{12}(z-z_0)\\ =(-1-3w)\left(x+\frac{1+3w}{3(\tilde{w}-w)}\right)+\left(\frac{1+3w}{-3\tilde{w}+3w}\right)(z-1 - 3 \tilde{w}), \end{array} \end{equation} \begin{equation} \begin{array}{c} (z-z_0)'=A_{21}(x-x_0)+A_{22}(z-z_0)\\ =\frac{3}{2}(1+3\tilde{w})(\tilde{w}-w)\left(x+\frac{1+3w}{3(\tilde{w}-w)}\right)+\left(-1-3\tilde{w}\right)(z-1 - 3 \tilde{w}), \end{array} \end{equation} where $x_0=-\frac{1+3w}{3(\tilde{w}-w)}$ and $z_0=1 + 3 \tilde{w}$. The solutions of the above equations are presented by formulas \begin{equation} x=C_1 a^{(-2-3\tilde{w}-3w-\alpha)/2}(a^{\alpha}+C_2)-\frac{1+3w}{3(\tilde{w}-w)}, \end{equation} \begin{equation} z=C_1\frac{3(\tilde{w}-w)}{2+6w}a^{(-2-3\tilde{w}-3w-\alpha)/2}((3\tilde{w}-3w-\alpha)a^{\alpha}+C_2 (3\tilde{w}-3w+\alpha))+1 + 3 \tilde{w}, \end{equation} where $\alpha=\sqrt{-1+9\tilde{w}^2+9w^2-(1+6\tilde{w})(1+6w)}$. It is interesting to check how structure of phase space changes under changing coefficient equation of state for dark matter from 0 (cold dark matter) to $\tilde{w}=1/3$ (hot dark matter). Results of dynamical investigation show that structure of the phase space is preserved under changes of the model parameter. Let us considered some details. \section{Diffusion cosmology with the hot relativistic dark matter} In this section we consider the case with relativistic dark matter ($\tilde{w}=1/3$) and $w=-1$. Then the equation of state for dark matter is in form $p_{dm}=\frac{1}{3}\rho_{dm}$, where $p_{dm}$ is the pressure of dark matter. We get the following equations \begin{equation} x'=x(-4+z+4x),\label{dyn6} \end{equation} \begin{equation} z'=z(1-z+2x).\label{dyn7} \end{equation} We can analyze the critical points in the infinity. In this case we use the Poincar{\'e} sphere. Let $X=\frac{x}{\sqrt{x^2+\delta^2}}$, $\Delta=\frac{\delta}{\sqrt{x^2+\delta^2}}$. For variables $X$ and $\Delta$, we get dynamical system \begin{equation} X'=X\left[-\Delta^2(\sqrt{1-X^2-\Delta^2}+\frac{3}{2}X-\Delta)+(1-X^2) (3X+\Delta-4\sqrt{1-X^2-\Delta^2})\right],\label{poincare3} \end{equation} \begin{equation} \Delta'=\Delta\left[(1-\Delta^2)(\sqrt{1-X^2-\Delta^2}+\frac{3}{2}X-\Delta)-X^2 (3X+\Delta-4\sqrt{1-X^2-\Delta^2})\right],\label{poincare4} \end{equation} where $'\equiv \sqrt{1-X^2-\Delta^2}\frac{d}{d\tau}$. Critical points, for the above equation, are presented in table \ref{table:3}. The phase portrait for the dynamical system (\ref{poincare3})-(\ref{poincare4}) are demonstrated in Figure \ref{fig:fig1}. \begin{table} \caption{Critical points for dynamical system (\ref{poincare3})-(\ref{poincare4}), their type and cosmological interpretation.} \label{table:3} \begin{center} \begin{tabular}{llll} \hline No. & critical point & type of critical point & type of universe \\ \hline \hline 1 & $X_0=0$, $\Delta_0=0$ & saddle & de Sitter universe without diffusion effect \\ 2 & $X_0=2/7$, $\Delta_0=6/7$ & saddle & scaling universe \\ 3 & $X_0=4/5$, $\Delta_0=0$ & unstable node & Einstein-de Sitter universe\\ 4 & $X_0=1$, $\Delta_0=0$ & stable node & static universe\\ 5 & $X_0=4/5$, $\Delta_0=-3/5$ & saddle & static universe\\ 6 & $X_0=0$, $\Delta_0=1$ & unstable node & de Sitter universe with diffusion effect \\ 7 & $X_0=0$, $\Delta_0=-1$ & stable node & de Sitter universe with diffusion effect \\ 8 & $X_0=0$, $\Delta_0=1/\sqrt{2}$ & stable node & de Sitter universe with diffusion effect \\ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig1} \caption{Phase portrait of dynamical system (\ref{dyn4})-(\ref{dyn5}). Note that trajectories for the $\Delta<0$ represent solutions with the negative value of $H$. From the cosmological point of view trajectories representing expanding models with $\Delta>0$ are physical. Critical points (1) represents the de Sitter universe without diffusion effect. Critical point (2) represents the scaling universe. Critical point (3) represents the Einstein-de Sitter universe. Critical points (4) and (5) represent the static universe. Critical point (6) and (8) represent the de Sitter universe with diffusion effect.} \label{fig:fig1} \end{figure} It is interesting to see how different solutions with and without an initial singularity are distributed in the phase space. To answer this question it would be useful to consider a phase space structure of the models under consideration. For this aim we reduce the dynamics to the form of an autonomous 2D dynamical system. In such a system the state variables are dimensionless parameters: the density parameter for radiation dark matter and the parameter $\delta$ characterizing the rate of energy transfer to the dark matter sector. The main advantage of visualization global dynamics on the phase portrait is the possibility to see all solution of the system admitted for all initial conditions. On the phase portrait there is the geometric representation of evolutional paths of both solution types. Critical points are representing asymptotic states of the system, i.e. stationary states. In order trajectories joining different critical points are representing the evolution of the system. Similarly to dynamical investigations presented in our previous paper \cite{Haba:2016swv} we added to the plane circle at infinity via the construction of the Poincar{\'e} sphere. Hence we obtain a compact phase space and consequently a global phase portrait. In Fig. \ref{fig:fig1} we have identified linear solutions without the initial singularity as the representing by saddle critical point (2). In the phase portrait there are critical points at a finite domain as well as located on the boundary at infinity. Note that the phase portrait has no symmetry with respect to the x-axis. Critical point (6) is representing an expanding stationary de Sitter type solution determined by diffusion effects. We denote as typical trajectories starting from this critical point and going toward the de Sitter empty universe. This type of trajectories we called trajectories of type I. In the phase portrait there are also present trajectories of type II. These trajectories are starting from the Einstein-de Sitter universe with the initial singularity and coming toward the de Sitter universe labeled as critical point (8). Looking at the phase portrait one can observe that only critical points of type unstable and stable node (global attractors and global repellers) and saddle appear in the phase space. Therefore the model obtained is structurally stable, i.e. any small change of its r.h.s does not disturb the global phase portrait. Physically this means that corresponding model is realistic. Mathematically this fact has a nice interpretation in the context of the Peixoto theorem \cite{Perko:2001de} that they are generic in this sense because they form open and dense subsets in the space of dynamical systems on the plane. \section{Conclusion} The standard cosmological model ($\Lambda$CDM model) is widely accepted but it has still some problems, namely the cosmological constant problem and the coincidence problem. In the standard cosmological model ($\Lambda$CDM model) it is assumed that all fluids are non-interacting. This implies that the densities of baryonic matter and dark matter are scaling with respect to redshift as $\rho_{\text{dm},0} a(t)^{-3}$ and dark energy has constant density. In this paper we construct a cosmological model in which it is assumed that the process of interaction between sectors of dark matter and dark energy is continuous. Relativistic diffusion describes the transfer of energy to the sector of dark matter. This effect is described by the running cosmological constant and the modification of the standard scaling law of the dark matter density to the form $\rho_{\text{dm},0} a(t)^{-3} + \gamma t a(t)^{-3}$. The dynamics of this model is studied for possible explanations of cosmological puzzles: the cosmological conatant problem and the coincidence problem. In the context of the coincidence problem, our model can explain the present ratio of $\rho_{m}$ to $\rho_{de}$, which is equal $0.4576^{+0.1109}_{-0.0831}$ at a 2$\sigma$ confidence level. In our model, the canonical scaling law of dark matter $(\rho_{dm,0}a^{-3}(t))$ is modified by an additive $\epsilon(t)=\gamma t a^{-3}(t)$ to the form $\rho_{dm}=\rho_{dm,0}a^{-3}(t)+\epsilon(t)$. The analysis of the time dependence of density of dark energy and dark matter, we conclude that the value of effective energy of vacuum runs from an infinite value to a constant value, and the delta amendment to the scaling law goes from zero to zero and being different from zero in a long intermediate period. This characteristic type of behavior is controlled by the diffusion effect. The paper presents a detailed study of the behavior of a state of the system represented by the state variables $(x, \delta)$. In this context, it was natural to consider the diffusion mechanism which controls the change of the ratio of both energy densities and the very dynamics of this process remains in analogy to the description of population changes of competing species \cite{Perez:2013zya}. A crucial role plays the saddle critical point $H a=const$, which is a scaling type of the solution ($\rho_{m}\propto\rho_{de}$). The position of this point cannot be disturbed by a small perturbation (structurally stable point as well as whole system). \acknowledgments{The work was supported by the grant NCN DEC-2013/09/B/ST2/03455. The authors thank prof. Z. Haba and A. Krawiec for remarks and comments.}
\section{Introduction} \label{intro} One of the fundamental aims of high energy physics is to study the properties of matter under extreme conditions i.e. in the domain of high temperature or high density where the hadrons are expected to melt into a plasma of quarks and gluons. Several efforts have been directed for the last two decades mainly to uncover the properties of this plasma at high temperature with zero chemical potential. In the present paper, we mainly focus on the other realm of phase space i.e. the region of dense ultrarelativistic plasma. It has been recently shown that the behavior of quantum liquids in the ultrarelativistic regime is very different from the normal Fermi liquid (FL) behavior. This difference of behavior has recently been exposed both in the context of quantum electrodynamics (QED) and quantum chromodynamics (QCD)\cite{holstein73, bellac97}. It might be mentioned here that for the case of non-relativistic (NR) plasma, the magnetic interaction is supressed in powers of $(v/c)^2$ and hence can be neglected. For the case of NR plasmas; due to the absence of magnetic screening, Fermi liquid (FL) is sufficient to describe all the phenomenon. However for the case of relativistic degenerate plasmas, including the magnetic interactions change the FL picture significantly. Thus it has been seen that for the case of relativistic ultradegenerate plasma, in the vicinity of the Fermi surface, unscreened magnetostatic interactions lead to a logarithmic singularity in the inverse group velocity where the FL picture is no longer applicable \cite{rebhan05}. This non-Fermi liquid (NFL) behavior is manifested in the expression of mean free path (MFP), emissivity of neutrinos and specific heat of degenerate quark matter\cite{pal11,schafer04,holstein73, adhya12}. Motivated with these results, we have recently derived the expressions of the neutrino mean free path (MFP) and the emissivity of the neutrinos from neutron star\cite{adhya12}. In our work, we have incorporated NFL corrections upto next to leading order (NLO) in degenerate quark matter that might exist in the core of the neutron star. All these calculations were previously restricted to the leading order (LO) containing the anomalous $T{ln}(1/T)$ term in these expressions\cite{pal11,schafer04}. We have extended our calculation beyond the known leading logarithmic order and found the appearance of fractional powers in $(T/\mu)$ (where T is the temperature and $\mu$ is the chemical potential of the the degenerate quark matter) in the expressions of MFP and emissivity. As an application, we then show how such corrections affect the neutrino emissivity for which we consider quark direct URCA process and inverse URCA processes given by \cite{iwamoto81}, \begin{eqnarray} &&d\rightarrow u+e^-+\bar{\nu_{e}} \label{dir} \end{eqnarray} \begin{eqnarray} &&u+e^-\rightarrow d+\nu_{e}. \label{inv} \end{eqnarray} Subsequently, we study the cooling behavior of the neutron star (NS). Exploration of the bizarre phenomenon of pulsar kicks i.e. the observed large escape velocities of NS out of supernova remnants has drawn significant attention in recent years. For the past few years, it has been argued that asymmetric neutrino emission is responsible for the pulsar kicks during the evolution of the NS \cite{dorofeev85,sagertarxiv1,sagert08}. The URCA reactions, in presence of a strong magnetic field can give rise to asymmetric neutrino emission as explained in \cite{dorofeev85,sagert08}. Magnetic fields of the order of $10^{15}-10^{19} Gauss$ are known to exist in the core of the NS. Thus the electrons are forced to occupy the lowest Landau level polarising the electron spin opposite to the direction of the magnetic field. The electron polarisation for different conditions of magnetic field and kick velocities has been studied recently by Sagert et. al.\cite{sagert08}. In this work, the authors have studied pulsar acceleration mechanism based on asymmetric neutrino emission from quark direct URCA process. Extension of our calculation of the modified dispersion relation also leads to the modification of the pulsar kick velocity of the neutron star. Additional corrections are included for the effect of external magnetic field on the specific heat capacity of the quarks which leads to further modification of the velocity.\\ In this work, we investigate non-Fermi liquid(NFL) behavior at NLO that enters into the emissivity of the neutrinos from the NS composed of degenerate quark matter core. Such calculation was first done in Ref.\cite{iwamoto81} where the author studied only the Fermi liquid(FL) case. Recently, the calculation was extended to the leading order (LO) NFL effect in Ref.\cite{schafer04} and NLO in Ref.\cite{adhya12}. It is in this context, we visit the problem of the calculation of the kick velocity to see whether such NFL corrections are significant to alter the kick velocity compared to the FL results. In addition, we study the effect of the external magnetic field in the kick velocity of the NS\cite{adhya14}.\\ The ppaer is organised as follows. In section II, we devolop the formalism by calculation of emissivity of neutrinos followed by deriving the expressions of the kick velocity of the NS. We discuss our results in Section III and summarizing in Section IV. \section{Formalism} \subsection*{Emissivity of neutrinos} Now, the emissivity of the neutrinos is given by\cite{iwamoto81}, \begin{eqnarray} \varepsilon=\int \frac{d^{3}p_{\nu}}{(2\pi)^{3}}E_{\nu}\frac{1}{l(-E_{\nu},T)}. \end{eqnarray} where, the mean free path of the neutrinos is given by\cite{iwamoto81}, \begin{eqnarray} \label{mfp01} \frac{1}{l_{mean}^{abs}(E_{\nu},T)}=&&\frac{g^{\prime}}{2E_{\nu}}\int\frac{d^3p_d}{ (2\pi)^3} \frac{1}{2E_d}\int\frac{d^3p_u}{(2\pi)^3} \frac{1}{2E_u}\int\frac{d^3p_e}{ (2\pi)^3 } \frac{1}{2E_e} (2\pi)^4\delta^4(P_d +P_{\nu}-P_u -P_e)\nonumber\\ &&\times|M|^2 \{n(p_d)[1-n(p_u)][1-n(p_e)] -n(p_u)n(p_e)[1-n(p_d)]\}, \end{eqnarray} The Fermi liquid (FL) contribution has been calculated in the works of \cite{iwamoto81}. We know that quasiparticle interactions is responsible for the modification of the dispersion relation given by\cite{bellac97}, \begin{eqnarray} \omega=(E_{p(\omega)} +{\rm Re}\Sigma(\omega,p(\omega))) \end{eqnarray} We mainly focus on the quasiparticle self energy of the quarks in the region of low temperature (T) and high chemical potential ($\mu$) for the calculation of associated quantities. The real part of the quasiparticle self energy ${\rm Re}\Sigma$ is given as \cite{rebhan05}: \begin{eqnarray} &&\rm{Re}\Sigma_{+}(\omega)=-g^2C_Fm\, \Big\{{\epsilon\over12\pi^2m}\Big[\log\Big({4\sqrt{2}m\over\pi \epsilon}\Big)+1\Big]+{2^{1/3}\sqrt{3}\over45\pi^{7/3}}\left({\epsilon\over m}\right)^{5/3} -20{2^{2/3}\sqrt{3}\over189\pi^{11/3}}\left({\epsilon\over m}\right)^{7/3}\nonumber\\ &&-{6144-256\pi^2+36\pi^4-9\pi^6\over864\pi^6}\Big({\epsilon\over m} \Big)^3\Big[\log\left({{0.928}\,m\over \epsilon}\right)\Big] +\mathcal{O}\Big(\left({\epsilon\over m}\right)^{11/3}\Big)\Big\} \end{eqnarray} where $\epsilon=(\omega-\mu)$. Thus we obtain, at the leading order \cite{schafer04}, \begin{eqnarray} \varepsilon_{LO} \simeq \frac{457}{3780}G_{F}^{2}cos^{2}\theta_{c}C_{F}\alpha_{s}\mu_{e}T^{6}\frac{(g\mu)^2}{\pi^2}\text{ln}\Big(\frac{4g\mu}{\pi^{2}T}\Big) \end{eqnarray} which agrees with the result quoted in ref.\cite{schafer04}. Now, we obtain the NLO contribution to the neutrino emissivity as\cite{adhya12}, \begin{eqnarray} \varepsilon_{NLO} \simeq \frac{457}{315}G_{F}^{2}cos^{2}\theta_{c}C_{F}\alpha_{s}\mu_{e}T^{6}\Big[c_{1}T^{2} + c_{2}T^{2/3}(g\mu)^{4/3} - c_{3}T^{4/3}(g\mu)^{2/3} - c_{4}T^{2}\text{ln}\Big(\frac{0.656g\mu}{\pi T}\Big)\Big] \end{eqnarray} where the constants are evaluated as, \begin{eqnarray} c_1 = -0.0036\pi^{2}; c_2 = \frac{2^{2/3}}{9\sqrt{3}\pi^{5/3}}; c_3 = \frac{40\times2^{1/3}}{27\sqrt{3}\pi^{7/3}} \end{eqnarray} and \begin{eqnarray} c_4 = \frac{6144-256\pi^{2}+36\pi^{4}-9\pi^{6}}{144\pi^{4}}. \end{eqnarray} Thus we obtain the expressions for the emissivity of the neutrinos at the leading and the next to leading order. \subsection*{Kick velocity} The pulsar acceleration can be written as a function of the emissivity of the neutrinos, radius of the the quark core. In addition, it also depends on the polarisation of the electron spin and the mass of the neutron star. Thus we obtain\cite{sagertarxiv1}, \begin{eqnarray} dv=\frac{\chi}{M_{NS}}\frac{4}{3}\pi R^{3}\epsilon dt \label{diffv} \end{eqnarray} Now using the cooling equation, we can rewrite the equation in terms of specific heat of the quark matter core. \begin{eqnarray} C_{v}dT=-\epsilon dt \end{eqnarray} In recent literatures, the calculation of specific heat of quark matter is \cite{holstein73, ipp04}, \begin{eqnarray} C_{v}\Big|_{FL}&=&\frac{N_{c}N_{f}}{3}\mu_{q}^{2}T \end{eqnarray} Thus the Fermi liquid contribution to the pulsar kick velocity as reported in \cite{sagert08} can be recast into the following form, \begin{eqnarray} v\Big|_{FL}&\simeq&\frac{8.3 N_{C}N_{f}}{3}\Big(\frac{\mu_{q}}{400MeV}\frac{T}{1MeV}\Big)^{2}\Big(\frac{R}{10km}\Big)^{3}\frac{1.4M_{\odot}}{M_{NS}}\chi\frac{ km}{s} \label{vfl} \end{eqnarray} Now extending the calculation of the pulsar kick velocity by incorporating the effect of the non-Fermi liquid effects into the specific heat can be written by using the modified dispersion relation. Thus the specific heat of the degenerate quark matter up to NLO is given by \cite{holstein73,ipp04}, \begin{eqnarray} \label{spec-heat} \label{finalcv} C_v\Big{|}_{total}=C_v\Big{|}_{FL}+C_v\Big{|}_{LO}+C_v\Big{|}_{NLO} \end{eqnarray} where, \begin{eqnarray} C_v\Big{|}_{LO}=N_g{g_{eff}^2\mu_q^2 T\over36\pi^2}\left(\ln\left({4g_{eff}\mu_q\over\pi^2T}\right)+\gamma_E -{6\over\pi^2}\zeta^\prime(2)-3\right) \end{eqnarray} and \begin{eqnarray} C_v\Big{|}_{NLO}&=&N_g\Big[-40{2^{2/3}\Gamma\left({8\over3}\right)\zeta\left({8\over3}\right)\over27\sqrt{3}\pi^{11/3}} T^{5/3}(g_{eff}\mu_q)^{4/3} +560{2^{1/3}\Gamma\left({10\over3}\right)\zeta\left({10\over3}\right) \over81\sqrt{3}\pi^{13/3}}T^{7/3}(g_{eff}\mu_q)^{2/3}\nonumber\\ &&+{2048-256\pi^2-36\pi^4+3\pi^6\over180\pi^2}T^3 \Big[\ln\left({g_{eff}\mu_q\over T}\right)+\bar c-{\frac{7}{12}}\Big]\Big] \end{eqnarray} where the coupling constant $g$ is related to $g_{eff}$ as, \begin{equation}\label{geffdef} g^2 = \frac{2\ g^{2}_{eff}}{N_{f}}, \end{equation} and $C_v\Big{|}_{total}$ is the sum of the FL, LO and NLO contribution to the specific heat of the quark matter. Thus we obtain the LO and NLO contributions to the kick velocity as\cite{adhya14}, \begin{eqnarray} v\Big|_{LO}\simeq\frac{16.6 N_{C}N_{f}}{3}(C_F\alpha_s)\Big(\frac{\mu_{q}}{400MeV}\frac{T}{1MeV}\Big)^{2}\Big(\frac{R}{10km}\Big)^{3}\frac{1.4M_{\odot}}{M_{NS}}\chi\Big[c_1+c_2\ln\Big(\frac{g\mu_q\sqrt{N_f}}{T}\Big)\Big]\frac{ km}{s} \label{vfllo} \end{eqnarray} where $C_F=(N_c^2-1)/(2N_c)$ and the constants are $c_{1}=-0.13807$ and $c_{2}=0.0530516$. \begin{eqnarray} v\Big|_{NLO}&\simeq&\frac{16.6 N_{C}N_{f}}{3}\Big(\frac{\mu_{q}}{400MeV}\frac{T}{1MeV}\Big)^{2}\Big(\frac{R}{10km}\Big)^{3}\frac{1.4M_{\odot}}{M_{NS}}\chi(C_{F}\alpha_{s})\nonumber\\ &\times&\Big[a_1\Big(\frac{bT}{\mu_q} \Big)^{2/3}+a_2\Big(\frac{bT}{\mu_q}\Big)^{4/3} + \Big[a_3+a_4\ln\Big(\frac {\mu_q}{b T}\Big)\Big]\Big(\frac{bT}{\mu_q}\Big)^2\Big]\frac{km}{s} \end{eqnarray} where the constants are evaluated as, \begin{eqnarray} a_1=-\frac{12\pi\times0.04386}{8};a_2=\frac{12\pi\times0.04613}{10} ;a_3=-2.4162;a_4=-0.4595 \end{eqnarray} and \begin{eqnarray} b=\frac{2\pi}{\sqrt{N_f}g}. \end{eqnarray} The net contribution to the pulsar kick velocity upto NLO is obtained by the sum of the Fermi liquid result and the non-Fermi liquid correction upto NLO: \begin{eqnarray} v\Big{|}_{total}=v\Big|_{FL}+v\Big|_{LO}+v\Big|_{NLO} \label{vtotal} \end{eqnarray} As high magnetic field exists in the core of the neutron star, therefore the modification of the specific heat capacity in presence of such high magnetic field should also be considered. Thus, we have computed the specific heat capacity of the quark matter in presence of such high magnetic field as\cite{adhya14}, \begin{eqnarray} C_{v}\Big{|}_{FL}^{B}=\frac{N_CN_fTm_q^2}{6}\Big(\frac{B}{B_{cr}^q}\Big) \label{cvFLB} \end{eqnarray} Incorporating the effect of the NFL behavior in the specific heat capacity, we obtain, \begin{eqnarray} C_v\Big{|}_{LO}^B\simeq\Big(\frac{N_CN_fC_f\alpha_s}{36\pi}\Big)m_q^2\Big(\frac{B}{B_{cr}^q}\Big)T\Big[(-1+2\gamma_E)+2log\Big(\frac{2m_B}{T}\Big)\Big] \end{eqnarray} The NLO contribution to the specific heat capacity is obtained as, \begin{eqnarray} C_v\Big{|}_{NLO}^B\simeq&&\Big(\frac{N_CN_f}{3}\Big)(C_f\alpha_s)\Big(m_q^2\frac{B}{B_{cr}^q}\Big)T\Big[c_1\Big(\frac{T}{m_B}\Big)^{2/3}\nonumber\\ &+&c_2\Big(\frac{T}{m_B}\Big)^{4/3}+c_{3}\Big(\frac{T}{m_B}\Big)^{2}\Big(c_{4}-\log\Big(\frac{T}{m_B}\Big)\Big)\Big] \end{eqnarray} where the constants are \cite{adhya14}, \begin{eqnarray} c_1=-0.2752; c_2=0.2899; c_3=-0.5919; c_4=5.007. \end{eqnarray} The Debye mass ($m_B$) in the QCD case in presence of magnetic field is obtained as follows\cite{rebhan05}, \begin{eqnarray} m_B^{2}=\frac{N_fg^2m_q^2}{4\pi^2}\Big(\frac{B}{B_{cr}^q}\Big) \end{eqnarray} The pulsar kick velocity obtained taking into account the magnetic field effect on the specific heat capacity of the quarks reads as \cite{adhya14}, \begin{eqnarray} v\Big|_{FL}^B &\simeq&\frac{4.15 N_{C}N_f}{3}\Big(\frac{\sqrt{m_q^2(B/B_{cr}^q)}}{400MeV}\frac{T}{1MeV}\Big)^{2}\Big(\frac{R}{10km}\Big)^{3}\frac{1.4M_{\odot}}{M_{NS}}\chi\frac{ km}{s} \label{vBFL} \end{eqnarray} By implementing the anomalous NFL effect, we obtain the LO contribution to the kick velocity as\cite{adhya14}, \begin{eqnarray} v\Big|_{LO}^B &\simeq&\frac{8.8 N_{C}N_f}{3}(C_f\alpha_s)\Big(\frac{\sqrt{m_q^2(B/B_{cr}^q)}}{400MeV}\frac{T}{1MeV}\Big)^{2}\Big(\frac{R}{10km}\Big)^{3}\nonumber\\ &&\times\frac{1.4M_{\odot}}{M_{NS}}\Big[0.0635+0.05\log\Big(\frac{m_B}{T}\Big)\Big]\chi\frac{ km}{s} \label{vBLO} \end{eqnarray} We have extended our calculation beyond the LO in NFL correction. The NLO correction to the kick velocity is calculated as \cite{adhya14}, \begin{eqnarray} v\Big|_{NLO}^B&\simeq&\frac{8.3 N_{C}N_{f}}{3}\Big(\frac{B}{B_{cr}^q}\Big)\Big(\frac{m_{q}}{400MeV}\frac{T}{1MeV}\Big)^{2}\Big(\frac{R}{10km}\Big)^{3}\frac{1.4M_{\odot}}{M_{NS}}\nonumber\\ &&\times\chi(C_{F}\alpha_{s})\Big[a_1\Big(\frac{T}{m_B} \Big)^{2/3}+a_2\Big(\frac{T}{m_B}\Big)^{4/3}\nonumber\\ &&+ \Big[a_3+a_4\ln\Big(\frac {m_B}{T}\Big)\Big]\Big(\frac{T}{m_B}\Big)^2\Big]\frac{km}{s} \label{vBNLO} \end{eqnarray} The constants are evaluated as, \begin{eqnarray} a_1=-\frac{12\pi\times0.04386}{8};a_2=\frac{12\pi\times0.04613}{10} ;a_3=-2.4162;a_4=-0.4595 \end{eqnarray} It is to be noted that the magnetic interaction which have a long range character leads to the anomalous $T^2\log T^{-1}$ term in the pulsar velocity. The effect of the electron polarisation fraction for different condition of the magnetic field is used for the estimation of the kick velocity of the NS which we present in the Results section. \section{Results and discussions} \label{sec-2} In this section assuming a quark chemical potential of $500 MeV$ and electron chemical potential of $15 MeV$, we have plotted the variation of the emissivity of the neutrinos with temperature present at the core of the neutron star. In the left panel of Fig.\ref{fig1} we show comparison of the neutrino emissivity with temperature for the FL, LO and NLO cases. The right panel shows a comparison of the cooling nature of the neutron star when the core is assumed to be QGP with that with normal nuclear matter. In this section, we have also presented an estimation of the radius of the NS for different temperatures. The numerical computation of the kick velocity has been performed with different values of the polarisation fraction as in \cite{sagert08, sagertarxiv1} for weak and strong external magnetic fields. In the left panel of Fig.\ref{fig2}, we have plotted the quark matter core radius with the temperature considering the electrons are in highly polarised condition whereas in the right panel, we have expressed our results of the kick velocity with temperature assuming that the electrons are partially polarised. We note that the inclusion of the medium modified propagator increases the kick velocity substantially for the LO case. However, when we extend our results to the NLO case, we find that there is marginal change in the value of the kick velocity compared to the LO case. \begin{figure}[h] \centering \includegraphics[width=7cm,clip]{emissivityv1.eps}~~~~~~~~\includegraphics[width=7cm,clip]{cooling.eps} \caption{The left panel shows comparison of the emissivity of neutrinos for FL, LO and NLO NFL result with temperature of the quark matter core. The right panel shows the comparison between the cooling nature of the NS with neutron matter (dash dotted line) and quark matter. The dotted line represents the FL contribution, the solid line represents the NFL NLO correction. } \label{fig1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=7cm,clip]{kickvelkihighcvB.eps}~~~~~~~~\includegraphics[width=7cm,clip]{vkickvsTkless.eps} \caption{The left panel shows the comparison between radius and temperature of the core of NS for the case of high magnetic field. The right panel shows the comparison between kick velocity and temperature of the core for weak magnetic field. } \label{fig2} \end{figure} \section{Summary and discussions} The expression of emissivity of neutrinos by incorporating NFL effects upto next-to-leading order (NLO) is calculated. It is found that the emissivity contains terms at the higher order which involves fractional powers and logarithms in $(T/\mu)$. It is also found that there is increment of emissivity due to NLO corrections over FL and LO results. On examining the cooling behavior it is seen that the cooling is affected moderately compared to the simple FL case. The results show that the pulsar kick velocity receives significant contribution from the logarithmic corrections. Further, incorporation of results upto next-to-leading order to include plasma/quasiparticle effects which are anomalous (NFL) effects has been done. The contribution from electron polarisation for different cases has been taken into account to calculate the velocities. The presence of the logarithmic term and the magnetic field considerably enhances the kick velocity of the neutron star. \subsection*{Acknowledgements} One of the authors [SPA] would like to thank UGC, India for providing the fellowship (Sr. no.: 2120951147) during the tenure of this work.
\section*{Acknowledgments} \label{sec:acks} We dedicate this work to Stewart Grant, Sophia Peng, Pere Ubu, Yvonne Li, Richard Terdiman, and Kevin Langowski. While unaffiliated with this work's content, they inspired the clarity and strength required for the project to reach fruition. \section{Dictionary Construction} \label{sec:dictdetail} This paper used several dictionaries. In order to ensure the quality of the guessed text, we manually sourced and refined the following: \begin{enumerate} \item \textit{Word} was sourced from the linux spellchecker: \url{/usr/share/dict/american-english} filtered for pronouns (first letter upper case), stop words sans pronouns, and words containing punctuation and digits. Single quotes were made to be curly possessive and straight to match the document under study. \item \textit{FN} were sourced from from the US social security agency~\cite{ssaNames} starting from 1880. For both this and the last name dictionary names like ``McCarthy'' were made to be correctly capitalized. \item \textit{LN} were sourced from the year 2000 US Census~\cite{usCensus}. \item \textit{Ctry} was sourced from wikipedia lists of countries and alternative country names~\cite{altCntry,demonymCntry}, then refined by hand to ensure completeness with repect to potential forms, e.g. United States vs. The United States. For fun we added intialisms. \item \textit{Rgn} is from the NGA GEOnet Names Server (GNS)~\cite{geoNet}. We filter for full name and BGN-approved local official name. We restrict our results to A, P,and L feature categories. \item \textit{Natl} is sourced from wikipedia lists of demonyms~\cite{demonymCntry,demonymList}. \end{enumerate} \section{Estimating Mutual Information} \label{sec:mutinfcalc} In Section~\ref{sec:mutinfmethod}, we defined a stochastic process with respect to a given text and dictionary where a randomly chosen occurrence of a dictionary word in the text is replaced by a another word from the same dictionary chosen at random and then redacted. The amount of information leaked by the redacted document $Y$ about the redacted word $X$ is given by the mutual information $I(X,Y)$. We estimated $I(X,Y)$ by sampling a set of occurrences of dictionary words and then simulated redaction of all possible words from the dictionary, measuring the entropy of the resulting document distribution. The justification for this is as follows. First, let $L$ be a random variable representing the location of the occurrence of the dictionary word chosen in step 1 of the stochastic process in Section~\ref{sec:mutinfmethod}. Note that $H(Y|L,X) = 0$ (no uncertainty about $Y$ given $L$ and $X$), since the formatting and redaction in steps 3 and 4 are deterministic, and $H(L|X,Y) = H(L|Y) = 0$ (no uncertainty about the location of the redaction given the document after redaction). Using these two facts, Using these two facts, \begingroup\small \begin{align*} I(X&;Y) = \\ &= H(X) - H(X|Y) \\ &= H(X) - H(X,Y) + H(Y) \\ &= H(X) - H(L,X,Y) + H(L|X,Y) + H(L,Y) - H(L|Y) \\ &= H(X) - H(Y|L,X) - H(L,X) + 0 + H(L,Y) - 0 \\ &= H(X) - 0 - H(L) - H(X) + H(Y|L) + H(L) \\ &= H(Y|L) \\ &= \sum_\ell Pr[L=\ell]\cdot H(Y|L=\ell). \end{align*} \endgroup Thus, $I(X;Y)$ is the average, taken over redaction locations, of the entropy of $Y$, that is, the entropy of the distribution of possible documents after redaction. For a large corpus and large dictionaries, calculating this quantity exactly is expensive. Instead of calculating the exact value, we sample $H(Y|L=\ell)$ at possible redaction sites. \section{Redaction Location}\label{sec:redact-loc-appdx} \begin{table} \centering \caption{Accuracy of locating nontrivial redactions using a box-walk routine vs. Timothy B. Lee's method, which records graphics state draw commands and looks for rectangles. Here we seperate redactions into ``easy'' and ``hard'' categories, where ``hard'' redactions are, for example, boxes drawn in the document with no surrounding text, or redactions that extend across and entire line and thus are too long to attack.} \label{tab:redactfind} \include{tab/redact-find} \end{table} In Table~\ref{tab:redactfind} we report the comparison between our algorithm for locating nontrivial redactions and the one provided by Timothy B. Lee~\cite{timblee}. We give an outline of this algorithm in Figure~\ref{fig:nontriv-alg}. This algorithm, we note, is optimized to find boxes with respect to \emph{other non-redacted words} on the page, whereas Lee's method looks for rectangle draw commands. Lee's method is therefore better at detecting redactions with no surrounding text, but these redactions are less useful for the deredaction attacks on nontrivial redactions presented in this paper. We also added a second step to the x-ray algorithm~\cite{xrayTool} for locating trivial redactions which removed a large number of false positives from the results (algorithm in Figure~\ref{fig:xray-alg}). These bash scripts call into C, python, and ruby for various subroutines. The ``pts'' command is our own, and handles lifting a PDF specification into an intermediate representation consisting of glyph coordinates at the scale of displacment units, mentioned briefly in Section~\ref{sec:maxray}. \url{find\_redaction\_box.py} and \url{trivial\_redaction\_loc.py} work as described earlier in this paper. The former checks every sufficiently large space between two words for a redaction rectangle and the latter compares the pixels of the two rasterized images at glyph coordinates piped in by \url{get\_word\_coords.rb}. \lstset{ % language=bash, basicstyle=\ttfamily\footnotesize, numbers=none, numberstyle=\ttfamily\footnotesize, stepnumber=1, numbersep=5pt, backgroundcolor=\color{white}, showspaces=false, showstringspaces=false, showtabs=false, frame=single, tabsize=2, captionpos=b, breaklines=true, breakatwhitespace=false, escapeinside={\%*}{*)} } \begin{figure} \begin{lstlisting} tf=$(mktemp -u) # Removes images from PDF cpdf -draft "$1" -o "$tf".pdf-0 $SRC/painting/remove-tjs.sh "$tf".pdf-0 "$tf".pdf convert -background 'rgb(255,255,255)' \ -colorspace gray -normalize \ -density 300 -quality 100 -depth 8 \ "$tf".pdf "$tf".ppm 2>/dev/null # parses out all the PDF text state and # walks redaction boxes $SRC/c-src/pts "$1" 1 | $SRC/location/find_redaction_box.py "$tf".ppm \end{lstlisting} \caption{Nontrivial redaction location algorithm} \label{fig:nontriv-alg} \end{figure} \begin{figure} \begin{lstlisting} # convert all text to black "$DIR"/lib/camlpdf -blacktext "$1" -o "$1"-a # remove all text from one pdf gs -q -o "$1"-b \ -sDEVICE=pdfwrite -dFILTERTEXT "$1"-a # create two ppms pdftoppm -singlefile -r 300 "$1"-a "$1"-a pdftoppm -singlefile -r 300 "$1"-b "$1"-b # get coordinates of each word and compare pixels for differences "$DIR"/get_word_coords.rb "$1"-a | "$DIR"/trivial_redaction_loc.py \ "$1"-a.ppm "$1"-b.ppm \end{lstlisting} \caption{Two-pass algorithm for locating trivial redactions} \label{fig:xray-alg} \end{figure} \section{PDF Workflows} We peformed an analysis of several different PDF document production workflows across a variety of operating systems and software. Our results are reported in Table~\ref{tab:pdfflows}. This table demonstrates the wide variety of of schemes and transfer functions for displacements in different software sets. Each row grouping represents a different general environment and software creator for the production of the PDF document, and each row represents a specific workflow---mostly which buttons are pressed during the PDF's creation. \begin{table*} \centering \caption{Several different possible PDF workflows. Each entry in the column on the left should be read from left to right, indicating the stages used to produce the document. For example, ``Edge/Firefox, Print/PDFViewer, Save'' can be interpreted as ``using the edge or firefox browsers’ PDF viewer or print dialog, hit save'' and results in the right column's displacements.} \label{tab:pdfflows} \include{tab/flows} \end{table*} \section{Snowden Adjustment Differences} \label{sec:snowden-diffs} We note the letters ``i'' and ``l'' in the snowden document see a displacement of 1 less than the expected MS Word displacement. We were unable to determine exactly \emph{why} these letters were maladjusted and hypothesize it is due to differences in software versioning. Other small errors in our model occurred for this document, but these were statistically insignificant ($\leq5\%$ of characters on the page in the target font) with the exclusion of the ``Albania'' case. \section{Unknown Adobe OCR Displacements} We note there was initially no exact match for any congressperson from the state of Massachusetts (case 4 of the Manafort documents). There was a displacement that could not be inferred from the line in question inside of Bill Keating's name. To discover this, we iterated through \emph{every combination} of displacement values possible given \emph{all} displacements for a given character on the page. This resulted in 729 guesses for Keating (20 of which were exact) and 1,577,097 guesses total. The only other congressperson to match was Ron Barber, 1 of whose guesses was exact out of 81, but Barber's associated state was incorrect. All deredaction results in the Manafort case study were reconfirmed by this displacement combinatin check method. \section{The Acrobat Pro Displacement Scheme} \label{sec:acro-pro-deets} We can divide the effects of Acrobat Pro on PDF displacements along into two functionalities: document editing and document OCR. Clicking on the text of an existing PDF in Acrobat does not change any positioning information. However, when a user edits a text object (typically one line of text) Acrobat sets all the object's displacements to 0, making the scheme unadjusted. During OCR, Acrobat (1) creates text objects and (2) adds noise to otherwise unadjusted text. Acrobat's OCR algorithm creates displacement noise by adding character and word spacing operators to each word, starting at the first letter of each word. The former, \texttt{Tc}, adds a small displacement to each character in the word. The latter, \texttt{Td}, adjusts the word's x,y coordinates relative to the end of the prior word. When redaction tools remove text metadata, the metadata may include these spacing operators. Therefore, a redaction's width is not necessarily equivalent to the removed glyphs' cumulative advance width, posing an additional challenge for deredaction. However, we found these operators' parameters leak via two sidechannels: \begin{enumerate} \item \textbf{Tc}: This operator also applies to the successor space character of the word. Since this space is rarely redacted, the applied Tc command remains in the PDF after redaction. We validated this for all redaction tools in Section~\ref{sec:tools}. \item \textbf{Td}: All of Section~\ref{sec:tools}'s redaction tools draw a box after removing metadata to indicate a redaction. This black or white box's coordinates match those of the first glyph in the redacted word \emph{after} the Td adjustment, revealing the redaction's exact width. If redaction removes the word's predecessor space character, this sidechannel is removed. \end{enumerate} As a result, if a document is run through Acrobat's OCR before redaction and these sidechannels are not removed. Redactions of documents produced by Acrobat's OCR are therefore not much more secure than PDFs with an unadjusted displacement scheme. \lstset{ % language=C++, basicstyle=\ttfamily\footnotesize, numbers=none, numberstyle=\footnotesize, stepnumber=1, numbersep=5pt, backgroundcolor=\color{white}, showspaces=false, showstringspaces=false, showtabs=false, frame=single, tabsize=2, captionpos=b, breaklines=true, breakatwhitespace=false, escapeinside={\%*}{*)} } \begin{figure} \begin{lstlisting} int i = 0; int trackingAdj = 0; do { int accumulatedDiff = 0; int lastNewAdj = 0; int totalWidth = 0; int amountAdjustedSoFar = 0; totalWidth += fontScaledWidths[i]; int newAdjustment = PIXEL_W(totalWidth); accumulatedDiff = pixelWidths[i] - newAdjustment; pixelWidths[i] = amountAdjustedSoFar = lastNewAdj = newAdjustment; uncorrectedPixelWidths[i] = newAdjustment; i++; if (i == numChars) break; do { totalWidth += fontScaledWidths[i]; if (totalWidth > WIDTH_BRKPOINT) break; int origAdjustment = pixelWidths[i]; int newAdj = PIXEL_W(totalWidth); newAdj -= amountAdjustedSoFar; int adjustmentDifference = origAdjustment - newAdj; trackingAdj = adjustmentDifference - accumulatedDiff; if (adjustmentDifference != accumulatedDiff) { int v28 = trackingAdj & 1; int v50 = trackingAdj & 1; if (trackingAdj <= 0) { trackingAdj >>= 1; if (adjustmentDifference < -accumulatedDiff) trackingAdj += v50; if (-newAdj >= trackingAdj) trackingAdj = -newAdj; } else { trackingAdj >>= 1; if (accumulatedDiff < -adjustmentDifference) trackingAdj += v28; if (lastNewAdj < trackingAdj) trackingAdj = lastNewAdj; } } pixelWidths[i - 1] -= trackingAdj; int newTrackAdj = newAdj + trackingAdj; amountAdjustedSoFar = amountAdjustedSoFar + newAdj; accumulatedDiff = origAdjustment - (newTrackAdj); pixelWidths[i] = newTrackAdj; uncorrectedPixelWidths[i] = newTrackAdj; lastNewAdj = newAdj; i++; } while (i < numChars); } while (i < numChars); \end{lstlisting} \caption{Word WYSIWYG width adjustment method.} \label{fig:wysiwyg-alg} \end{figure} \begin{figure} \begin{lstlisting} if (msword_year <= 2016) { int leadingSpace = 1; float dev = 0; float ttf = 0; for (int j = i + 1; j < vs->size(); j++) { if (vs->getChar(j) == ' ' && leadingSpace) { continue; } else if (leadingSpace) { leadingSpace = 0; } float t = textSpaceWidths2007[j] / 1000; double d = deviceWidths[j] / msFSize2007; ttf += t; dev += d; double disp = ttf - dev; if ((disp > 0.003 || disp < -0.003) && i != vs->size() - 1) { int adj = disp * 1000 + 0.5; vs->setDisplacement(j, adj); ttf = dev = 0; } else { vs->setDisplacement(j, 0); } } } else { int leadingSpace = 1; float dev = 0; float ttf = 0; float t = 0; float d = 0; double disp = 0; for (int j = i + 1; j < vs->size(); j++) { if (vs->getChar(j) == ' ' && leadingSpace) { continue; } else if (leadingSpace) { leadingSpace = 0; } t = textSpaceWidths2019[j] / 1000; d = deviceWidths[j] / msFSize2019; ttf += t; dev += d; disp = ttf - dev; if (((disp > 0.003) || (disp < -0.003)) && i != vs->size() - 1) { int adj = disp * 1000 + 0.5; vs->setDisplacement(j, adj); ttf = dev = 0; } else { vs->setDisplacement(j, 0); } } } \end{lstlisting} \caption{Adjustment routine for Word's displacment scheme} \label{fig:adjust-alg} \end{figure} \begin{figure} \begin{lstlisting} std::vector<double> Office::computeEditAdjustments(VectorString *vs) { std::vector<double> editAdjustments; int totalDots = 0; double totalDevWidth = 0; for (int i = 0; i < numChars; i++) { totalDots += (i == numChars - 1) ? uncorrectedPixelWidths[i] : pixelWidths[i]; totalDevWidth += (textSpaceWidths2019[i] - vs->getDisplacement(i)) * (msFSize2019 / 1000); double realWidth, editedWidth, adj; realWidth = totalDevWidth + dotsToPdfUnits2019(600); editedWidth = roundToDigits(dotsToPdfUnits2019(totalDots + 600), 5); adj = (realWidth - editedWidth) / (msFSize2019 / 1000); editAdjustments.push_back(adj); } return editAdjustments; } std::vector<double> Office::edit(VectorString *vs, std::vector<double> eAdjs, int i) { double disp; disp = roundf(eAdjs[i] * (msFSize2019 / 1000) * 1000) / 1000 / (msFSize2019 / 1000); vs->addDisplacement(i, disp); adjust(vs, i); return computeEditAdjustments(vs); } void Office::editSuffix(std::vector<double> savedEAdjs, VectorString savedVs, VectorString *suf) { double guessDisp; /* Current TJ displacement */ double checkDisp; /* TJ displacement of prefix or suffix */ double adjDisp; /* Edit adjustment displacement */ bool matches = true; for (int i = 0; i < suf->size() - 1; i++) { int ind = savedVs.size() - suf->size() + i; guessDisp = roundf(10 * savedVs.getDisplacement(ind)); checkDisp = roundf(10 * suf->getDisplacement(i)); if (ind == savedVs.size() - 1) { savedVs.setDisplacement(ind, suf->getDisplacement(i)); break; } adjDisp = roundf(10 * savedEAdjs[ind]); if (guessDisp + adjDisp == checkDisp) { savedEAdjs = edit(&savedVs, savedEAdjs, ind); } } for (int i = 0; i < savedVs.size() - 1; i++) { if (savedVs.getChar(i) == L'-') { savedEAdjs = edit(&savedVs, savedEAdjs, i - 1); savedEAdjs = edit(&savedVs, savedEAdjs, i); } } } \end{lstlisting} \caption{Microsoft Word edit history positioning scheme algorithm.} \label{fig:edit-alg} \end{figure} \begin{figure} \begin{lstlisting} for (int i = 0; i < numChars; i++) { fontScaledWidth = widths[i] * fontSize * 2; fontScaledWidths.push_back(fontScaledWidth); if (msword_year == 2007) { float textSpaceW = roundf(roundf(((float) widths[i]) / UNIT_TEXT_SPACE_2007 * 10000) / 10); textSpaceWidths2007.push_back(textSpaceW); } else { double textSpaceW = roundf(((double) widths[i]) / UNIT_TEXT_SPACE_2019 * 1000.0); textSpaceWidths2019.push_back(textSpaceW); } uncorrectedPixelWidths.push_back(0); } if (justif) { justify(); } computeWYSIWYG(); \end{lstlisting} \caption{Word initialization of widths.} \label{fig:initalg} \end{figure} \subsection{Microsoft Word Displacement Scheme Details} \label{sec:mic-word-deets} While other displacement schemes leak proportional width information, any accumulation of information conditioned on redacted glyphs potentially leaks information. One such case is the WYSIWYG behavior of Microsoft Word PDF writers dating back to 2007. Appendix~\ref{sec:word-disp-appdx} discusses \maxray's positioning scheme models. Additional redacted information leaks occur because Microsoft Word's \emph{Save As PDF} tool converts between two glyph coordinate representations. When users edit or create a Word document, they operate on a virtual document representation, internal to Word. This representation determines how Word displays the document in the graphical user interface (GUI). Note, however, the resolution of the user's screen and other environmental factors do not effect this virtual representation. Included in the virtual document's representation are a set of \emph{internal widths} used to represent glyphs' sizes. When Word writes a PDF file, it converts this virtual representation into a PDF specification. The PDF file contains a seperate glyph width mapping, typically matching the embedded TrueType font file. For unnecessary reasons, a small amount of error exists between the two representations' rendering specifications. A glyph's virtual width representation can be smaller or larger than that same glyph's PDF width specification. To account for this error, Word's PDF document writer accumulates each glyph's width error from left to right across each line of text twice. The first pass modifies the internal representation of each character's width according to the difference between the current internal width and the TTF width of the line's prefix up to the current position, at around a 600 DPI resolution. In the second pass, a second left-to-right accumulation occurs, accounting for error between the widths output by the first pass ad the TTF line width, converted to displacement units (typicaly 1/1000th of the font size at 72 DPI). When the accumulated error hits a three displacement unit threshold, Word writes a displacement to the PDF and resets the accumulator. In summary, the font metrics of the line to the left of any given threshold value effect the accumulated error, leaking redacted information via the redaction's width and non-redacted glyph displacements.\footnote{ Monospace fonts do not contain conversion width errors. } For example, a redacted glyph with a 10 displacement unit error will overflow the accumulator and the glyph's effective PDF width will be its original advance width plus a displacement between 7 and 13 units depending on the accumulator's state before overflow. We note the line's specific character ordering affects the accumulator state as different orderings will lead to different accumulator overflow positions. For example, the accumulated error could fluctuate between postive and negative three displacment units for several characters and then overflow. Both the point of overflow and the magnitude are conditioned on prior glyphs' modification of the accumulated error value. For example, in Times New Roman, the digits 0 through 9 have no error and do not contribute to the accumulator. However \emph{a lack} of displacement can also leak information about the redacted content. Additionally, due in part to the algorithm's first pass, smaller fonts leak more information as they have more characters per line and more threshold opportunities. We validated \maxray's models for Word versions between 2007 and 2019 using several fonts on hundreds of lines of text from Wikipedia, hundreds of manual tests, and thousands of real-world cases in Section~\ref{sec:wild}. Section~\ref{sec:wild} describes \maxray's process for correctly identifying Word positioning schemes. We also note that Microsoft Word's positioning scheme metadata depends on the Word document's edit history. Appendix Figure~\ref{fig:edit-alg} gives code for edit history modeling. Not all divisions effect glyph positioning and deciding which glyph positions to apply edits to requires a combinatoric search, so the bulk evaluations of Sections~\ref{sec:eval} and~\ref{sec:wild} do not model edits, while Section~\ref{sec:cases} does model edits for precision. \subsubsection{Word Displacement Algorithms}\label{sec:word-disp-appdx} The first routine (Figure~\ref{fig:initalg}) intializes a set of widths from two files, widths corresponds to TTF widths, and pixelWs are Word's internal widths for each character. It then initializes the WYSIWYG sizes. The next routine (Figure~\ref{fig:wysiwyg-alg}) handles calculating the WYSIWYG widths for each character by accumulating errors between the two width representations, consisting of a tested loop that checks sets of runs of characters: Then (Figure~\ref{fig:adjust-alg}), once an actual line of text needs to be adjusted, the deviceWidths initialized during the WYSIWYG modification are checked for overflow. The types of floating point variables, e.g. float vs. double, are precisely chosen and must be identical to the original algorithm or the result will be incorrect. \section{Latex, LibreOffice, Other PDF Producers} \label{sec:other-prod} \textbf{LibreOffice} LibreOffice~\cite{libreOffice} includes a PDF writer as part of its Visual Class Library (VCL), and, at the time of writing, embeds displacements in the drawHorizontalGlyphs method of the PDFWriterImpl class. This method appears to be similar to Microsoft Word in function: displacements are applied whenever the native advance width of the PDF glyph does not match the Pixel x,y coordinates that LibreOffice uses internally, derived from the SalLayout (layout engine) class. However, LibreOffice does not use a second width map for the characters in a given font, and therefore likely leaks less redacted information than Microsoft Office's positioning scheme. \textbf{\LaTeX\ (TeX Live)} The determination of displacements inside \LaTeX\ is dependent upon the particular flow that is used~\cite{latexSource}. For the pdfTeX flow, interglyph displacements are decided by the pdf\_begin\_string procedure. The \LaTeX\ glyph placement algorithm is quite complex, however, our analysis did find one section of the code for determining displacments appears to round the difference between the current horizontal glyph render position and the start of the current text object. Depending on the precision of the rounding, this could leak additional information about the order of glyphs used. Other flows, such as XeLaTeX, can optionally include more complex calculations that determine how to shrink and expand glyphs to fit a certain amount of space and may leak large amounts of redacted information. However, \LaTeX\ redactions are uncommon. \textbf{Other Producers} We found other producers, such as iText, where the default is unadjusted text objects, are accounted for by the schemes presented in the main body of this paper (Section~\ref{sec:pdftext}). \subsection{Document Scanners} \label{sec:doc-scanners} Many document scanners also come with OCR features. We examined the output of three different document scanners: a Canon MP Navigator EX, Xerox AltaLink C8145, and HP MFP M227fdn. Of the three, the Canon and Xerox machines provided built-in software for document OCR. We examined the Canon's positioning scheme first, and found redactions on PDFs produced by this workflow to be equally as vulnerable to deredaction as unadjusted schemes. The Xerox machine was not as simple: the Xerox machine's OCR text objects are positioned stochastically without space characters in between them. After testing several redaction tools on the Xerox document, we found it was as vulnerable as a document with an unadjusted scheme as long as redaction was performed by removing glyphs rather than entire text objects. \section{Case Studies} \label{sec:cases} In this section we discuss targeted rather than comprehensive deredaction. For each case study below, the individual using \maxray\ had no prior information besides the PDF pages under study. \subsection{United States v. Ghislaine Maxwell}\label{sec:epstein} Ghislaine Maxwell was charged with crimes of enticement of minors and sex trafficking of underage girls.~\cite{ghislaineMaxwell} As of the 28th of April 2021, Maxwell has been undergoing trial in The Southern District Court of New York.~\cite{maxwellTrialDocket} We applied \maxray\ to several documents from this case. Our location algorithms discovered \numEpsteinPagesRedacted\ pages matching Microsoft Word positioning schemes across \numEpsteinDocuments\ documents and located \numEpsteinRedactions\ redactions. We did not attempt deredaction on occurrences where the redaction was too long or we could not reasonably determine a dictionary, resulting in \numEpsteinVulnRedactions\ vulnerable redactions. \begin{table*} \centering \caption{Redactions from \emph{real documents} attacked in this section. The ability to reveal this information poses a security risk.} \begin{tabular}{ccc} \toprule Image & Document, Page & Deredacted Content \\ \midrule { \begin{tabular}{c} \includegraphics[width=4in]{fig/epstein-134-13.pdf} \\ \end{tabular} } & { \begin{tabular}{c} Epstein 134, Page 13 \\ \end{tabular} } & { \begin{tabular}{c} the CIA \\ \end{tabular} } \\ \midrule { \begin{tabular}{c} \includegraphics[width=4in]{fig/libya.pdf} \\ \end{tabular} } & { \begin{tabular}{c} Snowden, Page 10 \\ \end{tabular} } & { \begin{tabular}{c} Libyan, Libyan \\ \end{tabular} } \\ \midrule { \begin{tabular}{c} \includegraphics[width=4in]{fig/manafort.pdf} \\ \end{tabular} } & { \begin{tabular}{c} Manafort, Page 64 \\ \end{tabular} } & { \begin{tabular}{c} Prodi \\ \end{tabular} } \\ \bottomrule \end{tabular} \end{table*} \emph{Document 134, page 13.} On page 13 of Document 134, a redaction indicates who ``had come to the government ... asking it to open an investigation ... [of] Epstein and Maxwell''. We attempted deredaction using following dictionaries: all first name and surname pairs, first names, surnames with and without title, single and double word combinations (with and without the first letter capitalized), companies listed by the US Securities and Exchange Commission~\cite{secCompanies}, and five-letter acronyms. We had \emph{no matches} for the given displacements. We then tried three letter acronyms with a premodifier of ``the'', e.g. ``the FBI''. This resulted in matches for 33 acronyms: the first letter was B, C or R, the second letter could only be I, and the third letter was one of 11 possibilities. \emph{None} of these acronyms matched a known organization except one: the CIA. \emph{Other vulnerable redactions.} We reduced another three redactions to 114, 367, and 105 possiblities, with the redactions occuring on document 140, page 11, document 134, page 2, and document 52, page 6. They are as follows: \begin{enumerate} \item because she [Maxwell] reasonably believed $X$ was and would remain private under the Protective Order We tried all pairs of two english words and of the hundred results only ``her testimony'' made sense in the context Only one redacted text was likely used given the context. \item The government did indeed have previous contact with We tried singular and paired english word Of four hundred results, only two made sense in the context. \item ... what was said between the government and X. Based upon the context, this appears to be the witnesses' first name. Not enough information exists in this document to deduce the witnesses' full name, though a reduction to 105 possibilities from 111,472 first names is significant. \end{enumerate} \emph{Unmatched case.} We were unable to find a match for the last of the five redactions. This redaction's content is likely not in \maxray's dictionaries. We had 3,899 matches for width, reduced from a guess set size of 1,296,569. No guess matched the given displacements so this document may use a marginally different workflow than Word's \emph{Save As PDF}. However, if we identified the correct scheme, \maxray\ ruled out over a million possible redacted texts. \subsection{Top Secret Snowden-leaked Document}\label{sec:snowden} In 2013, Edward Snowden leaked Top Secret documents to news organizations, revealing the NSA was spying on American citizens without their consent~\cite{snowdenRevel}. One document details a co-travel technology used to track the movements of foreign diplomats across countries~\cite{topSecretDoc}. This document, included in the DNSA, has 9 named entity redactions matching Word positioning schemes, only one of which was too long to guess. We reached out to the journalists behind this document's release. They confirmed the Washington Post redacted the document rather than Snowden or the NSA. \emph{Regions of operation, demonyms.} We report our results in italics below. When multiple matches occurred we report them in italics. \begin{enumerate} \item The targeted handsets were observed ... between known $Libyan$\footnote{The line's displacements were not an exact match (Appendix~\ref{sec:snowden-diffs}), but Libyan matched the redaction's width. The second case matched line displacements and widths.} government and military installations ... handsets ... were inferred to be $Libyan$ government forces. \item ... filter in or out ... co-travelers with specified prefixes (for instance, return only $[Russian,Omanis]$ mobiles \item ... target travel across spanning countries of interest (e.g., \emph{[No match, likely variation of some name in dictionary]}) \item ... (e.g., ignore activity in $Albania$ or use only ... \item over the past year and then anywhere in \emph{Liberia [France or Eritrea when accounting for Word editing behavior.]}\footnote{ We briefly mentioned such cases in Section~\ref{sec:ms-word}. } \item ... was tested using an $Indonesian$ terrorist case study. \end{enumerate} \emph{Project codename.} Width and NSA naming conventions suggest the redacted codename for the ``Meet \& Greet Spatial Chaining Analytic'' is a four-to-five letter acronym. Cmap leakages reduce the space to 8,294,400 possible codenames: \maxray\ narrows this down to 3,528 codenames. Less than 20 results match english words. \subsection{United States v. Paul J. Manafort, Jr.}\label{sec:manafort} Paul Manafort's two criminal trials started due to investigation into Russian interference in the 2016 presidential election. The first trial began in July of 2018, with Manafort facing 18 criminal counts. We focus on a trip itenerary and emails surrounding a redacted Prime Minister's visit to the United States, exhibits G and H of\ \cite{manafortTrial}. \maxray\ identified 79 redactions, though many were too wide. We deredacted congresspeople who met with the prime minister, the prime minister's name, and the embassy providing transportation. Some pages use Acrobat's OCR (Section~\ref{sec:adobe-ocr}) and others use a convert-to-PDF workflow from a Mac OS email client. The latter created a fixed per-character displacement per-line, e.g. the letter ``r'' on one line of page 63 always occurs with a displacement of 4.8. For these pages, we use the non-redacted parts of each line to infer the set of displacements possible for each character. We apply these displacements alongside Acrobat's OCR sidechannels (Section~\ref{sec:pdftext}). Because of these particularities, where possible we validated our guesses for these redactions after deredaction. For instance, we correlated guessed congresspeople with their guessed state acronyms and meeting location information---some pages indicated the meetings would be held in office buildings where the congresspeople worked at the time of the document's release. We found deredaction returned a context-matching result in all cases.\footnote{ For this reason we do not include \maxray's guesses for the Senator on page 61. Unlike the other congress members, this redaction had little contextual information. } \emph{The Embassy.} Pages 60 and 61 have two redactions indicating the embassy providing transportation. Both matched \emph{Italian}: one matched exactly and the other had a one displacement unit error, likely due to rounding of displacements. The next closest result, Welsh, was five units away from the correct width in both cases. \emph{The Prime Minister.} Four redactions on pages 63 and 64 determine the Prime Minister / President. We used a dictionary of world leaders (surnames) starting from 1951 until present day, totaling 4,281 names.~\cite{listStateLeaders} These four redactions occurred in three different fonts. Different font metrics leak different information, so extraneous results for any single font were eliminated. For two of the redactions (Romano) Prodi was the only exact match, and the other two had four and three exact matches, respectively. The only other match for all four redactions was (Wilhelm) Pieck, a german politician who died in 1960. Other portions of the document (not known during deredaction) confirmed the redacted president was Prodi. \emph{The meetings.} Who met with Romano Prodi on March 13th and 14th, 2013? Our dictionaries were the 113th congress~\cite{113thcongress}, demonyms (Section~\ref{sec:synth}), and state acronyms. We reproduce our results in italics: \begin{enumerate} \item Senator \emph{Chris Murphy}, D-\emph{CT}). \item Congressman \emph{Ed Royce} R-\emph{CA}). \item Congressman \emph{Bill Keating} (D-\emph{MA}). \item Congressman \emph{Dana Rohrabacher} R-\emph{CA}). This redaction included the leading space character and destroyed the TJ break created by Acrobat's OCR (Appendix~\ref{sec:acro-pro-deets}); however, Dana Rohrabacher was the closest match by $\approx$30 units. \end{enumerate} \section{Conclusion} \label{sec:concl} In the course of this work, we developed \maxray, a tool capable of locating and breaking redactions in thousands of PDF documents. We found, for example, that redacting a surname from a PDF generated by Microsoft Word set using 10-point Calibri leaves enough residual information to uniquely identify the name in \puwavgpctnytlnwxxibx\ of all cases. We surveyed the behavior of 11 different PDF redaction software toolkits and found the majority do not defend against our attacks. We discovered over 100,000 trivially broken word redactions in US court documents and broke 348 non-trivial redactions in Office of Inspector General reports and Freedom of Information Act requests. Moreover, we demonstrated the danger of deredaction by breaking redactions in three case studies of public interest: the Epstein case, a released Snowden document, and the Manafort trial. \subsection{Unaddressed Information Losses}\label{sec:loss} This paper did not address three important cases: attacking rasterized documents, typos and augmentation of redacted content, and unconstrained PDF edits using tools like inkscape. \subsubsection{Rasterization}\label{sec:raster} In this paper, we left the problem of document rasterization largely untouched due to the large number of documents suffering from metadata attacks alone. Our original intention with this work was to attack black-box style raster redactions, but upon discovering the complexity of displacement schemes themselves, we found several preliminary works would be needed to properly explore \emph{how} images get generated from PDF documents. We note that (typically, depending on the PDF document configuration), 72 displacement units correspond to one pixel in a 72 DPI image; this suggests the information loss occurring from rasterization may be too great to deredact text accurately, but for higher DPI, it may indeed still be possible. It may also be possible to determine the accumulated affects of displacments in the anti-aliasing of pixels for certain PDF-to-image converters, and recover the Word displacements. We leave further exploration of this problem to future work. \subsubsection{Typos, Changing Redacted Text}\label{sec:disc-editing} Another source of ``information loss'' occurs when either the text underneath the redaction is changed to a meaningless value, such as ``X's'', and it is possible to check if any of these glyph-removal redaction styles have the same width as the redaction. A potentially more difficult ambiguity to resolve is unintentional information loss due to typos; this will require further analysis and work to model typos in redacted text, and we leave this to future work. \subsubsection{Unconstrained Edit Behaviors}\label{sec:editing-behavior} To better understand how unconstrained editing of PDF documents affects deredaction we performed tests of Acrobat's editing behavior and found a variety of different transforms dependent upon the highlighted text selection and edit type. Editing glyphs can trigger a split of the text-showing operator into two: this appears to be tied to the ``undo'' functionality. Selecting just one character and adding inter-character spacing will create a displacement. However, selecting multiple characters and performing the same operation will split the text showing operator use \texttt{Tc} for the left-exclusive and right-inclusive boundaries of the highlighted text selection, i.e. ``(,]''. Inter-word spacing operators are also added when edits occur with respect to word boundaries. Modeling these edits requires modeling the \emph{configurability} of the Adobe interface: inter-character spacing values (and thus displacements) can be entered as real numbers via the UI. Theoretically, this is infinitely more complex than modeling edits to a Word document because the human operator can choose the input spacing, and the potential for false positives and bias in prediction is large. None-the-less we have full confidence that with further human subjects tests, consideration of further information channels, and advanced modeling this problem can be made tractable. \subsection{Dictionary Generation, NLP}\label{sec:disc-dict-gen} The generation of effective dictionaries for use in attacking redactions is also something the current paper does not address. We found that many redactions we attacked had \emph{no results} in Section~\ref{sec:nonsynth}; manual analysis of these cases makes us confident that this is due to incomplete dictionary modeling of potential name formats. We seperate this problem into three relevant subproblems: \begin{enumerate} \item \textit{Semantic prediction.} Determining whether a redaction is a name, region, pronoun, etc. is currently a highly manual analysis: we attempted to use RoBERTa~\cite{liu2019roberta} to predict the part of speech and rank the resulting matches for our deredaction attacks, but found it to be insufficient to guarantee the accuracy we desired in evaluating our techniques. Future work will be needed on automatically determining the semantic content of redactions, so that cases of vulnerability may be more effectively analyzed. \item \textit{Dictionary content.} The dictionaries used to provide the evaluation of this paper we mostly in English and ASCII-text based. In developing this paper, we found a relative drought of high quality dictionaries to use in attacking redactions, and so had to develop our own from several sources (Appendix~\ref{sec:dictdetail}), but future work could provide, for example, a more robust and complete public dictionary of names and name variations across different languages and in unicode format. \item \textit{Text augmentations.} In the evaluation of Section~\ref{sec:nonsynth}, we included several titles such as ``Mr.'' along side our base dictionary of names, however, a much larger number of modifiers exists, such as the title ``Sgt.'' referring to a member of the military, or cases where slightly more or less than the name is redacted, e.g. a comma after the name is redacted along with the name. Enumerating and understanding the different possible modifications to redacted text is a complex subject and will require further work. \end{enumerate} \section{Discussion} \label{sec:disc} The prior sections dicussed how deredaction works and the extent of the problem. We now shift focus to defenses and future work, without which deredaction attacks may remain an unresolved issue. \subsection{Defenses} \label{sec:defenses} Of clear concern is whether redaction may be safeguarded without users' explicit knowledge of technical details discussed by prior sections. Tradeoffs may be necessary between aesthetics and security---redactions, for example, could be quantized to a certain set of fixed widths to defend against our attacks. Alternatively, use of monospaced fonts or positioning information removal prevents deredaction, however, these incur significant formatting drawbacks. We propose the five potentially less visually intrusive defenses: \emph{Randomizing Displacements.} Positioning information noise may be added to a line without effecting readability.\footnote{ Recall each displacement is 1/6,000~in (0.0042~mm). } If this noise is indistiguishable from legitimate positioning information, accounting for it requires increasing the set of accepted redacted text guesses. \emph{Positioning Scheme Sanitization.} Removing positioning scheme information could help prevent information leaks. One solution would be to remove all displacement information but other solutions may be more desirable, such as changing the scale, position, or removing displacements contributing to a redaction's vulnerability. \emph{Document Layout and Redaction Obsfucation.} Obsfucating the occurence of a redaction by modifying the PDF commands used to render the redaction will stymie the process of automated redaction location. We are confident in our redaction location algorithms, however, many more vulnerable documents likely exist in the wild. \emph{Increasing Redactions.} Redacting additional adjacent words prevents deredaction by increasing the number of words the attacker must guess when matching leaked information to potential texts. \emph{Rasterization.} A 600 DPI raster quantizes PDF displacements to $\approx$10 displacement units. Displacements for a 12pt font in the Word positioning scheme are mostly less than 10 units. Rasterization may also require modeling an additional PDF transformer (Section~\ref{sec:schemes}). To estimate rasterization's precision loss, we chose ten name occurrences from court documents using Word displacements for 12pt Times New Roman. For each selected text line, we substituted the name with each entry in our 235,560 name dictionary, redacted the entry, and calculated the redaction's width. This resulted in 2,017 unique redaction widths, $\approx$11 bits of information leaked, on average. Quanization to 300 DPI (20 displacement units) resulted in 252 widths on average ($\approx$8 bits) and quanization to 600 DPI performed slightly worse with a result of 377 widths ($\approx$8.6 bits). \emph{Adversarial Redactions} One defense against deredaction is to change the document's original text before it is redacted, for example, by replacing a sensitive name with the letter ``X''. It is also possible to \emph{lie} to deredaction by changing the redacted content to something seemingly valid, potentially misinforming an adversary. \subsection{Future Work} \label{sec:future-work} \label{sec:locating-disc} \label{sec:pdf-drivers} Table~\ref{tab:wildres} revealed that a great variety of PDF workflow models exist. Appendix Table~\ref{tab:pdfflows} supplements this finding by analyzing 10 different software workflows with 31 different glyph displacement outputs. Because of this, the current work provides a lower bound on redacted information leaks and vulnerable documents. For an indication of the work involved in reverse engineering other workflows, reverse engineering and perfecting models for the Microsoft Word positioning scheme analyzed in this paper took months of dedicated effort by an expert. There may be additional barriers to analysis: we found some Windows PDF workflows include displacements generated by a simulated printer based upon a Print-to-PDF driver. The displacements depend on the Print-to-PDF driver version used by the operating system producing the PDF. However Microsoft does not publicly release multiple versions of the Print-to-PDF driver. Despite these challenges, there may be sufficient incentive with respect to documents' redacted contents to induce effort---the three case studies discussed above motivated this paper. In Section~\ref{sec:wild} we reported a negative result on the redacted content inference of state-of-the-art natural language processing algorithms. While RoBERTa identified whether a given redacted word was an entity, ambiguity prevented it from distinguishing between, for instance, a personal name and a noun, like knife. There are many potential solutions to this problem; one of the most promising may be the use of important keywords from the redacted document to search the internet for potential deredaction guess dictionary entries, as this is how we validated several results in Section~\ref{sec:wild}. Finally, we note challenges in efficent redaction location. Despite having access to the RECAP corpus, nontrivial redaction location had high computational costs due to the difficulty of identifying the presence of what should look like nothing other than a space. \subsection{Ethics} This paper's findings are a lower bound on the extent of vulnerable redactions. This work poses a persistent and urgent privacy concern and threat to court witnesses, victims of crime, and national security. We are notifying all affected parties including RECAP and the OIG and working to correct discovered redaction vulnerabilities. We are working with RECAP and the Free Law Project to automatically find and correct broken redactions. We have submitted CVEs and reached out to the two redaction toolkits with full text redaction leaks. We are also leveraging the \maxray\ system to develop better deredaction defenses and vulnerable redaction notification systems. \section{Measuring Leaked Information} \label{sec:synth} \label{sec:eval} All redactions, whether performed by physical or digital means, reveal the width of the redacted text with greater or lesser accuracy. In this section, we consider attacks against two of the document vulnerability classes discussed in Section~\ref{sec:tools}, namely documents that leak the unadjusted width (\emph{Width} in Table~\ref{tab:vulns}) of redacted text and those that leak Microsoft Word displacements (\emph{Word} in Table~\ref{tab:vulns}). We can quantify the amount of information leaked by glyph positioning in information-theoretic terms, as the \emph{mutual information} of the redacted text and the glyph positioning, measured in bits~\cite{cover-and-thomas}. Mutual information measures the reduction in uncertainty about the redacted text from knowledge of glyph positions of the surrounding text. In the remainder of this section, we give an empirical estimate of mutual information for the unadjusted, Word 2007, and Word 2019 positioning schemes processed with a displacement-preserving redaction tool under different conditions. We estimate the amount of leaked information by simulating redaction in a known body of text. The amount of information leaked about redacted text depends on several factors, described below. For each combination of factors, we measure and report the amount of information leaked. These results also provide an upper bound on the amount of information leaked by rasterizing tools. \subsection{Experiment Setup} \paragraph{Workflow, Font, Text Size, Page Format.} We simulate redaction in a PDF created with three workflows from Section~\ref{sec:pdftext} (Unadjusted, Word 2007, Word 2019), in 10 point Times New Roman, Calibri, and Arial, the three most common fonts in our document corpus (Sec.~\ref{sec:wild-corpora}). These fonts account for 71.6\% of text lines in the 40,000 documents in Table~\ref{tab:wildres}. (The fourth most common, Helvetica, comes in at about 10\%.) We include Courier as an example of a monospaced font. The simulated PDF, formatted for US Letter size paper, is left-justified with 1-inch margins, giving a 468 point line width. We use a 10 point font as larger font sizes leak slightly less information in the Word positioning schemes. This is because the lines have fewer characters, and provide fewer displacement measurements. \paragraph{Dictionaries.} The amount of information leaked also depends on the prior information about the redacted text. For example, if we know the redacted text is one of \neltln\ American surnames, then this redaction will leak at most $\log_2 \neltln \approx \hxln$ bits. In our experiment, we model this prior information as a \emph{dictionary}, a set of strings from which we assume the redacted text is drawn. Because our technique is targeted at short redactions, we evaluate several possible dictionaries of short strings. Based on our examination of real redacted documents, we constructed the following dictionaries: \begin{itemize}[nosep] \item\textit{Str.} All strings of 3--16 characters in length starting with a uppercase or lowercase letter followed by lowercase letters. \item\textit{Acrn.} All strings of 2--5 uppercase characters (most acronyms). \item\textit{Pron.} English third person pronouns (\emph{Pro}), which are often redacted to avoid revealing a person's gender. \item\textit{Word.} English words including some proper nouns. \item\textit{FN.} American given (first) names. \item\textit{LN.} American surnames (last names). \item\textit{FILN.} Given name initial followed by surname (\emph{LN}). \item\textit{FNLN.} Given name (\emph{FN}) followed by a surname (\emph{LN}). \item\textit{Ctry.} Official and common names of countries. \item\textit{Rgn.} Names of regions, a superset of \emph{Ctry}. \item\textit{Natl.} Nationalities, demonyms, and adjectives of regions and nationalities, sourced from lists on Wikipedia. \end{itemize} Table~\ref{tab:dicts} lists the dictionaries and their statistics. in the table, the \emph{Size} gives the number of words or phrases in the dictionary, $H_u(X)$ given the information-theoretic entropy of the dictionary, given by $\log_2 \mathit{Size}$. Appendix~\ref{sec:dictdetail} details the construction and source used for each dictionary. When calculating the information leaked, we consider two cases: all dictionary strings distributed uniformly, and all dictionary strings distributed according to their frequency in the text corpus used (described next). We do not include numbers in our dictionaries as their glyph information is almost always identical. \begin{table} \centering \caption{Text used for evaluation. Stop words are excluded from \emph{total words}, \emph{unique words}, and \emph{word entropy} statistics.}\label{tab:dicts} \input{tab/dicts} \end{table} \begin{table*} \centering\small \caption{Number of bits leaked (left) and unique match probability (right) for different positioning schemes in simulated redaction of New York Times text set in 10 point font. A weighted prior information distribution has less information leaked but more unique matches because there is less information in the frequency based distribution overall. } \label{tab:nyt10} \input{tab/nyt10} \end{table*} \paragraph{Text corpus.} For the Microsoft Word glyph positioning schemes, the information leaked about the redacted text also depends on the text surrounding the redaction. To model this dependence, we draw the context for each redaction from the New York Times Annotated Corpus (\emph{NYT} corpus for short)~\cite{nytCorp}. The corpus contains articles written and published by the New York Times between January 1, 1987 and June 19, 2007. It contains 1,855,658 documents in total, with 1,027,427,021 total tokens (excluding punctuation), 1,505,676 of which are unique. This corpus also gave us an empirical frequency distribution for our dictionaries. In Table~\ref{tab:dicts}, the \emph{Occ.} column gives the number of occurrences of words from the given dictionary in the corpus, and $H_e(X)$ the entropy of the empirical distribution of dictionary words in the corpus. In our results, we report the amount of information leaked for the case where the dictionary elements occur with the same frequency (uniform) and where elements occur with the measured distribution. Columns $H_u(X)$ and $H_e(X)$ are thus an upper bound on the amount of information a document can leak about a redacted element of that dictionary, for elements distributed uniformly and according to the corpus empirical distribution. \subsection{Experiment Procedure} \label{sec:mutinfmethod} To calculate the amount of information leaked by a redacted document for a given set of experimental parameters (positioning scheme, text font, text size, dictionary, text corpus, word frequency distribution), we consider the following stochastic process: \begin{enumerate}[nosep] \item Choose an occurrence of a word in the text matching the dictionary uniformly at random. \item Replace the dictionary word at the chosen point of occurrence with another word from the dictionary chosen at random (either uniformly or according to its empirical distribution). Let $X$ be a random variable for this chosen word. \item Format the document using the workflow, text font, and text size prescribed by the experiment parameters. \item Replace the word in the text object with an adjustment equaling its width, thus simulating redaction using one of the displacement-preserving redaction tools described in Section~\ref{sec:tools}. Let $Y$ be the random variable representing the document, including glyph positioning, after redaction. \end{enumerate} We measure an adversary's information about the redacted word as numbers of bits of information leaked by $Y$ (the document after redaction) about the redacted word $X$. This is the \emph{mutual information} of $X$ and $Y$, conventionally denoted $I(X;Y)$. To calculate $I(X;Y)$, we simulate the redaction of all possible dictionary words at a sample of occurrences of dictionary words in the NYT. (See Appendix Section~\ref{sec:mutinfcalc} for a justification of this method.) In addition to mutual information $I(X;Y)$, we calculate the probability glyph positioning constraints the result to a unique dictionary match. \subsection{Results} Table~\ref{tab:nyt10} shows the results. The top half of the table shows the case where dictionary elements are drawn uniformly at random in step 2 above and the bottom half shows the case where dictionary elements are drawn according to their empirical distribution in the NYT corpus. The left part of the table shows the mutual information of the dictionary and the resulting document. This is the number of bits of information leaked by the document about the redacted word. The table's right half gives the probability that a randomly chosen dictionary word is uniquely determined by the resulting document---the probability of exactly recovering the redacted text. The table shows the results of the experiment for 10 point font for four fonts: Courier, Times New Roman, Arial, and Calibri. For each font, we show the results for unadjusted text (\nadjshortname), text produced using Word 2007--2016 (\wxiishortname), and Word 2019--2021 (\wxxishortname). As a monospaced font, Courier behaves identically in the unadjusted and Word cases so we omit the \wxiishortname\ and \wxxishortname\ columns for this font. The residual information in the document after redaction reveals the number of characters in the redacted word. The Courier column in Table~\ref{tab:nyt10} thus tells us how much information is revealed by knowing the number of letters in the word, and is simply the entropy of the distribution dictionary word lengths. As shown in the table, knowing the length of an American surname provides \hyunytlnnadjrx\ bits of information about the name, chosen either uniformly at random (top half of table) or according to its popularity in the NYT corpus (bottom half of table). On the other hand, knowing the width of a surname (\emph{\nadjshortname} column) set in Times New Roman provides \hyunytfnnadjtx\ bits of information about the surname if it is chosen uniformly at random, and \hywnytfnnadjtx\ bits if it is chosen according to the NYT corpus empirical distribution. This is how much information is leaked by an unadjusted document (e.g. a PDF produced by Google Docs) after width-preserving redaction. If we instead redact a surname from a PDF produced by Word 2007--2016, the resulting document leaks \hyuavgnytlnwxiitx\ bits of information about a name chosen uniformly at random and \hywavgnytlnwxiitx\ bits when chosen according to the empirical distribution. Calibri, the default font in Microsoft Office since 2007, leaks the most information (\hyuavgnytlnwxiibx\ bits uniform and \hywavgnytlnwxiibx\ bits empirical distribution) because the font has a greater variety of character widths than Times New Roman or Arial. Note that the empirical entropy of the surnames dictionary is \hxnytln\ bits (Table~\ref{tab:dicts}), so a redacted surname set in 10-point Calibri leaks almost all the information available. Table~\ref{tab:nyt10} does not show results for the \emph{Str} dictionary under the uniform distribution using Word schemes because simulating redaction of all \neltstr\ strings is prohibitively expensive. Results for the unadjusted positioning scheme were obtained without simulating redaction by exploiting the regular structure of the dictionary. Our results show that redacting a name from a document using PDF redaction tools leaves enough residual information to infer the redacted name in many cases. Other proper nouns also cannot be safely redacted alone using standard PDF redaction tools. \section{Introduction} \label{sec:intro} In the United States, it is common practice for government agencies to release documents to the public as a means of ensuring transparency. To balance the need for transparency with the need to protect confidential information, portions of documents may be \emph{redacted}---removed or obscured---prior to release. Paper documents are redacted by physically cutting out text, pasting strips of paper over it, or covering the text with a black marker, and then photo-copying the document to prevent recovery of obscured text. Digital documents are redacted by altering their digital representations. The most popular digital document representation today is the Portable Document Format (PDF). PDF documents represent text as \emph{text objects} that specify the font, size and exact render position of each displayed character. This representation exactly specifies the document's intended printed or graphical appearance. A number of software tools allow a user to redact portions of a PDF document by marking text for redaction. After redaction, some tools leave redacted text in the PDF, which can be selected and copied to the operating system's clipboard, nullifying the redaction. Most PDF redaction software, including from Adobe Systems, removes the redacted text and replaces it with a black rectangle, leaving the rest of the document untouched. However, this practice preserves exact positioning information for non-redacted glyphs (rendered character representations) before and after redacted text. In this work, we find non-redacted glyphs can disclose a significant information about the redacted text, including the text's exact width and the ordering of some characters in the redacted word. For monospace fonts, such as Courier, a redacted word's width tells us the how many letters are in the word. For example, this is enough to distinguish, if redacted, between the pronouns ``he'' and ``she''. Proportional fonts such as Times New Roman, whose glyphs do not all have the same width, disclose more information. In one case, this was thought to be enough to infer the name of a country redacted from a classified document~\cite{egyptian}. However, this document contained a raster image of the text rather than text objects, limiting the precision of recovered width information. We estimate a document rasterized to 300 Dots-Per-Inch (DPI) can leak about 8 bits of information about a redacted name (Section~\ref{sec:defenses}). In an all-digital workflow, however, where a PDF is generated directly by a typesetting system, word processor, or ``Print to PDF'' print driver, there is no loss of precision due to rasterization: exact recovery of specified glyph positions and widths is possible. Worse yet, documents originating from Microsoft Word leak information beyond the redacted text's width. We find redacting a name from a PDF produced by Microsoft Word can leak about 13 bits of redacted information, enough to uniquely determine the name in 3\% of cases. In this work, we present a comprehensive investigation of PDF text redaction. First, we report on the information preserved by popular digital PDF redaction tools. We find, alongside broken tools which preserve redacted text, all current non-rasterizing PDF redaction software retains the glyph positioning information necessary for the deredaction of short text fragments, such as personal names. Next, we measure the redacted information leaked by different PDF document creation workflows. We examine how a PDF's creator affects the glyph positioning information needed for deredaction. As noted, documents originating from Microsoft Word leak information beyond the redaction's width. Even scanned documents post-processed by OCR before redaction leak redacted texts' width. To demonstrate this attack, we built a tool, called \maxray, to deredact fragments of redacted text in PDF documents. We found redacted text in hundreds of publicly-available redacted PDF documents can be deredacted using our tool, a lower bound on the impact of redacted information leaks. We found many trivial (copy-pastable) redactions by scanning $\approx$$10^{7}$ US court documents. We also present three case studies where we were able to deredact multiple fragments of high-profile publicly-available redacted documents. In summary, our contributions are: \begin{prettylist} \item An analysis of information leaked by glyph positioning schemes in ostensibly correctly redacted PDF documents. We find Microsoft Word's glyph positioning can leak up to 15 bits of information, enough to deredact short fragments of text. \item A survey of 11 popular PDF redaction tools. We find two tools have redaction features which do not actually remove text marked for redaction and the rest preserve glyph positioning information which may later be used to infer redacted text. \item An extensive survey of publicly-available redacted documents, finding vulnerable redactions in documents released by US Government agencies. We found over 100,000 words redacted using broken PDF redaction tools, trivially recovered by selecting the text under the black rectangle and copying it to the clipboard. Among less trivial redactions, we found over 300 cases revealing information about names of individuals. \item Three studies of redaction in documents of public interest. We deredact sensitive text in a released Snowden document, Ghislaine Maxwell trial documents, and Manafort trial documents. \end{prettylist} This rest of this paper is organized as follows. Section~\ref{sec:pdftext} details the PDF text specification and PDF document creation. Section~\ref{sec:tools} surveys PDF redaction tools' preservation of redacted information. Section~\ref{sec:maxray} describes \maxray's implementation. Section~\ref{sec:eval} measures mutual information shared by redacted and non-redacted PDF text. Section~\ref{sec:wild} measures the number and diversity of redaction information leaks in public PDF documents. Section~\ref{sec:cases} presents three deredaction case studies. Section~\ref{sec:disc} discusses defenses and future work. Section~\ref{sec:related} addresses related work and Section~\ref{sec:concl} concludes. \section{The \maxray\ Tool Suite} \label{sec:maxray} To demonstrate the feasibility of width-based attacks, we developed a suite of tools we call \maxray.\footnote{The \maxray\ tool suite s being made available to reviewers of this manuscript only. We do not plan to make \maxray\ available to the public because of the risk of misuse.} At \maxray's core are models for the unadjusted, Microsoft Word 2007, and Microsoft Word 2019 glyph positioning schemes (Sec~\ref{sec:schemes}). The Word 2007 scheme is used in Microsoft Word 2007--2016 for Windows desktop, while the Word 2019 scheme is still used at the time of writing. Appendix Section~\ref{sec:mic-word-deets} describes these schemes in detail. \maxray\ enables exploration of the Word, Width, and Width$^+$ attack classes presented in Table~\ref{tab:vulns}. To deredact redacted text, \maxray\ uses a set of candidate words or phrases, forming the \emph{dictionary} for an attack. For each word or phrase, \maxray\ generates a text object for a line of text consisting of the non-redacted text prefix, the redacted guess, and the non-redacted text suffix. It then compares the emitted text object against targeted text objects and reports any guesses that result in a match. In other words, \maxray\ reports which of a set of guesses is consistent with the text objects in the redacted PDF. Thus, deredaction described in this paper should be understood as \emph{ruling out} a set of possible guesses, rather than finding the redacted phrase. The implication is that deredaction is only as good as the starting dictionary. There is always a possibility that an inference is wrong because the text actually redacted was not in the candidate dictionary. To minimize the chances of this, in Section~\ref{sec:wild} we use a dictionaries consisting of multiple variations of possible redacted terms. Running on an eight-core consumer laptop (a ThinkPad T420), \maxray\ generates around 80,000 guesses per second. \maxray\ also includes tools to identify redacted text. These scripts serve to identify full text and width vulnerabilities in PDF documents. We discuss these features in depth in Section~\ref{sec:wild}. We note \maxray's unadjusted and Word models could be used to generate guesses for raster width and approximate width attacks (\emph{RW} and \emph{AW} in Table~\ref{tab:vulns}), however, to attack raster width documents, one would need to render the text object generated by the glyph positioning model and compare its output to the actual raster. Such a matching function would need to account for pixel grid misalignment (for raster width attacks) and analog noise (for approximate width attacks). We did not implement raster width because rasterizing redaction tools are not as common as those that operate on text objects directly. Approximate width attacks leak the least amount of information and provide less confidence. Currently, as a proof-of-concept tool, \maxray\ relies on a human user to select the proper guessing dictionary. While this process could be automated using machine learning, we chose to do this step manually in order to exclude a potential source of error. \subsection{Attack Implementation}\label{sec:impl} The Microsoft Word displacement schemes reimplemented in C. Routines for attacking Adobe OCR and examining PDF files for the evaluation of later sections were built using a series of modular, UNIX/Plan9-style scripts, operating on glyph displacement abstractions provided by a augmented version of the Poppler PDF (xpdf) library, used by Qt and a number of other core Linux applications. We found that \emph{no} existing libraries for PDF file parsing contained correct APIs for performing the attacks in this work, so we wrote them. The system operates by coordinating scripts with GNU Parallel~\cite{} to populate two databases, a PSQL database for storing parsed PDF document content and redaction metadata, and a MongoDB database for storing unstructured data relating to specific cases of redaction. All parts of the system are performant enough to be run on a consumer laptop computer. \section{PDF Text Representation} \label{sec:pdftext} In this work, we show an attacker can recover portions of redacted text in a PDF based on the information remaining in the document after redaction. Specifically, we are concerned with text represented using PDF \emph{text objects}, PDF elements that instruct a PDF reader to display a sequence of characters using a specified font at a specified location on the page. Because text objects represent the text as a sequence of characters, they can be searched, selected, and copied by the PDF reader. Text objects are typically created when producing a PDF, for example, using a word processor's \emph{Export to PDF} command. Documents produced by scanning a paper original instead contain a \includegraphics[width=0.6in]{fig/raster-im.png} of text. Both text objects and raster text are vulnerable to attacks that infer redacted text from the width of the redaction. However, the precision of glyph positions recoverable from raster text is lower, leaking less information about the redacted text. We briefly consider attacks on raster text as a point of comparison, however, we focus on text object information leaks. The rest of this section describes how PDF text objects are represented in PDF documents, and how the PDF document workflow affects the positions of text object glyphs, determining how much information can be recovered about redacted text after redaction. \begin{comment} \begin{table*} \centering\small \caption{Output descriptions for the text positioning schemes of several different PDF creation workflows. Displacement values are rounded to fit on page.} \label{tab:flows} \input{tab/flows} \end{table*} \end{comment} \subsection{Text Objects} \label{sec:textobj} When rendering text, a PDF document specifies the exact position of each glyph---a character's graphical representation---either directly or relative to other glyphs. Nearly all of the PDF workflows we examined rendered text using the \tjop\ text rendering operator, which takes a text string argument and renders it at the current position. Figure~\ref{fig:tj} shows a \tjop\ operator which renders ``Exhibit A.'' A glyph's relative horizontal position rendered by the \tjop\ operator may be adjusted by a \emph{displacement}. In the example, there is a displacement of $-2$ units between the \emph{x} and \emph{h} glyphs. (Displacement units express displacements, where 1,000 units equal the point size of the font times \nicefrac{1}{72} of an inch. For a 12-point font, 1 unit equals 1/6,000~in (0.0042~mm).) After rendering each glyph, the \tjop\ operator advances the PDF's current horizontal text render position by each glyph's width (called the \emph{advance width}) plus any displacement value. As we describe in Section~\ref{sec:tools}, most PDF redaction tools replace text selected for redaction with a single large displacement of the same width as the redacted text (sum of advance widths and any displacements), preserving the exact positions of glyphs around the redacted fragment. It is therefore possible to recover the width of the redacted text from the positions of the remaining glyphs. To exploit this information and recover redacted text, it is necessary to understand how PDF document writers determine glyph positions. \begin{figure}[h!] \centering \vskip-10pt \includegraphics[width=1.75in]{fig/tj.pdf} \vskip-10pt \caption{The TJ text-rendering operator.}\label{fig:tj} \end{figure} \subsection{Text Positioning Schemes} \label{sec:schemes} The software that first creates a PDF file is called the \emph{PDF producer} by the PDF standard~\cite{adobePDFReference}, while the software used to author the source document the PDF document represents is called the \emph{PDF creator}. For example, for a PDF document generated by Acrobat Distiller from an Microsoft Word document, the producer would be Acrobat Distiller while the creator would be Word. In practice, there may be an additional intermediate representation between creator and producer. For example, the \emph{Print to PDF} feature on many operating systems first translates the document to a intermediate format for physical printing. The \emph{Print to PDF} printer driver then translates this format to a PDF capturing the appearance of a printed page. In other cases, the creator may also be the producer. For example, a document produced using the \emph{Save As PDF} command in Word is created by Word directly, and does not use the printing system. A PDF document may also be modified by other software after production, for example, by PDF redaction tools. We call such software \emph{PDF transformers} (the PDF standard does not have a term for such software), and the complete chain of software operations used to produce a PDF document from a initial source the \emph{PDF workflow}. As we will see, even starting from the same source document, each workflow may produce a slightly different final PDF document. The creator and initial producer in a workflow play an important role because together they define the initial \emph{positioning scheme} which determines exactly where glyphs are placed in the PDF document. We reverse-engineered several positioning schemes to determine how the combination of text, font, and formatting determine the text objects appearing in the resulting PDF. In the following, we describe positioning schemes of several notable PDF workflows, which we chose based on a informal preliminary survey of publicly-available redacted documents. The results in Section~\ref{sec:wild} confirm these schemes represent a non-negligible portion of redacted documents. \subsubsection{Unadjusted} \label{sec:gdocs} The simplest positioning scheme is one without glyph displacements. For example, Google Docs' \emph{Export to PDF} option produces such PDF output. In this case, the redacted text width is the sum of the advance widths of each glyph in the text object. We call this positioning scheme \emph{unadjusted}. The prevalence of this adjustment scheme varies by redacted document corpus (see Section~\ref{sec:wild}); in our FOIA corpus, for example, about 6\% of redacted text fragments are unadjusted, in contrast to 5.3\% across all corpora. Glyphs in a proportional font such as Times New Roman have different advance widths, so the width of a redacted string gives us information about the characters in it. Times New Roman, for example, has the following letter width sets (in internal font units): \begin{center} \small \begin{tabular}{@{\quad}ll@{\quad\quad\quad}ll} 569 & ijlt & 1251 & ELTZ \\ 683 & Ifr & 1366 & BCR \\ 797 & Js & 1479 & ADGHKNOQUVXYw \\ 909 & acez & 1593 & m \\ 1024 & bdghknopquvxy & 1821 & M \\ 1139 & FPS & 1933 & W \end{tabular} \end{center} For example, the glyphs of \emph{I}, \emph{f}, and \emph{r} are are exactly half the width of the glyphs of \emph{B}, \emph{C}, and \emph{R}. When typeset using Times New Roman, the words \emph{martian}, \emph{templar}, and \emph{mineral} all have the same width, as do the anagrams of those words \emph{tamarin}, \emph{trample}, and \emph{railmen}. When a redacted word comes from a small dictionary, such as names of countries, the unadjusted width may be enough to unambiguously identify the redacted word (see Section~\ref{sec:mutinfmethod}). \subsubsection{Scanned Documents} \label{sec:adobe-ocr} Paper documents that are scanned and converted to PDF contain a raster image of the text on the page. However, it is common for PDF workflows to perform Optical Character Recognition (OCR) on the resulting document.\footnote{ We discuss physical document scanners in Appendix~\ref{sec:doc-scanners}. } The recovered text is added as an overlay of non-rendered text objects that allow the underlying text to be searched and copied. Redaction tools that operate on this OCR overlay must modify both the raster image, removing the image of the redacted text, and the text objects, replacing redacted text with a displacement of the same width. Thus, such documents also leak the exact width of the redacted text. We examined the OCR overlay produced by Adobe Acrobat Pro, one of the most popular PDF editing tools. OCR text objects produced by Acrobat Pro do not contain displacements. Instead, to adjust text object positions, Acrobat Pro uses two other operators, \tcop\ and \tdop. The parameters to these operators, and consequently their effect on redacted glyph positions, can be recovered from non-redacted parts of the PDF (see Appendix~\ref{sec:acro-pro-deets} for details). In most cases it is therefore possible to recover the exact unadjusted width of redacted text. Documents processed with OCR using Acrobat Pro and then redacted using any of the redaction tools described in Section~\ref{sec:tools} are thus vulnerable to the attacks described in this paper. \subsubsection{Microsoft Word Save As PDF} \label{sec:ms-word} The positioning schemes above allow an attacker to learn the exact width of a redacted piece of text, which can be attacked using the techniques in Section~\ref{sec:maxray}. However, we found in Section~\ref{sec:wild} that a large number of redacted documents are authored using Microsoft Word and converted directly to PDF. The most direct way to produce a PDF document in Word is to use the \emph{Save As PDF} command.\footnote{ Other methods for producing a PDF in Word are detailed in Appendix Table~\ref{tab:pdfflows}. } We reverse engineered the glyph positioning scheme produced by this command in Microsoft Word for Windows desktop versions 2007 to 2019.\footnote{Word 2007 to 2016 uses the same scheme. Word 2019 to present versions use another.} We found Word's glyph positioning leaks more information than redacted text width. For brevity, we provide an overview of the Word glyph positioning scheme here and give a complete description in Appendix~\ref{sec:mic-word-deets}. Microsoft Word's \emph{Save As PDF} command converts between two glyph coordinate representations when it generates PDF output. When users edit or create a Word document, they operate on a virtual document representation that determines how Word displays the document on the user's screen. In this internal representation, each glyph has a width that is calculated by Word and used to place text on the screen. However, Word's internal glyph width is not always the same as the glyph advance width in the PDF representation. Word adjusts for this error using glyph displacements (Sec.~\ref{sec:textobj}), however, because of the way Word keeps track of this error, a given character has a cascading effect on the rest of the line. There is an additional level of complexity because Word's positioning scheme depends on the Word document's edit history. For example, changing a character inside a word splits Word's internal representation of a text fragment containing into two fragments. This fragmentation affects the accumulation of glyph width error, and thus the displacements emitted for a line of text. Our \maxray\ tool accounts for this by considering potential edits to a line of text. We validated our Word model on hundreds of lines of Wikipedia text rendered to PDF by Word, several manually-crafted test cases, and text fragments from redacted documents found in the wild. \section{Related Work} \label{sec:related} We are not the first work to discuss deredaction, however, our work is the first to consider the PDF text specification and the first robust attack on redactions where the underlying text is removed. Forrester and Irwin~\cite{forrester2005investigation} discuss trivial redactions and unscrubbed metadata such as the Producer field of PDF documents but do not mention glyph positioning based deredaction. Hill et al., used hidden markov models to recover text obscured either by mosaic pixelization or a related tactic, e.g. guassian blur~\cite{hill2016effectiveness}. While M{\"u}ller et al.~\cite{muller2021processing} do not explicitly tackle redaction, they discuss hidden information present in PDF documents, specifically PDF document revision information and author name metadata. Beyond PDF redaction, other file formats may also be deredacted: Murdoch and Dornseif~\cite{murdochOnline} dicuss how cropped JPEGs can preserve uncropped image information. The primary predecessor to our work is Lopresti and Spitz~\cite{lopresti2004quantifying}, which presents a manual technique for matching glyphs to a redaction's width in a raster image of text. The authors attempted to use natural language processing to predict redacted words, something our work found imprecise given current models. The Lopresti and Spitz work also conflates PDF glyph position specifications with TTF glyph widths and assumes both are equivalent to a raster document's character widths. This presents two problems. First, a rasterization workflow may change a document's glyph positioning: physical printing may not be a pixel-perfect reproduction of the digital document. Second, TTF glyph widths do not necessarily equate with PDF or raster glyph widths. The Lopresti and Spitz techniques also rely on (potentially inaccurate) human determination of glyph positions. Recall the present work provides a fully automatic deredaction method, a precise analysis of leaked information, and a clear measurement of this problem's prevalence in real documents. Outside the scientific community, some government agencies have studied redaction vulnerabilities. The Australian Cyber Security Center~\cite{australiaRedact} analyzed Adobe Acrobat 2017's redaction security and considered several features including encryption, CMap leaks, redactions of text metadata, images, revision metadata, and form metadata. However, this work does not address glyph positioning information leaks and incorrectly determines that Adobe Acrobat leaks no redacted information. The National Security Agency's redaction guide~\cite{nsaRedact} does not mention glyph positioning information but notes any underlying redacted text should be removed from the document before producing a PDF. Changing the underlying text before redaction is a great defense against deredaction attacks, however, we found redactors do not always follow this advisory. Other works consider the problem of \emph{document sanitization} and security~\cite{cumby2011machine, sanchez2013automatic, orekondy2018connecting, sanchez2014utility}). Researchers have developed methods for encoding cryptographic signature schemes into PDF content and analyzing text to find semantically similar content to content marked for redaction. Our work benefits this line of research by identifying additional sources of information inside PDF documents and methodologies for protecting the confidentiality of redacted text. There have been many other cases of poorly redacted, high-profile documents. Trivial redactions have been found in classified US Military documents~\cite{topSecretTrivial}, Manafort case documents~\cite{manafortTrivial}, and documents relating to Larry Page's house~\cite{larryPageTrivial}. The AstraZeneca Contract with the EU disclosed funding information in the PDF's bookmarks~\cite{astraZenica}. One Ghislaine Maxwell interview leaked redacted information on President Clinton in the document's index~\cite{epsteinTrivial}. We also note unpublished work by the Free Law Project, the organization hosting RECAP (Section~\ref{sec:wild}). Their tool, x-ray, uses PDF graphics commands to identify text intersecting rectangles in PDFs (trivial redactions)~\cite{xrayTool}. We improved x-ray's method for locating trivial redactions by incorporating a second pass and notified RECAP of potential and real nontrivial redaction vulnerabilities. \section{Redaction Tools} \label{sec:tools} PDF redaction tools attempt to simulate the effect of physical paper document redaction. Users select parts of a PDF document for redaction and the redaction tool usually replaces those parts of the document with a black rectangle. Ideally, it should be impossible to recover redacted content. Unfortunately, even physical redaction may not remove redacted information because surrounding non-redacted characters may indicate redacted texts' approximate width. We surveyed popular redaction tools to determine how much redacted text information they leak. To find these tools, we searched the web for the terms ``PDF redaction tool,'' ``redaction tool,'' and ``document redaction tool.'' We then downloaded all redaction tools listed in the first five result pages. Table~\ref{tab:tools} lists the tools surveyed. Generally, redaction tools operate in one of three ways. The first family of tools draws a black box over text selected for redaction. As the original text object is still present, users can select and copy the text behind the black box. Two tools that fall into this category are listed as leaking \emph{full text} in Table~\ref{tab:tools}. We consider this to be a severe vulnerability and reported this to their developers (see Sec.~\ref{sec:disc}). The second family of redaction tools, referred to as \emph{displacement preserving}, attempts to perform redaction while maintaining the document's vector format. When applied to text objects, these tools replace the redacted text with a displacement of equivalent width so that text outside the redaction remains at the same position as in the original document. Finally, the tools draw a black box over the redacted area. Table~\ref{tab:tools} lists these as leaking \emph{width and displacements}. Two tools, Opentext Brava and IText PDFSweep, operate as above, but alter the displacements in text objects. Specifically, Opentext Brava changes the underlying PDF font and displacements for non-redacted glyphs, however, the width of the original redaction is maintained to a close approximation and the number of characters in the redacted word leaks because every redacted character is replaced with a space character and displacement. On the other hand, IText PDFSweep considers the displacements applied to each word, removes them, and applies their sum to the last character in the word before redaction. This has the effect of maintaining the exact width of redacted text but removing information relating to how a specific glyph in a given word was originally displaced. Despite technical complexities these are listed as only leaking \emph{width}. The last family of tools, referred to as \emph{rasterizing tools}, starts by converting every document page into a raster image and then blacks out those pixels of the image that fall inside the area selected for redaction. Because pixels outside the redaction area are untouched, it is possible to recover the approximate width of redacted text from the spacing between glyphs on either side of the redaction box. (The width is only approximate because of rasterization.) These tools are listed in Table~\ref{tab:tools} as leaking the \emph{raster width} of the redacted text. How much redacted information can be recovered from a PDF document depends on the source document glyph positioning scheme and on the redaction tool. Table~\ref{tab:vulns} lists the possible vulnerability classes, described below in order of decreasing severity. \emph{Full text recoverable (Full).} Tools that leak the full text only provide the appearance of redaction. The redacted text can be recovered by selecting and copying the text behind the redaction's box. We refer to these redactions as \emph{trivial}. \emph{Microsoft Word attack (Word).} Tools that preserve the original width of the redacted text and displacements in the text following redacted portion make documents vulnerable to attacks against the Microsoft Word positioning scheme (Sec.~\ref{sec:ms-word}). That is, a PDF document created using the \emph{Export as PDF} command in Word and then redacted using these tools leaks more information about redacted text than PDFs produced with unadjusted glyph positioning schemes. Based on our estimates, such an attack leaks up to 4 additional bits of information compared to width alone (Sec.~\ref{sec:synth}). \emph{Exact width attack (Width).} If a document only contains the width of the redacted text, it is vulnerable to a guessing attack, however, the amount of information leaked is less than in the case of the Word positioning scheme. As we show in Section~\ref{sec:synth}, width can leak up to 14 bits of information about the redacted text. There are two kinds of width attacks: those against unadjusted positioning schemes and those against adjusted positioning schemes. The first case happens when the original document contains unadjusted text. Tools that leak both width and displacements and tools that remove displacements will both produce the same output, from which we can recover the unadjusted width of the redacted text. The second case occurs if, for instance, the PDF document had Word displacements and was then redacted using Opentext Brava or IText PDFSweep: the resulting document then leaks the width of the adjusted text but not the displacements required for a complete Word positioning scheme attack. We label this Width$^+$ in Table~\ref{tab:vulns}. This case provides slightly more information than width alone. \emph{Raster width attack (RW).} Rasterizing redaction tools in principle leak width and displacement information, however, we rank the amount of information leaked by these tools slightly below exact width attacks because a displacement unit in a 12-point font measures 1/6,000 of an inch, 10 times the dot size of a 600~dpi raster image. To fully exploit the information leaked by the Word positioning scheme, one would need to differentiate displacement differences of 1 unit, making a Word positioning scheme attack less likely to succeed. However, it may be possible to recover positioning information with sub-pixel accuracy by carefully measuring a glyph's pixels in order to identify minute differences in width.\footnote{We leave the problem of estimating information leaks in raster text to future work.} \emph{Approximate width attack (AW).} Documents that are printed and then scanned may have considerable noise and therefore much lower position accuracy than all-digital PDF workflows. We do not consider attacks on redacted printed PDF documents in this paper. \begin{table} \centering \caption{Type of information leaked by redaction tools. Many tools leak redacted text width and displacement information.} \label{tab:tools} \input{tab/tools} \end{table} \begin{table} \caption{Document vulnerability to deredaction depends on source application and redaction tool. In order of decreasing attack severity we find: Full $>$ Word $>$ Width $\ge$ RW $>$ AW.} \label{tab:vulns} \begin{tabular}{lcccc} \toprule & \multicolumn{4}{c}{Tool family} \\ \cmidrule(lr){2-5} Source document & Full & W+D & Width & Rast \\ \midrule Word 2007--2019 & Full & Word & \phantom{$^+$}Width$^+$ & RW \\ OCR or Unadj. & Full & Width & Width & RW \\ Scanned rast. & Full & AW & AW & AW \\ \bottomrule \end{tabular} \end{table} \subsubsection{Hybrid-Synthetic} The first is \numRECAPPDFs\ PDF documents from the entire CourtListener (RECAP) document corpus. RECAP is a free legal research website that mirrors legal documents obtained from PACER, a paid service for accessing United States court documents. We combed these PDFs for trivial redactions, returning \numRECAPredactions\ redacted words in total, including social security numbers and other sensitive information. We then located names inside this set of redactions matching our dictionaries of first names and surnames from Section~\ref{sec:synth}, including titles (Mr., Mrs., Ms., and Dr.) and first initial, last name combinations, resulting in the \numRECAPNameredactions\ name redactions given in Table~\ref{tab:wildres}. While the number of name redactions is small relative to the total number of documents, ($\alpha$) the expected number of redacted documents among the whole population of court documents is small and ($\beta$) we rely on finding ``naive'' redactions, which are trivially broken and require a supreme degree of negligence (or a small degree of malintent) to produce. We found redacted names within this set of redactions, correctly redacted them, and then used knowledge of the ground truth (the redacted name) to validate (blindly) the efficacy of our techniques (Section~\ref{sec:hybrid-synth}). For these attacks we used a combined dictionary that includes all first names and surnames, \nameDictSize\ names in total. \subsubsection{Nonsynthetic} The second corpus contains four sources of documents: \begin{enumerate} \item A large set of publically available Freedom of Information Act (FOIA) documents~\cite{}. The FOIA allows for public access to government documents on request to US local and national governments, with some restrictions~\cite{}. \item All publically available Office of the Inspector General (OIG) investigation reports~\cite{}. The OIG is an oversight branch of the US government aimed at preventing unlawful or inefficient operation. \item All Digital National Security Archive (DNSA) documents produced after 2010. The DNSA is a set of historically important documents relating to US national security curated by scholars in the field~\cite{}. \item All RECAP documents for which the word ``redact'' occurs somewhere in the document's metadata (denoted rRECAP). \end{enumerate} These corpora were not previously available; data and associated scripts are released alongside our source code. For this second set of documents we had no ground truth; we tagged redactions as names manually from the set of all redactions with a size less than 2.1cm and greater than 0.7cm. This resulted in \totalNonSynthNames\ names from a set of \numNonSynthRedactions\ redactions matching the Word adjustment scheme. An analysis of unadjusted redactions was unnecessary given the pessimistic results of Section~\ref{sec:hybrid-synth}. We found that \emph{none} of the rRECAP redactions we discovered contained guessable names; this is a negative result that indicates that more advanced techniques for redaction location will be necessary for handling extremely large dataset of PDF files (Section~\ref{sec:disc}). Nonetheless we attack the remaining names, demonstrating the extent to which deredaction can be leveraged in the wild, in Section~\ref{sec:nonsynth}. For these attacks we use the same \nameDictSize\ name dictionary we used to attack the Hybrid-Synthetic cases. \subsection{Location} The first step in assessing the vulnerability of redactions is locating redactions. For this, we divide redactions into \emph{trivial}, and \emph{nontrivial} categories. Trivial redactions occur when the text underneath the redaction can be highlighted and copy-pasted using a standard PDF reader. Nontrivial redactions occur when the text underneath the redaction is removed and must be inferred using residual information leakages present elsewhere in the document (Section~\ref{sec:pdftext}). We wrote a location algorithm for trivial redactions based upon a two-pass scheme, where the first pass checks for a rectangle drawn in the PDF intersecting with the coordinates of a text object on the page. If this check passes, we generate two PNGs from the PDF document, one with text objects and the other without, and then, iterating over the coordinates of each word, we check whether the pixel color values do not change across both PNGs: this indicates the words in question were not visible in the original PDF document. Locating nontrivial redactions required a different, more expensive redaction location strategy: we generate a PNG from the PDF document and attempting to ``walk'' the boundaries of boxes placed in-between words on the page. Unavoidable computational costs involved in an effective location algorithm prevented us from running on the entire RECAP corpus: we discuss the costs and challenges of large corpora further in Section~\ref{sec:disc}. We compared this algorithm to one developed by Timothy B. Lee which records graphics state draw commands~\cite{}, finding that there are fewer false positives and more vulnerable width-based redactions are flagged (we also found it equivalent to results attained by manual analysis; see Appendix Table~\ref{tab:redactfind}). \subsubsection{Dictionary Selection}\label{sec:dict-select} We did not choose to attack names by chance: to determine the most common redaction type in these documents we conducted a random sample of 100 redactions from the population and manually tagged their dictionary type, resulting in the following distribution: \begin{center} \begin{tabular}{c|c|c|c|c} Word & Too Long & Name & Number & Pronoun \\ 2 & 26 & 52 & 17 & 3 \\ \end{tabular} \end{center} Thus, while the set of dictionaries from Section~\ref{sec:synth} is comprehensive, it is not necessary reflective of population statistics. Redactions are most commonly names (of some length). We note, however, that names of regions were still important for attacking several redactions in one of our case studies (Section~\ref{sec:snowden}). \subsection{Scheme Identification} \begin{table} \centering \caption{Glyph adjustment schemes identified in pages containing text from the redacted document corpora. The * indicates the hybrid-synthetic reduction from \numRECAPredactions\ word-redactions to name redactions.} \label{tab:wildres} \include{tab/wild-res} \end{table} Once a set of redactions are located, we identify the displacement scheme used to generate the PDF document (Section~\ref{sec:pdftext}). In determining the adjustment scheme of trivial redactions, we use the ground truth for the line in question to match the correct scheme. For determining the adjustment scheme of nontrivial redactions, we model the expected scheme and look for greater than 100 characters on the page that abide by the scheme's metrics. This is counted per-line: an entire line is classified and if it matches, then the characters of that line are counted toward the scheme's score. This is done to handle unimplemented cases of formatting that will affect glyph displacements\footnote{Such as footnotes.} for a subset of the lines on a page. For each of the MS Word cases, we ensure that the prefix of the line prior to redaction exactly matches the MS Word model. We include fixes for the \emph{i,l} displacement scheme discussed in Section~\ref{sec:snowden} in our set of exact MS Word displacement scheme matches for completeness.\footnote{We ensure the page contains both \emph{i} and \emph{l} if classified as needing this fix.} Table~\ref{tab:wildres} gives our results. We find a large number of unrecognized displacement schemes as we do not consider multiple applied transforms, that is, a PDF run through OCR and then opened in Microsoft Word. In the case of the RECAP hybrid-synthetic dataset, a random sample of 20 redacted documents found that all of the unrecognized schemes were either due to an unrecognized unicode character or (mostly) due to specific document layout templates used by US courts, typically implemented in Microsoft Word, but needing additional, idiosyncratic scheme modeling (Section~\ref{sec:word-templates}). Similar problems arise for documents created in email clients, spreadsheets, and templating software; we discuss these more complex flows in Section~\ref{sec:flow-disc} and give an example of two such cases in Sections~\ref{sec:snowden} and \ref{sec:manafort}. The unrecognized adjustment schemes reported in Table~\ref{tab:wildres} are sufficiently distant from our Microsoft Word models: we give a notion of this distance with the \emph{near Word} case: we took the $L_{1}$ distance from the expected Word text showing operator and thresholded a maximum distance of 10 units for every 100 characters. Finally, we note that for all cases (both unadjusted and Word), we attack each redaction at a page granularity, correlating the results for redactions that appear to be the same content (the same name redacted multiple times). \section{Hybrid Synthetic Evaluation}\label{sec:hybrid-synth} In this section we combed our first corpus, the entirety of RECAP, for trivially redacted names (highlighted black but not removed from the PDF). We then correctly redacted the sets of \numRECAPmsw\ names matching Word adjusment schemes and \numRECAPnadj\ names matching an unadjusted scheme and used knowledge of the ground truth (the redacted name) to validate (blindly) the efficacy of our techniques. This also allowed us to examine how well statistical ranking works in resolving cases where deredaction returns more than one possible name. \subsection{Results} \begin{table} \centering \caption{Efficacy of deredaction inside a corpus of court documents for which the ground truth is known. Resultant names weighted according to their popularity in the US census. The ``rank'' here refers to the probability of the name occuring in the US census relative to the other names that were exactly matched.} \label{tab:hybrid-synth-res} \include{tab/hybrid-synth-res} \end{table} Table~\ref{tab:hybrid-synth-res} reports the results for attacking the Word redactions. Note that the number of rows here is smaller than \numRECAPmsw\ due to cross-context information correlation (Section~\ref{sec:ms-word}). For all of the results, \maxray\ correctly deredacted the ground truth name, but not always as an unique match. Here, \emph{Widths} refers to unique matches for the font metrics used by the adjustment algorithm; that is, in one case we got a \emph{perfect} reduction for the guessed name as all other exactly matching names were indistinguishable with respect to their effect on the context of the Word algorithm. \emph{Widths} is in contrast to \emph{Matches}, which reports the number of names matching the target from our dictionary of \nameDictSize. With each of these results we rank the correct name by popularity the US census~\cite{} with respect to other Matches, e.g. a rank of 1 indicates the name is the most popular among the exactly matching guesses. $p_{coll}$ gives the sum of the probabilities of the set of incorrect Matches: the population of the census with that set of names divided by the total population. The z-score for $p_{name}$ with respect to the other Matches is given to provide a sense of the distribution. We also include a measure of the number of names it is necessary to select at random from our corpus of names until the likelihood of having two Matches is greater than 5\%, where the correct name is guaranteed to be included: $$\sum_{k = 1}^{n}\binom{n}{k}(\frac{p_{coll}}{(1 - p_{name})})^{k}(\frac{1 - p_{name} - p_{coll}}{(1 - p_{name})})^{n-k}$$ We find on average, a shortlist of \avgShortlist\ names has no more than a 5\% chance of having a colliding Match, with an average 800-fold reduction of the original dictionary to \avgNumNames, where the correct name is on average \avgStdDevRECAPName\ standard deviations above the mean US census popularity of other Matches. In attacking unadjusted redactions (labeled ``nadj.'' in the table), results were pessimistic: the average match set size was far greater than Word and no redaction attacked resolved to an exact match. However, these methods are still effective if the size of the dictionary used is small (Section~\ref{sec:manafort}). \section{Nonsynthetic Evaluation}\label{sec:nonsynth} \begin{table} \centering \caption{Matching results for the set of names that were extracted out of the redacted document corpora. The FILN column indicates results for matching first initial, last name pairs; other columns correspond to single-word names and names prefixed by the given title.} \label{tab:wildres2} \include{tab/wild-res-2} \end{table} In this section we explore how well deredaction generalizes to an uncontrolled setting: redacted documents occuring in the wild. We focus our attack on documents of public interest (FOIA) and documents relating to government investigations (OIG), finding that historically important and court document redactions are sparse (this is resolved by targeting specific, known cases, see Sections~\ref{sec:epstein} and~\ref{sec:snowden}). We attack, in particular, \numFOIANamesAttacked\ names from requested FOIA documents, and \numOIGNamesAttacked\ names from OIG investigation reports. We note that \numOIGUnmatched\ of the OIG cases and \numFOIAUnmatched\ FOIA cases we examined, we were unable to find any exact match; this is due to a number of factors, including typos, alternative formatting of the name (placing it outside of our dictionary), and use of a FNLN pair (Section~\ref{sec:disc-dict-gen}). For a few of the documents for which it was possible to determine the ground truth redacted names, we were able to manually validate our technique returned the correct name (after the attack was performed). We omit discussion of these results here, choosing to analyze specific \emph{cases} of deredaction more comprehensively in the studies of Section~\ref{sec:cases}. \subsection{Results} Table~\ref{tab:wildres2} reports our results. First, we note that some redactions contribute to the values of multiple columns: i.e. there are cases were a single word name will match a redaction as well as a shorter name prefixed by a title. Where it was possible to resolve this ambiguity, we did so manually, but this was not always possible based upon the context of the document. Determining an \emph{exact} match occurs with a frequency of 3\%, close to the predictions provided by Table~\ref{tab:nyt10}, with a slightly lower success rate due to a smaller amount of information leaked in font sizes greater than 10pt. It is also more apparent here that the set of \emph{width classes} is quite small compared to the total number of classes for the dictionary used. An analysis of TNR, Calibri and Arial in 10, 11, and 12 point font reveal the following statistics for the number of width classes in our \nameDictSize\ name dictionary: \begin{center} \begin{tabular}{c|c|c|c} Min & Median & Mean & Max \\ 68,876 & 109,766 & 135,819 & 214,690 \\ \end{tabular} \end{center} That is, the ratio of the averages of names to matches is \ratioNamesToMatches, while the ratio of dictionary width classes to width matches is \ratioWidthClassesToWidths, more than double the reduction over names alone. From these results, we can conclude the techniques presented in this paper are both practicable and circumstances for their use are common enough to warrant attention. In the next section we demonstrate the flexibility of these attacks by deredacting targeted documents of public interest. \section{Redaction in the Wild} \label{sec:wild} We have now demonstrated the theoretical capabilities of deredaction attacks, however, it is not immediately clear whether these attacks apply outside of a controlled environment. In this section, we show this is the case by identifying thousands of vulnerable redactions across distinct, real-world corpora. We immediately discovered a wide variety of positioning schemes, document formats, and types of redacted information present in real documents. Thus, this section presents a lower bound on deredaction's impact, as we were forced to constrain the redactions analyzed by three factors: \textit{Redaction Method.} While some strategies such as rasterizing after redaction make precise deredaction more difficult, other methods are trivially broken. In Section~\ref{sec:tools}, for example, we found some redaction tools do not remove redacted text from the document. In this section, we measure the number of these trivial redactions in RECAP, a public repository of court documents. We also identify non-trivial redactions in PDF documents containing text objects as this requires fewer assumptions about how analyzed redacted documents were produced, i.e. identifying which software created a document before that same document was redacted and rasterized. \textit{Positioning Scheme.} We focus on non-trivial redactions conforming to the Microsoft Word positioning scheme---the most precise and vulnerable of our identified PDF workflows. This allows us to accurately report a best-case reduction in the space of possible redacted texts. Consistent with our findings in Table~\ref{tab:nyt10}, we found unadjusted cases less scientifically rewarding. Our Microsoft Word scheme models apply to significant number (8.8\%) of redactions. \textit{Prior Information.} Deredaction does not imply semantics and does not always return a unique result. Thus, successful deredaction depends on prior information (in the Bayesian sense). We consider two types of prior information: semantic and syntactic. The latter includes, for example, whether the redacted content is an initial and English surname or an acronym. The former includes context, such as whether the redaction relates to a whistleblower at a specific company. We call this prior information the deredactor's \emph{dictionary}. Restrictions \emph{b}, \emph{c}, and \emph{d} below were made imposed after an exploratory analysis. \emph{(b).} We analyze entity name redactions as these are the most common and because revealing a redacted name creates an immediate privacy risk. In a random sample of 100 redactions from our corpora, 52 were names, 26 were multi-word phrases too long to attack, 17 were numbers, 3 were pronouns, and 2 did not have identifiable semantics. \emph{(c).} Section~\ref{sec:pdftext} noted Microsoft Word's positioning scheme leaks redacted information from left to right. Requiring redactions to be leftmost on a line ensures our statistical results' independence. In practice this assumption is unnecessary and prior information learned from one redaction can deredact another. For example, deredacting a first name or surname can constrain another more secure full name redaction's dictionary. \emph{(d).} We do not analyze redactions of entire lines or very short redactions, e.g. of pronouns. We used a sample of pronoun and three-word phrase widths in our corpora to determine arbitrary bounds for inclusion of less than 2.1cm and wider than 0.7cm. Thus for this section, we consider a redaction vulnerable if it is: \begin{enumerate} \item\textit{Trivially broken.} The redacted text is present in the PDF. or \item\textit{Non-trivial.} The redacted text is not present, but the document retains positioning scheme information where: \begin{enumerate} \item The scheme matches a Word \emph{Save As PDF} workflow. \item The redaction appears to be a name, e.g. of a country. \item The redaction is the first from left to right on the line. \item The redaction is of a reasonable width for deredaction. \end{enumerate} \end{enumerate} In the remainder of this section, we discuss the real-world document corpora we evaluated. We describe our experimental setup by providing algorithms for efficiently locating trivial and non-trivial redactions in PDF documents, identifying text object positioning schemes, and named entity redaction tagging. Finally, we report results for hundreds of deredactions and validate our findings. \begin{table} \centering \caption{Top: Positioning schemes identified in redacted corpora pages. Bottom: Deredaction results for ``Names Tagged''.} \label{tab:wildres} \include{tab/wild-res} \end{table} \subsection{Corpora} \label{sec:wild-corpora} For our study of redaction in the wild, we use four bodies of documents. Table~\ref{tab:wildres} summarizes redaction statistics for these corpora. \emph{FOIA.} Documents obtained via the US Freedom of Information Act (FOIA) on governmentattic.org~\cite{govattic}. This corpus provides us with independently selected documents with some public interest. \emph{OIG.} Office of the Inspector General (OIG) reports hosted by oversight.gov~\cite{oigReports}. The OIG is a US Government oversight branch tasked with preventing unlawful operation of other government branches. This corpus allowed us to measure the impact deredaction may have on documents from a high-profileand large organization. \emph{DNSA.} Digital National Security Archive (DNSA) documents produced after 2010~\cite{dnsaSite}. The DNSA is a set of historical US government documents curated by scholars. We find redaction information leaks also affect significant historical documents---the Snowden files. \emph{RECAP.} CourtListener's RECAP court document archive. RECAP mirrors PACER, the US Federal Courts' docketing system~\cite{pacerSite}. As RECAP is large (over 10 million documents), we use a subset of RECAP documents returned for the search string ``redacted'' to evaluate non-trivial court document redactions and the full archive to evaluate trivial redactions. \emph{rRECAP} refers to the former and \emph{RECAP} refers to the full archive. For the former we find this general search term finds \emph{no} non-trivial named entity redactions matching the Microsoft Word positioning scheme. However, targeted searches discover non-trivial redactions and we demonstrate redaction information leaks affect interesting RECAP documents in Section~\ref{sec:cases}. \subsection{Locating Redacted Text}\label{sec:locate-redactions} Reliably locating redacted text in PDF documents poses several challenges. Even locating trivial redactions is non-trivial. On the one hand, there are several ways to remove text from a document and replace it with a box. Redaction location methods must be general enough to capture most redactions. On the other hand, document elements such as text bullets may structurally resemble a redaction box, so care must be taken to exclude such false positives. For our study, we developed a lightweight algorithm for identifying trivial redactions and a more general, computationally expensive technique for locating non-trivial redactions, described below. \emph{Trivial redactions.} Our trivial redaction location algorithm works in two passes. In the first pass, a combination of Perl and the Poppler PDF library identify commands drawing rectangles in the PDF and checks the rectangles for intersections with text object coordinates. If this check passes, we render the PDF document page as a raster image twice, once with text objects and the other without. We then iterate over the bounding box for each redacted glyph:\footnote{ Because the text objects are inthe original PDF, the text objects marked as redacted are present and the PDF contains coordinate information for each redacted glyph. } if the pixels are the same in both buffers, we consider the word to be redacted. We surveyed all of RECAP for trivial redactions and found \numRECAPtrivialRedaction\ copy-pastable redacted words in US court documents. We manually validated our algorithm on a sample of 400 RECAP documents, finding a false positive rate of 2.5\%, due to black text rendered over a black background. To avoid this, we could record the PDF's Z-axis rendering order, however, existing PDF libraries and (to some extent) PDF itself lack support for Z-axis analysis. The algorithm encounters false negatives when it cannot identify rectangular draw commands. We did not encounter false negatives: redaction draw commands are rarely ambiguous enough to prevent detection. \emph{Non-trivial redactions.} Non-trivial redactions require a more general location algorithm. We render each document page using into a raster image. We first use Poppler to attain the bounding box coordinates for each non-redacted word. We then measure the pixels between each pair of words and attempt to locate a black rectangle inside the space between the words by scanning the space from top to bottom. To avoid scanning every single space character, we apply the ``reasonable width'' restriction (rule \emph{(d)} from earlier). If a black rectangle is found, the rectangle's dimensions are compared to the dimensions of the space between the two words and if the difference is within a small threshold, it is marked as a redaction. We evaluated this algorithm's accuracy on a random sample of 1,000 corpora pages. We compared against Tim B. Lee's~\cite{timblee} prior work on redaction location and against manual identification. Lee's method flags every black rectangle drawn as a redaction and has a high false positive rate for some classes of documents (around 30\% for FOIA and nearly 100\% for RECAP). With respect to false positives and false negatives, our algorithm is equivalent to manual analysis when locating what we deem to be vulnerable redactions. Appendix~\ref{sec:redact-loc-appdx} provides an outline of the location algorithms. \subsection{Identifying Positioning Schemes} \label{sec:adjschmident} The next step in deredaction is positioning scheme identification (see Section~\ref{sec:pdftext} for an explanation of glyph positioning schemes). As ground truth is present, trivial redactions do not require this step. To identify a PDF's positioning scheme, we first model the position of each glyph for that scheme and a given line of text. This model is parameterized by several factors including the document's layout, font size, and edit history. In the present analysis, we consider a PDF page to \emph{match} a scheme if we identify greater than 100 glyphs matching the scheme's model on the page. We only count entire lines: all of a line's glyph positioning information must match the scheme exactly to count toward the 100 glyph threshold. This process avoids partial matches and filters out regions of documents with idiosyncratic formatting. For example, Microsoft Word documents may have alternative layouts for text, e.g. text boxes, tables, and line numbers. Recall from Section~\ref{sec:maxray} we also require potential deredactions to exactly match all present PDF positioning information---for Word based schemes, \emph{any} correct match also helps to assure us the deredacted line's scheme was correctly identified. \subsection{Identifying Vulnerable Redactions}\label{sec:ident-weak} Finally, to avoid the computational costs and inaccuracies associated with trying to attack each redaction using all possible dictionaries of potential redacted text, we manually classified each non-trivial redaction matching the Word positioning scheme as potentially hiding an entity name or not. The \emph{Names} row in Table~\ref{tab:wildres} gives results for the number of named entity redactions we identified. This row is also the number of redactions matching our vulnerable redaction criteria outlined at the start of this section.\footnote{ \emph{Names} also enforces rule \emph{(c)}, that the redaction is first from left to right on the line. } We found 711 redactions in the FOIA corpus, 58 in the OIG corpus, 9 in the DNSA corpus, and none in rRECAP (mentioned above). Other non-trivial redactions are likely vulnerable but require additional positioning scheme models, e.g. for saved PDFs of HTML documents, better unicode support, and better prior information about redacted content. To this end, Table~\ref{tab:wildres} counts redactions within a 10 displacement unit $L_{1}$ distance from our Microsoft Word model. These PDFs were the result of a non-standard workflow, e.g. creation using Word 365, the web interface for Microsoft Word. Appendix Table~\ref{tab:pdfflows} enumerates a subset of these and other non-standard workflows. These results demonstrate the positioning schemes we model are sufficiently unique from others, and modeling other schemes will require a seperate effort. We discuss this effort in Section~\ref{sec:future-work}. We also identified redactions with unadjusted and Adobe OCR positioning schemes. In Section~\ref{sec:eval}, we found these schemes leak 2--3 bits less information than documents produced by Microsoft Word. While these redactions are vulnerable to deredaction using smaller dictionaries, automated attacks (like our own) are less rewarding. \subsection{Breaking Vulnerable Redactions} \label{sec:wildres} We measure deredaction's impact with \numWildAttacked\ vulnerable redactions from the FOIA and OIG corpora. Our dictionary consists of \nameDictSizeComb\ names: the FN, LN, and FILN dictionaries described by Section~\ref{sec:eval} and last names preceded by \emph{Mr.} or \emph{Ms.}. We do not include FNLN as testing this dictionary takes 6 hours on an Intel Xeon Silver 4208, 2.10 GHz, 32-core server and the result sets are typically large. Section~\ref{sec:snowden} discusses the nine DNSA redactions and Appendix~\ref{sec:dictdetail} describes our other proper noun dictionaries, e.g. country names. Table~\ref{tab:wildres}'s bottom reports deredaction results for the \numWildAttacked\ cases. Glyph positioning information rarely results in the unique identification of redacted text. However, deredaction presents a serious threat for constrained dictionaries. Deredaction with a prior, for example, knowing a redaction is a U.S. president's name, sees this success in Section~\ref{sec:cases}. We also find matching redaction guess magnitudes are skewed positively. While FOIA's average case sees around a 3,000-fold reduction in the number of potential redacted texts, the majority of cases see at least a 14,000-fold reduction. Where possible, we validated our findings using public information. For example, a web search found ground truth for pages 5 and 6 of~\cite{oigClosing}. We find unmatched cases are also common: false negatives arise if the word does not occur in the dictionary. Manual analysis of trivial RECAP redactions found 28.2\% of names were of a form occuring in our dictionaries, leaving 71.7\% in other forms (e.g. first name, last name). Our own false negative rate (54.7\%) is lower because, while we extensively tested our positioning scheme models, misleading results (false positives) exist in what appears to be 17\% of cases. False positives occur when \maxray\ returns a set of matches but the redacted name is not in the dictionary. Our validation found no such false positives, though our results sugggest they exist. In general, deredaction ruled out many incorrect guesses---barring adversarial cases (see Sec.~\ref{sec:defenses}), deredaction \emph{is} what \emph{is not there}. \subsection{Ground Truth Validation of Our Attacks} However, trivial redactions provide ground truth so we used these redactions to validate our techniques. For every trivially redacted name in RECAP present in our dictionary, we redacted the name weakly, i.e. we removed the redacted text but preserved non-redacted glyph positioning information. We then formed a set of matching names using \maxray, and we were successful in all cases. Two of these redactions were of a name only without additional text. We report these two results as \maxray's penultimate evaluation: \emph{Mr. Hamilton.} The first redacted name had the form \emph{Mr. Hamilton} (actual name different). The result set contained 24 names and Mr. Hamilton's was the 14th most common based on US Census data. No matched name had the title \emph{Ms.}. \emph{Ms. Schuyler.} The second redacted name had the form \emph{Ms. Schuyler} (actual name different). The result set contained 210 names and Ms. Schuyler's was the 127th most common based on US Census data. No matched name had the title \emph{Mr.}.
\section{\bf Introduction} Heavy quarks are emerging as valuable probes for the study of quark gluon plasma produced in relativistic heavy ion collision. This has its origin in the large mass of heavy quarks which lends them quite a few advantages, viz, they are produced at a time $\approx \frac{1}{2m_Q}$ which is smaller than the formation time of quark gluon plasma. Their large mass ensures that their production can be calculated reliably using perturbative QCD and they may not be deflected substantially from their path due to collisions with quarks and gluons and due to radiation of gluons. The drag experienced by the heavy quarks due to these collisions and radiations however leads to medium modification of their production which is quite similar to those for light quarks, as leading particles~\cite{largedrag,raa}. Recent calculations which treat the so called 'dead cone' with more care~\cite{deadcone}, also show that heavy quarks lose energy in a manner quite similar to light quarks~\cite{dedx}. Recently it has been pointed out that correlations of heavy quarks (charm and anti-charm) can add a richness to these studies by adding several features~\cite{corr}. Consider for example heavy quark production at lowest order of pQCD. They would be produced back to back. The two members of the correlated pair may in general lose different amounts of energy as they may cover differing path lengths in the plasma. However if they do not change their directions, they would continue to remain back-to-back. Now consider that there is a strong flow and that the heavy quarks take part in flow~\cite{flow}. It is now possible that one of them is produced with a transverse momentum parallel to flow velocity and its momentum will increase, while the momentum of the other will decrease. In fact if the radial flow velocity $v_T$ $>$ $p_{T}/E$(charm), the charm will change its direction and the back-to-back correlation may change to forward correlations. When, however, the flow velocity is not collinear with the momentum, the final momenta will be separated by 0 $<$ $\phi$ $<$ $\pi$. Thus while the energy loss is not likely to alter the angular correlation of heavy quarks at lowest order in pQCD, a strong elliptic flow will bring in some interesting and rich structure, the analysis of which could throw some light on interplay of energy loss and flow. There is, however, a substantial production of heavy quarks at next to leading order in pQCD. The NLO process $gg \rightarrow Q\overline{Q}$ can proceed in two ways(among others). Either one of the final state gluons in the process $gg \rightarrow gg$ splits ($g \rightarrow Q\overline{Q}$) or one of the heavy quarks radiates a gluon following $gg \rightarrow Q\overline{Q}$. The pair is expected to be collinear in the first case and deviate back-to-back in the second case. The processes where gluon is emitted from the external legs will fill up the region 0 $<$ $\phi$ $<$ $\pi$. Now energy loss will alter the correlations in a complex manner. If our assumption on heavy quarks not changing direction due to energy loss largely holds then $p_T$ integrated correlation is likely to remain unchanged. However if we now study the correlation for different cuts on $p_T$, some interesting patterns may emerge. Different heavy quarks lose different momenta! We can now discuss correlated decay of charm-anti-charm into electron-positron pair. The invariant pair mass distribution of electron pair obtained from decay shows interesting features. It is seen earlier that large suppression of heavy quark as seen through $R_{AA}$, results in increase of D mesons as well as single electron spectrum at low momentum by a few percent. This characteristic increase is quite different from enhancement due to Cronin effect~\cite{cronin} and is found to be due to large effective drag upon charm by thermalized medium. The invariant pair mass distribution of electron pair obtained from decay shows similar feature from effects due to energy loss by charm quarks~\cite{kampfer}. The electron pairs pile up at low invariant mass region resulting in characteristic enhancement in electron distribution. In the following we study some of these features of the correlation of heavy quarks with collision of lead nuclei at 2760 GeV/nucleon as an example. The paper is organized as follows. Sec. 2 contains formalism for charm production cross-section from pp collisions and lead on lead collision at LHC energy. Sec. 3 contains an empirical model of energy loss employed to determine the medium effect on charm correlation. Sec. 4 presents our results and discussions on azimuthal correlation and correlated charm decay. Finally Sec. 5 gives the summary followed by acknowledgement and bibliography. \section{\b Formulation} The correlation of heavy quarks produced in $pp$ collisions is defined as \begin{equation} E_1\,E_2\,\frac{d\sigma}{d^{3}p_{1}\,d^{3}p_{2}}=\frac{d\sigma}{dy_1\,dy_2\,d^{2}p_{T1}\,d^{2}p_{T2}}=C\,, \label{corr1} \end{equation} where $y_1$ and $y_2$ are the rapidities of heavy quark and anti-quark and $\bf{p_{Ti}}$ are the respective momenta. At the leading order, the differential cross-section for the charm correlation for proton on proton collision is given by \begin{equation} C_{LO}=\frac{d\sigma}{dy_1\,dy_2\,d^{2}p_{T}}\delta{(\bf{p_{T1}}+\bf{p_{T2}})} \label{corr2} \end{equation} One can now calculate~\cite{corr,jamil} \begin{eqnarray} \frac{d\sigma_{pp}}{dy_1 dy_2 d^{2}p_{T}} &=& 2 x_{a}x_{b}\sum_{ij} \left[f^{(a)}_{i}(x_{a},Q^{2})f_{j}^{(b)}(x_{b},Q^{2}) \frac{d\hat{\sigma}_{ij}(\hat{s},\hat{t},\hat{u})}{d\hat{t}} \right.\nonumber\\ &+& \left.f_{j}^{(a)}(x_{a},Q^{2})f_{i}^{(b)}(x_{b},Q^{2}) \frac{d\hat{\sigma}_{ij}(\hat{s},\hat{u},\hat{t})}{d\hat{t}}\right] /(1+\delta_{ij})~, \label{sigma} \end{eqnarray} where $p_{T}$ and $y_{1,2}$ are the momenta and rapidities of produced charm and anti-charm and $x_{a} $ and $x_{b} $ are the fractions of the momenta carried by the partons from their interacting parent hadrons. These are given by \begin{equation} x_{a}=\frac{M_{T}}{\sqrt{s}}(e^{y_1}+e^{y_2})~;~~~~ x_{b}=\frac{M_{T}}{\sqrt{s}}(e^{-y_1}+e^{-y_2})~. \label{x} \end{equation} where $M_{T}$ (= $\sqrt{m_{Q}^{2}+p_{T}^{2}}$), is the transverse mass of the produced heavy quark. The subscripts $i$ and $j$ denote the interacting partons, and $f_{i/j}$ are the partonic distribution functions for the nucleons. The invariant amplitude, $\left|M\right|^2$ in differential cross-section $d\hat{\sigma}/d\hat{t}$ is taken from ref.~\cite{invM}. The processes included for LO calculations are: \begin{eqnarray} g+g \rightarrow c+\overline{c}\nonumber\\ q+\bar{q} \rightarrow c+\overline{c}~. \label{processLO} \end{eqnarray} At Next-to-Leading order the subprocesses included are as follows: \begin{eqnarray} g+g \rightarrow c+\overline{c}+g\nonumber\\ q+\bar{q} \rightarrow c+\overline{c}+g\nonumber\\ g+q(\bar{q}) \rightarrow c+\overline{c}+q(\bar{q})~. \label{processNLO} \end{eqnarray} The eq. \ref{corr1} gives the correlation of heavy quarks from initial fusion in proton-proton collision. The azimuthal correlation of heavy quark for Pb+Pb collision at given impact parameter is given by \begin{equation} E_c\,E_{\bar{c}}\,\frac{dN_{AA}}{d^{3}p_c\,d^{3}p_{\bar{c}}}=T_{AA}E_c\,E_{\bar{c}}\,\frac{d\sigma_{pp}}{d^{3}p_c\,d^{3}p_{\bar{c}}} \label{corr3} \end{equation} For lead on lead collisions at LHC, we have used $T_{AA}$ = 292 fm$^{-2}$ for b = 0 fm We have used CTEQ5M structure function. The factorization, renormalization, and fragmentation scales are chosen as 2$\sqrt{m_c^2+p_T^2}$ and the charm quark mass, $m_c$ has been taken as 1.5 GeV. \section{Energy Loss Formalism} We use the empirical model for the energy loss for charm quarks proposed in one of our earlier paper. We perform a Monte Carlo implementation of our model calculations and estimate the azimuthal correlation as well as correlated decay of charm pair with charm cross-section determined using NLO-pQCD calculations. We assume that the energy loss of heavy quarks proceeds through the momentum loss per collision is given by,~\cite{empeloss} \begin{equation} (\Delta p)_i=\alpha \, (p_i)^{\beta}~, \label{deltapt} \end{equation} so that one can write \begin{equation} \frac{dp}{dx}=-\frac{\Delta p}{\lambda} \label{dpdx} \end{equation} where $\alpha$ and $\beta$ are parameters with best values at $\sqrt{s}$= 2760 GeV/nucleon taken from publication by Younus et al~\cite{largedrag} and $\lambda$ is the mean free path of the charm quark, taken as 1 fm throughout. Thus the momentum of the charm quark after $n$ collisions will be given by \begin{equation} p_{n+1}=p_n-(\Delta p)_n \end{equation} The probability for the charm quark to have $n$ collisions, while covering the path length $L$ is given by \begin{equation} P(n,L)=\frac{(L/\lambda)^{n}}{n!}e^{-L/\lambda}. \label{prob} \end{equation} So now we estimate the largest number of collisions- $N$, which the charm quark having momentum $p_T$ can undergo. Next we sample the number of collisions $n$, which the charm undergoes from the distribution \begin{equation} p(n)=P(n,L)/\sum_{n=1}^N P(n,L) \end{equation} to get the final momentum of the charm(anti-charm) quark. Next we fragment the charm quark using Peterson fragmentation function given by $D$. We have assumed that $D(z)$, where $z=p_D/p_c$, is identical for all the $D$-mesons,~\cite{peterson} and \begin{equation} D^{(c)}_D(z)=\frac{n_D}{z[1-1/z-\epsilon_p/(1-z)]^2}~, \label{frag} \end{equation} where $\epsilon_p$ is the Peterson parameter and \begin{equation} \int_0^1 \, dz \, D(z)=1~. \end{equation} We have kept it fixed at $\epsilon_p$=0.13. Then we have included semileptonic decay of $D(\overline{D})$ mesons by parameterizing electron distribution function taken from Ref.~\cite{altarelli}. Finally we show our results for $dN_{c\overline{c}}/d\Delta\phi$, $dN_{D\bar{D}}/d\Delta\phi$, $E_c\,E_{\overline{c}}dN/d^{3}p_{c}d^{3}p_{\overline{c}}$ and $dN/dM_{e^{+}e^{-}}$. \begin{figure*}[h] \begin{center} \includegraphics[height=3in,width=3in,angle=270]{dNdphiDlt2comp.eps} \caption{(colour on-line)Comparison of D mesons azimuthal spectrum for two different structure functions.} \label{fig1} \end{center} \end{figure*} \begin{figure*}[h] \begin{center} \includegraphics[height=3in,width=3in,angle=270]{dndphil2.eps} \includegraphics[height=3in,width=3in,angle=270]{dndphipg2.eps} \includegraphics[height=3in,width=3in,angle=270]{dndphipl6.eps} \includegraphics[height=3in,width=3in,angle=270]{dndphipg6.eps} \caption{(Colour on-line)$dN/d\Delta\phi$ vs $\Delta\phi$ of $c\overline{c}$ pair for (upper left)$p_T$ $<$ 2.0 GeV, (upper right)$p_T$ $>$ 2.0 GeV. (lower left)$p_T$ $<$ 6.0 GeV, (lower right)$p_T$ $>$ 6.0 GeV.} \label{fig2} \end{center} \end{figure*} \begin{figure*}[h] \begin{center} \includegraphics[height=3in,width=2.8in,angle=270]{dNdphiDlt2.eps} \includegraphics[height=3in,width=2.8in,angle=270]{dNdphiDgt6.eps} \caption{(Colour on-line)same as Fig.2, $dN/d\Delta\phi$ vs $\Delta\phi$ of $D\overline{D}$ pair for (left)$p_T$ $<$ 2.0 GeV, (right)$p_T$ $>$ 6.0 GeV.} \label{fig3} \end{center} \end{figure*} \begin{figure*}[h] \begin{center} \includegraphics[height=3in,width=3in,angle=270]{edndphipt2.eps} \includegraphics[height=3in,width=3in,angle=270]{edndphipt3.eps} \caption{(Colour on-line)Azimuhtal correlation of $c\overline{c}$ pair for (left) $<p_T >$=2.0 GeV, (right) $<p_T >$=3.0 GeV} \label{fig4} \end{center} \end{figure*} \begin{figure*}[h] \begin{center} \includegraphics[height=6in,width=4in,angle=270]{my0.5n.eps} \caption{(Colour on-line)Invariant mass distribution for di-electron (inset)Increase in di-electron spectrum for $M_{e^{+}e^{-}} <$1.0 GeV, shown in linear scale. } \label{fig5} \end{center} \end{figure*} \section{Results and Discussions} We have used NLO-MNR code ~\cite{nlomnr} with CTEQ5M structure function for estimating charm cross-section for all leading and next-to-leading pQCD processes. The scaling factor used is 2$\sqrt{m_{c}^{2}+p_{T}^{2}}$ with $m_{c}$=1.5 GeV. In this paper we have used two different values for parameter '$\beta$'=1.0 for B-H type and $\beta$=0.5 for LPM type of energy loss mechanisms respectively. Correspondingly $\alpha$ = 0.12 for B-H type and 0.25 GeV$^{1/2}$ for LPM type are taken as the best values at $\sqrt{s}$ = 2760 GeV/nucleon. The entire calculation is done for central collision (b=0fm) and for mid rapidity, -0.5$\leq$y$\leq$0.5 To check the consistency in our results we use two different partonic structure functions one of which is CTEQ5M and other an old one MRS125. The comparison is shown in Fig. \ref{fig1}, where the difference in the two distributions is very small and the shape almost identical. However more recent structure functions like CTEQ6M and CTEQ6.6 etc. must be used in order to have more up-to-date results. These issues will be addressed in our next publication on heavy quark correlation. Next let us recall that LO contribution can be differentiated from NLO contribution with different $p_T$ cuts on charm momentum. Leading order processes give back to back charm pairs which are entirely visible around $\Delta\phi$=$\pi$ while NLO contribution is distributed from $\Delta\phi$=0 -- $\pi$. In Fig. \ref{fig2}, we show our results for $dN_{c\bar{c}}/d\phi$ for different $p_T$ cuts. Realizing that all heavy quarks now appear with reduced momenta, we see that if we look at $p_T$ $<$ 2 GeV or $p_T$ $<$ 6 GeV, then the back-to-back correlation rise by up to a factor of 10 for $\phi$ = 0. The results for $p_T$ $>$ 2 GeV or $p_T$ $>$ 6 GeV are more dramatic in that the $\phi$=$\pi$ correlation now reduces by more than a factor of 10 while that for $\phi$=0 decreases from its value for no energy loss. We show $dN_{D\bar{D}}/d\Delta\phi$ for $p_T$ $>$ 6.0 GeV and $p_T$ $<$ 2.0 GeV in Fig. \ref{fig3}. Comparing it with Fig. \ref{fig2} for same $p_T$ regions, we observe certain differences which we now discuss. For $p_T$ $<$ 2.0 GeV, we observe that D meson distribution is slightly higher than charm spectrum at $\Delta\phi$ = $\pi$, although the order of magnitude remains same. While at $\Delta\phi$ = 0, the situation is reversed. Similar observations are noted when figures at $p_T$ $>$ 6.0 GeV are compared. We feel that the above differences are caused by fragmentation function, D(z), which changes the $p_T$ distribution of charm into $p_T$ distribution of D mesons with, 0$\leq$z$\leq$1. Thus the correlation spectra of charm and D mesons may appear slightly different when we look into particular $p_T$ regions. Finally it can be mentioned, D mesons rather than charms are observed in experiments. So calculating D meson correlation and comparing it with charm will give us deeper insights into the correlation study. In Fig. \ref{fig4}, we have $E_c\,E_{\bar{c}}dN/d^{3}p_c\,d^{3}p_{\bar{c}}$ for charm average $p_T$ of 2 GeV and 3 GeV. The figure shows change in azimuthal correlation of charm pairs with pairs at $\Delta\phi$=$\pi$ decreased considerably by inclusion of the energy loss mechanism. To discuss our simple model of charm quark energy loss, we find that most of the charm pairs not only lose energy to shift to the lower momentum region but also back-to-back correlation for many charm pair is altered to almost collinear pairs. Also we find that two different energy loss mechanisms included in our study do not give much different outcomes. Further investigating at much higher momentum regions might bring out the differences between various energy loss mechanisms. The correlation study can be enriched if expanding medium is included in addition to energy loss by charms. Next we move to our results for correlated decay of charm. In Fig. \ref{fig5}, we have $dN/dM_{e^{+}e^{-}}$ for di-electrons from correlated charm decay. We can recall that there is enhancement in D mesons as well single non-photonic electrons due to the effects of large drag on charm quark moving through QGP. Here we find a similar enhancement in di-electron spectrum at midrapidity. For $M_{e^{+}e^{-}}$ less than 1 GeV, there is increase in $dN/dM_{e^{+}e^{-}}$ by almost 12\% which is quite noteworthy considering our model to be simple empirical mechanism of energy loss. \section{Summary} We have studied correlation of charm, D mesons as well as correlated decay of charm using NLO-pQCD processes and a simple empirical model for energy loss. The azimuthal correlations of charm show change when energy loss mechanisms are implemented along with cuts on charm transverse momentum. In case of di-electron distribution, energy loss enhances the electron spectrum slightly for low invariant mass. \section*{Acknowledgments} One of us (MY) acknowledges financial support of the Department of Atomic Energy, Government of India during the course of these studies. \section*{References}
\section{Introduction} \label{intro} The origin of $^{19}\mathrm{F}$ is a widely debated issue in astrophysics. Several stellar environments have been proposed as F production sites: core-collapse Supernovae \citep{Woosley1988}, Wolf-Rayet stars \citep{Meynet2000}, and Asymptotic Giant Branch (AGB) stars \citep{Forestini1992}. Among them, only in AGB stars fluorine synthesis is confirmed by direct spectroscopic observation of [F/Fe] enhancements, see \cite[and references therein]{Jorissen1992, Abia2009}, and recent studies seem to exclude the first two scenarios \cite{Federman2005,Palacios2005}. It was early recognised that the $^{15}\mathrm{N}(\alpha,\gamma)^{19}\mathrm{F}$ reaction is a leading process for the $^{19}\mathrm{F}$ production when He-burning is active. Although the H burning ashes are heavily depleted in $^{15}\mathrm{N}$, which is efficiently destroyed by proton capture, these ashes are enriched in $^{14}\mathrm{N}$. Various reaction chains may lead to the production of $^{15}\mathrm{N}$ nuclei at relatively low temperatures, $\sim100\,$MK. A likely reaction chain is $^{14}\mathrm{N}(\rm n,p)^{14}\mathrm{C}(\alpha,\gamma)^{18}\mathrm{O}(\rm p,\alpha)^{15}\mathrm{N}$, which however requires an efficient neutron source. Some $^{15}\mathrm{N}$ may be also produced by the $^{14}\mathrm{N}(\alpha,\gamma)^{18}\mathrm{F}(\beta^+)^{18}\mathrm{O}(\rm p,\alpha)^{15}\mathrm{N}$, where the protons need to be simultaneously released by the $^{14}\rm N(n,p)^{14}C$ reaction. Therefore the presence of a neutron source is a key requirement. This condition is actually fulfilled in low-mass AGB stars undergoing thermal pulses, where the $^{13}\mathrm{C}(\alpha,\rm n)^{16}\mathrm{O}$ reaction is known to be the main neutron source powering the $s$-process nucleosynthesis in their He-rich mantel \cite{Straniero1995}. The competition with some reactions that destroy $^{15}\mathrm{N}$ and/or $^{19}\mathrm{F}$, such as $^{15}$N(p,$\alpha$)$^{12}$C, $^{19}$F(n,$\gamma$)$^{20}$F, $^{19}$F(p,$\alpha$)$^{16}$O, and $^{19}$F($\alpha$,p)$^{22}$Ne, should also be carefully considered, see e.g. Refs.\cite{Imbriani2012,Lombardo2015} for recent experimental works, and Ref. \cite{Cristallo2014} for a review. \begin{figure*}[!htb] \begin{center} \resizebox{.9\hsize}{!}{\includegraphics{./figure01.pdf}} \caption{Schematic view of the ERNA recoil separator.} \label{fig:ERNA} \end{center} \end{figure*} The rate of the $^{15}$N($\alpha$,$\gamma$)$^{19}$F reaction at relevant AGB temperatures is determined by a number of narrow resonances, the most important being the $E_{\rm c.m.}=364\,$keV one, together with the Direct Capture (DC) component and the tails of two broad resonances at $E_{\rm c.m.}=1323$ and 1487\,keV. The strength of the $E_{\rm c.m.}=364\,$keV resonance has been determined through an indirect measurement reported in \cite{deOliveira1996}. Due to the model dependence of the result an uncertainty of a factor of 2 is assumed for this quantity. In the same work the spectroscopic factors of most of the $^{19}\rm F$ bound states were determined, and on the basis of a single particle transition model the DC component has been estimated. On this latter quantity, according to the survey in Ref. \cite{Longland2010}, an uncertainty of 40\% is generally assumed. The mentioned uncertainties influence the determination of the reaction rate at relevant AGB temperatures. \section{Experimental setup and procedures} The measurement of the $^{15}\mathrm{N}(\alpha,\gamma)^{19}\mathrm{F}$ reaction yield was performed in inverse kinematics, i.e. a $^{15}\mathrm{N}$ beam \cite{DiLeva2012} impinging onto a $^4\mathrm{He}$ windowless gas target, using the European Recoil separator for Nuclear Astrophysics (ERNA). ERNA was originally installed and commissioned at the Dynamitron Tandem Laboratorium of the Ruhr-Universit\"at Bochum, Germany \cite{Rogalla2003,Gialanella2004,Schuermann2004}. In 2009 it was moved to the Center for Isotopic Research on Cultural and Environmental heritage (CIRCE) laboratory in Caserta, Italy \cite{Terrasi2007}. The separator underwent a major upgrade with the addition of the Charge State Selection dipole Magnet (CSSM) directly downstream of the target. A schematic view of the present ERNA layout is shown in Fig. \ref{fig:ERNA}. The ion beam emerging from the 3\,MV tandem accelerator is transported through the CIRCE AMS beamline: a $90^\circ$ analyzing magnet and an electrostatic analizer provide the necessary ion beam purification from recoil-like contaminants. The magnetic field of the analyzing magnet is used to determine the beam energy, while its uncertainty is determined by the opening of the magnet's image slits. The settings used in the presented measurements result in a beam energy uncertainty of about 7\,keV \cite{Buompane2016}. The beam is guided into the $40^\circ$ beam line of ERNA by a switching magnet. A quadrupole triplet after the switching magnet is used to focus the beam onto the windowless gas target \cite{Schuermann2013}. After the gas target, the separator consists sequentially of the following elements: a dipole magnet (CSSM) a quadrupole triplet (QT), a Wien filter (WF1), a quadrupole singlet (QS), a $60^\circ$ dipole magnet, a quadrupole doublet (QD), a Wien filter (WF2), and a detector for recoil identification and counting. Finally, several Faraday cups (FC), and slit systems are installed along the beam line for diagnostic purposes. A Si detector is placed at about $25^\circ$ in the laboratory frame with respect to beam axis, and is collimated with a $\phi=1$\,mm diameter aperture in the second downstream pumping stage of the gas target. This is used to monitor the scattering rate of $^{15}\mathrm{N}$ ions on the post-stripper Ar gas, see below, needed to determine the number of projectiles impinging on the target, $N_p$. The scattering on Ar ensures a smooth behaviour of the elastic scattering yield. Calibration measurements are performed several times between the cross section measurements. The reaction yield is given by: \begin{equation} Y_i = N_p\Phi_qT_{RMS}\eta\int^{E_{\elem{N}{15}}}_{E_{\elem{N}{15}}-T_t}\frac {\sigma(E)}{\varepsilon(E)}\,dE\enspace, \label{eq:Yield} \end{equation} where $\Phi_q$ is the probability of recoils in the $q+$ charge state to enter the separator, $T_{RMS}$ is the separator transmission of recoils in charge state $q+$ to the end detector, $\eta$ is the detection efficiency, $E_{\elem{N}{15}}$ is the beam energy, $T_t$ is the target thickness, $\varepsilon(E)$ is the stopping power of N ions in He. All of these quantities have to be determined in order to extract the cross section $\sigma$ from the observed yield. \subsection{\maybebm{\elem{He}{4}} target characterisation} \label{sec:target} The recoil separator ERNA, in order to measure the $\elem{Be}{7}(p,\gamma)\elem{B}{8}$, has been recently provided with a windowless differentially pumped $\rm H_2$ extended gas target cell \cite{Schuermann2013} with an effective length of about 300\,mm. This cell is too long to achieve the necessary angular acceptance for the measurement of the $\elem{N}{14,15}(\alpha,\gamma)\elem{F}{18,19}$ reaction cross sections. Therefore the central target cell was sectioned with a wall and appropriate apertures, as schematically shown in figure \ref{fig:TargetChamber}. \begin{figure}[!b] \begin{center} \resizebox{.9\hsize}{!}{\includegraphics{./figure02.pdf}} \caption{Schematic top view of the modifications to the extended gas target chamber. The relevant parts discussed in the text are indicated, for further details see Ref. \cite{Schuermann2013}.} \label{fig:TargetChamber} \end{center} \end{figure} As reported in \cite{Schuermann2013}, in the aperture between the first and the second downstream pumping stages, Ar gas is injected in order to have an additional gas layer (post-stripper) that allows recoil ions to reach charge state equilibrium regardless of their actual reaction coordinates within the target. In order to reach the needed angular acceptance, see Sec. \ref{sec:acceptance}, the downstream apertures have the following diameters: post-stripper aperture has $\phi=15$\,mm, aperture toward the downstream cube and the aperture between the two cube pumping stages $24$\,mm and $27$\,mm, respectively. \subsubsection{Target thickness} We have determined the total target thickness through the measurement of the energy loss of several ions, see Table \ref{tab:CSSMscans}. The uncertainties are due to the $\Delta B$ determination, and to the uncertainty on the stopping power values. The total thickness value is $(0.54\pm0.03)\times10^{18}\rm\,atoms/cm^2$. \begin{table}[!b] \begin{center} \small \begin{tabular}{ccccccc} \hline \hline & $E_{\rm Lab}$ & $\Delta$B($\elem{He}4$) & $\varepsilon(\elem{He}4)$ & $\Delta$E($\elem{He}4$) & Thickness \\ Ion & [MeV] & [mT] & [keV\,cm$^{2}$/1E18] & [keV] & [1E18/cm$^{2}$] \\ $\elem{C}{12}$ & 3.5 & 6.13 & 64.3 & $43.3\pm3.1$ & $0.67\pm0.11$ \\ $\elem{N}{14}$ & 3.0 & 7.72 & 79.0 & $46.7\pm2.5$ & $0.59\pm0.09$ \\ $\elem{N}{15}$ & 6.3 & 3.39 & 85.0 & $43.3\pm4.8$ & $0.51\pm0.10$ \\ $\elem{O}{16}$ & 4.5 & 5.05 & 89.8 & $52.6\pm3.5$ & $0.59\pm0.08$ \\ $\elem{F}{19}$ & 4.8 & 5.09 & 103 & $50.2\pm2.7$ & $0.49\pm0.06$ \\ $\elem{F}{19}$ & 3.5 & 5.85 & 94.7 & $49.5\pm3.0$ & $0.52\pm0.07$ \\ \hline \hline \end{tabular} \end{center} \caption{Measured values, results and relevant quantities used in the target thickness determination.} \label{tab:CSSMscans} \end{table}% \noindent It is worth noting that there are some issues regarding the stopping power values of N ions in He gas. This is particularly relevant since the stopping power value at resonance energy is needed to calculate the strength of a resonance from the reaction yield. In general there are not many experimental data available for gaseous targets, see e.g. \cite{IAEAwebsite}, however the stopping power of N in He was measured a significant number of times. The SRIM2003 tables appear to have a worse agreement to the experimental data with respect to the older Ziegler's 1996 calculations \cite{IAEAwebsite}, therefore stopping power values of N in He according to this latter calculation have been used in this work. The stopping power of N in He in the energy range used in the present work is essentially determined by the data of Ref. \cite{Price1993}, where a 2.5\% systematic uncertainty is reported. However since the Ziegler's 1996 is not an actual fit to the experimental data a more conservative 5\% uncertainty is assumed. Thickness of the post-stripper alone, needed to estimate the effect on angular straggling of the recoils, was measured at the working pressure of 10\,mbar, using a 2.5\,MeV $\elem{F}{19}^{2+}$ beam. The observed shift in CSSM field is $\Delta B=(3.96\pm0.08)\,$mT, for a reference field of 1057.3\,mT. SRIM2003 tables report for F in Ar a stopping power of 412\,keV/(1E18\,atoms/cm$^2$) at this energy, thus the corresponding thickness is $(4.5\pm0.5)\times10^{16}\rm\,atoms/cm^2$. Most of the uncertainty is due to an assumed 10\% error on the stopping power. \subsubsection{Density profile} \begin{figure}[!t] \begin{center} \resizebox{.9\hsize}{!}{\includegraphics{./figure03.pdf}} \caption{Top panel: target chamber's walls absorption, experimental data (filled circles) are compared with the predictions of a Geant4 simulation (dots). Both measurements and simulation are scaled to unity at position $\sim0\,$mm. Middle panel: detail of the target chamber top view. Bottom panel: gas density profile of the extended $\elem{He}{4}$ target determined through the $\elem{Li}{7}(\alpha,\alpha')$ reaction. The points are corrected for the absorption of the target chamber, according to top panel. The black line is the calculated transmission of recoils, in a selected charge state, to the end detector. The error bars shown in both panels accounts for counting statistics only.} \label{fig:7Besource} \end{center} \end{figure} \noindent The distribution of the He gas within the target cell was determined through the measurement of the yield of the broad resonance, $\Gamma_{\rm c.m.}=130$\,keV, in $\elem{Li}{7}(\alpha,\gamma\alpha')\elem{Li}{7}$ at the energy of $E_{\rm lab}=3325$\,keV, in a similar way as reported in \cite{Schuermann2013}. In order to correct the observed $\gamma$-ray yield for the absorption by the chamber walls, the experimental setup was simulated with Geant4 \cite{Geant4}. The simulation was validated against a measurement of the relative attenuation of an uncalibrated $\elem{Be}{7}$ source that could be moved along the beam axis of the target chamber. A comparison of the experimental data with the predictions of the Geant4 simulation is shown in Fig. \ref{fig:7Besource}. \noindent The tails of the profile are well determined and account for about 25\% of the total target thickness. The fact that a significant portion of the target gas is located outside the central cell is not an issue with respect to the separator acceptance if the yield of narrow resonances is to be measured, since beam energy can be adjusted to have the reaction to take place mainly at the center of the target. The effect of this feature on the measurement of non resonant cross section is discussed in Sec. \ref{sec:acceptance} \subsection{\maybebm{\elem{F}{19}} charge state probability} The Ar post-stripper equilibrium thickness for $\elem{F}{19}$ ions was determined through a measurement of the charge state probabilities, at several energies as a function of the stripper inlet pressure $P_{\rm stripper}$. In Fig. \ref{fig:stripper} the charge state probability as a function of the post-stripper inlet pressure is shown for the case of of 5\,MeV $^{19}\rm F^{3+}$ ions. On the basis of this measurement the working pressure of $P_{\rm stripper}=10\rm\,mbar$ was chosen. \begin{figure}[!hbtp] \begin{center} \resizebox{.9\hsize}{!}{\includegraphics{./figure04.pdf}} \caption{Charge state probability of $\elem{F}{19}$ ions as a function of the Ar post-stripper inlet pressure $P_{\rm stripper}$ at 5.0\,MeV beam energy. Lines connecting the points are to guide the eye only. The dotted line represents the unmeasured current at this energy, due to non accessible 1+, 2+ charge states and further charge exchanging in the CSSM chamber, see text for details.} \label{fig:stripper} \end{center} \end{figure} We have also measured the charge state probabilities $\Phi_q$ of $\elem{F}{19}$ as a function of ion speed. \begin{figure}[!hbtp] \begin{center} \resizebox{.9\hsize}{!}{\includegraphics{./figure05.pdf}} \caption{Charge state probability of $\elem{F}{19}$ ions emerging from the target as a function of velocity. Filled symbols are experimentally determined values, while empty symbols are estimated values for the unmeasured 1+, 2+ charge states and for the further charge exchanging in the CSSM chamber, see text for details. Curves through points are uncorrelated Gaussian fits. Vertical shaded areas indicate the energy intervals where cross section measurements were performed.} \label{fig:ChargeStateProbability} \end{center} \end{figure} Results are shown in Fig. \ref{fig:ChargeStateProbability}, the curves through the points are gaussian fits to the data, performed independently for each charge state. Due to the limitations of the CSSM magnetic field, not all of the charge states could be measured at all energies. In these cases, the unmeasured charge state probabilities, namely of 1+ and 2+, were estimated from the measured ones. In fact at a given energy, provided that the neutral and fully stripped states are negligibly populated, the probability as a function of the charge state can be assumed to be gaussian. \noindent It has to be noted that the derived $\Phi_q$ do not correspond exactly to the charge state probabilities at the exit of the post-stripper. More precisely they are the fraction of ions that enter the triplet in the given charge state. A small difference is introduced by recoils further charge exchanging in the CSSM, where some residual Ar gas is present over a relatively long distance, leading to the loss of the ions. This feature has been verified observing the variation of the beam current after the CSSM while injecting Ar gas in the CSSM chamber only. The difference amounts altogether to about 5\%, in fact summing all the $\Phi_q$ a value of about 95\% is obtained at all energies, see Fig. \ref{fig:ChargeStateProbability}. \subsection{Acceptance} \label{sec:acceptance} \begin{figure}[!b] \begin{center} \resizebox{.9\hsize}{!}{\includegraphics{./figure06.pdf}} \caption{Ratio of the observed yield $Y$ with respect to the central yield $Y_0$ of the $E_{\rm c.m.}=1323$\,keV resonance as a function of the energy set for the separator.} \label{fig:acceptancescan} \end{center} \end{figure} The transmission of the recoils to the end detector, $T_{RMS}$, was measured to be 100\% using a $\rm^{19}F$ ion beam varying the energy and angle to scan the volume of the phase space occupied by the recoils. An electrostatic deflection unit has been used to mimic the recoil cone with a maximum opening angle $\vartheta_{\rm max}$, which is calculated according to reaction kinematics and straggling effects due to the interaction with target and post-stripper gas. As a further test of the separator acceptance, we have used the yield of the $E_{\rm c.m.}=1323$\,keV resonance. A scan of the target was performed and then the energy of the beam was set to the middle of the plateau. Then several measurement were performed varying the energy to which the separator was {\em tuned}. Results are shown in Fig. \ref{fig:acceptancescan}. The experimental points show a flat-top plateau, indicating a broad region of full acceptance, and then the reaction yield sharply drops, indicating that the limit of the energy acceptance, or the limit of angular acceptance, or both, is reached. Moreover reaction yield measurements of the $1323\,$keV resonance performed in the 3+ and 4+ charge states, characterised by quite different charge state probabilities, have given very consistent results, see Fig. \ref{fig:Yield} top panel. \section{Experimental results and analysis} The reaction yield of the two broad resonances at $1323\,$keV and $1487\,$keV, corresponding to the $\rm^{19}F$ states at $E_x=5337$ and $5500.7\,$keV respectively, was measured. Ion identification and counting was done using an Ionization Chamber with a fractioned anode as a $E_{\rm rest}$-$\Delta E$ telescope (ICT). In Fig. \ref{fig:spectrum} a sample spectrum is reported. The reaction yield as a function of energy is shown in Fig. \ref{fig:Yield}. \begin{figure}[!hbtp] \begin{center} \resizebox{.9\hsize}{!}{\includegraphics{./figure07.pdf}} \caption{Sample ICT $E_{\rm rest}$-$\Delta E$ spectrum for ions identification and counting, collected at $E_{^{15}\rm N}=7.06\,$MeV.} \label{fig:spectrum} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \resizebox{0.9\hsize}{!}{\includegraphics{./figure08A.pdf}} \resizebox{0.9\hsize}{!}{\includegraphics{./figure08B.pdf}} \caption{Reaction yield per incident projectile observed for the two broad resonances at $1323\,$keV and $1487\,$keV, top and bottom respectively. Open circles and filled squares indicate measurements of recoils in the 4+ and in the 3+ charge state, respectively.} \label{fig:Yield} \end{center} \end{figure} Since both resonances are relatively broad, the expected yield has been calculated through the convolution of the resonance cross section $\sigma_{BW}(E)$ and the target profile according to Eq. \ref{eq:Yield}. The stopping power of N ions in He has a negligible variation over the target thickness, and the average value of $\rm77.2\,keV\,cm^{2}/1E18\,atoms$ is used for the analysis of both resonances. The cross section $\sigma_{BW}(E)$ is calculated using the Breit-Wigner formula \begin{equation} \sigma_{BW}(E) = \pi\lambdaslash^2\frac{2J+1}{(2J_t+1)(2J_p+1)}\frac{\Gamma_\alpha(E)\Gamma_\gamma(E)}{(E_R-E)^2+\left(\frac{\Gamma(E)}2\right)^2}\enspace, \end{equation} where $\lambdaslash$ is the projectile reduced de Broglie wavelength, $J$, $J_t$, $J_p$ are the total angular momenta of the resonance, the target nucleus and the projectile, respectively, $E_R$ is the resonance energy, and $\Gamma_\alpha$ and $\Gamma_\gamma$, are the observed partial widths. Their energy dependence is calculated according to \cite{BookIliadis}: \begin{equation} \Gamma_\alpha(E) = 2P_\alpha(E)\gamma_\alpha^2 \enspace, \end{equation} where $\gamma_\alpha^2$ is the observed reduced width and $P_\alpha(E)$ is the penetration factor \begin{equation*} P_\alpha(E) = R\left(\frac k{F_l^2+G_l^2}\right)\enspace, \end{equation*} with the radius $R=\rm5.07\,fm$, $F_l$ and $G_l$ are the regular and irregular Coulomb wave functions, respectively, while \begin{equation} \Gamma_\gamma(E) = \Gamma_\gamma(E_R) \sum_i B_{\gamma i}\left[ \frac{E + Q - E_{xi}}{E_R + Q - E_{xi}}\right]^{2L_i+1} \enspace, \label{eq:GammagammaE} \end{equation} where $Q$ is the reaction $Q$-value, $B_{\gamma i}$ is the primary $\gamma$-ray branching ratio to the final state having excitation energy $E_{xi}$, and $L_i$ is the multipolarity of the $i$-th $\gamma$-ray transition. The $B_{\gamma i}$ values are taken from the ENSDF database \cite{ENSDF}. While the multipolarity of the primary transitions are known for the 1487\,keV resonance, they are not for the 1323\,keV resonance and are assumed to be 1. However it has to be noted that the energy dependent term of Eq. \ref{eq:GammagammaE} differs from unity at most by a fraction of a percent over the measurement energy range even in case of transitions of multipolarity 2. \noindent Fits of $\sigma_{BW}(E)$, according to Eq. \ref{eq:Yield}, to the experimental data are performed using a least square function (LSF). The expected yields calculated according to our best fit values are shown in Fig. \ref{fig:Yield}. In order to exclude that this result might be an artefact of a wrong target thickness determination rather than a sizeably larger resonance total width ($\simeq\Gamma_\alpha$), a study of the correlation of these two quantities has been performed. This check was done choosing uniformly distributed random values for $T_t$ and $\Gamma_\alpha$, that were kept fixed and the LSF minimised with respect to the other parameters, namely resonance energy $E_R$ and $\Gamma_\gamma$. Results are shown, for both resonances, in Fig. \ref{fig:LSF}. Our determination of the target thickness $T_t$ leads to fit of the data with a LSF close to the absolute minimum, for both resonances, thus excluding possible issues with respect to this aspect. Literature values for $\Gamma_\alpha$ would lead to LSF minimum values quite far from the absolute minimum. \begin{figure}[!hbt] \begin{center} \resizebox{.83\hsize}{!}{\includegraphics{./figure09A.pdf}} \resizebox{.83\hsize}{!}{\includegraphics{./figure09B.pdf}} \caption{LSF minima contour plots, as a function of the target thickness and $\Gamma_\alpha$, for the 1.323\,MeV and the 1.487\,MeV resonances, top panel and bottom panel, respectively. The vertical line is the experimentally determined target thickness, the shaded area its uncertainty. Horizontal lines are literature values of $\Gamma_\alpha$ and shaded area their uncertainties. The dot indicates the best fit values.} \label{fig:LSF} \end{center} \end{figure} \noindent It has to be noted that even at the absolute minimum, the LSF for the 1323\,keV resonance shows quite high values (reduced $\chi^2\sim20$). Therefore for the calculation of the LSF in the fit of the 1323\,keV resonance, the statistical uncertainty of the experimental points has been inflated by a factor of 1.5\,. However this inflation has no influence on final parameter values nor on the final uncertainties estimation, since these are obtained through a Monte Carlo (MC) procedure, described below, rather than the error matrix at LSF minimum. \begin{table}[t] \begin{center} \begin{tabular}{lccc} \hline \hline & this work & \cite{Tilley1995} & \cite{Wilmes2002} \\ \hline \multicolumn{2}{l}{1323\,keV resonance} \\ $E_R$ [keV] & $1331.4\pm1.6$ & $1323\pm2$ & \\ $\Gamma_\gamma$ [eV] & $1.62\pm0.09$ & & $1.69\pm0.14$ \\ $\Gamma_\alpha$ [keV] & $2.51\pm0.10$ & & $1.3\pm0.5$ \\ \hline \multicolumn{2}{l}{1487\,keV resonance} \\ $E_R$ [keV] & $1486.1\pm1.9$ & $1486.7\pm1.7$ & \\ $\Gamma_\gamma$ [eV] & $2.2\pm0.2$ & 2.13 & $1.78\pm0.17$ \\ $\Gamma_\alpha$ [keV] & $6.0\pm0.3$ & $4\pm1$ & $4.7\pm1.6$ \\ \hline \hline \end{tabular} \end{center} \caption{Parameters of the measured resonances as obtained from the MC procedure. Most of the uncertainty on the $E_R$ values is due to the beam energy determination.} \label{tab:Results} \end{table} The recommended values and uncertainty on the resonances parameters, reported in Table \ref{tab:Results}, are obtained through a MC procedure, so that besides the statistical uncertainty also the uncertainties on the target thickness and the other quantities contributing to the overall systematic uncertainty are correctly reflected in the results. In the MC procedure 5000 fits are performed. For each fit a pseudo-dataset is generated through a gaussian distribution of the measured values, used as central values and the uncertainty as $\sigma$, in addition the target thickness and an overall normalisation parameter are set to randomly generated values. The target thickness is generated according to a normal distribution, while the normalisation parameter is in part normally distributed, according to charge state probability, scattering rate and stopping power uncertainties, as reported in Table \ref{tab:systematics}, and in part uniformly distributed, according to current reading uncertainty, estimated to be 3\% at all energies. Then the LSF is minimised with respect to parameters $E_R, \Gamma_\alpha$ and $\Gamma_\gamma$. The parameters distributions are shown in Fig. \ref{fig:ParDistribution}. \begin{figure} \begin{center} \resizebox{.46\hsize}{!}{\includegraphics{./figure10A.pdf}} \resizebox{.46\hsize}{!}{\includegraphics{./figure10B.pdf}} \caption{Parameter value distributions, $\Gamma_\alpha, \Gamma_\gamma, E_R$ (top to bottom), of the 1323\,keV (left) and the 1487\,keV (right) resonances, as obtained from the MC procedure.} \label{fig:ParDistribution} \end{center} \end{figure} Some of the distributions obtained are slightly asymmetric but still rather close to normal. Therefore best value and uncertainty are obtained through a gaussian fit to the histograms, the uncertainty on beam energy determination, that contributes to $\delta E_R$, is added afterwards. \begin{table}[!t] \begin{center} \begin{tabular}{lccc} \hline \hline resonance energy [keV] & $\delta \Phi_{q}$ & $\delta N_p$ & $\delta\varepsilon_{^{15}\rm N}$ \\ \hline 1323 & 2.1\% & 2.2\% & 5.0\% \\ 1487 & 3.2\% & 4.0\% & 5.0\% \\ \hline \hline \end{tabular} \end{center} \caption{Relative uncertainties affecting overall normalisation: charge state probability $\delta\Phi_{q}$, number of incident projectiles $\delta N_p$, and stopping power $\delta\varepsilon_{^{15}\rm N}$. Uncertainty on current integration is 3\% at all energies.} \label{tab:systematics} \end{table} Our determination of $E_R$ for the lower energy resonance is significantly different from the literature value of 1323\,keV reported in \cite{Tilley1995}, that in turn is based on the data of \cite{Rogers1972}. It is worth noting that \cite{Tilley1995} as regards this resonance makes a reference to \cite{Kraewinkel1982}. In that work this resonance is not explicitly discussed, however the resonance profile shown, Fig. 23, panel g, appears to be consistent with a larger $E_R$ value. In addition $E_R$ values derived from $p$ and $\alpha$ inelastic scattering experiments, and $(p,\gamma)$ measurements are somewhat larger than 1323\,keV, although with larger uncertainties \cite{Tilley1995}, in better agreement with our result. As concerns the widths, the $\Gamma_\gamma$ and $\Gamma_\alpha$ values obtained in the present work for the 1487\,keV resonance are compatible with earlier determinations \cite[and references therein]{Wilmes2002}, also the 1323\,keV resonance $\Gamma_\gamma$ is found in an excellent agreement with literature value, while a significant difference is found for the $\Gamma_\alpha$. Most notably the precision on the $\Gamma_\alpha$ values has been improved to about 5\%. \section{Conclusions} The recoil separator ERNA has been used to directly measure the reaction yield of the two broad resonances at $E_{\rm c.m.}=1323$ and 1487\,keV. On the basis of the experimental data their $\Gamma_\gamma$ and $\Gamma_\alpha$ are determined. While agreement within uncertainty is found with earlier determination of the 1487\,keV resonance widths, a significant difference is found for the 1323\,keV $\Gamma_\alpha$. The improved determination of the broad resonances widths, influences the reaction rate, and its uncertainty, at AGB relevant temperatures. However at low temperatures the reaction rate is dominated by the DC component and the narrow resonance at $E_{\rm c.m.}=364\,$keV. Both components are presently known only through indirect measurements \cite{deOliveira1996} and, as mentioned, are affected by large uncertainties. In Fig. \ref{fig:FractionalRate} the contribution of each resonance with respect to the total reaction rate is shown as a function of the temperature. It is worth noting that fractional contributions to the reaction rate presented in Fig. \ref{fig:FractionalRate} are calculated according to central values and do not bring any information on the uncertainties. As mentioned the DC component and the $E_{\rm c.m.}=364\,$keV are largely uncertain, and therefore the relative contributions may vary sizeably. \begin{figure}[!htb] \begin{center} \resizebox{.9\hsize}{!}{\includegraphics{./figure11.pdf}} \caption{Fractional contribution of resonances and DC component to the total reaction rate of the $\rm^{15}N(\alpha,\gamma)^{19}F$, as a function of the temperature. The resonances are identified with their center-of-mass energy in keV.} \label{fig:FractionalRate} \end{center} \end{figure} The two investigated resonances contribute to the low temperature reaction rate through their tails. Our new determination of the $\Gamma_\alpha$s increases their contribution to the reaction rate by about 15\% at relevant astrophysical energies, with respect to the rate calculated according to literature values. The relative astrophysical implications will be discussed elsewhere. We plan to extend the measurements towards lower energies, hopefully as far as to directly determine the strength of the $E_{\rm c.m.}=364\,$keV resonance that is presently known only through indirect measurements \cite{deOliveira1996} with a factor of 2 of uncertainty. Possibly also a direct determination of the DC component at around $E_{\rm c.m.}\sim1\,$MeV will be possible. \acknowledgments The Authors thank F. de Oliveira for enlightening discussions.\\* This work was partially supported by INFN and by MIUR under the grants FIRB RBFR08549F and PRIN 20128PCN59. L.R.G. acknowledges financial support under the grants FAPESP 2014/11670-0 and Internationalization Program UCLV 2016.
\section{Introduction} Cell-free massive multiple-input multiple-output (CF-mMIMO) system is envisioned as a key enabler for 6G communication systems. \cite{Zhang2019:CFMM_paradigm,akyildiz20206g,zhang2020prospective,matthaiou2021road}. CF-mMIMO system contains many access points (APs) that are connected to a central processing unit (CPU) and jointly serve all the user equipment (UE) by coherent joint transmission, and reception \cite{Ngo2015:CellFree_SPAWC, Nayebi2015:CFMMS_Asilomar}. The name cell-free comes from the fact that there are no boundaries, and each AP serves all the UEs. This differs from a conventional small-cell system where each AP serves only a particular set of UEs. Early works in \cite{Ngo2017:CellFree, Nayebi2017:CellFree} compare the CF-mMIMO system with a conventional small-cell system and show a multifold improvement in 95\%-likely throughput can be expected from CF-mMIMO. \subsection{Prior Art} The spectral efficiency (SE) has been studied extensively for both uplink and downlink CF-mMIMO systems for various receiver schemes and fading channels. The authors in \cite{Nayebi2016:CFMMS_Receiver} analyze the SE of the uplink CF-mMIMO system under Rayleigh fading, with minimum mean squared error (MMSE) and large-scale fading deciding (LSFD) receivers. In \cite{Bashar2019:Uplink_CFMMS}, the minimum uplink achievable rate is derived and then maximized under per-user power constraint. In the downlink scenario, the CF-mMIMO system with APs having multiple antennas was considered in \cite{Nguyen2017:EEinCFMM_ZF, Ngo_2018:CellFreeTotal_Energy}. Here, the system's total energy efficiency was maximized through the power allocation algorithm and AP selection scheme. The uplink and downlink SE of a CF-mMIMO system over Rician fading where the phase shift of the line-of-sight (LoS) component is modeled as a uniform random variable (RV), were analyzed in \cite{Emil2019:RicianTWC}. \par Furthermore, several prior works in literature have comprehensively analyzed the SE for different hardware constraints. For example, the downlink SE of the CF-mMIMO system with low-resolution ADC at APs and UEs is investigated in\cite{hu2019cell}. They considered multiple antennas at APs and a single antenna at UEs. It was found that by increasing the number of antennas at the APs, the performance loss due to low-resolution ADCs at the APs can be mitigated. In \cite{zhang2019performance}, authors consider the CF-mMIMO system with multiple antennas at APs and UEs. The low-resolution ADCs presence was considered only at the APs, and the uplink SE of the considered system was derived. The authors in \cite{Zhang2018:CFMM_HardImp} considered a CF-mMIMO system with transceiver hardware impairment and derived the achievable SE for both uplink and downlink. A CF-mMIMO system with a limited capacity link between APs and CPU \textit{i.e.,} only the quantized signal is assumed to be available at CPU, is considered in \cite{bashar2019max}. \par The robustness of a CF-mMIMO system in the presence of an active eavesdropper was studied in \cite{Hoang2018:CFSecurityPC}. The authors in \cite{papazafeiropoulos2020performance} derived the coverage probability of the CF-mMIMO system using the tools from stochastic geometry under the assumption that the AP locations follow the Poisson point process. Some fundamental aspects of the CF-mMIMO system, like channel hardening and favorable propagation, are investigated in \cite{Chen2018:ChanHard_FavProp_CFMM_SG} using stochastic geometry. It was shown that channel hardening is not expected in general, but for the three-slope pathloss model \cite{Ngo2017:CellFree} and multiple antennas at APs, it can be achieved. Similarly, favorable propagation is experienced better under smaller pathloss and higher antenna density. Finally, there has been a detailed SE analysis of variants of the CF-mMIMO system, known as the user-centric CF-mMIMO system (UC CF-mMIMO) in \cite{Buzzi2017:CFMM_UC,buzzi2019user,alonzo2019energy,shekhar2022joint}. In a UC CF-mMIMO system, each AP serves only a predefined number of UEs rather than all. All the papers mentioned above derived and analyzed SE using the popular use-and-then-forget (UaTF) bound. Also, one can utilize the use-and-then-forget bound only for deriving a lower bound on the SE rather than for other vital metrics, such as outage probability (OP) that depend on the tail characteristics of the signal-to-interference-plus-noise ratio (SINR). \subsection{Characterization of Outage Probability (OP)} For deriving any expressions for OP, characterization of the probability density function (PDF) or the cumulative distribution function (CDF) of the SINR at the APs is imperative. However, in a CF-mMIMO system, for Rayleigh fading, the numerator and denominator of the SINR are sums of correlated Gamma random variables (RVs). Determining the PDF/CDF of the ratio of correlated Gamma RVs is mathematically intractable \cite{Suman2015:OutageKappa}. Therefore, to the best of our knowledge, there has been no prior literature analyzing the OP of a CF-mMIMO system. \par Even in massive MIMO (mMIMO), there have been very few papers studying OP, all of which consider various approximations. For example, in \cite{Feng2016:PoutMassiveMIMO}, the authors consider the OP of a downlink mMIMO system with matched-filter precoding. The numerator term of SINR is treated as a deterministic quantity replaced by its mean, and the interference term is treated as an RV. The PDF of SINR can therefore be obtained by a simple transformation of the interference term's PDF. A similar method is used in \cite{Ding2018:OutageADCMassiveMIMO}, where the base station (BS) is equipped with ADCs of different resolution levels. Here, it is shown that the squared coefficient of variation (SCV) of all the terms except one of the interference terms approaches zero as the number of antennas approaches infinity. Therefore, one can determine the \emph{approximate} PDF of the SINR by determining the PDF of that interference term. However, it may not always be possible to show that the SCV of all but one of the terms of the SINR becomes zero. Even if two non-zero terms exist, the PDF becomes intractable to characterize, and such is the case in CF-mMIMO. In \cite{srinivasan2019analysis}, the SINR is approximated to a Gamma RV by moment matching. The OP is then obtained in terms of the CDF of the Gamma RV. However, the efficacy of moment-matching depends on the distribution to which the metric is matched. Also, in many cases, it is algebraically complex to determine the expressions for the moments\footnote{We conducted trials to approximate the SINR RV by Gamma RVs. However, the resultant expressions did not match the simulated OP. This is elaborated in Section \ref{Sec: CFOP_Outage_Results}.}. In a few other works, such as \cite{Atapattu2017:ExactOutageMIMO, Beiranvand2018:MRCMassiveMIMO}, the exact expressions for OP are derived under perfect CSI and i.i.d. channel. \subsection{Contributions} For CF-mMIMO, assuming perfect CSI knowledge at the APs and i.i.d. channels are not practical \cite{Ngo2017:CellFree}. Also, it is essential to consider the effect of pilot contamination during the channel estimation phase on the resultant OP. Although an exact expression for OP can be written using conditional expectation, which results in a multi-fold integral of order $M$, where $M$ denotes the number of APs. It is generally challenging to evaluate such multi-fold integrals even numerically when $M$ is large, which is the typical case for the CF-mMIMO systems. Therefore, this paper uses two approaches to obtain novel OP expressions for the CF-mMIMO system. For the case of no-pilot contamination, to evaluate the multi-fold integrals, we propose to exploit a uni-variate dimension reduction method to reduce the $M$th order integration into $M$ single-order integration approximations \cite{Rahman2004:Integral_DimensionReduction}. Secondly, for the case of pilot contamination, we provide a two-step moment matching method to approximate the SINR by a Log-normal RV from which OP can be directly evaluated. We compare and contrast our work to the existing critical literature in Table \ref{Tab: CFOP_prior_work}. The obtained expressions are straightforward to evaluate and novel. Our contributions through this paper are summarized as follows: \begin{itemize} \item \textbf{Two-step moment matching method} We study the OP of the uplink of the CF-mMIMO system with and without pilot contamination for Rayleigh fading. We approximate the SINR with a Log-normal RV using a two-step moment matching method. A simple approximation for the OP is obtained using the Log-normal CDF. \item \textbf{Uni-variate dimension reduction method:} For the case of no-pilot contamination, we derive an exact expression for the OP in terms of a multi-fold integral. A novel approximation is then derived by reducing the multi-fold integral using the uni-variate dimension reduction method\cite{Rahman2004:Integral_DimensionReduction}. \item \textbf{Special cases:} Using the SINR as mentioned earlier characterization, we propose an alternative to UaTF for evaluating SE. Furthermore, we obtain approximate OP expressions in terms of simple elementary functions for a single-cell collocated mMIMO system. \end{itemize} \begin{table}[] \centering \begin{tabular}{|C{1.5cm}|C{1.5cm}|C{1cm}|C{4cm}|C{1.5cm}|} \hline Reference & Scenario & Metric & Methodology & Imperfect CSI \\ \hline \cite{Ngo2017:CellFree,Nayebi2017:CellFree} & Cell-Free & SE & Use and forget bound & $\checkmark$ \\ \hline \cite{Feng2016:PoutMassiveMIMO} & mMIMO & OP & SCV & \texttimes \\ \hline \cite{Ding2018:OutageADCMassiveMIMO} & mMIMO & OP & SCV & $\checkmark$ \\ \hline \cite{srinivasan2019analysis} & mMIMO & OP & Moment matching & \texttimes \\ \hline \cite{Atapattu2017:ExactOutageMIMO} & mMIMO & OP & Exact analysis & \texttimes \\ \hline \cite{Beiranvand2018:MRCMassiveMIMO} & mMIMO & OP & Moment matching & \texttimes \\ \hline This work & Cell-Free \& mMIMO & OP & Two-step moment matching \& uni-variate dimension reduction & $\checkmark$ \\ \hline \end{tabular} \caption{Our paper vis-a-vis key existing literature} \label{Tab: CFOP_prior_work} \end{table} \subsubsection*{Organization} The rest of the paper is structured as follows. In Section \ref{Sec: CFOP_SystemModel}, the considered system model of the CF-mMIMO system is discussed in detail. Section \ref{Sec: CFOP_Outage_OPAnalysis} presents the analytical expression of OP and rate obtained via two-step moment matching and uni-variate dimension-reduction approach. The simulation results and discussion is given in Section \ref{Sec: CFOP_Outage_Results} and finally the conclusion are drawn in Section \ref{Sec: CFOP_Conclusion} \subsubsection*{Notation} In this paper, $\mathcal{CN}\left(a,b\right)$ denotes the complex Gaussian distribution with mean $a$ and variance $b$. $\operatorname{LN}(\mu, \sigma^2)$ represent the Log-normal distribution with parameters $\mu$ and $\sigma$. The mean and variance of RV $ X $ are denoted by $\mathbb{E}\left[X \right]$ and $ \mathbb{V}\left[X \right] $. $\operatorname{Cov}\left( X,Y\right)$ represents the covariance between RVs $X$ and $Y$. $\operatorname{diag}(a_{1},\cdots,a_{M})$ denotes a $M \times M$ diagonal matrix with entries $a_{1},\cdots,a_{M}$. Also, $ \left(a\right)_{n} $ denotes the Pochhammer symbol, $ \operatorname{U}\left(\cdot \right) $ is the unit step function, and $\mathfrak{Re}\left(z \right)$ is the real part of $z$. \section{System Model}\label{Sec: CFOP_SystemModel} We consider a cell-free massive MIMO system with $ M $ APs and $ K $ users where $ M \gg K $, \textit{i.e.}, the number of APs is much more than that of users. Each AP is equipped with $ N $ antennas, and the users are equipped with a single antenna. The channel between the $ m $th AP and the $ k $th user is modeled as a Rayleigh fading channel. Let $ \mathbf{g}_{mk} \in \mathbb{C}^N$, represent the channel vector between the $ m $th AP and the $ k $th user and we have, \begin{equation} \begin{aligned} \mathbf{g}_{mk} \sim \mathcal{CN}\left(\mathbf{0},\beta_{mk}\mathbf{I}_{N}\right), \end{aligned} \end{equation} where $ \beta_{mk}$ represents the large scale fading coefficients between the $ m $th AP and the $ k $th user. Note that this model is similar to the one assumed in \cite{Ngo_2018:CellFreeTotal_Energy}. We assume that the knowledge of $ \beta_{mk}$ is available at both the AP and the user. Let $ \tau_{c} $ be the length of the coherence interval (in samples). Typically, in a cell-free massive MIMO system, the coherence interval is partitioned into three phases, namely the uplink training phase, uplink data transmission phase, and downlink data transmission phase \cite{Ngo2017:CellFree}. In this work, we do not focus on downlink data transmission. Let $ \tau_{p} $ be the length of the uplink training duration (in samples). Therefore, $\left(\tau_{c} - \tau_{p}\right) $ is the duration of the uplink data transmission phase. The process of uplink training and uplink data transmission is described in the following subsections. \subsection{Uplink Training}\label{SubSec:CFOP_Outage_UT} Before the transmission of uplink data by users, APs will acquire the channel state information (CSI) through a training phase. This acquired CSI is then used to process the received data symbols during the uplink data transmission phase. During this phase, all the users simultaneously transmit their pilot sequence to the APs. Let $\sqrt{\tau_{p}} \boldsymbol{\phi}_{k} \in \mathbb{C}^{\tau_{p} \times 1}$ be the pilot sequence transmitted by the $ k $-th user, $ \forall k = 1, \dots , K $ where $ \Vert \boldsymbol{\phi}_{k} \Vert^{2} = 1 $. The signal received at $ m $-th AP during the training phase is \cbstart \begin{equation} \begin{aligned} \mathbf{Y}_{p,m} = \sqrt{\tau_{p} \rho_{p}} \sum_{k=1}^{K} \mathbf{g}_{mk}\boldsymbol{\phi}_{k}^{H} + \mathbf{W}_{p,m}, \end{aligned} \end{equation} \cbend where $ \rho_{p} $ is the normalized transmit SNR of each pilot symbol, and $ \mathbf{W}_{p,m} \in \mathbb{C}^{N \times \tau_{p}}$ is the noise matrix whose entries are i.i.d. zero-mean complex Gaussian with variance $1$. Now, to estimate the channel coefficient using the observation $ \mathbf{Y}_{p,m}$, we first project the received vector on $ \boldsymbol{\phi}_{k}^{H} $ and then use the minimum-mean squared error (MMSE) estimator. Let $ \check{\mathbf{y}}_{p,mk} \triangleq \mathbf{Y}_{p,m} \boldsymbol{\phi}_{k} $, \textit{i.e.}, \begin{equation} \begin{aligned} \check{\mathbf{y}}_{p,mk} &= \sqrt{\tau_{p} \rho_{p}} \mathbf{g}_{mk} + \sqrt{\tau_{p} \rho_{p}}\sum_{ i \neq k}^{K} \mathbf{g}_{mi}\boldsymbol{\phi}_{i}^{H} \boldsymbol{\phi}_{k} + \tilde{\mathbf{w}}_{p,mk}, \end{aligned} \end{equation} where $ \tilde{\mathbf{w}}_{p,mk} = \mathbf{W}_{p,m} \boldsymbol{\phi}_{k} $ is a vector with i.i.d. $\mathcal{CN}\left(0,1 \right)$ entries. The MMSE estimator is hence given by, \begin{equation}\label{Eq:CFOP_MMSE_g} \begin{aligned} \hat{\mathbf{g}}_{mk} &= \mathbb{E}\lbrace \mathbf{g}_{mk}\check{\mathbf{y}}_{p,mk}^{H} \rbrace \left(\mathbb{E}\lbrace \check{\mathbf{y}}_{p,mk}\check{\mathbf{y}}_{p,mk}^{H} \rbrace\right)^{-1}\check{\mathbf{y}}_{p,mk},\\ &= c_{mk} \check{\mathbf{y}}_{p,mk}, \end{aligned} \end{equation} where $ c_{mk} = \frac{\sqrt{\tau_{p} \rho_{p}}\beta_{mk}}{\tau_{p} \rho_{p}{\sum\limits_{i = 1}^{K} {\beta}_{mi} \abs{\boldsymbol{\phi}_{k}^{H} \boldsymbol{\phi}_{i}}^{2}} + 1 }. $ Here, the term $ \tau_{p} \rho_{p}{\sum\limits_{i \ne k}^{K} {\beta}_{mi} \abs{\boldsymbol{\phi}_{k}^{H} \boldsymbol{\phi}_{i}}^{2}} $ corresponds to pilot contamination due to the non-orthogonality of the pilot sequence of different users. Also, $ \hat{\mathbf{g}}_{mk} \sim \mathcal{CN}\left( \mathbf{0},\gamma_{mk} \mathbf{I}_{N} \right)$, where $ \gamma_{mk} = \sqrt{\tau_{p} \rho_{p}}\beta_{mk}c_{mk}. $ In the case of orthogonal pilots, \textit{i.e.}, no pilot contamination, we have $ c_{mk} = \frac{\sqrt{\tau_{p} \rho_{p}}\beta_{mk}}{{\tau_{p} \rho_{p}} {\beta}_{mk} + 1 } $, and therefore, $ \gamma_{mk} = \frac{\tau_{p} \rho_{p}\beta_{mk}^2}{{\tau_{p} \rho_{p}} {\beta}_{mk} + 1 }$. \subsection{Uplink Data Transmission}\label{SubSec:CFOP_Outage_UplinkDT} In the uplink data transmission phase, each user transmits its data symbol to all the APs. Let $ p_{k} $ be the symbol of the $ k $-th user, such that $ \mathbb{E} \lbrace \vert p_{k}\vert^{2} \rbrace = 1 $. Hence, the received signal at the $ m $-th AP is \begin{equation}\label{Eq:CFOP_yum} \mathbf{y}_{u,m} = \sqrt{\rho_{u}}\sum_{k=1}^{K} \mathbf{g}_{mk}p_{k} + \mathbf{w}_{u,m}, \end{equation} where $ \rho_{u} $ is normalized uplink SNR and $ \mathbf{w}_{u,m} $ is the additive Gaussian noise with $ \mathbf{w}_{u,m} \sim \mathcal{CN}\left(\mathbf{0}, \mathbf{I}_{N} \right)$. Since all the APs employ MRC, they multiply their copies of the received signal with the estimated channel coefficients $\hat{\mathbf{g}}_{mk}^{H}$. The APs then send their received signal to the CPU\footnote{It is assumed that the APs are connected with the CPU through a flawless backhaul network.}. Therefore, the received signal at the CPU is given by \begin{equation}\label{Eq:CFOP_ruk} \begin{aligned} r_{u,k} &= \sum_{m=1}^{M} \hat{\mathbf{g}}_{mk}^{H} \mathbf{y}_{u,m}, \\ &= \sqrt{\rho_{u}}\sum_{m=1}^{M} \hat{\mathbf{g}}_{mk}^{H} \hat{\mathbf{g}}_{mk}p_{k} + \sqrt{\rho_{u}}\sum_{m=1}^{M}\hat{\mathbf{g}}_{mk}^{H} \boldsymbol{\varepsilon}_{mk}p_{k} +\sqrt{\rho_{u}}\sum_{i\ne k}^{K}\sum_{m=1}^{M}\hat{\mathbf{g}}_{mk}^{H} \mathbf{g}_{mi}p_{i} + \sum_{m=1}^{M} \hat{\mathbf{g}}_{mk}^{H} \mathbf{w}_{u,m}, \end{aligned} \end{equation} where $ \boldsymbol{\varepsilon}_{mk} $ is the channel estimation error, \textit{i.e.}, $ \mathbf{g}_{mk} - \hat{\mathbf{g}}_{mk} $. $ \boldsymbol{\varepsilon}_{mk} $ is a $\mathcal{CN}\left(0,\left(\beta_{mk}- \gamma_{mk}\right)\mathbf{I}_{N} \right)$ random vector. The symbol $ p_{k} $ transmitted by the $k$-th user is detected using $ r_{u,k} $. Let $ \hat{\mathbf{g}}_{k} = \begin{bmatrix} \hat{\mathbf{g}}_{1k} \ \dots \ \hat{\mathbf{g}}_{Mk} \end{bmatrix}^{T} $ denote the channel estimate vector for $ k $-th user $ \forall k=1,\dots, K$. Note that, $ \hat{\mathbf{g}}_{k} \sim \mathcal{CN}\left(\mathbf{0},\mathbf{C}_{\hat{\mathbf{g}}_{k},\hat{\mathbf{g}}_{k}} \right) $ is $ MN \times 1 $ complex Gaussian vector, where \begin{equation} \begin{aligned} \mathbf{C}_{\hat{\mathbf{g}}_{k} ,\hat{\mathbf{g}}_{k}} = \operatorname{diag}\left(\gamma_{1k} \mathbf{I}_{N},\dots,\gamma_{Mk} \mathbf{I}_{N} \right) \end{aligned} \end{equation} is the covariance matrix of $ \hat{\mathbf{g}}_{k} $. Using (\ref{Eq:CFOP_ruk}), the effective SINR of the $ k $-th user is as follows: \begin{equation}\label{Eq: CFOP_SINRInner} \begin{aligned} \lambda_{u,k} = \frac{X_{u,k}}{Y_{u,k}} = \frac{\rho_{u} \norm*{\hat{\mathbf{g}}_{k}}^{4} }{ \rho_{u} \sum\limits_{i\ne k}^{K} \abs*{\mathbf{\hat{g}}_{k}^{H} \mathbf{\hat{g}}_{i}}^{2} + \hat{\mathbf{g}}_{k}^{H}\left( \rho_{u}\sum\limits_{i=1}^{K} \boldsymbol{\Lambda}_{i} + \mathbf{I}_{MN} \right)\hat{\mathbf{g}}_{k} }, \end{aligned} \end{equation} where $ \boldsymbol{\Lambda}_{i} = \operatorname{diag}\left( \left(\beta_{1i} - \gamma_{1i} \right) \mathbf{I}_{N},\dots,\left(\beta_{Mi} - \gamma_{Mi} \right) \mathbf{I}_{N} \right) $ is a $MN \times MN$ diagonal matrix, $X_{u,k} = \rho_{u} \left( \norm*{\hat{\mathbf{g}}_{k}}^{2} \right)^{2} $ is the desired signal power over estimated channel, and $Y_{u,k} = \rho_{u} \sum\limits_{i\ne k}^{K} \abs*{\mathbf{\hat{g}}_{k}^{H} \mathbf{\hat{g}}_{i}}^{2} + \hat{\mathbf{g}}_{k}^{H}\left( \rho_{u}\sum\limits_{i=1}^{K} \boldsymbol{\Lambda}_{i} + \mathbf{I}_{MN} \right)\hat{\mathbf{g}}_{k} $ is the interference plus noise power. Note that this SINR expression is similar to the SINR expression given in \cite[Eq. 12]{Bjornson2020:CompetitiveCellFree}. \par Using the effective SINR in (\ref{Eq: CFOP_SINRInner}), one can calculate various performance metrics such as achievable rate, outage probability, etc. In the following section, we derive novel OP approximations utilizing the two-stage Log-normal moment matching and uni-variate dimension reduction method. \section{Outage Probability Analysis}\label{Sec: CFOP_Outage_OPAnalysis} The exact expression for OP involves characterizing the CDF of the SINR at the APs. Note that the numerator and denominator of the SINR involve correlated Gamma RV, and determining the CDF of their ratio is mathematically intractable \cite{Suman2015:OutageKappa}. Exact expressions are tractable only for perfect CSI conditions and i.i.d. channels as in the case of mMIMO channels \cite{Atapattu2017:ExactOutageMIMO, Beiranvand2018:MRCMassiveMIMO}. However, to assume that all the channels from the UEs to APs are i.i.d. or that perfect CSI is known at APs is impractical. In literature, it is common to approximate end-to-end SINR via Gamma or Log-normal RV using the technique of moment matching. This method has been successfully employed for various scenarios like intelligent reflecting surface (IRS) assisted communication system \cite{charishma2021outage}, mMIMO system\cite{srinivasan2019analysis}. The challenging part in such approximation is to derive the moments of SINR $ \lambda_{u,k} $, which becomes more difficult due to the correlation between numerator $ X_{u,k} $ and denominator $ Y_{u,k} $. To circumvent this issue, a bi-variate Taylor's series-based approximation for the first two moments of SINR is presented in \cite{srinivasan2019analysis}. We tried to mimic the approach described in \cite{srinivasan2019analysis}, but the resultant expressions did not match the simulated OP. This is elaborated in section \ref{Sec: CFOP_Outage_Results}. Through extensive simulation, we discovered that the numerator and denominator are closely approximated via Log-normal RV, separately. Therefore, to derive approximate OP, we approximated the numerator and denominator of (\ref{Eq: CFOP_SINRInner}) as Log-normal RV using moment matching then it is easy to show that ratio, \textit{i.e.,} $ \lambda_{u,k} $ is also a Log-normal RV. The result associated with Log-normal approximation is presented in Theorem \ref{Thm: CFOP_SINRLogNormal} and \ref{Thm: CFOP_RateLogNormal}, which is valid for both scenarios \textit{i.e.,} the system with and without pilot contamination. \begin{theorem}\label{Thm: CFOP_SINRLogNormal} For a threshold $ T $, the OP of $ k $th user is approximated as \begin{equation}\label{Eq: CFOP_POutLogNormalPC} \begin{aligned} P_{out}^{k}\left(T\right) &\approx \frac{1}{2} \operatorname{erfc}\left(-\frac{\ln T - \mu_{\lambda_{u,k}}}{\sigma_{\lambda_{u,k}} \sqrt{2}}\right), \end{aligned} \end{equation} where parameter $ \mu_{\lambda_{u,k}}, \sigma_{\lambda_{u,k}} $ of Log-normal distribution are evaluated as \begin{equation}\label{Eq: CFOP_MuLogNormalPC} \begin{aligned} \mu_{\lambda_{u,k}} &= \mu_{X_{u,k}} - \mu_{Y_{u,k}}, \end{aligned} \end{equation} \begin{equation}\label{Eq: CFOP_SigmaLogNormalPC} \begin{aligned} \sigma _{\lambda_{u,k}} &= \sqrt{\sigma^{2}_{X_{u,k}} + \sigma^{2}_{Y_{u,k}} - 2 \log\left( \frac{\mathbb{E}\left[X_{u,k}Y_{u,k}\right]}{\mathbb{E}\left[X_{u,k}\right]\mathbb{E}\left[Y_{u,k}\right]}\right)}. \end{aligned} \end{equation} Here, $ \operatorname{erfc}\left(\cdot\right) $ is the complementary error function \cite{erfcdef} and $ \mu_{X_{u,k}},$ $ \mu_{Y_{u,k}},$ $ \sigma_{X_{u,k}} ,$ $ \sigma_{Y_{u,k}} $ is evaluated using (\ref{Eq:CFOP_XMuSigma}) and (\ref{Eq:CFOP_YMuSigma}). \end{theorem} \begin{proof} Please refer to Appendix \ref{App: CFOP_ProofSINRLogNormal} for the proof. \end{proof} \begin{corollary}\label{Cor: CFOP_LogNormalNPC} In the absence of pilot contamination, the OP of $ k $th user is approximated using \eqref{Eq: CFOP_POutLogNormalPC}, where \eqref{Eq: CFOP_MuLogNormalPC} and \eqref{Eq: CFOP_SigmaLogNormalPC} are evaluated using the following expression for the moments of $ Y_{u,k} $. \begin{equation}\label{Eq: CFOP_YMeanNPC} \begin{aligned} \mathbb{E}\left[Y_{u,k}\right] &= N\rho_{u} \sum\limits_{i\ne k}^{K} \sum\limits_{m=1}^{M}\gamma_{mk}\gamma_{mi} + N\sum\limits_{m=1}^{M}\gamma_{mk} + N\rho_{u}\sum\limits_{m=1}^{M} \sum\limits_{i=1}^{K}\left(\beta_{mi} - \gamma_{mi}\right)\gamma_{mk}. \end{aligned} \end{equation} \begin{equation}\label{Eq: CFOP_Y2MeanNPC} \begin{aligned} \mathbb{E}\left[Y_{u,k}^{2}\right] &= \rho_{u}^{2}\left(\sum_{i\ne k}^{K}N\sum_{m=1}^{M}\gamma_{mk}^{2}\gamma_{mi}^{2} + \sum_{i\ne k}^{K} \left(N\sum\limits_{m=1}^{M}\gamma_{mk}\gamma_{mi}\right)^{2}+ N\sum_{m=1}^{M}\gamma_{mk}^{2} \left(\sum_{i\ne k}^{K}\gamma_{mi}\right)^{2} \right. \\& + \left. \left(N\sum_{i\ne k}^{K}\sum\limits_{m=1}^{M}\gamma_{mk}\gamma_{mi}\right)^{2} \right) + N\sum_{m = 1}^{M} \gamma_{mk}^{2} + N^{2}\left(\sum_{m = 1}^{M} \gamma_{mk}\right)^{2} \\& + \rho_{u}^{2}\left(N\sum_{m = 1}^{M}\left(\beta_{mk} - \gamma_{mk}\right)^{2}\gamma_{mk}^{2} + N^{2}\left(\sum\limits_{m=1}^{M}\left(\beta_{mk} - \gamma_{mk}\right)\gamma_{mk}\right)^{2}\right) \\&+ 2\rho_{u}\sum_{i\ne k}^{K}\left(N\sum_{m = 1}^{M}\gamma_{mk}^{2}\gamma_{mi} + N^{2}\left(\sum\limits_{m=1}^{M}\gamma_{mk}\right)\left(\sum\limits_{m=1}^{M}\gamma_{mk}\gamma_{mi}\right)\right) \\&+ 2\rho_{u}\left(N\sum_{m = 1}^{M}\left(\beta_{mk} - \gamma_{mk}\right)\gamma_{mk}^{2} + N^{2}\left(\sum\limits_{m=1}^{M}\gamma_{mk}\right)\left(\sum\limits_{m=1}^{M}\left(\beta_{mk} - \gamma_{mk}\right)\gamma_{mk}\right)\right) \\&+ 2\rho_{u}^{2}\left(N\sum_{i\ne k}^{K}\sum_{m = 1}^{M}\left(\beta_{mk} - \gamma_{mk}\right)\gamma_{mk}^{2}\gamma_{mi} + N^{2}\left(\sum\limits_{m=1}^{M}\left(\beta_{mk} - \gamma_{mk}\right)\gamma_{mk}\right)\left(\sum\limits_{m=1}^{M}\gamma_{mk}\gamma_{mi}\right) \right). \end{aligned} \end{equation} \end{corollary} \begin{proof} Equation \eqref{Eq: CFOP_YMeanNPC} and \eqref{Eq: CFOP_Y2MeanNPC} are obtained from \eqref{Eq:CFOP_YMean} and \eqref{Eq:CFOP_Y2} by considering the fact that $ \nu_{mi}^{j} = 0 $ if $ i \ne j $ and $ \nu_{mi}^{i} = \gamma_{mi} $. \end{proof} \par Next, we used the Log-normal approximation of $ \lambda_{u,k} $ to derive the approximate ergodic rate of $k$th user. We also derive simple closed-form lower and upper bound on the derived ergodic rate. The results are presented in the following theorem. \begin{theorem}\label{Thm: CFOP_RateLogNormal} Given $ \lambda_{u,k} \sim \operatorname{LN}\left(\mu_{\lambda_{u,k}},\sigma_{\lambda_{u,k}}^{2}\right) $ and ergodic rate of $ k $-th user is \begin{equation}\label{Eq: CFOP_ER_PC} \begin{aligned} R_{u,k} &\approx \frac{1}{2} \int_{0}^{\infty} \operatorname{erfc}\left(\frac{\ln\left(2^{t} - 1\right) - \mu_{\lambda_{u,k}}}{\sigma_{\lambda_{u,k}} \sqrt{2}}\right) dt. \end{aligned} \end{equation} \begin{equation}\label{Eq: CFOP_ERBs_PC} \begin{aligned} \log_{2}\left(\operatorname{e}^{\mu_{\lambda_{u,k}}} + 1\right) < &R_{u,k} < \log_{2}\left(\operatorname{e}^{\mu_{\lambda_{u,k}}} + 1\right) + \frac{\operatorname{e}^{-\mu_{\lambda_{u,k}}}}{\ln 2\left(1 + \operatorname{e}^{-2\mu_{\lambda_{u,k}}}\right) }\left(\operatorname{e}^{\frac{\sigma_{\lambda_{u,k}}^{2}}{2}} - 1\right). \end{aligned} \end{equation} \end{theorem} \begin{proof} Please refer to Appendix \ref{App: CFOP_ProofRateLogNormal} for the proof. \end{proof} \begin{corollary}\label{Cor: CFOP_NPC_Rate_LogNormal} In the case of no pilot contamination also, the ergodic rate and the respective lower and upper bounds of $ k $-th user are computed using the \eqref{Eq: CFOP_ER_PC}, and \eqref{Eq: CFOP_ERBs_PC} with the parameters of Log-normal are calculated using \eqref{Eq: CFOP_YMeanNPC} and \eqref{Eq: CFOP_Y2MeanNPC}. \end{corollary} Along with this, for the case of no pilot contamination, we first derive the conditional OP assuming that $ \hat{\mathbf{g}}_{K}$ is a constant. We then integrate the conditional OP over $\hat{\mathbf{g}}_{K}$, which gives an exact expression of OP in terms of a multi-fold integral of the order $M$ that is difficult to be solved in close form or evaluated in Mathematica/MATLAB for large values of $M$. Therefore, we explore the use of a dimension-reduction method known as a uni-variate approximation that approximates $M$th order integration with $M$ single order integrals. The result associated with the uni-variate approximation is given in the Lemma \ref{Lem: CFOP_OPUnivariate} and Theorem \ref{Thm: CFOP_OPUnivariate}. \begin{lemma}\label{Lem: CFOP_OPUnivariate} In the absence of pilot contamination, the OP of $K$th user for threshold $ T $ is given as \begin{equation}\label{Eq: CFOP_PoutOrthoSim2} \begin{aligned} P_{out}^K(T) &= 1 - \int\dots\int \left( \sum_{i=1}^{K-1} \frac{\theta_{i}^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}} \left(\theta_{i} - \theta_{j} \right)} \left[ 1 -e^{-\frac{d_{K}^{T}}{\theta_{i}}} \right] \right)\operatorname{U}\left(d_{K}^{T}\right) \prod\limits_{m=1}^{M} \prod\limits_{n=1}^{N}e^{-x_{mn}} d \mathbf{x}, \end{aligned} \end{equation} where $\mathbf{x} = \left[ x_{11},\dots,x_{1N},x_{21},\dots,x_{MN} \right]$ is a vector with each $x_{mn}$ being an exponential RV with scale parameter $1$, $ \theta_{i} = \sum\limits_{m=1}^{M} \left(\sum\limits_{n=1}^{N} x_{mn} \right)\gamma_{mK} \gamma_{mi} $ and \begin{equation} \begin{aligned} \delta_{K}^{T} &= \dfrac{\left(\rho_{u} \left( \sum\limits_{m=1}^{M} \left(\sum\limits_{n=1}^{N} x_{mn} \right) \gamma_{mK}\right)^{2} - T \sum\limits_{m=1}^{M} \left(\rho_{u}\sum\limits_{i=1}^{K}\left(\beta_{mi} - \gamma_{mi} \right) + 1 \right) \left(\sum\limits_{n=1}^{N} x_{mn} \right) \gamma_{mK}\right)}{T \rho_{u}}. \end{aligned} \end{equation} \end{lemma} \begin{proof} Please refer to Appendix \ref{App: CFOP_ProofUnivariateLemma} for the proof. \end{proof} To evaluate (\ref{Eq: CFOP_PoutOrthoSim2}), we need to solve a $MN$th order integration. For typical values of $M$ and $N$ used in cell-free massive MIMO systems, say $M = 32$ and $N=1$, it is intractable to solve a $32$th order integration even in popular software such as \textit{Matlab}, \textit{Mathematica}, etc. Thus, it is important to approximate (\ref{Eq: CFOP_PoutOrthoSim2}) for evaluation and analysis. To circumvent the intractability, we propose to utilize the uni-variate approximation from \cite{Rahman2004:Integral_DimensionReduction}. Using this method, one can tightly approximate an $M$th order integration by a sum of $M$ single-order integration. The approximation and detailed proof are presented in the following theorem. \begin{theorem}\label{Thm: CFOP_OPUnivariate} For the case of no pilot contamination, the OP of $ K $th user for threshold $ T $ is approximated as \begin{footnotesize} \begin{equation}\label{Eq: CFOP_Pout_NoPilot_AppFinal} \begin{aligned} P_{out}^K(T) &\approx 1 - N\sum_{m=1}^{M}\int_{0}^{\infty}\left( \sum_{i=1}^{K-1} \left(\frac{\left(x C_{1,m}^{i} + C_{2,m}^{i}\right)^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}} x C_{3,m}^{i,j} + C_{4,m}^{i,j}} \left[ 1 - e^{-\left(\frac{x^{2} C_{5,m} + x C_{9,m} + C_{10,m}}{ x C_{1,m}^{i} + C_{2,m}^{i}}\right)} \right]\right)\operatorname{U}\left(x^{2} C_{5,m} + xC_{9,m} + C_{10,m}\right) \right) e^{-x} dx \\ &+ (MN - 1)\left( \sum_{i=1}^{K-1} \frac{C_{i}^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}} \left(C_{i} - C_{j} \right)} \left[ 1 -e^{-\frac{C_{K}^{T}}{C_{i}}} \right] \right)\operatorname{U}\left(C_{K}^{T}\right). \end{aligned} \end{equation} \end{footnotesize} \end{theorem} \begin{proof} The derivation details are given in Appendix \ref{App: CFOP_ProofUnivariateTheorem}. \end{proof} Consider the special case when all the APs are collocated, and $ K $ pilot sequences are pairwise orthogonal, then we have $\beta_{mk} = \beta_{m^{\prime}k} \triangleq \beta_{k}, \gamma_{mk} = \gamma_{m^{\prime} k} \triangleq \gamma_{k}$ and no pilot contamination. The Corollary \ref{Cor: CFOP_LogNormalNPC} and Theorem \ref{Thm: CFOP_OPUnivariate} are applicable to calculate the OP for this special case \textit{i.e.,} the single-cell collocated mMIMO. However, the integral involved in \eqref{Eq: CFOP_Pout_NoPilot_AppFinal} is simple enough to be solved in closed form for this special case. For a fair comparison, we consider $MN$ antenna single-cell collocated mMIMO system so that the total number of antenna remains same in both CF-mMIMO and collocated mMIMO system. The following corollaries present the OP and rate approximations for the single-cell collocated mMIMO system. \begin{corollary}[Corollary to Theorem \ref{Thm: CFOP_SINRLogNormal}]\label{Cor: CFOP_Pout_LogNormal_mMIMO} For the single cell collocated mMIMO scenario, the OP of $ k $th user is approximated using \eqref{Eq: CFOP_POutLogNormalPC}, where \eqref{Eq: CFOP_MuLogNormalPC} and \eqref{Eq: CFOP_SigmaLogNormalPC} are evaluated using the following simplified expression for the moments. \begin{equation}\label{Eq: CFOP_XMean_mMIMO} \begin{aligned} \mathbb{E}\left[X_{u,k}\right] &= \rho_{u}\left( MN\right)_{2}\gamma_{k}^{2}. \end{aligned} \end{equation} \begin{equation}\label{Eq: CFOP_X2Mean_mMIMO} \begin{aligned} \mathbb{E}\left[X_{u,k}^{2} \right] &= \rho_{u}^{2} \left( M N \right)_{4}\gamma_{k}^{4}. \end{aligned} \end{equation} \begin{equation}\label{Eq: CFOP_YMean_mMIMO_NPC} \begin{aligned} \mathbb{E}\left[Y_{u,k}\right] &= MN\gamma_{k} \left[ \rho_{u} \left(\sum\limits_{i\ne k}^{K} \gamma_{i} \right) + 1 + \rho_{u} \sum_{i=1}^{K}\left(\beta_{i} - \gamma_{i}\right) \right]. \end{aligned} \end{equation} \begin{equation}\label{Eq: CFOP_Y2Mean_mMIMO_NPC} \begin{aligned} \mathbb{E}\left[Y_{u,k}^{2}\right] &= \left( M N \right)_{2} \gamma_{k}^{2} \left[ \rho_{u}^{2}\left( \sum_{i\ne k}^{K}\gamma_{i}^{2} + \left(\sum_{i\ne k}^{K}\gamma_{i}\right)^{2} \right) + 1 + \rho_{u}^{2} \sum_{i=1}^{K}\left(\beta_{i} - \gamma_{i}\right)^{2} \right. \\ & \left. + 2\rho_{u} \left(\sum_{i\ne k}^{K}\gamma_{i} \right) + 2\rho_{u} \sum_{i=1}^{K}\left(\beta_{i} - \gamma_{i}\right) + 2\rho_{u}^{2} \sum_{i=1}^{K}\left(\beta_{i} - \gamma_{i}\right) \left(\sum_{i\ne k}^{K} \gamma_{i} \right) \right]. \end{aligned} \end{equation} \begin{equation}\label{Eq: CFOP_CorrXY_mMIMO_NPC} \begin{aligned} \mathbb{E}\left[X_{u,k}Y_{u,k}\right] &= \rho_{u}\left(M N\right)_{3} \gamma_{k}^{3} \left[ \rho_{u} \left(\sum_{i\ne k}^{K} \gamma_{i}\right) + 1 + \rho_{u}\sum_{i=1}^{K}\left(\beta_{i} - \gamma_{i}\right)\right]. \end{aligned} \end{equation} \end{corollary} \begin{corollary}[Corollary to Theorem \ref{Thm: CFOP_RateLogNormal}]\label{Cor: CFOP_Rate_LogNormal_mMIMO} For the single cell collocated mMIMO scenario, the ergodic rate and the respective lower and upper bounds of $ k $-th user are computed using the \eqref{Eq: CFOP_ER_PC} and \eqref{Eq: CFOP_ERBs_PC} with the parameters of Log-normal are calculated using \eqref{Eq: CFOP_XMean_mMIMO} - \eqref{Eq: CFOP_CorrXY_mMIMO_NPC}. \end{corollary} \begin{corollary}[Corollary to Theorem \ref{Thm: CFOP_OPUnivariate}]\label{Cor: CFOP_Pout_NPC_mMIMO} For the single cell collocated mMIMO scenario, the OP of $K$th user is approximated as \begin{equation}\label{Eq: CFOP_Pout_NPCmMIMO_Case1} \begin{aligned} P_{out}^K(T) &\approx 1 - M N\sum_{i = 1}^{K-1}D_{1}^{i}\left[1 - \frac{e^{-D_{3}^{i}}}{D_{2}^{i} + 1}\right] + \left(MN -1 \right)\sum_{i = 1}^{K-1}D_{1}^{i}\left[ 1 -e^{- \left( D_{2}^{i} + D_{3}^{i}\right)} \right] \end{aligned} \end{equation} for $ T \le \frac{\rho_{u} \left( MN-1 \right)\gamma_{K}}{\left( \rho_{u} \sum\limits_{i=1}^{K}\left(\beta_{i} - \gamma_{i} \right) + 1\right)} $, and \begin{equation}\label{Eq: CFOP_Pout_NPCmMIMO_Case2} \begin{aligned} P_{out}^K(T) &\approx 1 - MN \sum_{i = 1}^{K-1}D_{1}^{i}\left[ e^{-\kappa} - \frac{e^{-D_{3}^{i}} e^{-\kappa (D_{2}^{i} + 1)}}{D_{2}^{i} + 1}\right] \\ &+ \left(MN-1\right)\sum_{i = 1}^{K-1}D_{1}^{i}\left[ 1 -e^{- \left( D_{2}^{i} + D_{3}^{i}\right)} \right]\operatorname{U}\left(D_{4} + D_{5} + D_{6}\right) \end{aligned} \end{equation} for $ T > \frac{\rho_{u} \left( MN-1 \right)\gamma_{K}}{\left( \rho_{u} \sum\limits_{i=1}^{K}\left(\beta_{i} - \gamma_{i} \right) + 1\right)} $, where $D_{1}^{i},D_{2}^{i},D_{3}^{i},D_{4},D_{5}$, $D_{6}$ and $\kappa $ are defined in (\ref{Eq: D1D2D3mMIMO}), (\ref{Eq: D4D5D6mMIMO}) and \eqref{Eq: kappamMIMO}. \end{corollary} \begin{proof} Using the fact that $\beta_{mk} = \beta_{m^{\prime}k} \triangleq \beta_{k}, \gamma_{mk} = \gamma_{m^{\prime} k} \triangleq \gamma_{k}$. We have simplified the \eqref{Eq: CFOP_Pout_NoPilot_AppFinal}. The details are provided in Appendix \ref{App: CFOP_UnivariatemMIMOCor} \end{proof} Note that the expressions in Corollary \ref{Cor: CFOP_Pout_NPC_mMIMO} are in closed form and do not require any numerical integration. The obtained expressions for the single-cell collocated mMIMO mMIMO system with imperfect CSI are novel to the best of our knowledge. \cbend \section{Results \& Discussion}\label{Sec: CFOP_Outage_Results} The simulation setup is similar to that in \cite{Ngo2017:CellFree} and is repeated here for completeness. A cell-free massive MIMO system with various $M$ and $K$ values have been considered. All $M$ AP and $K$ users are dispersed in a square of area $D \times D \: \text{km}^{2}$. The large-scale fading coefficient $\beta_{mk}$ models the path loss and shadow fading according to \begin{equation} \beta_{mk}= PL_{mk}10^{\frac{\sigma_{th}z_{mk}}{10}}, \end{equation} where $PL_{mk}$ represents the path loss, $\sigma_{th}$ represents the standard deviation of the shadowing and $z_{mk} \sim \mathcal{N }(0, 1)$. The relation between the path loss $PL_{mk}$ and the distance between the distance $d_{mk}$ between the $m$th AP and $k$th user is obtained using the three slope model in \cite[Eq. 52]{Ngo2017:CellFree}. The other parameters are summarized in Table \ref{Tab:CFOP_params}. The normalized transmit SNRs $ \rho_{p}$ and $ \rho_{u}$ are obtained by dividing the actual transmit powers $\bar \rho_{p}$ and $\bar \rho_{u}$ by the noise power, respectively. \begin{table}[!t] \centering \begin{tabular}{|l|r|} \hline Parameter & value \\ \hline \hline Carrier frequency & $1.9~GHz$ \\ \hline Bandwidth & $20~MHz$\\ \hline Noise figure & $9$ dB\\ \hline AP antenna height & $15~m$\\ \hline User antenna height & $1.65~m$\\ \hline $\sigma_{sh}$ & $8$ dB\\ \hline $\bar \rho_p$, $\bar \rho_u$ & $100~mW$\\ \hline \end{tabular} \caption{Simulation parameters} \label{Tab:CFOP_params} \end{table} \begin{figure}[!t] \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOPComp_PC_Pout_Sim_LN_Gam_M_300_K_30_NumIter_10000_NoOfSetup_100.eps} \caption{With Pilot Contamination ($ M = 300, N = 1 $ and $ K = 30 $) } \end{subfigure} \hspace{3mm} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOPComp_NPC_Pout_Sim_LN_Gam_M_100_K_5_NumIter_10000_NoOfSetup_100.eps} \caption{Without pilot contamination ($ M = 100, N = 1 $ and $ K = 5 $)} \end{subfigure} \caption{Comparison of OP using different existing approaches with the proposed methods} \label{Fig:CFOPComp_DiffApproaches} \end{figure} \par We first compare our OP approximation in Theorem \ref{Thm: CFOP_SINRLogNormal} and \ref{Thm: CFOP_OPUnivariate} with the existing moment-matching approximation approaches in Fig. \ref{Fig:CFOPComp_DiffApproaches}. Two different moment-matching approaches are compared with our approximation. In the first, the SINR is approximated by a Gamma RV following \cite{srinivasan2019analysis}, whereas, in the second, the numerators and denominators of the SINR are separately approximated by Gamma RVs. Then the ratio of two Gamma RV is Beta-prime RV \cite{cordeiro2012mcdonald}. Note that the Gamma approximation and beta-prime approximation fail to capture the tail behavior of OP in both scenarios \textit{i.e.,} with and without pilot contamination. This is because the single Gamma approximation relies on the approximate first and the second moment of SINR obtained using bi-variate Taylor series expansion, which does not provide a good approximation of SINR's moments for the CF-mMIMO system. Next, the Gamma-by-Gamma or the Beta-prime approximation fails due to the correlation between the numerator and denominator of SINR owing to the use of MRC. Other treatises such as \cite{Ding2018:OutageADCMassiveMIMO} approximate the OP by proving that SCV of all but one component of the SINR is zero and obtaining OP by transforming the CDF of the remaining term. For our case, through extensive simulations, we determined that the SCV of more than one component of SINR is non-zero, and hence the method cannot be applied. These results justify the necessity of the new approximations proposed in this work. In the subsequent subsections, we investigate the OP performance of CF-mMIMO (with and without pilot contamination) for various values of $M$ and $K$. The numerical results are generated as follows. \begin{enumerate} \item We realized $100$ random deployment of APs and UEs. The large-scale fading coefficients, with the shadowing effect, are calculated for each realization. \item For each realization, $10,000$ Monte Carlo iterations are performed to calculate the SINR for each UE. Using this SINR, we calculated the OP and ergodic rate of the UEs. \item System's OP and the system's ergodic rate are the averages of the OP and ergodic rate in each realization. \end{enumerate} \subsection{Results for With Pilot Contamination} In this sub-section, the expression in Theorem \ref{Thm: CFOP_SINRLogNormal} and \ref{Thm: CFOP_RateLogNormal} are validated through numerical simulation. Fig. \ref{Fig: CFOP_PC_OP_Fixed_M} and \ref{Fig: CFOP_PC_OP_Fixed_K} show the trend of OP for varying $K$ and $M$, respectively. The number of antennas per AP \textit{i.e.,} $N$ is chosen to be $4$. It is evident that the proposed two-step Log-normal approximation is closely matching with the simulated OP. It is clear from Fig \ref{Fig: CFOP_PC_OP_Fixed_M} that the OP increases as the number of users increase due to the corresponding increase in pilot contamination and interference. Similarly, the OP decreases with the increase in the number of APs in the system, as shown in Fig \ref{Fig: CFOP_PC_OP_Fixed_K} for fixed $ K = 30 $ and $N=4$. \begin{figure}[!t] \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOP_PC_Pout_Sim_LN_M_80_N_4_K_Vary_NumIter_10000_NoOfSetup_100.eps} \caption{For fixed $M = 80, N = 4$.} \label{Fig: CFOP_PC_OP_Fixed_M} \end{subfigure} \hspace{3mm} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOP_PC_Pout_Sim_LN_M_Vary_N_4_K_30_NumIter_10000_NoOfSetup_100.eps} \caption{For fixed $K = 30, N = 4$.} \label{Fig: CFOP_PC_OP_Fixed_K} \end{subfigure} \caption{Impact of $M$ and $K$ on the OP of CF-mMIMO system with pilot contamination} \label{Fig:CFOP_PC_OP_Results_VaryM_K} \end{figure} Fig. \ref{Fig:CFOP_PC_Rate_Fixed_M} and \ref{Fig:CFOP_PC_Rate_Fixed_K} shows the simulated values for ergodic sumrate and the approximate sumrate obtained using Theorem \ref{Thm: CFOP_RateLogNormal} for CF-mMIMO with pilot contamination. It can be observed that the popular use-and-then-forget (UaTF) lower bound (LB) severely underestimates the ergodic sumrate as compared to the simulated one. Theorem \ref{Thm: CFOP_RateLogNormal} proposed an ergodic rate expression in terms of an integral and also provided simple and closed-form lower and upper bound for the proposed integral. The approximate rate values and bounds better match the simulated one compared to UaTF LB. \begin{figure}[!t] \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOP_PC_Rate_Sim_LN_M_80_N_4_K_Vary_NumIter_10000_NoOfSetup_100.eps} \caption{For fixed $M = 80, N = 4$.} \label{Fig:CFOP_PC_Rate_Fixed_M} \end{subfigure} \hspace{3mm} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOP_PC_Rate_Sim_LN_M_Vary_N_4_K_30_NumIter_10000_NoOfSetup_100.eps} \caption{For fixed $K = 30, N = 4$.} \label{Fig:CFOP_PC_Rate_Fixed_K} \end{subfigure} \caption{Impact of $M$ and $K$ on the ergodic rate of CF-mMIMO system with pilot contamination} \label{Fig:CFOP_PC_Rate_Results_VaryM_K} \end{figure} \subsection{Results for Without Pilot Contamination} In this sub-section, we compare the performance of CF-mMIMO and the single-cell collocated mMIMO systems without pilot contamination. Fig. \ref{Fig: CFOP_NPC_OP_Fixed_M} and \ref{Fig: CFOP_NPC_OP_Fixed_K} shows that the approximation presented in Theorem \ref{Thm: CFOP_OPUnivariate}, Corollary \ref{Cor: CFOP_LogNormalNPC} and Corollary \ref{Cor: CFOP_Pout_LogNormal_mMIMO}, \ref{Cor: CFOP_Pout_NPC_mMIMO} for CF-mMIMO and mMIMO, respectively are closely matching with simulation results. Here, also we observed that not only does the CF-mMIMO performs better than mMIMO, but the improvement it shows with varying parameter is also more significant than the mMIMO. For example, when $M$ increases to $80$ from $40$ at target SINR of $-5$ dB and $N=4$, OP decreases by $69.45 \%$ for the CF-mMIMO system, whereas it decreases by only $16.61 \% $ for the mMIMO system when antennas are increased from $160$ to $320$. Hence, it is better to increase the density of APs as compared to increasing the antenna at a single collocated AP. \begin{figure}[!t] \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOP_NPC_Pout_Sim_LN_M_80_N_4_K_Vary_NumIter_10000_NoOfSetup_100.eps} \caption{For fixed $M = 80, N = 4$.} \label{Fig: CFOP_NPC_OP_Fixed_M} \end{subfigure} \hspace{3mm} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOP_NPC_Pout_Sim_LN_M_Vary_N_4_K_10_NumIter_10000_NoOfSetup_100.eps} \caption{For fixed $K = 10, N = 4$.} \label{Fig: CFOP_NPC_OP_Fixed_K} \end{subfigure} \caption{Impact of $M$ and $K$ on the OP of CF-mMIMO and mMIMO system without pilot contamination} \label{Fig:CFOP_NPC_OP_Results_VaryM_K} \end{figure} Next, Fig. \ref{Fig: CFOP_NPC_Rate_Fixed_M} and \ref{Fig: CFOP_NPC_Rate_Fixed_K} present the ergodic sumrate of CF-mMIMO and mMIMO system without pilot contamination. Again, it is observed that the ergodic sumrate calculated using Corollary \ref{Cor: CFOP_NPC_Rate_LogNormal} provides a better estimate for sumrate as compared to the UaTF bound, which heavily underestimates the performance of CF-mMIMO as well as mMIMO system. Also, the bounds provided for the integral in \eqref{Eq: CFOP_ER_PC} tightly bounds it and are easy to compute as expressions are available in closed form. \begin{figure}[!t] \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOP_NPC_Rate_Sim_LN_M_80_N_4_K_Vary_NumIter_10000_NoOfSetup_100.eps} \caption{For fixed $M = 80, N = 4$.} \label{Fig: CFOP_NPC_Rate_Fixed_M} \end{subfigure} \hspace{3mm} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{CFOP_NPC_Rate_Sim_LN_M_Vary_N_4_K_10_NumIter_10000_NoOfSetup_100.eps} \caption{For fixed $K = 10, N = 4$.} \label{Fig: CFOP_NPC_Rate_Fixed_K} \end{subfigure} \caption{Impact of $M$ and $K$ on the ergodic rate of CF-mMIMO and mMIMO system without pilot contamination} \label{Fig:CFOP_NPC_Rate_Results_VaryM_K} \end{figure} \subsection{Correlated fading scenario} For the case of correlated Rayleigh fading the channel between $m$th AP and $k$th is modeled as \begin{equation} \begin{aligned} \mathbf{g}_{mk} \sim \mathcal{CN}\left(\mathbf{0},\mathbf{R}_{mk}\right), \end{aligned} \end{equation} where $\mathbf{R}_{mk} \in \mathbb{C}^{N\times N}$ is the spatial correlation matrix between $m$th AP and $k$th user. Following from \cite{Bjornson2020:CompetitiveCellFree}, the MMSE estimate channel vector is $ \hat{\mathbf{g}}_{mk} \sim \mathcal{CN}\left( \mathbf{0}, \mathbf{R}_{mk} \mathbf{C}^{-1}_{mk} \mathbf{R}_{mk} \right), $ where $\mathbf{C}_{mk} = \tau_{p}\rho_{p}\sum\limits_{i=1}^{K} \mathbf{R}_{mi} \abs{\boldsymbol{\phi}_{k}^{H} \boldsymbol{\phi}_{i}}^{2} + \mathbf{I}_{N}$ and the channel estimation error \textit{i.e.}, $ \boldsymbol{\varepsilon}_{mk} = \mathbf{g}_{mk} - \hat{\mathbf{g}}_{mk} \sim \mathcal{CN}\left(\mathbf{0},\boldsymbol{\Lambda}_{mk} \right)$. The effective SINR of the $ k $-th user is as follows: \begin{equation}\label{Eq: CFOP_SINRInnerCorr} \begin{aligned} \lambda_{u,k} = \frac{X_{u,k}}{Y_{u,k}} = \frac{\rho_{u} \norm*{\hat{\mathbf{g}}_{k}}^{4} }{ \rho_{u} \sum\limits_{i\ne k}^{K} \abs*{\mathbf{\hat{g}}_{k}^{H} \mathbf{\hat{g}}_{i}}^{2} + \hat{\mathbf{g}}_{k}^{H}\left( \rho_{u}\sum\limits_{i=1}^{K} \boldsymbol{\Lambda}_{i} + \mathbf{I}_{MN} \right)\hat{\mathbf{g}}_{k} }, \end{aligned} \end{equation} where $ \hat{\mathbf{g}}_{k} = \begin{bmatrix} \hat{\mathbf{g}}_{1k} \ \dots \ \hat{\mathbf{g}}_{Mk} \end{bmatrix}^{T} $ and $ \boldsymbol{\Lambda}_{i} = \operatorname{diag}\left(\boldsymbol{\Lambda}_{1i},\dots,\boldsymbol{\Lambda}_{Mi} \right) $. Note that the SINR expression is functionally the same as for the independent fading case. Hence, the two-step moment matching method can be extended for the case of correlated fading. The major difference is that the calculation of moments of the numerator and denominator will be more involved algebraically and is beyond the scope of the paper. \section{Conclusion}\label{Sec: CFOP_Conclusion} In this work, we derived approximate OP, and rate expressions for a CF-mMIMO system with and without pilot contamination under the Rayleigh faded channels. We used the two-step moment matching to derive the approximate expressions and provided simple expressions for the OP and ergodic rate. In the case of no pilot contamination, an exact expression is derived in terms of a multi-fold integral. A simple and accurate approximation using the uni-variate dimension reduction method is proposed to circumvent the evaluation of higher-order integration. Specific to the single-cell collocated mMIMO system, approximate OP expressions are obtained in closed form, which involves only elementary functions. The validity of the approximations, derived for both CF-mMIMO and mMIMO, was verified by Monte-Carlo simulations. Investigating the effect of correlated fading with a line-of-sight component will be an interesting future direction to explore. \appendices \section{Proof for Theorem \ref{Thm: CFOP_SINRLogNormal}}\label{App: CFOP_ProofSINRLogNormal} Using technique of moment matching, we first approximate the $ X_{u,k} $ and $ Y_{u,k} $ by Log-normal distribution \textit{i.e.,} $ X_{u,k} \sim \operatorname{LN}\left(\mu_{X_{u,k}},\sigma_{X_{u,k}}^{2}\right) $ and $ Y_{u,k} \sim \operatorname{LN}\left(\mu_{Y_{u,k}},\sigma_{Y_{u,k}}^{2}\right) $ with parameter given as \begin{equation}\label{Eq:CFOP_XMuSigma} \begin{aligned} \mu_{X_{u,k}} &= \log\left(\frac{\left(\mathbb{E}\left[X_{u,k} \right]\right)^{2}}{\sqrt{\mathbb{E}\left[ X_{u,k}^{2}\right]}}\right) \quad \sigma_{X_{u,k}} &= \sqrt{\log\left( \frac{\mathbb{E}\left[ X_{u,k}^{2}\right]}{\left(\mathbb{E}\left[X_{u,k} \right]\right)^{2}}\right)}\textbf{\add{,}} \end{aligned} \end{equation} and \begin{equation}\label{Eq:CFOP_YMuSigma} \begin{aligned} \mu_{Y_{u,k}} &= \log\left(\frac{\left(\mathbb{E}\left[Y_{u,k} \right]\right)^{2}}{\sqrt{\mathbb{E}\left[ Y_{u,k}^{2}\right]}}\right) \quad \sigma_{Y_{u,k}} &= \sqrt{\log\left( \frac{\mathbb{E}\left[ Y_{u,k}^{2}\right]}{\left(\mathbb{E}\left[Y_{u,k} \right]\right)^{2}}\right)}\textbf{\add{.}} \end{aligned} \end{equation} The parameters in (\ref{Eq:CFOP_XMuSigma}) and (\ref{Eq:CFOP_YMuSigma}) can be evaluated using (\ref{Eq:CFOP_XMean}), (\ref{Eq:CFOP_X2Mean}), (\ref{Eq:CFOP_YMean}) and (\ref{Eq:CFOP_Y2}). Consider \begin{equation} \begin{aligned} \log\left(\lambda_{u,k}\right) &= \log\left(X_{u,k}\right) - \log\left(Y_{u,k}\right). \end{aligned} \end{equation} Under Log-normal assumption, $ \log\left(X_{u,k}\right) $ and $ \log\left(Y_{u,k}\right) $ follow normal distribution, \textit{i.e.}, $ \log\left(X_{u,k}\right) \sim \mathcal{N}\left(\mu_{X_{u,k}}, \sigma^{2}_{X_{u,k}}\right) $ and $ \log\left(Y_{u,k}\right) \sim \mathcal{N}\left(\mu_{Y_{u,k}}, \sigma^{2}_{Y_{u,k}}\right) $, hence, $ \log\left(\lambda_{u,k}\right) $ follows normal distribution with \begin{equation} \begin{aligned} \mu_{\lambda_{u,k}} &= \mathbb{E}\left[\log\left(X_{u,k}\right)\right] - \mathbb{E}\left[\log\left(Y_{u,k}\right)\right], \\ &= \mu_{X_{u,k}} - \mu_{Y_{u,k}}. \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \sigma_{\lambda_{u,k}}^{2} &= \mathbb{V}\left[\log\left(X_{u,k}\right)\right] + \mathbb{V}\left[\log\left(Y_{u,k}\right)\right] - 2 \operatorname{Cov}\left(\log\left(X_{u,k}\right),\log\left(Y_{u,k}\right)\right), \end{aligned} \end{equation} where $ \operatorname{Cov}\left(\log\left(X_{u,k}\right),\log\left(Y_{u,k}\right)\right) $ is \cite{vzerovnik2013transformation}, \begin{equation} \begin{aligned} \operatorname{Cov}\left(\log\left(X_{u,k}\right),\log\left(Y_{u,k}\right)\right) &= \log\left( \frac{\operatorname{Cov}\left(X_{u,k},Y_{u,k}\right)}{\mathbb{E}\left[X_{u,k}\right]\mathbb{E}\left[Y_{u,k}\right]} + 1\right). \end{aligned} \end{equation} So, \begin{equation} \begin{aligned} \sigma_{\lambda_{u,k}}^{2} &= \sigma^{2}_{X_{u,k}} + \sigma^{2}_{Y_{u,k}} - 2 \log\left( \frac{\mathbb{E}\left[X_{u,k}Y_{u,k}\right]}{\mathbb{E}\left[X_{u,k}\right]\mathbb{E}\left[Y_{u,k}\right]}\right), \end{aligned} \end{equation} where the $ \mathbb{E}\left[X_{u,k}Y_{u,k}\right] $ is calculated using (\ref{Eq:CFOP_CorrXY}). This completes the proof. \section{Moments Calculation} Here, we have derived the first and second moment of the RV $ X_{u,k} $ and $ Y_{u,k} $. From (\ref{Eq: CFOP_SINRInner}), we have \begin{equation}\label{Eq:CFOP_XMean} \begin{aligned} \mathbb{E}\left[X_{u,k}\right] &= \rho_{u}\mathbb{E}\left[ \left( \sum\limits_{m=1}^{M} \norm*{\hat{\mathbf{g}}_{mk}}^{2} \right)^{2}\right] = \rho_{u}\mathbb{E}\left[\sum_{m = 1}^{M}\norm*{ \hat{g}_{mk}}^{4} + \sum_{m = 1}^{M}\sum_{n \ne m}^{M} \norm*{ \hat{g}_{mk}}^{2} \norm*{\hat{g}_{nk}}^{2} \right], \\ &= \rho_{u}N\left[\sum_{m = 1}^{M} \gamma_{mk}^{2} + N\left(\sum_{m = 1}^{M} \gamma_{mk}\right)^{2}\right]\textbf{\add{.}} \end{aligned} \end{equation} Similarly, after squaring the $ X_{u,k} $ and taking term-by-term expectations, we have \begin{equation}\label{Eq:CFOP_X2Mean} \begin{aligned} \mathbb{E}\left[X_{u,k}^{2}\right] &= \left(\mathbb{E}\left[X_{u,k}\right]\right)^{2} + \rho_{u}^{2}\left( 6N\sum_{m=1}^{M} \gamma_{mk}^{4} + 8 N^{2} \left(\sum_{m=1}^{M} \gamma_{mk}^{3} \right) \left(\sum_{n=1}^{M}\gamma_{nk}\right) \right. \\ & \left. + 2 N^{2} \left(\sum_{m=1}^{M} \gamma_{mk}^{2} \right)^{2} + 4 N^{3} \left(\sum_{m=1}^{M} \gamma_{mk}^{2} \right) \left( \sum_{n=1}^{M}\gamma_{nk} \right)^{2} \right)\textbf{\add{.}} \end{aligned} \end{equation} Let $ Y_{u,k} = \rho_{u} \sum\limits_{i\ne k}^{K}B_{k}^{i} + A_{k} + \rho_{u}C_{k} $, where $ B_{k}^{i} = \abs*{\sum\limits_{m=1}^{M}\mathbf{\hat{g}}_{mk}^{H} \mathbf{\hat{g}}_{mi}}^{2},$ $ A_{k} = \sum\limits_{m=1}^{M} \norm*{\hat{\mathbf{g}}_{mk}}^{2} $, and $ C_{k} = \sum\limits_{m=1}^{M}\sum\limits_{i=1}^{K} \left(\beta_{mi} - \gamma_{mi}\right) \norm*{\hat{\mathbf{g}}_{mk}}^{2} $. So, we have \begin{equation}\label{Eq:CFOP_YMean} \begin{aligned} \mathbb{E}\left[Y_{u,k}\right] &= \rho_{u} \sum\limits_{i\ne k}^{K} \mathbb{E}\left[B_{k}^{i} \right] + \mathbb{E}\left[A_{k}\right] + \rho_{u}\mathbb{E}\left[C_{k} \right]\textbf{\add{,}} \end{aligned} \end{equation} where $ \mathbb{E}\left[B_{k}^{i} \right] = N\sum\limits_{m=1}^{M}\gamma_{mk}\gamma_{mi} + N^{2}\abs*{\sum\limits_{m=1}^{M} \nu_{mk}^{i}}^{2} $, $ \mathbb{E}\left[A_{k}\right] = N\sum\limits_{m=1}^{M}\gamma_{mk} $, $ \mathbb{E}\left[C_{k} \right] = N\sum\limits_{m=1}^{M}\sum\limits_{i=1}^{K} \left(\beta_{mi} - \gamma_{mi}\right) \gamma_{mk} $ and $ \nu_{mk}^{i} = c_{mk}c_{mi} \boldsymbol{\phi}_{i}^{H} \mathbf{C}_{\mathbf{y}_{p,m},\mathbf{y}_{p,m}} \boldsymbol{\phi}_{k} $, and $ \mathbf{C}_{\mathbf{y}_{p,m},\mathbf{y}_{p,m}} = \tau_{p}\rho_{p}\sum\limits_{j=1}^{K} \beta_{mj} \boldsymbol{\phi}_{j}\boldsymbol{\phi}_{j}^{H} + \mathbf{I} $. After expanding $ Y_{u,k}^{2} $, we have \begin{equation}\label{Eq:CFOP_Y2} \begin{aligned} \mathbb{E}\left[Y_{u,k}^{2}\right] &= \rho_{u}^{2} \mathbb{E}\left[\left(\sum_{i\ne k}^{K}B_{k}^{i}\right)^{2}\right] + \mathbb{E}\left[A_{k}^{2}\right] + \rho_{u}^{2}\mathbb{E}\left[C_{k}^{2}\right] \\&+ 2\rho_{u}\sum_{i\ne k}^{K}\mathbb{E}\left[B_{k}^{i}A_{k}\right] + 2\rho_{u}\mathbb{E}\left[A_{k}C_{k}\right] + 2\rho_{u}^{2}\sum_{i\ne k}^{K}\mathbb{E}\left[B_{k}^{i} C_{k}\right]. \end{aligned} \end{equation} The expectation $ \mathbb{E}\left[A_{k}^{2}\right] = \mathbb{E}\left[X_{u,k}\right]/\rho_{u} $ and the expectation of other terms in (\ref{Eq:CFOP_Y2}) is given as follows. \begin{equation}\label{Eq:CFOP_MeanSumBki2} \begin{aligned} \mathbb{E}\left[\left(\sum_{i\ne k}^{K}B_{k}^{i}\right)^{2}\right] &=\sum_{i\ne k}^{K}\sum_{j\ne k }^{K}\Bigg[N\sum_{m=1}^{M}\gamma_{mk}^{2}\gamma_{mi}\gamma_{mj} + N\sum_{m=1}^{M}\gamma_{mk}^{2}\vert \nu_{mi}^{j} \vert^{2} + 2N\sum_{m=1}^{M}\gamma_{mk}\gamma_{mj}\vert \nu_{mk}^{i} \vert^{2}\\ &+2 N\mathfrak{Re}\left(\sum_{m=1}^{M}\gamma_{mk} (\nu_{mk}^{i})^{*}\nu_{mk}^{j} \left(\nu_{mi}^{j}\right)^{*}\right) +4 N^{2} \mathfrak{Re}\left(\sum_{m=1}^{M}\sum_{n=1}^{M}\gamma_{mk}\gamma_{mi}\nu_{mk}^{j} (\nu_{nk}^{j})^{*} \right) \\ & +4 N^{2} \mathfrak{Re}\left( \sum_{m=1}^{M}\sum_{n=1}^{M}\gamma_{mk}\nu_{mk}^{i}\nu_{mi}^{j}(\nu_{nk}^{j})^{*} \right) +2 N^{3} \mathfrak{Re}\left(\sum_{m=1}^{M}\sum_{n=1}^{M}\sum_{u =1 }^{M} \nu_{mk}^{i}\nu_{mk}^{j} \left(\nu_{nk}^{i}\right)^{*}\left(\nu_{uk}^{j}\right)^{*}\right) \\&+ 2 N^{3} \mathfrak{Re}\left(\sum_{m=1}^{M}\sum_{n= 1}^{M}\sum_{u =1 }^{M} \gamma_{mk}\left(\nu_{mi}^{j}\right)^{*}\left(\nu_{nk}^{i}\right)^{*}\nu_{uk}^{j}\right) + N^{2}\abs*{\sum_{m=1}^{M} \nu_{mk}^{i}\nu_{mk}^{j}}^{2} + N^{2}\abs*{\sum_{m=1}^{M}\gamma_{mk}\nu_{mi}^{j}}^{2} \Bigg] \\&+ \left(\sum_{i\ne k}^{K}\mathbb{E}\left[B_{k}^{i}\right]\right)^{2}. \end{aligned} \end{equation} \begin{equation}\label{MeanCk2} \begin{aligned} \mathbb{E}\left[C_{k}^{2}\right] &= N\sum_{m = 1}^{M}\sum\limits_{i=1}^{K} \left(\beta_{mi} - \gamma_{mi}\right)^{2}\gamma_{mk}^{2} + \left(\mathbb{E}\left[C_{k}\right]\right)^{2}. \end{aligned} \end{equation} \begin{equation}\label{MeanAkBkiSimple1} \begin{aligned} \mathbb{E}\left[B_{k}^{i}A_{k}\right] &= N\sum_{m = 1}^{M}\gamma_{mk}^{2}\gamma_{mi} + N\sum_{m = 1}^{M}\gamma_{mk} \vert \nu_{mk}^{i}\vert^{2} + 2 N^{2}\mathfrak{Re}\left(\left(\sum_{m = 1}^{M}\gamma_{mk}\nu_{mk}^{i}\right)\left(\sum_{n=1}^{M}\nu_{nk}^{i}\right)^{*}\right) \\&+ \mathbb{E}\left[A_{k}\right]\mathbb{E}\left[B_{k}^{i}\right]\textbf{\add{.}} \end{aligned} \end{equation} \begin{equation}\label{MeanAkCk} \begin{aligned} \mathbb{E}\left[A_{k}C_{k}\right] &= N\sum_{m = 1}^{M}\sum\limits_{i=1}^{K} \left(\beta_{mi} - \gamma_{mi}\right)\gamma_{mk}^{2} + \mathbb{E}\left[A_{k}\right]\mathbb{E}\left[C_{k}\right] \textbf{\add{.}} \end{aligned} \end{equation} Using the result of $ \mathbb{E}\left[B_{k}^{i}A_{k}\right] $, we have \begin{equation}\label{MeanCkBkiSimple1} \begin{aligned} \mathbb{E}\left[B_{k}^{i}C_{k}\right] &= N\sum_{m = 1}^{M}\sum\limits_{j=1}^{K} \left(\beta_{mj} - \gamma_{mj}\right)\gamma_{mk} \vert \nu_{mk}^{i}\vert^{2} + N\sum_{m = 1}^{M}\sum\limits_{j=1}^{K} \left(\beta_{mj} - \gamma_{mj}\right)\gamma_{mk}^{2}\gamma_{mi} \\&+ 2\mathfrak{Re} N^{2}\left(\left(\sum_{m = 1}^{M}\sum\limits_{j=1}^{K} \left(\beta_{mj} - \gamma_{mj}\right)\gamma_{mk}\nu_{mk}^{i}\right)\left(\sum_{n=1}^{M}\nu_{nk}^{i}\right)^{*}\right) + \mathbb{E}\left[C_{k}\right]\mathbb{E}\left[B_{k}^{i}\right] \textbf{\add{.}} \end{aligned} \end{equation} The correlation of $ X_{u,k} $ and $ Y_{u,k} $ is given as \begin{equation}\label{Eq:CFOP_CorrXY} \begin{aligned} \mathbb{E}\left[X_{u,k}Y_{u,k}\right] &= \rho_{u}^{2}\sum_{i\ne k}^{K}\mathbb{E}\left[A_{k}^{2}B_{k}^{i}\right] + \rho_{u}\mathbb{E}\left[A_{k}^{3}\right] + \rho_{u}^{2}\mathbb{E}\left[A_{k}^{2}C_{k}\right]\textbf{\add{,}} \end{aligned} \end{equation} where \begin{equation} \begin{aligned} \mathbb{E}\left[A_{k}^{2}B_{k}^{i}\right] &= \mathbb{E}\left[A_{k}^{2} \right] \mathbb{E}\left[B_{k}^{i} \right] + 4 N\sum_{m=1}^{M}\vert \nu_{mk}^{i}\vert^{2}\gamma_{mk}^{2} + 2 N \sum_{m=1}^{M}\gamma_{mk}^{3}\gamma_{mi} + 2 N^{2}\sum_{m=1}^{M} \sum_{n=1}^{M} \gamma_{mk} \abs*{\nu_{nk}^{i}}^{2}\gamma_{nk} \\&+ 2 N^{2}\sum_{m=1}^{M}\sum_{n=1}^{M} \gamma_{mk} \gamma_{nk}^{2}\gamma_{ni} + 2 N^{2}\abs*{\sum_{m=1}^{M} \gamma_{mk} \nu_{mk}^{i}}^{2} + 4 N^{2}\mathfrak{Re}\left( \sum_{m=1}^{M}\sum_{n=1}^{M}\gamma_{mk}^{2} \nu_{mk}^{i} \left(\nu_{nk}^{i}\right)^{*} \right) \\&+ 4N^{2}\mathfrak{Re}\left( \sum_{m=1}^{M}\sum_{n=1}^{M} \sum_{p =1}^{M} \gamma_{mk} \nu_{mk}^{i} \gamma_{nk} \left( \nu_{pk}^{i} \right)^{*} \right) \textbf{\add{.}} \end{aligned} \end{equation} \begin{equation}\label{MeanAk3} \begin{aligned} \mathbb{E}\left[A_{k}^{3}\right] &= 2N\sum_{m = 1}^{M}\gamma_{mk}^{3} + 3N^{2}\left(\sum_{m = 1}^{M}\gamma_{mk}^{2}\right)\left( \sum_{m = 1}^{M}\gamma_{mk}\right) + N^{3}\left(\sum_{m = 1}^{M}\gamma_{mk}\right)^{3} \textbf{\add{.}} \end{aligned} \end{equation} \begin{equation}\label{MeanAk2Ck} \begin{aligned} \mathbb{E}\left[A_{k}^{2}C_{k}\right] &= 2N\sum_{m = 1}^{M}\sum\limits_{i=1}^{K} \left(\beta_{mi} - \gamma_{mi}\right)\gamma_{mk}^{3} + 2N^{2}\left(\sum_{m=1}^{M}\sum\limits_{i=1}^{K} \left(\beta_{mi} - \gamma_{mi}\right)\gamma_{mk}^{2}\right)\left(\sum_{n = 1}^{M}\gamma_{nk}\right)\\& +\mathbb{E}\left[A_{k}^{2}\right]\mathbb{E}\left[C_{k}\right] \textbf{\add{.}} \end{aligned} \end{equation} \section{Proof for Theorem \ref{Thm: CFOP_RateLogNormal}}\label{App: CFOP_ProofRateLogNormal} The ergodic rate of $ k $th user is given by \begin{equation} \begin{aligned} R_{u,k} &= \mathbb{E}\left[\log_{2}\left(1 + \lambda_{u,k}\right)\right]. \end{aligned} \end{equation} Since, $ \log_{2}\left(1 + \lambda_{u,k}\right) $ is positive RV, so we have $ R_{u,k} = \int_{0}^{\infty} \mathbb{P}\left[\log_{2}\left(1 + \lambda_{u,k}\right) > t \right] dt $. The logarithm is a monotonically increasing function. Hence, we have $ R_{u,k} = \int_{0}^{\infty} \mathbb{P}\left[ \lambda_{u,k} > 2^t - 1 \right] dt $. Given $ \lambda_{u,k} \sim \operatorname{LN}\left(\mu_{\lambda_{u,k}},\sigma_{\lambda_{u,k}}^{2}\right) $, the $ R_{u,k} $ is nothing but \begin{equation} \begin{aligned} R_{u,k} &= \frac{1}{2} \int_{0}^{\infty} \operatorname{erfc}\left(\frac{\ln\left(2^{t} - 1\right) - \mu_{\lambda_{u,k}}}{\sigma_{\lambda_{u,k}} \sqrt{2}}\right) dt. \end{aligned} \end{equation} Using the transformation of variable $ \ln\left(2^{t} - 1\right) = x $, we have \begin{equation} \begin{aligned} R_{u,k} &= \frac{1}{2 \ln 2} \int_{-\infty}^{\infty} \operatorname{erfc}\left(\frac{x - \mu_{\lambda_{u,k}}}{\sigma_{\lambda_{u,k}} \sqrt{2}}\right) \frac{1}{1 + \operatorname{e}^{-x}} dx. \end{aligned} \end{equation} Again, apply $ \frac{x - \mu_{\lambda_{u,k}}}{\sigma_{\lambda_{u,k}} \sqrt{2}} = y $, we have \begin{equation} \begin{aligned} R_{u,k} &= \frac{\sigma_{\lambda_{u,k}}}{\sqrt{2} \ln 2} \int_{-\infty}^{\infty} \frac{\operatorname{erfc}\left(y\right)}{1 + e^{-\mu_{\lambda_{u,k}} - \sqrt{2} \sigma_{\lambda_{u,k}} y }} dy, \\ &= \frac{\sigma_{\lambda_{u,k}}}{\sqrt{2} \ln 2} \left(\underbrace{ \int_{-\infty}^{0} \frac{\operatorname{erfc}\left(y\right)}{1 + e^{-\mu_{\lambda_{u,k}} - \sqrt{2} \sigma_{\lambda_{u,k}} y }} dy}_{I_{1}} + \underbrace{ \int_{0}^{\infty} \frac{\operatorname{erfc}\left(y\right)}{1 + e^{-\mu_{\lambda_{u,k}} - \sqrt{2} \sigma_{\lambda_{u,k}} y }} dy}_{I_{2}} \right). \end{aligned} \end{equation} \begin{equation} \begin{aligned} I_{1} &= \int_{-\infty}^{0} \frac{\operatorname{erfc}\left(y\right)}{1 + e^{-\mu_{\lambda_{u,k}} - \sqrt{2} \sigma_{\lambda_{u,k}} y }} dy, =\int_{0}^{\infty} \frac{2 - \operatorname{erfc}\left(y\right)}{1 + e^{-\mu_{\lambda_{u,k}} + \sqrt{2} \sigma_{\lambda_{u,k}} y } } dy. \end{aligned} \end{equation} \begin{equation} \begin{aligned} I_{1} &= \underbrace{\int_{0}^{\infty} \frac{2}{1 + e^{-\mu_{\lambda_{u,k}} + \sqrt{2} \sigma_{\lambda_{u,k}} y}} dy }_{I_{1,1}} - \underbrace{\int_{0}^{\infty} \frac{ \operatorname{erfc}\left(y\right)}{1 + e^{-\mu_{\lambda_{u,k}} + \sqrt{2} \sigma_{\lambda_{u,k}} y } } dy }_{I_{1,2}}. \end{aligned} \end{equation} Hence, we have \begin{equation} \begin{aligned} R_{u,k} &= \frac{\sigma_{\lambda_{u,k}}}{\sqrt{2} \ln 2}\left( I_{1,1} + I_{2} - I_{1,2} \right). \end{aligned} \end{equation} The integral $I_{1,1}$ is easy to calculate in closed form as follows \begin{equation} \begin{aligned} I_{1,1} &= \int_{0}^{\infty} \frac{2}{1 + e^{-\mu_{\lambda_{u,k}} + \sqrt{2} \sigma_{\lambda_{u,k}} y}} dy. \end{aligned} \end{equation} Let $ 1 + e^{-\mu_{\lambda_{u,k}} + \sqrt{2} \sigma_{\lambda_{u,k}} y} = u $ then $ dy = \frac{du}{\sqrt{2} \sigma_{\lambda_{u,k}} \left(u-1\right) } $, So we have \begin{equation} \begin{aligned} I_{1,1} &= \int_{1 + e^{-\mu_{\lambda_{u,k}}}}^{\infty} \frac{2}{u \sqrt{2} \sigma_{\lambda_{u,k}} \left(u-1\right)} du, \\ &= \frac{\sqrt{2}}{\sigma_{\lambda_{u,k}}} \int_{1 + e^{-\mu_{\lambda_{u,k}}}}^{\infty} \frac{1}{u\left(u-1\right)} du = \frac{\sqrt{2}}{\sigma_{\lambda_{u,k}}} \int_{1 + e^{-\mu_{\lambda_{u,k}}}}^{\infty} \left( \frac{1}{\left(u-1\right)} - \frac{1}{u}\right) du, \\ &= \frac{\sqrt{2}}{\sigma_{\lambda_{u,k}}} \left(\ln\left(u-1\right) - \ln u\right)\vert_{1 + e^{-\mu_{\lambda_{u,k}}}}^{\infty} = \frac{\sqrt{2}}{\sigma_{\lambda_{u,k}}} \ln\left(\operatorname{e}^{\mu_{\lambda_{u,k}}} + 1\right). \end{aligned} \end{equation} Now, it is interesting to note that \begin{equation} \begin{aligned} I_{2} - I_{1,2} &= \int_{0}^{\infty} \frac{\operatorname{erfc}\left(y\right)}{1 + e^{-\mu_{\lambda_{u,k}} - \sqrt{2} \sigma_{\lambda_{u,k}} y }} - \frac{ \operatorname{erfc}\left(y\right)}{1 + e^{-\mu_{\lambda_{u,k}} + \sqrt{2} \sigma_{\lambda_{u,k}} y } }dy, \\ &= \int_{0}^{\infty} \operatorname{erfc}\left(y\right) \left[\frac{\operatorname{e}^{-\mu_{\lambda_{u,k}}} \left(\operatorname{e}^{\sqrt{2} \sigma_{\lambda_{u,k}} y} - \operatorname{e}^{-\sqrt{2} \sigma_{\lambda_{u,k}} y}\right) }{1 + e^{-\mu_{\lambda_{u,k}} - \sqrt{2} \sigma_{\lambda_{u,k}} y } + e^{-\mu_{\lambda_{u,k}} + \sqrt{2} \sigma_{\lambda_{u,k}} y } + e^{-2\mu_{\lambda_{u,k}}} }\right], \\ &< \frac{\operatorname{e}^{-\mu_{\lambda_{u,k}}}}{1 + \operatorname{e}^{-2\mu_{\lambda_{u,k}}}}\int_{0}^{\infty} \operatorname{erfc}\left(y\right) \left(\operatorname{e}^{\sqrt{2} \sigma_{\lambda_{u,k}} y} - \operatorname{e}^{-\sqrt{2} \sigma_{\lambda_{u,k}} y}\right) dy, \\ &= \frac{\sqrt{2}\operatorname{e}^{-\mu_{\lambda_{u,k}}}}{\left(1 + \operatorname{e}^{-2\mu_{\lambda_{u,k}}}\right) \sigma_{\lambda_{u,k}} }\left(\operatorname{e}^{\frac{\sigma_{\lambda_{u,k}}^{2}}{2}} - 1\right). \end{aligned} \end{equation} It can be easily shown that $ I_{2} - I_{1,2} > 0 $, so we get a simple upper and lower bound on ergodic rate as follows \begin{equation} \begin{aligned} \frac{\ln\left(\operatorname{e}^{\mu_{\lambda_{u,k}}} + 1\right)}{\ln 2} < &R_{u,k} < \frac{1}{\ln 2} \left( \ln\left(\operatorname{e}^{\mu_{\lambda_{u,k}}} + 1\right) + \frac{\operatorname{e}^{-\mu_{\lambda_{u,k}}}}{\left(1 + \operatorname{e}^{-2\mu_{\lambda_{u,k}}}\right) }\left(\operatorname{e}^{\frac{\sigma_{\lambda_{u,k}}^{2}}{2}} - 1\right) \right), \\ \log_{2}\left(\operatorname{e}^{\mu_{\lambda_{u,k}}} + 1\right) < &R_{u,k} < \log_{2}\left(\operatorname{e}^{\mu_{\lambda_{u,k}}} + 1\right) + \frac{\operatorname{e}^{-\mu_{\lambda_{u,k}}}}{\ln 2\left(1 + \operatorname{e}^{-2\mu_{\lambda_{u,k}}}\right) }\left(\operatorname{e}^{\frac{\sigma_{\lambda_{u,k}}^{2}}{2}} - 1\right). \end{aligned} \end{equation} This completes the proof. \section{Proof for Lemma \ref{Lem: CFOP_OPUnivariate}}\label{App: CFOP_ProofUnivariateLemma} The OP of the $K$th user is given by \begin{equation} \begin{aligned} P_{out}^{K}(T) = \mathbb{P}\left(\lambda_{u,K} < T \right)= \mathbb{P}\left(X_{u,K} < T Y_{u,K} \right). \end{aligned} \end{equation} Substituting from (\ref{Eq: CFOP_SINRInner}), we have \begin{equation} \begin{aligned} P_{out}^{K}(T) &= \mathbb{P}\left(\rho_{u} \left(\hat{\mathbf{g}}_{K}^{H} \hat{\mathbf{g}}_{K}\right)^{2} < T \rho_{u} \sum\limits_{i\ne k}^{K} \abs*{\mathbf{\hat{g}}_{k}^{H} \mathbf{\hat{g}}_{i}}^{2} + T \hat{\mathbf{g}}_{k}^{H}\left( \rho_{u}\sum\limits_{i=1}^{K} \boldsymbol{\Lambda}_{i} + \mathbf{I}_{MN} \right)\hat{\mathbf{g}}_{k} \right). \end{aligned} \end{equation} To simplify the above expression, we first calculate the conditional probability $ P_{out}^{K} $ for a fixed $ \hat{\mathbf{g}}_{K} $. Therefore, for $ \hat{\mathbf{g}}_{K} = \mathbf{b} = \left[\mathbf{b}_{1},\dots,\mathbf{b}_{M} \right]^{T} $, the OP is \begin{equation} \begin{aligned} P_{out}^{K}(T)\vert \left(\hat{\mathbf{g}}_{K} = \mathbf{b}\right) &= \mathbb{P}\left(\rho_{u} \left(\mathbf{b}^{H} \mathbf{b}\right)^{2} < T \rho_{u} \sum_{i = 1}^{K-1}\abs*{\mathbf{b}^{H} \hat{\mathbf{g}}_{i}}^{2} \ + \ T \mathbf{b}^{H} \left( \rho_{u}\sum\limits_{i=1}^{K} \boldsymbol{\Lambda}_{i} + \mathbf{I}_{MN} \right) \mathbf{b} \right). \end{aligned} \end{equation} Rearranging the constants to one side of equality, we have \begin{equation}\label{Eq:CFOP_CCDFPout} \begin{aligned} P_{out}^{K}(T) &= \mathbb{P}\left( \sum_{i=1}^{K-1}\vert \mathbf{b}^{H} \hat{\mathbf{g}}_{i} \vert^{2} > d_{K}^{T} \right), \end{aligned} \end{equation} where \begin{equation}\label{Eq:CFOP_dkt} \begin{aligned} d_{K}^{T} &= \frac{\left(\rho_{u} \left(\sum\limits_{m=1}^{M} \norm*{\mathbf{b}_{m}}^{2}\right)^{2} - T \left( \rho_{u}\sum\limits_{i=1}^{K} \sum\limits_{m=1}^{M}\left(\beta_{mi} - \gamma_{mi} \right)\norm*{\mathbf{b}_{m}}^{2} + \sum\limits_{m=1}^{M}\norm*{\mathbf{b}_{m}}^{2} \right) \right)}{T \rho_{u}}. \end{aligned} \end{equation} To further simplify, the CCDF of $\sum\limits_{i=1}^{K-1}\vert \mathbf{b}^{H} \hat{\mathbf{g}}_{i} \vert^{2}$ is required. Hence, note that for the case of no pilot contamination $ Z_{i} = \mathbf{b}^{H} \hat{\mathbf{g}}_{i} $, is a complex Gaussian RV with mean \cite[Eq. 15.25]{kay1993fundamentals} \begin{equation}\label{Eq:CFOP_meanZiNoPilot} \begin{aligned} \mathbb{E}\left[Z_{i} \vert \hat{\mathbf{g}}_{K} = \mathbf{b} \right] &= \mathbf{b}^{H} \mathbb{E}\left[\hat{\mathbf{g}}_{i} \vert \hat{\mathbf{g}}_{K} =\mathbf{b} \right] = 0, \end{aligned} \end{equation} and variance \begin{equation}\label{Eq:CFOP_varianceZiNoPilot} \begin{aligned} \mathbb{V}\left(Z_{i}\vert \hat{\mathbf{g}}_{K} = \mathbf{b} \right) &= \mathbf{b}^{H} \operatorname{Cov}\left(\hat{\mathbf{g}}_{i} \vert \hat{\mathbf{g}}_{K} \right) \mathbf{b} = \sum_{m=1}^M \norm*{\mathbf{b}_m}^2 \gamma_{mi} := \alpha_{i}. \end{aligned} \end{equation} Eq. (\ref{Eq:CFOP_meanZiNoPilot}) and (\ref{Eq:CFOP_varianceZiNoPilot}) follow from the fact that $ \hat{\mathbf{g}}_{K} $ and $ \hat{\mathbf{g}}_{i} $ are independent $\forall i\neq K$. Further, $ \vert Z_{i} \vert^{2} $ is an exponential RV with parameter $\alpha_{i}$. Hence, $W = \sum\limits_{i=1}^{K-1}\vert \mathbf{b}^{H} \hat{\mathbf{g}}_{i} \vert^{2}$ is a sum of exponential RVs with CCDF \begin{equation}\label{Eq:CFOP_PoutGivengK} \begin{aligned} P_{out}^{K} \left(T \right) \vert \left(\hat{\mathbf{g}}_{K} = \mathbf{b}\right) &= 1 - \mathbb{P}\left( W \le \delta_{K}^{T} \right) \\ &= 1 - \left( \sum_{i=1}^{K-1} \frac{\alpha_{i}^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}} \left(\alpha_{i} - \alpha_{j} \right)} \left[ 1 -e^{-\frac{d_{K}^{T}}{\alpha_{i}}} \right] \right)\operatorname{U}\left(d_{K}^{T}\right). \end{aligned} \end{equation} Now that we have obtained the conditional OP, to obtain the final OP, we integrate over the multivariate Gaussian PDF of $ \hat{\mathbf{g}}_{K}$. Hence, \begin{equation}\label{Eq:CFOP_PoutOrtho} \begin{aligned} P_{out}^K(T) &= \int\dots\int \left(1 - \left( \sum_{i=1}^{K-1} \frac{\alpha_{i}^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}} \left(\alpha_{i} - \alpha_{j} \right)} \left[ 1 -e^{-\frac{ d_{K}^{T}}{\alpha_{i}}} \right] \right)\operatorname{U}\left( d_{K}^{T}\right) \right) f_{ \hat{\mathbf{g}}_{K}} \left(\mathbf{b} \right) d \mathbf{b}, \\ &= 1 - \int\dots\int \left( \sum_{i=1}^{K-1} \frac{\alpha_{i}^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}} \left(\alpha_{i} - \alpha_{j} \right)} \left[ 1 -e^{-\frac{ d_{K}^{T}}{\alpha_{i}}} \right] \right)\operatorname{U}\left(d_{K}^{T}\right) \prod\limits_{m=1}^{M}\frac{e^{-\frac{\norm*{ \mathbf{b}_{m}}^{2}}{\gamma_{mK}}}}{\pi^{N} \gamma_{mK}^{N} } d \mathbf{b}_{1} \dots d \mathbf{b}_{M} , \end{aligned} \end{equation} where the PDF of $\hat{\mathbf{g}}_{K} \left(\mathbf{b}\right)$ is $ f_{\hat{\mathbf{g}}_{K}} \left(\mathbf{b} \right) = \prod\limits_{m=1}^{M}\frac{e^{-\frac{\norm*{\mathbf{b}_{m}}^{2}}{\gamma_{mK}}}}{\pi^{N} \gamma_{mK}^{N} }. $ After a cartesian to polar transformation, \textit{i.e.}, $b_{mn} = r_{mn}e^{j\phi_{mn}}$, and then using the transformation $\frac{r_{mn}^{2}}{\gamma_{mK}} = x_{mn}$, we get the result in \eqref{Eq: CFOP_PoutOrthoSim2} and this completes the proof. \section{Proof for Theorem \ref{Thm: CFOP_OPUnivariate}}\label{App: CFOP_ProofUnivariateTheorem} Let $ \mathbf{X} = \left[x_{11},\dots,x_{MN} \right]$ is a random vector with i.i.d. exponential RVs of scale parameter $1$. Then, (\ref{Eq: CFOP_PoutOrthoSim2}) can be described as \begin{equation}\label{Eq:CFOP_PoutOrthoSim3} \begin{aligned} P_{out}^K(T) &= 1 - \mathbb{E}\left[\left( \sum_{i=1}^{K-1} \frac{\theta_{i}^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}} \left(\theta_{i} - \theta_{j} \right)} \left[ 1 -e^{-\frac{\delta_{K}^{T}}{\theta_{i}}} \right] \right)\operatorname{U}\left(\delta_{K}^{T}\right) \right], \\ &= 1 - \int_{\mathbb{R}^{MN}}g\left(\mathbf{x}\right)f_{\mathbf{X}}\left(\mathbf{x}\right) d\mathbf{x}. \end{aligned} \end{equation} Using \cite[eq. 20]{Rahman2004:Integral_DimensionReduction}, \eqref{Eq:CFOP_PoutOrthoSim3} is approximated as \begin{equation}\label{Eq: CFOP_Pout_NoPilot_Sim1} \begin{aligned} P_{out}^K(T) &\approx 1 - \sum_{m=1}^{M}\sum_{n=1}^{N}\mathbb{E}\left[g\left( \mu_{11},\dots,x_{mn},\dots,\mu_{MN}\right) \right] + (MN-1)g\left(\mu_{11},\dots,\mu_{MN}\right), \end{aligned} \end{equation} where $\mu_{mn} = \mathbb{E}[x_{mn}] = 1, \ \forall \ m = 1,\dots,M $ and $n = 1,\dots,N $. Using the values of $ \mu_{mn} $, The $ g\left( \mu_{11},\dots,x_{mn},\dots,\mu_{MN}\right) $ is calculated as \begin{equation}\label{Eq: CFOP_gxm} \begin{aligned} \sum_{i=1}^{K-1} \left(\frac{\left(x_{mn} C_{1,m}^{i} + C_{2,m}^{i}\right)^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}} x_{mn} C_{3,m}^{i,j} + C_{4,m}^{i,j}} \left[ 1 - e^{-\left(\frac{x_{mn}^{2} C_{5,m} + x_{mn} C_{9,m} + C_{10,m}}{ x_{nm} C_{1,m}^{i} + C_{2,m}^{i}}\right)} \right]\right)\operatorname{U}\left(x_{mn}^{2} C_{5,m} + x_{mn} C_{9,m} + C_{10,m}\right) \end{aligned} \end{equation} for $ 1 \le m \le M $ and $n = 1,\dots,N $, where $ C_{1,m}^{i} = \gamma_{mK}\gamma_{mi} $, $ C_{2,m}^{i} = N\sum\limits_{m^{\prime}\ne m}^{M}\gamma_{m^{\prime} K}\gamma_{m^{\prime} i} + \left(N-1\right)\gamma_{mK}\gamma_{mi}$, $ C_{3,m}^{i,j} = C_{1,m}^{i} - C_{1,m}^{j} $, $ C_{4,m}^{i,j} = C_{2,m}^{i} - C_{2,m}^{j} $, $ C_{5,m} = \frac{1}{T}\gamma_{mK}^{2} $, $C_{6,m} = \left(N-1\right)\gamma_{mK} + N \left(\sum\limits_{m^{\prime}\ne m}^{M}\gamma_{m^{\prime} K} \right) $, $C_{7,m} = \left(\rho_{u}\sum\limits_{i=1}^{K}\left(\beta_{mi} - \gamma_{mi} \right) + 1 \right) \gamma_{mK}$, $C_{8,m} = N \sum\limits_{m^{\prime} \ne m }^{M} \left(\rho_{u}\sum\limits_{i=1}^{K}\left(\beta_{m^{\prime}i} - \gamma_{m^{\prime} i} \right) + 1 \right) \gamma_{m^{\prime}K} + \left(N -1\right) C_{7,m} $, $ C_{9,m} = \frac{2}{T}C_{6,m}\gamma_{mK} - \frac{1}{\rho_{u}} C_{7,m} $, $ C_{10,m} = \frac{ C_{6,m}^{2} }{T} - \frac{1}{\rho_{u}} C_{8,m} $, and \begin{equation}\label{Eq: CFOP_gone} \begin{aligned} g\left(\mu_{11},\dots,\mu_{MN}\right) &= \left( \sum_{i=1}^{K-1} \frac{C_{i}^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}} \left(C_{i} - C_{j} \right)} \left[ 1 -e^{-\frac{C_{K}^{T}}{C_{i}}} \right] \right)\operatorname{U}\left(C_{K}^{T}\right), \end{aligned} \end{equation} where $ C_{i} = N\sum\limits_{m=1}^{M}\gamma_{mK} \gamma_{mi} $ and $ C_{K}^{T} = \frac{N^{2}}{T}\left( \sum\limits_{m=1}^{M} \gamma_{mK}\right)^{2} - \frac{N}{\rho_{u}} \sum\limits_{m=1}^{M}\gamma_{mK} \left(\rho_{u}\sum\limits_{i=1}^{K}\left(\beta_{mi} - \gamma_{mi} \right) + 1 \right) $. Finally, the approximation in (\ref{Eq: CFOP_Pout_NoPilot_AppFinal}) is obtained after substituting values from (\ref{Eq: CFOP_gxm}), (\ref{Eq: CFOP_gone}) into (\ref{Eq: CFOP_Pout_NoPilot_Sim1}), and this completes the proof. \cbend \section{Proof for Corollary \ref{Cor: CFOP_Pout_NPC_mMIMO} }\label{App: CFOP_UnivariatemMIMOCor} Note that, for the case of mMIMO, $\gamma_{mk} = \gamma_{k} \ \forall m$ and $\beta_{mk} = \beta_{k} \ \forall m$. Hence, the \eqref{Eq: CFOP_Pout_NoPilot_AppFinal} simplifies to \begin{equation}\label{Eq: CFOP_Pout_NoPilot_mMIMO} \begin{aligned} P_{out}^K(T) &\approx 1 - MN \sum_{i=1}^{K-1} \underbrace{\int_{0}^{\infty} \left(D_{1}^{i}\left[ 1 - e^{-\left(D_{2}^{i}x + D_{3}^{i}\right) } \right]\right)\operatorname{U}\left(x^{2} D_{4} + x D_{5} + D_{6}\right) e^{-x} dx}_{I} \\ &+ (MN -1) \sum_{i=1}^{K-1} D_{1}^{i} \left[ 1 -e^{-\left( D_{2}^{i} + D_{3}^{i} \right)} \right]\operatorname{U}\left(D_{4} + D_{5} + D_{6}\right), \end{aligned} \end{equation} where, \begin{equation}\label{Eq: D1D2D3mMIMO} \begin{aligned} D_{1}^{i} = \frac{\gamma_{i}^{K-2}}{\prod\limits^{K-1}_{{\substack{j=1 \\ j \ne i}}}\left(\gamma_{i} - \gamma_{j} \right)}, D_{2}^{i} = \frac{\gamma_{K}}{T \gamma_{i} }, & \quad D_{3}^{i} = \frac{\left( MN-1 \right)\gamma_{K}}{T \gamma_{i}} - \frac{\left( \rho_{u} \sum\limits_{i=1}^{K}\left(\beta_{i} - \gamma_{i} \right) + 1 \right)}{\rho_{u} \gamma_{i}} , \end{aligned} \end{equation} and \begin{equation}\label{Eq: D4D5D6mMIMO} \begin{aligned} &D_{4} = \frac{1}{T}\gamma_{K}^{2}, \quad D_{5} = \frac{2}{T} \left( MN-1\right)\gamma_{K}^{2}- \dfrac{1}{\rho_{u}} \gamma_{K} \left( \rho_{u} \sum\limits_{i=1}^{K}\left(\beta_{i} - \gamma_{i} \right) + 1 \right), \\ &D_{6} = \left( MN-1 \right)\gamma_{K}\left( \frac{\left( MN-1\right)\gamma_{K}}{T } - \frac{\left( \rho_{u} \sum\limits_{i=1}^{K}\left(\beta_{i} - \gamma_{i} \right) + 1 \right)}{\rho_{u}}\right). \end{aligned} \end{equation} The presence of the unit-step function in Eq. (\ref{Eq: CFOP_Pout_NoPilot_mMIMO}) results in different domains of integration depending on the nature of the roots of a quadratic equation $ D_{4} x_{m}^{2} + D_{5} x_{m} + D_{6}=0$. One can easily verify that $D_{5}^{2} - 4D_{4}D_{6} = \frac{\gamma_{K}^{2}}{\rho_{u}^{2}}\left( \rho_{u} \sum\limits_{i=1}^{K}\left(\beta_{i} - \gamma_{i} \right) + 1\right)^{2} > 0$. \subsection{When both the roots are non-positive}\label{App:OutageRootNegativemMIMO} In such a scenario, the region of integration will be the entire $\mathbb{R}^{+}$. This is true for $D_{6} \ge 0 $, \textit{i.e.}, \begin{equation} \begin{aligned} T \le \frac{\rho_{u} \left( MN-1 \right)\gamma_{K}}{\left( \rho_{u} \sum\limits_{i=1}^{K}\left(\beta_{i} - \gamma_{i} \right) + 1\right)}. \end{aligned} \end{equation} Therefore $I$ reduces to \begin{equation}\label{Eq: IForCase1} \begin{aligned} I &= D_{1}^{i}\int_{0}^{\infty} \left(\left[ 1 - e^{-\left(D_{2}^{i}x + D_{3}^{i}\right) } \right]\right)e^{-x} dx = D_{1}^{i} \left[1 - \frac{e^{-D_{3}^{i}}}{D_{2}^{i} + 1}\right]. \end{aligned} \end{equation} \subsection{One root is negative, and the other is positive} In this case, the quadratic $D_{4} x^{2} + D_{5} x + D_{6} < 0$ for $ 0 \le x \le \kappa $, where \begin{equation}\label{Eq: kappamMIMO} \begin{aligned} \kappa = \frac{-D_{5} + \sqrt{D_{5}^{2} - 4D_{4}D_{6}}}{2D_{4}} \end{aligned} \end{equation} is the positive root of the quadratic. This is true when $ D_{6} < 0 $, \textit{i.e.}, \begin{equation} \begin{aligned} T > \frac{\rho_{u}\left( MN-1 \right)\gamma_{K}}{\left( \rho_{u} \sum\limits_{i=1}^{K}\left(\beta_{i} - \gamma_{i} \right) + 1\right)}. \end{aligned} \end{equation} Therefore $I$ reduces to \begin{equation}\label{Eq: IForCase2} \begin{aligned} I &= D_{1}^{i}\int_{\kappa}^{\infty} \left(\left[ 1 - e^{-\left(D_{2}^{i}x + D_{3}^{i}\right) } \right]\right)e^{-x} dx = D_{1}^{i} \left[ e^{-\kappa} - \frac{e^{-D_{3}^{i}} e^{-\kappa (D_{2}^{i} + 1)}}{D_{2}^{i} + 1}\right]. \end{aligned} \end{equation} Substitution of \eqref{Eq: IForCase1} and \eqref{Eq: IForCase2} in \eqref{Eq: CFOP_Pout_NoPilot_mMIMO} gives the result in \eqref{Eq: CFOP_Pout_NPCmMIMO_Case1} and \eqref{Eq: CFOP_Pout_NPCmMIMO_Case2}. This completes the proof. \bibliographystyle{IEEEtran}
\section{Introduction} Beginning with the identification of the TW Hya Association (TWA; \citealt{Kastner:1997}) several stellar associations with ages $\sim$8--200~Myr have been identified in close proximity to the Earth \citep{ZS04,Torres:2008}. These young moving groups serve as excellent laboratories to explore the evolution of stellar and planetary properties (e.g., \citealt{Marois:2008, Lagrange:2010, Rodriguez:2010}). Of particular interest is the growing number of M-dwarfs identified as members of these groups and how their properties, such as disk lifetimes, compare with those of higher mass members. Surveys of star forming regions have shown that protoplanetary disk lifetimes are on the order of $\sim$2--3~Myr \citep{Williams:2011}. To date, we know only of a handful of pre-main sequence stars within 100~pc of Earth that, despite having ages of $\sim$5--20~Myr, still host disks that appear primordial in nature. These include TW~Hya ($\sim$8~Myr, $\sim$50~pc; \citealt{Kastner:1997}), V4046~Sgr ($\sim$20~Myr, $\sim$70~pc; \citealt{Kastner:2008b}), MP~Mus ($\sim$5--7~Myr, $\sim$100~pc; \citealt{Kastner:2010}), and T~Cha ($\sim$5--7~Myr, $\sim$110~pc; \citealt{Sacco:2014}). All of these are roughly solar-mass (K type) stars; whether or not lower-mass (M type) stars can retain primordial disks to such relatively advanced ages remains to be determined. Young M-dwarfs can exhibit strong levels of UV and X-ray radiation, either as a result of chromospheric and coronal activity or active accretion. This high energy radiation can drive disk dissipation via photoevaporation (e.g., \citealt{Gorti:2009}). Nevertheless, the limited studies that have been performed to explore the disk lifetimes of substellar objects have found disk dissipation timescales are at least as long as for solar-mass stars (see, e.g., \citealt{Luhman:2012}, \citealt{Ercolano:2011}, \citealt{Williams:2011}, and references therein). Investigations aimed at establishing the masses of gas and dust around similarly young M stars are key to advance our understanding of disk evolution timescales and processes. \begin{figure*} \begin{center} \includegraphics[width=8cm,angle=0]{TWA_30B.pdf} \includegraphics[width=8cm,angle=0]{TWA_32.pdf} \includegraphics[width=8cm,angle=0]{TWA_33.pdf} \includegraphics[width=8cm,angle=0]{TWA_34.pdf} \end{center} \caption{Spectral energy distributions for TWA~30B, 32, 33, and 34. Data points are 2MASS (green), ALLWISE (blue), Herschel (purple; see \citealt{Liu:2015}), and ALMA (orange). Two blackbody dust disks are fit in each case. The fractional luminosities, $\tau=L_{IR}/L_{bol}$, are indicated. The poor fit to TWA~30B suggests a more complex model is required.} \label{fig:seds} \end{figure*} Among the nearby young moving groups, the TWA is particularly interesting, as it represents an important evolutionary stage in the lives of protoplanetary disks. With an age of $\sim$8~Myr \citep{Torres:2008,Ducourant:2014}, the TWA represents an epoch coinciding with the time required for giant planet formation via core accretion (eg, \citealt{Chabrier:2014}). The disks in the TWA range from evidently gas-poor debris disks to at least one example of a long-lived, gas-rich, apparently primordial disk --- i.e., the disk orbiting TW~Hya itself (e.g., \citealt{Riviere:2013, Schneider:2012a, Andrews:2010}; and references therein). Recent work has yielded many new M-dwarf members and candidate members of the TWA \citep{Looper:2007,Looper:2010a,Looper:2010b,Looper:2011,Shkolnik:2011,Rodriguez:2011,Schneider:2012b,Gagne:2014b}. With masses lower than $\sim$0.2~$M_\odot$, these new mid/late M-dwarfs constitute a sample of objects that allows us to probe disks among the poorly explored substellar population, and to do so at the relatively advanced age of the TWA. \section{Observations} We carried out an ALMA survey of 15 members or proposed members of the TWA as drawn from \citet{Looper:2007,Looper:2010a,Looper:2010b,Looper:2011,Shkolnik:2011,Rodriguez:2011,Schneider:2012b,Gagne:2014b}. These stars, listed in Table~\ref{tab:targets}, were chosen as targets as they constitute some of the lowest mass members suggested to date for the TWA. None of these had yet been observed with ALMA, though most are known to host dusty circumstellar disks as inferred via WISE and Herschel infrared excesses (eg, \citealt{Schneider:2012a,Schneider:2012b, Liu:2015}). For those not in published IR surveys, we noted the WISE color $W1-W4$ and marked those with colors redder than 2 as having an IR excess. Some targets, notably J1326--5022 and TWA~29, have low TWA membership probabilities ($<$10\%) according to BANYAN~II \citep{Gagne:2014a}, but were nevertheless included in our survey. BANYAN~II returns a 98\% likelihood of membership for TWA~31; however, other studies do not consider it a likely member (e.g., \citealt{Ducourant:2014}). Our ALMA Cycle~2 program (2013.1.00457.S) consisted of observations of continuum dust emission at 230~GHz and observations of the $^{12}$CO(2--1) and $^{13}$CO(2--1) emission lines with a resolution of 488~kHz, corresponding to a velocity resolution of 0.6~km/s. We reached a sensitivity of 0.05~mJy/beam in the continuum and 5~mJy/beam per 0.6~km/s channel in $^{12}$CO and $^{13}$CO. Calibration and cleaning was performed by the ALMA staff with CASA version 4.2.2. Briggs weighting was used with robust=0.5. The final restored beam was, on average, $1.5\times0.8$\arcsec\ and corresponds to a scale of 40--80~AU at the mean distance of the stars in the TWA. \section{Results} \label{results} \begin{table*} \begin{center} \begin{tabular}{lccccccccrr} \hline Name & RA & Dec.\ & Sp.\ & W1-W3 & W1-W4 & IR & Distance & Flux & $M_{dust}$ & Ref. \\ & & & Type & (mag) & (mag) & Excess & (pc) & (mJy) & ($10^{-2} M_{E}$) & \\ \hline \hline TWA 30B & 11:32:18 & -30:18:31 & M4 & 4.96 & 7.23 & Y & $45.8\pm4.8$ & $0.83\pm0.07$ & 3.7 & 1 \\ TWA 30A & 11:32:18 & -30:19:51 & M5 & 1.72 & 3.64 & Y & $45.8\pm4.8$ & $<$0.20 & $<$0.9 & 2 \\ TWA 31 & 12:07:10 & -32:30:53 & M5 & 1.80 & 3.92 & Y & $55.0\pm6.4$ & $<$0.20 & $<$1.3 & 3 \\ TWA 33 & 11:39:33 & -30:40:00 & M5 & 1.63 & 3.25 & Y & $46.6\pm5.2$ & $0.33\pm0.03$ & 1.5 & 4 \\ TWA 34 & 10:28:45 & -28:30:37 & M5 & 1.35 & 2.75 & Y & $47.0\pm5.6$ & $0.54\pm0.06$ & 2.5 & 4 \\ TWA 32 & 12:26:51 & -33:16:12 & M6 & 1.77 & 3.77 & Y & $59.4\pm6.2$ & $2.10\pm0.05$ & 15.8 & 3, 5 \\ J1045-2819 & 10:45:52 & -28:19:30 & M6 & 0.32 & $<$2.73 & ? & $52.6\pm6.4$ & $<$0.15 & $<$0.9 & 6 \\ J1111-2655 & 11:11:28 & -26:55:02 & M6 & 0.47 & 0.93 & N & $43.8\pm5.0$ & $<$0.16 & $<$0.7 & 6 \\ J1203-3821 & 12:03:59 & -38:21:40 & M8 & 0.80 & $<$3.09 & ? & $65.8\pm8.6$ & $<$0.15 & $<$1.4 & 6 \\ J1252-4948 & 12:52:09 & -49:48:28 & M8/9 & 0.86 & 3.00 & Y & $91.8\pm9.4$ & $<$0.13 & $<$2.3 & 6 \\ J1106-3715 & 11:06:44 & -37:15:11 & M9 & $<$0.88 & $<$4.02 & ? & $53.4\pm8.0$ & $<$0.12 & $<$0.7 & 6 \\ J1326-5022 & 13:26:53 & -50:22:27 & M9 & 1.44 & $<$3.32 & ? & $71.0\pm9.0$ & $<$0.13 & $<$1.4 & 6 \\ J1247-3816 & 12:47:44 & -38:16:46 & M9 & 2.16 & 4.27 & Y & $63.8\pm6.4$ & $<$0.15 & $<$1.3 & 7 \\ TWA 29 & 12:45:14 & -44:29:07 & M9/9.5 & $<$0.43 & $<$3.90 & ? & $33.3\pm7.2$ & $<$0.14 & $<$0.3 & 8 \\ J1207-3900 & 12:07:48 & -39:00:04 & L0/1 & 0.43 & $<$4.44 & ? & $60.2\pm5.2$ & $<$0.14 & $<$1.1 & 7 \\ \hline\end{tabular} \caption{Targets for our ALMA observations. The third column indicates if the source had prior indications of IR excess from WISE ($W1-W4>2$); "?" denotes upper limits at WISE bands $W3$, $W4$, or both. Distances listed here have been calculated via the BANYAN II kinematic tool \citep{Gagne:2014a}. Also listed are the continuum detections and 3-$\sigma$ upper limits. Dust masses are estimated as described in Section~\ref{results} and using $T_{dust}\sim40$~K. All sources are unresolved with ALMA. \newline References: (1) \citealt{Looper:2010b}, (2) \citealt{Looper:2010a}, (3) \citealt{Shkolnik:2011}, (4) \citealt{Schneider:2012b}, (5) \citealt{Rodriguez:2011}, (6) \citealt{Looper:2011}, (7) \citealt{Gagne:2014b}, (8) \citealt{Looper:2007}. } \label{tab:targets} \end{center} \end{table*} Of the 15 targets observed, four systems were detected in continuum emission. All four sources were unresolved. These first-time detections are listed in Table~\ref{tab:targets} along with the measured flux from the primary beam corrected images. Fluxes were measured by fitting a Gaussian to the cleaned image. We also list 3-$\sigma$ limits, which are three times the RMS error estimated in each case. The measured fluxes are consistent with emission from T$\sim$40~K dust grains as determined by modeling the spectral energy distribution (SED). Figure~\ref{fig:seds} shows SEDs of these 4 systems along with blackbody fits to their IR/submm excesses (see, e.g., \citealt{Schneider:2012a}). In the case of TWA 30B, which displays strong variability and may have complex disk structure as a result of the edge-on disk geometry (\citealt{Looper:2010b}; Principe et al., submitted), a more sophisticated model is clearly required to accurately describe its SED. These SED models are overly simplistic; unless the emission is coming from narrow rings, the real disks will have a range of temperatures. Nevertheless, the models in Fig.~\ref{fig:seds} are useful to demonstrate the presence of cold dust in the system. The fractional luminosity, $\tau = L_{IR}/L_{bol}$, ranges from 2 to 4\% for TWA~32, 33, and 34. If we assume the continuum emission is coming from large (mm-sized) grains which radiate as blackbodies at $\sim$40~K, then the dust grains need to be located a few AU from these low-mass stars. In addition to the four Table~\ref{tab:targets} stars that are coincident with submm continuum sources, we detected continuum sources that are well offset (typically by $\sim$10\arcsec) from three systems (TWA 31, J1247-3816, and J1207-3900). Given the typical positional accuracy of ALMA observations is better than 0.1\arcsec, we conclude these are background sources, likely of extragalactic origin. Assuming optically thin dust, we can estimate the dust mass from: \begin{gather*} M_{dust} = \frac{F_\nu D^2}{\kappa_\nu B_\nu(T_{dust})}, \end{gather*} where we adopt $\kappa_\nu = 1.15$~cm$^2$/g (see \citealt{Rodriguez:2010} and references therein) and $T_{dust}=40$~K. We tabulate the resulting estimates of $M_d$, as well as 3-$\sigma$ upper limits, in Table~\ref{tab:targets}. The inferred dust masses range from $\sim$1 to 16\% of an Earth mass. An alternative approach to determine dust masses is to use the temperature-luminosity relationship for protoplanetary disks derived in \citet{Andrews:2013}, namely $T_d \approx 25 (L_*/L_\odot)^{1/4}$. While consistent with results for disks orbiting young, earlier type stars, the \citet{Andrews:2013} relationship does not necessarily hold for very low-mass objects such as those in Table~\ref{tab:targets} (van der Plas et al., in prep). Young M5 stars have log~$L_*/L_\odot \approx -2$, which suggests a dust temperature of $\sim$8~K. This is 5 times lower than the temperatures assumed in the SED model and, under the same assumptions, implies dust masses that are 5 times higher. In addition to obtaining flux upper limits for the individual continuum non-detections, we created a stacked image by averaging non-detections, to assess whether a significant fraction of them might display emission at levels just below detectability. No detection is apparent in this average image; we obtain a 3-$\sigma$ upper limit of 0.05 mJy/beam. Among our sample only TWA~34 displays detectable $^{12}$CO emission, which we discuss in Section~\ref{twa34co}. For those systems in our sample with no CO detections, we can infer a 3-$\sigma$ upper limit of $\sim$0.002~$M_E$ in molecular gas following the prescription in Section~\ref{twa34co} and assuming optically thin $^{12}$CO, CO:H$_2$ of 10$^{-4}$, and a distance of $\sim$60~pc. \subsection{TWA~34: CO Detection}\label{twa34co} Among the Table~\ref{tab:targets} systems, only TWA~34 shows evidence of $^{12}$CO emission. Although the $^{12}$CO emission is unresolved for each individual velocity channel, we find the centroid changes with velocity, allowing us to generate the first moment map in Figure~\ref{fig:mom1}. Figure~\ref{fig:mom1} suggests that TWA~34 is orbited by a molecule-rich disk viewed at intermediate to high inclination, with North-South rotation. \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{twa34_mom1.pdf} \end{center} \caption{Velocity map for the $^{12}$CO(2--1) emission in TWA~34. } \label{fig:mom1} \end{figure} We show in Figure~\ref{fig:spec} the integrated line profile of the CO emission, Hanning smoothed with a kernel size of 3 channels. This double-peaked CO line profile is indicative of Keplerian rotation. Hence, to characterize this emission, we fit a parametrized Keplerian model as described in \citet{Kastner:2008a}. That is, we fit a parametric line profile function described by: \begin{align*} F = \left\{ \begin{array}{lr} F_0 \ ((v-v_0)/v_d)^{3q-5} & |v-v_0|>v_d \\ F_0 \ ((v-v_0)/v_d)^{p_d} & |v-v_0|<v_d \end{array} \right. \end{align*} where $F_0$ is the peak line intensity, $\nu_0$ is the rest frequency in the star/disk system frame, $v_d$ is the projected rotational velocity near the outer edge of the disk, and $p_d$ and $q$ are quasi-physical disk parameters (see \citealt{Kastner:2008a} for details). We fix $q$=0.5 and $p_d$=0.1 for simplicity. The resulting best-fit parametric line profile is displayed in Figure~\ref{fig:spec}. We obtain a peak intensity of $0.034\pm0.002$~Jy, a systemic velocity of $2.3\pm0.1$~km/s in the LSR frame, $v_d$ of $2.49\pm0.09$km/s, and an integrated intensity of $0.34\pm0.03$~Jy~km/s. The parameter $v_d$ can be used to estimate the outer radius of the disk as detected in CO emission, $R_d$, from \begin{gather*} v_d^2 = G M_* / R_d \end{gather*} where $M_*$ is the mass of the star, 0.08~$M_\odot$ \citep{Baraffe:2015}. We thereby estimate the CO disk orbiting TWA~34 is $\sim$11 AU in radius. This suggests that the disk around TWA~34 is more compact than those seen around younger brown dwarfs and low-mass stars in Taurus \citep{Ricci:2014}. We note that the CO emission in Figure~\ref{fig:mom1} is marginally resolved and suggests a larger CO radius of $\sim$20--40~AU. This discrepancy in the CO outer radius estimates can also be seen in other disks when comparing single-dish line measurements to resolved interferometric imaging (e.g., \citealt{Kastner:2008b, Rodriguez:2010, Sacco:2014, Huelamo:2015}). Higher resolution CO imaging will allow an independent measurement of the disk size, which can in turn be used to accurately estimate the mass of TWA~34. We estimate a gas mass of $\sim$0.2~$M_E$ following the prescriptions of \citet{Zuckerman:2008}, \citet{Kastner:2008a}, \citet{Rodriguez:2010}, and references therein. We assume optically thin $^{13}$CO, a $^{12}$C:$^{13}$C ratio of 89, a CO:H$_2$ ratio of 10$^{-4}$, temperature of $\sim$40~K, and a 3-$\sigma$ $^{13}$CO upper limit of $\sim$0.03~Jy~km/s. The $^{13}$CO upper limit is determined assuming a linewidth identical to that of $^{12}$CO. This estimate for the gas mass is $\sim$7 times higher than that for the dust mass. This ratio is comparable to other evolved molecular gas disks such as TW~Hya and V4046~Sgr, as well as some disks in the much younger ($\sim$1--3~Myr) Taurus star-forming region (e.g., \citealt{Sacco:2014,Williams:2014}). As these and many other previous studies suggest, such low inferred gas-to-dust ratios could be indicative of gas removal through accretion or photoevaporation, or could merely reflect overestimates of the CO:H$_2$ ratio; due, for example, to freezing out of CO on cold dust grains. These various processes result in large uncertainties when estimating the gas mass of disks based on CO measurements. \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{figure_2.pdf} \end{center} \caption{$^{12}$CO(2--1) emission line profile of TWA~34, the only target in our sample with detected CO gas. The red, thin line represents the best-fit Keplerian profile.} \label{fig:spec} \end{figure} \subsection{TWA 34: System Velocity} From our best-fit Keplerian profile (see prior section), we have obtained an estimate for the systemic velocity of TWA~34. In the barycentric frame of reference, this velocity corresponds to $13.3\pm0.1$~km/s. This is the first accurate radial velocity measurements available for TWA~34 (see also \citealt{Murphy:2015}). At a distance of $47.0\pm5.6$~pc and using the proper motions listed in PPMXL \citep{Roeser:2010}, we can estimate UVW velocities of $-11.0\pm1.6$, $-16.2\pm0.6$, $-3.9\pm1.4$~km/s. These agree very well with the average velocity of the TWA ($-9.87$, $-18.06$, $-4.52$~km/s; \citealt{Malo:2013}). This new velocity measurement thereby further supports the conclusion that TWA~34 is a member of the TWA. \section{Discussion} Among the 15 low-mass TWA members and candidate members listed in Table~\ref{tab:targets}, only 4 yielded ALMA continuum detection at 1.3 mm, despite the fact that many show some evidence of warm circumstellar dust. In the absence of cold dust grains, the warm dust grains detected by WISE would have 1.3~mm emission $<$0.1~mJy, which is below the sensitivity of our ALMA observations. Because we are only sensitive to cold mm-sized grains, our observations appear to demonstrate that the presence of warm circumstellar dust does not necessarily imply cold dust is also present in the system. This is in agreement with prior studies of M stars (see, e.g., \citealt{Lestrade:2009} and references therein) that have found the incidence of cold disks to be much lower around such stars than around higher mass stars. Although the ALMA non-detections appear to indicate no cold grains exist, an alternative explanation is that any surviving grains in the outer disk have already grown to cm size or larger and become invisible at 1.3~mm wavelengths \citep{Ricci:2010a,Ricci:2010b,Mohanty:2013}. The continuum non-detections suggest dust masses of about $10^{-2} M_E$ or less. This is similar to what has been observed for other $\sim$10~Myr-old debris disks around M stars (see \citealt{Wyatt:2008} and references therein). We were more sensitive to molecular gas masses and achieved a limit of a few times $10^{-3} M_E$ for gas in H$_2$, assuming CO/H$_2$ of $10^{-4}$. As the case for TWA~34 and other disks shows, the gas-to-dust ratio is unlikely to be $\sim$100 as in the ISM (see \citealt{Williams:2014}). It appears likely that by the age of the TWA, the gas in a typical M star's disk has in general been efficiently removed, even in cases where a significant mass of primordial dust has survived. However, studies have identified signatures of on-going gas accretion in some of these systems, for example around TWA~30A, 30B, and 31 \citep{Looper:2010a,Looper:2010b, Shkolnik:2011}. Hence, at least in certain cases, it is likely that disk CO gas has frozen out onto dust grains, suppressing the gas-phase CO abundance. \section{Conclusions} We have carried out an ALMA survey of 15 low-mass TWA members and candidates to search for molecular gas in the form of $^{12}$CO and $^{13}$CO as well as provide constraints on continuum dust emission. Among systems targeted, four (TWA~30B, 32, 33, and 34) have detected dust emission consistent with the existence of cold dust grains in the disk. Circumstellar dust grain temperatures of $\sim$40 K are consistent with the mid-infrared to submm SEDs for these systems. All continuum sources are unresolved. While most of our sample shows indications of warm dust based on WISE measurements, the ALMA non-detections suggest any cold grains present in the outer disk may have already grown to cm size or larger. Only one system, TWA~34, shows signatures of molecular gas in its disk in the form of $^{12}$CO (2--1) emission. The $^{12}$CO emission has velocity structure indicative of Keplerian rotation. The systemic velocity for the system, as determined from the CO detection, is consistent with membership in the TWA. Among the sample of known $\sim$7--10 Myr-old star/disk systems, TWA~34, at just $\sim$50 pc from Earth, is the lowest mass star thus far identified as harboring cold molecular gas in an orbiting disk. \begin{acknowledgements} This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.00457.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. We thank our referee, Greg Herczeg, for the detailed and useful review of our manuscript. D.R.R. acknowledges support from FONDECYT grant 3130520. G.v.d.P. acknowledges support from FONDECYT grant 3140393. D.P. acknowledges support from FONDECYT grant 3150550. G.v.d.P. acknowledges support from FONDECYT grant 3140393 and by the Millennium Nucleus RC130007 (Chilean Ministry of Economy). J.K.'s research on young stars near Earth is supported by National Science Foundation grant AST-1108950 and NASA Astrophysics Data Analysis Program grant NNX12H37G, both to RIT. S.M. acknowledges the support of the STFC grant ST/K001051/1 \end{acknowledgements}
\section{Introduction} Topological insulators fall in a class of materials known as symmetry protected topological phases. In these systems the gapless Dirac spectrum of the surface states is protected by symmetries such as charge conservation, time reversal and spatial inversion. Breaking of these symmetries leads to opening of a gap in the spectrum. In this work, we consider a subclass of these systems which are ultrathin topological insulators (TIs) with gapped Dirac cones on their top and bottom surfaces. Thin films of topological insulators have been experimentally fabricated for various TI materials such as $Sb_{2}Te_{3}$ \cite{1a}. The gap in the top and bottom surface states is usually called hybridization gap which arises as a result of coupling between top and bottom surface states of a 3D topological insulator when its thickness is sufficiently small, 5QLs (Quintuple Layers) and thinner \cite{1b,1c,1d}. This gap can be tuned by varying the thickness of topological insulator films \cite{1e}. Hence in addition to breaking of the aforementioned symmetries, gap opening in thin films is also possible through hybridization. We investigate the role of gap opening through symmetry breaking and hybridization on the optical response of the system. Specifically, we study the magneto-optical response of TI thin films with inversion symmetry breaking. Inversion symmetry breaking occurs in thin films grown on a substrate and can also be tuned by electrical gating \cite{1c,1f}. In both cases, chemical potential on the two surface can be different with the result that the Dirac points are not at the same energy on the two surfaces. In addition to gap tuning, another advantage of thin films is that their bulk contribution is small and allows observation of surface properties of topological insulators \cite{1g,1h,1i}. In earlier work, it was revealed that thin film topological insulators show interesting physics when time reversal symmetry is broken; either by magnetic ordering or by the application of external magnetic field \cite{1j,1k,1l,1m,1n,1o}. It was shown that through gap tuning, the system can make a transition from the normal insulator(NI) to a TI phase with finite dc Hall conductivity \cite{1k,1m,1o}. The main focus of the present work is the investigation of inversion symmetry breaking effects on the magneto-optical response of TI thin films. In addition to inversion symmetry, time reversal symmetry can also be explicitly broken by a magnetic field applied to our system, which we consider. Inversion symmetry breaking generates additional gap in the spectrum of topological insulator thin films \cite{1c}. This energy gap is not only controlled by thickness of the films \cite{1e} and external exchange field/magnetic field \cite{1j}, but it can also be generated through interaction with a substrate and can be tuned by electrical gating \cite{2a}. In this paper, we show that in the presence of inversion symmetry breaking, a new feature of the Landau level spectrum is Landau level crossings which lead to new optical transition channels that can be observed in the optical response. These channels were previously forbidden in the presence of inversion symmetry. Further, we determine the dc and the optical conductivity in the quantum Hall regime and show the presence of Hall steps and plateaus even in the ac regime. We also show that by tuning the hybridization and symmetry breaking parameters, a transition from the normal to a topological insulator phase occurs with measurable signatures in the magneto-optical response. The paper is organized as follows: In Sec. II, we present our model system. Sec. III is devoted to the calculation of the optical conductivity tensor for our system. Results for optical Hall conductivity in the quantum Hall regime are presented in Sec. IV with conclusions and summary of results in Sec. V. \section{Model} Our system is a topological insulator thin film, thin enough for hybridization of surface states on the top and bottom, and broken inversion symmetry which can be either due to gating or substrate. In order to highlight the effects of Time Reversal (TR) symmetry breaking we include a term in the Hamiltonian which can arise due to magnetic ordering in the presence of magnetic dopants in the proximity of the top surface which are exchange coupled to the electronic spins. This term has been included here to illustrate the effects of TR symmetry breaking on the energy spectrum. The effective low energy Hamiltonian is \cite{1c,1j,1f}% \begin{equation} H=\hbar v_{f}\tau_{z}\otimes(\sigma_{x}k_{y}-\sigma_{y}k_{x})+\Delta_{h}% \tau_{x}\otimes I+\Delta_{ib}\tau_{z}\otimes I+\Delta_{z}I \otimes\sigma_{z}, \label{1}% \end{equation} with the basis: $|t\uparrow\rangle,|t\downarrow\rangle,|b\uparrow\rangle$ and $|b\downarrow\rangle.$ Here $t,b$ denote the top and bottom surface states and $\uparrow,\downarrow$ represent the spin up and down states. $v_{f}$ is the Fermi velocity, and $I$ is the identity matrix, $\sigma_{i}(i=x,y,z)$ and $\tau_{j}(j=x,y,z)$ are Pauli matrices acting on spin space and opposite surface space (surface pseudospin). $\Delta_{h}$ represents the hybridization between the two surface states. For large thickness $\Delta_{h}\approx0$, hybridization of top and bottom surface states can be neglected, however when thickness is sufficiently reduced $\Delta_{h}$ generates finite gap in the Dirac spectrum. $\Delta_{ib}$ is the inversion symmetry breaking term between the two surfaces, it can result from interaction between the TI thin film and the substrate or by an electric field applied perpendicular to the surface of the thin film \cite{1c}. $\Delta_{z}$ can be the exchange field along the z-axis introduced by possible ferromagnetic ordering of the magnetic impurities. The energy spectrum of the above Hamiltonian is given by% \begin{equation} \varepsilon_{\pm}^{\alpha}(k)=(-1)^{\alpha}\sqrt{\hbar^{2}v_{f}^{2}% k^{2}+\Delta_{h}^{2}+\Delta_{ib}^{2}+\Delta_{z}^{2}\pm2\sqrt{\hbar^{2}% v_{f}^{2}k^{2}\Delta_{ib}^{2}+\Delta_{ib}^{2}\Delta_{z}^{2}+\Delta_{z}% ^{2}\Delta_{h}^{2}}}, \end{equation} where $\alpha=1$ represents the states in valence band and $\alpha=0$ represents the states in conduction band. $\pm$ \ correspond to upper and lower surface. Fig. (1) shows the band structure for our system at different values of $\Delta_{h},\Delta_{ib}$ and $\Delta_{z}$. For $\Delta_{h}% =\Delta_{ib}=\Delta_{z}=0$, both top and bottom surface states are gapless and degenerate. For a thin TI without any source of TR and I symmetry breaking, $(\Delta_{h}\neq0;\Delta_{ib}=\Delta_{z}=0),$ the bands are degenerate separated by insulating gap $\Delta_{h}$. For the case of inversion asymmetry and finite hybridization with the system preserving time reversal symmetry $(\Delta_{z}=0)$, a Rashba-like splitting in the band structure occurs; Fig. 1(c). The bands are degenerate at $k=0$ as $\varepsilon_{+}^{0}% (k=0)=\varepsilon_{-}^{0}(k=0)$ in conduction band and $\varepsilon_{+}% ^{1}(k=0)=\varepsilon_{-}^{1}(k=0)$ in valence band for all values of $\Delta_{h}$ and $\Delta_{ib}$ while for $k\neq0$ the bands are not degenerate. This $k=0$ degeneracy is lifted by the introduction of time reversal symmetry breaking term $\Delta_{z}$, with the result that the degeneracy does not exist for any value of $k$. Therefore, the band structure represents Normal Insulator (NI) regime for small value of $\Delta_{z}$, as the value of $\Delta_{z}$ is increased the lower gap decreases. At a particular value of $\Delta_{z}$ the gap closes. This gapless point represents the phase transition point. As $\Delta_{z}$ is further increased the transition from NI to TI phase takes place and the gap reopens, (see Figs.(1e)-(1f)). The red lines (the blue lines) represent dispersion of upper (lower) surface.\newline Now we consider the effect of Landau quantization on the system by the application of an external magnetic field along $z$-axis directed perpendicular to the surface of TI thin film aligned in the xy-plane. The magnetic field explicitly breaks TR symmetry. In the rest of the paper we will not consider the effect of magnetic ordering. The Hamiltonian of our system takes the form \cite{1m,1o},% \[ \hat{H}_{\sigma\tau}=\hbar v_{f}\tau_{z}\otimes\left[ \sigma_{x}\mathbf{\pi }_{y}-\sigma_{y}\mathbf{\pi}_{x}\right] +\Delta_{h}\tau_{x}\otimes I+\Delta_{ib}\tau_{z}\otimes I+\Delta_{z}I\otimes\sigma_{z}, \] $\mathbf{\pi=k+}e\mathbf{A/}\hbar$ is the two dimensional canonical momentum with vector potential $\mathbf{A}$. Here $\Delta_{z}=g% \mu _{B}B/2$ is the Zeeman energy associated with the applied magnetic field $B=B\hat{z}$, with $g$ effective Lande factor, $\mu_{B}$ is the Bohr magneton. We choose the Landau gauge for the vector potential $A=(0,xB,0).$ Since $p_{x}$ and $x$ do not commute, it is convenient to write the Hamiltonian in terms of dimensionless operators% \[ \hat{H}_{\sigma\tau}=\frac{\hbar v_{f}}{\sqrt{2}l_{B}}\tau_{z}\otimes\left[ \sigma_{x}l_{B}\hat{P}+\sigma_{y}\frac{\hat{Q}}{l_{B}}\right] +\Delta_{h}% \tau_{x}\otimes I+\Delta_{ib}\tau_{z}\otimes I+\Delta_{z}I\otimes\sigma_{z}, \] where $l_{B}=\sqrt{c/eB}$ is the magnetic length. $\hat{Q}=-l_{B}^{2}p_{x}$ and $\hat{P}=p_{y}+\frac{eB}{\hbar}x$ such that $[\hat{Q},\hat{P}]=i\hbar$. Employing the ladder operators $a=1/\sqrt{2}l_{B}(\hat{Q}+il_{B}^{2}\hat{P})$ and $a^{\dagger}=1/\sqrt{2}l_{B}(\hat{Q}-il_{B}^{2}\hat{P}),$ we may express the Hamiltonian as:% \begin{equation} H=\frac{i\hbar\omega_{B}}{\sqrt{2}}\tau_{z}\otimes(\sigma^{+}a-\sigma ^{-}a^{\dagger})+\Delta_{h}\tau_{x}\otimes I+\Delta_{ib}\tau_{z}\otimes I+\Delta_{z}I\otimes\sigma_{z} \label{3}% \end{equation} where $\omega_{B}=v_{F}/l_{B}$, which plays a role analogous to the cyclotron frequency in the LL spectrum of a regular 2DEG. We can write single particle eigenstates in the following form% \begin{equation} \left\vert n\alpha s\right\rangle =u_{nT\uparrow}^{\alpha s}\left\vert n-1,T,\uparrow\right\rangle +u_{nT\downarrow}^{\alpha s}\left\vert n,T,\downarrow\right\rangle +u_{nB\uparrow}^{\alpha s}\left\vert n-1,B,\uparrow\right\rangle +u_{nB\downarrow}^{\alpha s}\left\vert n,B,\downarrow\right\rangle . \label{4}% \end{equation} Here $\left\vert n,T(B),\uparrow(\downarrow)\right\rangle $ is the nth LL eigenstates on the top (bottom) surface with spin up (down), $\alpha=0,1$ and $s=\pm$ label the four eigenstates of Eq. (\ref{3}), corresponding to each LL index $n=0,...,\infty$, and $u_{n}^{\alpha s}$ are the corresponding complex four component spinor wave functions. Thus the Hamiltonian Eq. (\ref{3}) can be written in a $4\times4$ matrix% \begin{equation} H=% \begin{pmatrix} \Delta_{z}+\Delta_{ib} & -i\hbar\omega_{B}\sqrt{2n} & \Delta_{h} & 0\\ i\hbar\omega_{B}\sqrt{2n} & -\Delta_{z}+\Delta_{ib} & 0 & \Delta_{h}\\ \Delta_{h} & 0 & \Delta_{z}-\Delta_{ib} & i\hbar\omega_{B}\sqrt{2n}\\ 0 & \Delta_{h} & -i\hbar\omega_{B}\sqrt{2n} & -(\Delta_{z}+\Delta_{ib}) \end{pmatrix} ., \label{5}% \end{equation} Diagonalizing the hamiltonian in Eq. (\ref{5}), we find the following Landau level spectrum,% \begin{equation} \epsilon_{n\alpha s}=(-1)^{\alpha}\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}% +\Delta_{z}^{2}+2n\hbar^{2}\omega_{B}^{2}+2s\sqrt{\Delta_{ib}^{2}\Delta _{z}^{2}+\Delta_{h}^{2}\Delta_{z}^{2}+2n\Delta_{ib}^{2}\hbar^{2}\omega_{B}% ^{2}}}. \label{6}% \end{equation} The Landau level energy spectrum is shown in Fig. (2). An important feature of the spectrum is that its electron-hole symmetric for $n\neq0.$ The LL spectrum consists of two sets: $(i)\epsilon_{n0s}$ and $(ii)\epsilon_{n1s}$ where $\epsilon_{n1s}$ represents spectrum for occupied states below $\mu=0$ and $\epsilon_{n0s}$ represent the unoccupied states above $\mu=0$. For both sets of occupied and unoccupied Landau levels each Landau level splits in a doublet for $s=\pm1$. This splitting results from time reversal symmetry breaking term $(\Delta_{z}),$ broken inversion symmetry and the hybridization $(\Delta_{h})$ in the Hamiltonian. There are two situations where splitting can vanish, if $(i)$ the system has inversion symmetry $(\Delta_{ib}=0)$ and broken TR symmetry along with no hybridization $(\Delta_{h}=0)$ $(ii)$ system has both TR symmetry$(\Delta_{z}=0)$ and inversion symmetry$(\Delta_{ib}=0)$ with no constraint on the hybridization (for zero hybridization with both TR and inversion symmetry thin film TI will behave like gapless Dirac material with only one $n=0$ partially filled LL at $\mu=0$). Note that the $n=0$ LL, only splits when either $\Delta_{h}$ or $\Delta_{ib}$ is nonzero. A novel feature of LLs in the presence of broken inversion symmetry is their crossings within each set, between $nth$ and $(n+1)th$ levels with opposite $s$ values, at certain values of magnetic field. There is no crossing of $n=0$ LL in our system at any value of magnetic field. We consider terms linear in $k$. However, for large values of $k$ hybridization terms must be redefined as $\frac{\Delta}{2}-Bk^{2}$. This results in crossing of the $n=0$ Landau levels. This crossing becomes an anti-crossing in the presence of inversion symmetry breaking \cite{2b} The $n=0$ LLs behaves differently compared to $n\neq0$ LLs. For $n=0$ Eq. (\ref{4}) shows that the electrons are fully spin-polarized in these levels; only spin down levels are occupied. Hence $n=0$ levels are split into two sublevels with spin down unlike other levels which are split into four sublevels, two with spin up and two with spin down. The corresponding $n=0$ wavefunctions (un-normalized) are% \[ u_{0}^{\alpha s}=\{0,s(-1)^{\alpha}\frac{(-\Delta_{ib}+s\sqrt{\Delta_{ib}% ^{2}+\Delta_{h}^{2}})}{\Delta_{h}},0,1\}. \] The corresponding LL energies are \begin{equation} \epsilon_{0\alpha s}=(-1)^{\alpha}\left\vert \Delta_{z}+s\sqrt{\Delta_{ib}% ^{2}+\Delta_{h}^{2}}\right\vert .\nonumber \end{equation} Explicitly \begin{align} u_{0}^{0-1} & =\{0,\frac{\Delta_{ib}+\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}}% }{\Delta_{h}},0,1\},\label{7}\\ u_{0}^{11} & =\{0,\frac{\Delta_{ib}-\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}}% }{\Delta_{h}},0,1\},\nonumber \end{align} for $\Delta_{z}<\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}}.$% \begin{equation} \epsilon_{00-1}=\left\vert \Delta_{z}-\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}% }\right\vert ,\text{ \ \ \ \ \ }\epsilon_{011}=-\left\vert \Delta_{z}% +\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}}\right\vert . \label{8}% \end{equation} For $\Delta_{z}>\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}},$ $u_{0}^{0-1}$ is replaced by $u_{0}^{1-1}:$ \begin{equation} u_{0}^{1-1}=\{0,\frac{\Delta_{ib}+\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}}% }{\Delta_{h}},0,1\},\nonumber \end{equation} with energy given by% \[ \epsilon_{01-1}=-\left\vert \Delta_{z}-\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}% }\right\vert . \] In the NI phase the states are $u_{0}^{0-1}$ and $u_{0}^{11}$, and in TI phase the states are $u_{0}^{1-1}$ and $u_{0}^{11}.$ Thus one of the $n=0$ electron-like sublevel becomes a hole-like sublevel which can be seen in Fig. (3) for density of states and in the Landau level spectrum in Fig. (2). This change in character of the zeroth LL manifests particle-hole symmetry breaking in the system which results in jump in Hall conductivity from $0$ to a finite value at chemical potential $\mu=0$. This signals a transition from the NI phase to the TI phase. \subsection{Density of states} The Green function associated with our Hamiltonian is \[ G(\omega,n,\alpha,s)=% {\displaystyle\sum\limits_{\alpha,s}} \frac{1}{\omega-(-1)^{\alpha}\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}+\Delta _{z}^{2}+2n\hbar^{2}\omega_{B}^{2}+2s\sqrt{\Delta_{ib}^{2}\Delta_{z}% ^{2}+\Delta_{h}^{2}\Delta_{z}^{2}+2n\Delta_{ib}^{2}\hbar^{2}\omega_{B}^{2}}% }+i\eta}. \] From which we can compute density of states as% \begin{align*} D(\omega) & =\frac{-1}{2\pi l_{B}^{2}}% {\displaystyle\sum\limits_{n}} ImG(\omega,n,\alpha,s),\\ D(\omega) & =\frac{-1}{\pi}\frac{1}{2\pi l_{B}^{2}}(% {\displaystyle\sum\limits_{n\neq0}} Im G(\omega,n,\alpha,s)+Im G(\omega,0,\alpha,s)). \end{align*} The plots for density of states are shown in Fig. (3). $(NI)$ represents the normal insulator phase in which the LL spectrum has perfect particle-hole symmetry. At $\Delta_{z}>\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}},$ both $n=0$ levels are filled as they shift to the valence band, thus breaking particle-hole symmetry for LL spectrum across $\epsilon=0$. This shift in the Landau level is associated with transition from (NI) phase to the TI phase. We notice two interesting features of the density of states: (i) At certain values of $\omega$ two adjacent peaks are closely spaced such that they tend to merge in a single peak. (ii) Some peaks have higher weight than all other peaks. These two distinct features are attributed to crossing (or near crossing) of two Landau levels at certain values of magnetic field, see Fig. (2). The parameters chosen in these plots are $\Delta_{ib}=0.006eV$, $\Delta _{h}=0.004eV$ and $\hbar v_{f}^{2}eB=1.6\times10^{-4}B$. \section{Magneto-optical conductivity} To determine the magneto-optical conductivity tensor we need eigenfunctions of the Hamiltonian given in Eq. (5). The explicit form of the eigenfunctions $u_{n}^{\alpha s}$ in Eq. (4) is \begin{align} u_{n}^{\alpha s} & =(u_{nT\uparrow}^{\alpha s},u_{nT\downarrow}^{\alpha s},u_{nB\uparrow}^{\alpha s},u_{nB\downarrow}^{\alpha s}),\label{11}\\ u_{nT\uparrow}^{\alpha s} & =\dfrac{-i}{d_{1}N}(N_{1}+sN_{2}(\Delta _{ib}+\Delta_{z})+\epsilon_{n\alpha s}(\Delta_{ib}\Delta_{z}+sN_{2})),\\ u_{nT\downarrow}^{\alpha s} & =\dfrac{1}{d_{2}N}(\Delta_{ib}^{2}% +sN_{2}+\epsilon_{n\alpha s}\Delta_{ib}),\\ u_{nB\uparrow}^{\alpha s} & =\dfrac{-1}{d_{3}N}(N_{3}-\Delta_{h}% (-\Delta_{ib}-\Delta_{z}-\epsilon_{n\alpha s})(\Delta_{ib}-\Delta_{z}% -\epsilon_{n\alpha s})),\\ u_{nB\downarrow}^{\alpha s} & =\dfrac{1}{N}. \end{align} $N_{1}$,$N_{2}$,$N_{3}$,$d_{1}$,$d_{2}$,$d_{3}$ are given by \begin{align} d_{1} & =\sqrt{2n}\Delta{_{h}}\omega{_{B}}(\Delta_{ib}-\Delta_{z}% )\label{12}\\ d_{2} & =\Delta{_{h}}(\Delta_{ib}-\Delta_{z})\\ d_{3} & =2i\sqrt{2n}\Delta_{h}\omega_{B}(\Delta_{ib}-\Delta_{z})\\ N_{1} & =\Delta_{ib}^{2}\Delta_{z}+\Delta_{h}^{2}\Delta_{z}+\Delta _{ib}\Delta_{z}^{2}+2n\omega_{B}^{2}\Delta_{ib}\\ N_{2} & =\sqrt{\Delta_{ib}^{2}\Delta_{z}^{2}+\Delta_{h}^{2}\Delta_{z}% ^{2}+2n\omega_{B}^{2}\Delta_{ib}^{2}}\\ N_{3} & =\Delta_{h}(\Delta_{h}^{2}+2n\omega_{B}^{2}), \end{align} and $N$ is the normalization constant. Given the above eigenfunctions we can now evaluate the magneto-optical conductivity tensor within linear response regime using Kubo formula \cite{2c}. \begin{equation} \sigma_{\alpha\beta}(\omega)=\dfrac{i\hbar}{2\pi l_{B}^{2}}\underset{n\alpha s\neq n^{\prime}\alpha^{\prime}s^{^{\prime}}}{\sum}\dfrac{f(\varepsilon _{n\alpha s})-f(\varepsilon_{n^{\prime}\alpha^{\prime}s^{\prime}})}% {\epsilon_{n^{\prime}\alpha^{\prime}s^{\prime}}-\epsilon_{n\alpha s}}% \dfrac{\left\langle n\alpha s\right\vert \hat{\jmath}_{\alpha}\left\vert n^{\prime}\alpha^{\prime}s^{\prime}\right\rangle \left\langle n^{\prime}% \alpha^{\prime}s^{\prime}\right\vert \hat{\jmath}_{\beta}\left\vert n\alpha s\right\rangle }{\hbar\omega-\epsilon_{n^{\prime}\alpha^{\prime}s^{\prime}% }+\epsilon_{n\alpha s}+i\hbar/(2\tau)} \label{13}% \end{equation} where $\hat{\jmath}_{\alpha}=\frac{e}{\hbar}\dfrac{\partial H}{\partial k_{\alpha}}$ and $f(\varepsilon_{n\alpha s})=\frac{1}{1+exp[\beta (\epsilon_{n\alpha s}-\mu)]}$ is the Fermi distribution function with $\beta=1/k_{B}T$ and $\mu$ is the chemical potential. We note that transitions between occupied Landau levels are Pauli blocked. So the only allowed transitions will be from the occupied LLs in valence band to unoccupied LLs in conduction band (i.e. across chemical potential $\mu=0$). After evaluating the matrix elements we have found that the selection rules for allowed transitions is $n^{\prime}=n\pm1$. The absorptive part of the conductivity for $n=0$ Landau level above and below $\mu=0$ is \begin{align} \binom{\operatorname{Re}\sigma_{xx}(\omega)/\sigma_{0}}{Im\sigma_{xy}% (\omega)/\sigma_{0}} & =\hbar ev_{f}^{2}B\underset{s}{[\sum}\dfrac {(f(\epsilon_{11s})-f(\epsilon_{00-1}))N(1,0,s,1,1)\times\eta}{(\epsilon _{00-1}-\epsilon_{11s})((\hbar\omega+\epsilon_{11s}-\epsilon_{00-1})^{2}% +\eta^{2})}\nonumber\\ & \pm\underset{s}{\sum}\dfrac{(f(\epsilon_{011})-f(\epsilon_{10s}% ))M(0,0,-1,1,s)\times\eta}{(\epsilon_{10s}-\epsilon_{011})((\hbar \omega+\epsilon_{011}-\epsilon_{10s})^{2}+\eta^{2})}\nonumber\\ \pm & \underset{n=2,ss^{\prime}\alpha\neq\alpha^{\prime}}{\sum}% \dfrac{(f(\epsilon_{n\alpha s})-f(\epsilon_{n+1\alpha^{\prime}s^{\prime}% }))M(n,s,s^{^{\prime}}\alpha,\alpha^{\prime})\times\eta}{(\epsilon _{n+1\alpha^{\prime}s^{\prime}}-\epsilon_{n\alpha s})((\hbar\omega +\epsilon_{n\alpha s}-\epsilon_{n+1\alpha^{\prime}s^{\prime}})^{2}+\eta^{2}% )}\nonumber\\ & +\underset{n=2,s,s^{\prime}\alpha\neq\alpha^{\prime}}{\sum}\dfrac {(f(\epsilon_{n\alpha s})-f(\epsilon_{n-1\alpha^{\prime}s^{\prime}% }))N(n,s,s^{\prime},\alpha,\alpha^{\prime})\times\eta}{(\epsilon_{n-1\alpha ^{\prime}s^{\prime}}-\epsilon_{n\alpha s})((\hbar\omega+\epsilon_{n\alpha s}-\epsilon_{n-1\alpha^{\prime}s^{\prime}})^{2}+\eta^{2})}], \label{14}% \end{align} and for the case when both $n=0$ LLs are hole-like Landau levels, the conductivity is% \begin{align} \binom{\operatorname{Re}\sigma_{xx}(\omega)/\sigma_{0}}{Im\sigma_{xy}% (\omega)/\sigma_{0}} & =\hbar ev_{f}^{2}B\underset{s}{[\sum}\dfrac {(f(\epsilon_{01-1})-f(\epsilon_{10s}))N(1,0,s,1,1)\times\eta}{(\epsilon _{10s}-\epsilon_{01-1})((\hbar\omega+\epsilon_{01-1}-\epsilon_{10s})^{2}% +\eta^{2})}\nonumber\\ & \pm\underset{s}{\sum}\dfrac{(f(\epsilon_{011})-f(\epsilon_{10s}% ))M(0,1,-1,0,s)\times\eta}{(\epsilon_{10s}-\epsilon_{011})((\hbar \omega+\epsilon_{011}-\epsilon_{10s})^{2}+\eta^{2})}\nonumber\\ \pm & \underset{n=1,\alpha s\neq\alpha^{\prime}s^{^{\prime}}}{\sum}% \dfrac{(f(\epsilon_{n\alpha s})-f(\epsilon_{n+1\alpha^{\prime}s^{\prime}% }))M(n,\alpha,s,\alpha^{\prime},s^{^{\prime}})\times\eta}{(\epsilon _{n+1\alpha^{\prime}s^{\prime}}-\epsilon_{n\alpha s})((\hbar\omega +\epsilon_{n\alpha s}-\epsilon_{n+1\alpha^{\prime}s^{\prime}})^{2}+\eta^{2}% )}\nonumber\\ & +\underset{n=2,s,s^{\prime}\alpha\neq\alpha^{\prime}}{\sum}\dfrac {(f(\epsilon_{n\alpha s})-f(\epsilon_{n-1\alpha^{\prime}s^{\prime}})% )N(n,\alpha,s,\alpha^{\prime},s^{^{\prime}})\times\eta}{(\epsilon _{n-1\alpha^{\prime}s^{\prime}}-\epsilon_{n\alpha s})((\hbar\omega +\epsilon_{n\alpha s}-\epsilon_{n-1\alpha^{\prime}s^{\prime}})^{2}+\eta^{2}% )}]. \label{15}% \end{align} Here $\sigma_{0}=e^{2}/h,$ $\eta=\hbar/2\tau$ is the scattering rate related to broadening of the Landau levels and% \begin{equation} M(n,\alpha,s,\alpha^{\prime},s^{^{\prime}})=[(u_{nT\downarrow}^{\alpha s})^{\ast}u_{n+1T\uparrow}^{\alpha^{\prime}s^{\prime}}-(u_{nB\downarrow }^{\alpha s})^{\ast}u_{n+1B\uparrow}^{\alpha^{\prime}s^{\prime}}]\times \lbrack u_{nT\downarrow}^{\alpha s}(u_{n+1T\uparrow}^{\alpha^{\prime}% s^{\prime}})^{\ast}-u_{nB\downarrow}^{\alpha s}(u_{n+1B\uparrow}% ^{\alpha^{\prime}s^{\prime}})^{\ast}], \end{equation} and% \begin{equation} N(n,\alpha,s,\alpha^{\prime},s^{^{\prime}})=[(u_{nB\uparrow}^{\alpha s}% )^{\ast}u_{n-1B\downarrow}^{\alpha^{\prime}s^{\prime}}-(u_{nT\uparrow}^{\alpha s})^{\ast}u_{n-1T\downarrow}^{\alpha^{\prime}s^{\prime}}]\times\lbrack u_{nB\uparrow}^{\alpha s}(u_{n-1B\uparrow}^{\alpha^{\prime}s^{\prime}})^{\ast }-u_{nT\uparrow}^{\alpha s}(u_{n-1T\downarrow}^{\alpha^{\prime}s^{\prime}% })^{\ast}]. \end{equation} Where $\ast$ denotes complex conjugation. First let us consider the conductivity at zero temperature. The features of the spectrum to bear in mind are: All $n\neq0$ levels are split in a doublet $(s=\pm)$ for finite value of Zeeman energy and hybridization. The inversion symmetry breaking term $\Delta_{ib}$ results in crossing of Landau levels at different values of magnetic field strength. Therefore there are not only the allowed transitions between LLs with same $s$ but transitions can also occur between LLs with different $s.$ These transitions are not allowed in the presence of inversion symmetry\cite{1o}. Hence inversion symmetry breaking opens optical transition channels which were previously forbidden. Fig. (4) shows results for absorptive peaks from Eq. (\ref{14}) and Eq. (\ref{15}). All the absorptive peaks result from the transitions between LLs across $\mu=0$. In Fig. (4(a,b)), in the NI phase, the first peak corresponds to transition from $n=0$ and $n=1$ Landau level with $\omega=\epsilon _{00-1}-\epsilon_{11-1}.$ This is a single transition peak and is close to the 2nd absorption peak between the LL, $\omega=\epsilon_{10-1}-\epsilon_{011}$. Next two peaks are small and are also contributed by the Landau level transition between $n=0$ and $n=1$. The set of transitions involving $n=0$ and $n=1$ in NI phase are $\epsilon_{11s}\rightarrow\epsilon_{00-1}$ and $\epsilon_{011}\rightarrow\epsilon_{10s}$. As the magnetic field is increased which increases Zeeman energy such that at $\Delta_{z}=\sqrt{\Delta_{ib}% ^{2}+\Delta_{h}^{2}}$ , the LL $\epsilon_{00-1}$ becomes partially filled. At this stage one of the peaks corresponding to $n=0$ and $n=1$ transition disappears involving $n=0$ are $\epsilon_{011}\rightarrow\epsilon_{10s}$. Increasing Zeeman energy further by increasing the magnetic field such that it is greater then $4.25T$ the $n=0$ doublet becomes hole like doublet and LL $\epsilon_{00-1}$ changes to $\epsilon_{01-1}$. The allowed transitions for $n=0$ are $\epsilon_{011}\rightarrow\epsilon_{10s}$ and $\epsilon _{01-1}\rightarrow\epsilon_{10s}$ represented by peaks at $\omega =\epsilon_{10s}-\epsilon_{01-1}$ and $\omega=\epsilon_{10s}-\epsilon_{011}$. The transitions $\epsilon_{11s}\rightarrow\epsilon_{00-1}$ in NI is replaced by $\epsilon_{01-1}\rightarrow\epsilon_{10s}$ in the TI phase. The absorption peaks for $n\neq0$ shift to higher energy in TI phase because at high magnetic field the gap between LL increases. To understand the behavior of $Im\sigma_{xy}$ we must keep in mind the minus sign between the two terms in Eqs. (\ref{14}), (\ref{15}). The first two peaks result from transition between $n=0$ and $n=1$ levels, one having positive amplitude and the other having negative amplitude. The transition peaks involving $n\neq0$ transitions have decreased in height which is due to the negative sign. For example, the transitions from $n=2$ to $n=1$ Landau level and $n=1$ to $n=2$ have same denominator and the mismatch between numerators of the two transitions results in a net negative amplitude. Fig. (5a) shows the imaginary part transverse conductivity for absorption peaks due to photon absorption in the NI phase and Fig. (5b) shows the imaginary part transverse conductivity in the TI phase. The behavior of absorption peaks at finite chemical potential is shown in Fig. (6) and Fig. (7). For finite value of $\mu$ in the conduction band, in addition to filled LLs in valence band there are also filled LLs in conduction band resulting in allowed transition within same band(intraband transitions). Here interband and intraband transitions are defined with respect to the position of the chemical potential. These intraband absorption peaks shift to lower energy and do not split. The allowed transition within same bands (intraband transitions) have greater probability as compared to interband transitions. Fig. (9) and Fig. (10)\textbf{ }illustrate the allowed transitions between LLs at different values of magnetic field. The blue lines are for $s=-1$ LLs and green lines are for $s=1$ LLs. Except for $n=0$ all the other Landau levels have perfect particle-hole symmetry. The chemical potential is represented by thick black line. The vertical arrows show the allowed inter band transitions. The shift of chemical potential from $\mu=0$ to some finite value results in additional intraband transitions. \section{Optical Hall Conductivity} To calculate Hall conductivity we use wavefunctions from Eq. (4) in the Kubo formula given in Eq. (20) and obtain% \begin{align} \sigma_{\alpha\beta}(\omega) & =\hbar ev_{f}^{2}B\underset{n\alpha s\neq n^{\prime}\alpha^{\prime}s^{^{\prime}}}{\sum}\dfrac{(f(\varepsilon_{n\alpha s})-f(\varepsilon_{n+1\alpha^{\prime}s^{\prime}}))}{(\epsilon_{n+1\alpha ^{\prime}s^{\prime}}-\epsilon_{n\alpha s})}M(n,\alpha,s,\alpha^{\prime },s^{^{\prime}})\\ & [\dfrac{1}{\hbar\omega-\epsilon_{n\alpha s}+\epsilon_{n+1\alpha^{\prime }s^{\prime}}+i\hbar/(2\tau)}-\dfrac{1}{\hbar\omega-\epsilon_{n+1\alpha ^{\prime}s^{\prime}}+\epsilon_{n\alpha s}+i\hbar/(2\tau)}] \end{align} The effects of broken inversion symmetry that are reflected in the LL spectrum and crossing of LLs are also revealed in the dc\ and optical Hall conductivity, which we now discuss. In Figs. (11,12,13) plots with blue color show the results for dc Hall conductivity ($\sigma_{xy}(\omega=0)$) with $B=2$T (perfect particle hole symmetry across $\mu=0$), $B=4.27$T (one $n=0$ LL is partially filled at $\mu=0$) and $B=5.8$T (one extra filled LL for negative value of $\mu$) at temperature 1K respectively. The results are presented as a function of the chemical potential $\mu$. The LL spectrum is also shown to emphasize the unusual behavior of steps and plateaus at specific magnetic fields. These steps and plateaus show clear deviation from previous results, \cite{1n}, which were in the presence of inversion symmetry. For $n\geq1$ the plateau widths and step heights are symmetrical for both negative and positive values of $\mu,$ reflecting particle-hole symmetry in the system. However, widths and heights are not symmetrical within the same band because of LL crossings. The contribution of $n=0$ LLs to Hall conductivity pleateus shows interesting behavior. For chemical potential fixed at $\mu=0,$ the conductivity jumps from $0$ for $\Delta_{z}<\sqrt{\Delta_{ib}^{2}+\Delta _{h}^{2}}$ to a finite value for $\Delta_{z}>\sqrt{\Delta_{ib}^{2}+\Delta _{h}^{2}}$. This is an indication of phase transition from NI phase to TI phase as the magnetic field is increased. For $\Delta_{z}<\sqrt{\Delta _{ib}^{2}+\Delta_{h}^{2}}$ the $n=0$ doublet has one electron like ($u_{0}^{0-1}$) LL and one hole like ($u_{0}^{11}$). When magnetic field is increased such that $\Delta_{z}>\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}}$ both $n=0$ LLs become hole like, thus increasing the Hall conductivity. For $n=0$ LLs the Hall conductivity jump depends on magnetic field according to $[sgn(\Delta_{z}-\sqrt{\Delta_{ib}^{2}+\Delta_{h}^{2}})+1]$. In Figs. (10,11,12) red plots represent optical Hall conductivity. The steps and plateaus are robust for low values of $n=(0,1)$ but as the value of $n$ increases, they no longer remain robust especially for $n>\pm2.$ We note that as the value of magnetic field is increased the number of robust steps also increases, which is expected. The steps structure is symmetric across $\mu=0$ except for steps involving $n=0$ LLs. The steps corresponding to low value of $n$ are however robust unless frequency $\omega$ is close to a resonance. In the case of static hall conductivity these steps are always robust.\newline Next we examine the robustness of step-like structure in the optical Hall conductivity as function of disorder strength in both the phases of the system. The degree of disorder can be characterized by the scattering rate parameter $\eta$ \cite{2d} We present the results of our calculation for TI phase in Fig. (13). We can see that for dc hall conductivity the step-like structure remains for fairly large values of $\eta$. The step like structure for large $|n|$ begin to dminish when $\eta$ is increased. However the step corresponding to $n=0$ LLs always remains rebust. For ac Hall conductivity the step-like structure is less robust against increase in $\eta$. For $\eta \simeq\omega$ the plateaus are nearly washed out for large $|n|$, while the $n=0$ is again robust. For the case of NI phase a similar behavior is observed ; see Fig. (14). In Fig. (15) and Fig. (16) we show real part of $\sigma_{xy}(\omega)$ as a function of $\omega,$ we find that real part of $\sigma_{xy}(\omega)$ exhibits sharp cyclotron resonance peaks at transition energies for transition involving $n=0$ LL. These resonance peaks change sign near each allowed transition frequency. As for experimental realization, since the Faraday rotation angle is directly proportional to optical Hall conductivity, it is possible to observe the steps in optical Hall conductivity that are predicted here by performing Faraday rotation measurements \cite{2d,2e,2f}. \section{Conclusions} To conclude, we have determined the LL spectrum, density of states and the magneto-optical conductivity tensor with in linear response regime for thin film topological insulators with finite hybridization between the surface states. We find that breaking of time reversal and inversion symmetry can have profound effects on the optical response. We have shown that inversion symmetry breaking, in addition to time reversal symmetry breaking and hybridization, significantly affects the spectrum, transition channels and magneto-optical response of the system. In the inversion symmetry broken TI thin films, we have found the following: The system can exist in Normal Insulating (NI) and Topological Insulating (TI) phases. The phase transition between these phases can be controlled by the degree of hybridization as well as by breaking symmetries: time reversal/inversion symmetry or both. The LL spectrum exhibits level crossings which were not present in the inversion symmetric system. New optical transition channels have been found which were previously forbidden. We show that there are observable signatures of the phase transition from NI to TI phase in both the longitudinal and optical Hall conductivity. \section{Acknowledgement} Kashif Sabeeh would like to acknowledge the support of the Higher Education Commission (HEC) of Pakistan through project No. 20-1484/R\&D/09 and the Abdus Salam International Center for Theoretical Physics (ICTP) for support through the Associateship Scheme.
\section{Introduction} \label{intro} In the last few decades, neutrino oscillation experiments have conclusively shown that neutrinos are massive~\cite{nu}. The minimal version of the Standard Model is thus to be extended to accommodate the neutrino masses. In many possible extension of the Standard Model, neutral heavy leptons are often predicted. In the seesaw mechanism~\cite{Seesaw} for example, the right-handed neutrinos are introduced and they weakly mix with the ordinary neutrinos after the electroweak symmetry breaking. For the masses of the heavy neutrinos, a wide range of possibilities has been discussed in literature. In the canonical picture of the seesaw mechanism, heavy neutrino masses are supposed to be around the grand unification scale. These super-heavy neutrinos can account for the baryon asymmetry of the universe by the leptogenesis~\cite{leptogenesis}. Another possibility to account for the baryon asymmetry has been suggested in~\cite{BAU,Asaka:2005pn} and further studied in~\cite{BAU2}. In this scenario, two quasi-degenerate heavy neutrinos of $\mathcal{O}(100)\,{\rm MeV} - \mathcal{O}(10)\,{\rm GeV}$ play a crucial role in the early universe. Heavy neutrinos in the mass range $\sim 0.2\,{\rm GeV}$ could enhance the energy transport from the core to the stalled shock and favor the supernova explosion~\cite{Fuller:2009zz}. Heavy neutrinos with a few keV mass have also attracted much interests as a viable dark matter candidate~\cite{DM} and an agent of the pulsar velocities~\cite{palsar}. Remarkably, the dark matter and the baryon asymmetry due to keV and GeV heavy neutrinos can originate in a simple framework so called $\nu$MSM~\cite{Asaka:2005an,Asaka:2005pn}, which is an extension of the Standard Model with just three generations of the right-handed neutrinos. Besides the super-heavy range much larger than TeV, such heavy neutrinos can be tested in existing and forthcoming experiments due to lower threshold energies of production (for example, see Ref.~\cite{Gorbunov:2007ak,Atre:2009rg} and references therein). In this paper, we focus on the heavy neutrinos produced by kaon decays. The previous neutrino experiments, including peak searches in the meson decays~\cite{pi, pi2,pi3,K,K2} and the decay searches with accelerators~\cite{acc,PS191}, have placed stringent bounds on the mixing parameters in this mass range. In particular, PS191~\cite{PS191} has placed the strongest bounds for the mixing parameters in the mass range of $140\,{\rm MeV} \lesssim M_N \lesssim 500\,{\rm MeV}$. Since the PS191 experiment in 1984, however, no further experiments of this type of decay search have performed and the bounds have not been updated for about 30 years. On the other hand, great progress has been made in neutrino oscillation experiments over the same period. It is interesting to note that typical long or short baseline experiments are equipped with the (near) detectors placed at $\mathcal{O}(100)$ meters away from the beam targets, which detectors are capable of measuring the charged-particle tracks produced by the heavy neutrino decays. A natural question is then whether the existing accelerator-based neutrino experiments are capable of discovering the heavy neutrinos and how sensitive such experiments are\footnote{Far detectors with large volume are also useful to detect the heavy neutrinos produced in the atmosphere~\cite{Kusenko:2004qc, ATM}.}. We believe that this is a timely question to ask. In fact, the exposure of PS191 is about $200\,{\rm m^3} \times 10^{20}\,{\rm POT}$ while the recent accelerator-based neutrino experiments are expected to achieve $10^{21}\,{\rm POT}$ with the near detectors which are typically no smaller than $200\,{\rm m^3}$ by factor of $10$. Table~\ref{t1} shows a comparison between PS191 and several examples of recent neutrino experiments. Among several options of accelerator experiments, in this paper we focus on the T2K experiment as a typical example. \begin{table} \begin{center} \begin{tabular}{cccccc}\hline & PS191~\cite{PS191} & T2K~\cite{Abe:2011ks} & MINOS~\cite{MINOS} & MiniBooNE~\cite{Mini} & SciBooNE~\cite{Sci}\\\hline POT & $0.86 \times 10^{19}$ & $10^{21}$ & $10^{21}$ & $10^{21}$ & $10^{21}$\\ $({\rm Distance})^{-2}$ & $(128\,{\rm m})^{-2}$ & $(280\,{\rm m})^{-2}$ & $(1\,{\rm km})^{-2}$ & $(541\,{\rm m})^{-2}$ & $(100\,{\rm m})^{-2}$ \\ Volume & $216\,{\rm m}^3$ & $88\,{\rm m}^3$ &$303\,{\rm m}^3$ &$524\,{\rm m}^3$ &$15.3\,{\rm m}^3$ \\ Events & $1$ & $9.9$ & $2.7$ & $15.8$ & $13.5$ \\\hline \end{tabular} \caption{A comparison between PS191 and recent accelerator experiments. The item ``Distance'' means the distance between the beam target and the detector for each experiment. The item ``Events'' shows POT$\times$(Distance)${}^{-2}$$\times$Volume in units of PS191. The POTs for the oscillation experiments are assumed to achieve $10^{21}$.} \label{t1} \end{center} \end{table} The study follows two main steps;~the flux estimation and the event number calculation for various signal decays. First, in the fulx estimation, we use a semi-analytical method with a help of the active neutrino flux $\phi_\nu$ simulated by the T2K collaboration~\cite{flux,D1, Abe:2012av}, making a reasonable simplification for the geometry of the decay tunnel and the detector. More precise fluxes of the heavy neutrino might be obtained by Monte Carlo methods. In a Monte Carlo calculation it is possible to take into account the details of the geometry and the spectrum of the parent mesons. The analytical technique is nevertheless useful, since it allows one to understand the essential physics which determines the behavior of the heavy neutrino spectrum, in particular its mass dependence. We emphasize that the phase-space effect in kaon decay is important and the heavy neutrino flux $\phi_N$ can be significantly deviated from the naive expectation $\phi_N \simeq |\Theta |^2 \phi_\nu$, where $\Theta$ is the active-heavy mixing parameter of interest. As the heavy neutrino mass $M_N$ approaches to the production threshold, the heavy neutrinos tend to be distributed to lower energies with a narrow spread, so that the proportionality $\phi_N \simeq |\Theta |^2 \phi_\nu$ is broken. This phase-space effect leads to larger event numbers than the naive expectation and has a significant impact for the estimation of the sensitivity. Second, in the event number calculation, we take into account various decay modes of the heavy neutrino $N$. The two-body modes $N \to e^\mp \pi^\pm$ and $N \to \mu^\mp \pi^\pm$ are the most promising channels due to their large branching ratios. For these modes, the invariant mass distribution for the lepton and the pion momenta has a peak at the heavy neutrino mass. The three-body modes $N \to e^- e^+ \nu$, $N \to \mu^\mp e^\pm \nu$ and $N \to \mu^- \mu^+ \nu$ are also interesting. While the first mode may suffer higher background by $\pi^0$ decay, the latter two modes have smaller backgrounds and also serve as promising channels for discovering the heavy neutrino. By assuming non-observation of these exotic events in the T2K near detector at $10^{21}\,{\rm POT}$, one finds the upper-bound for the mixing parameter of the electron type better than that of PS191. This means that T2K may have a good chance to discover the heavy neutrino in near future. The layout of this paper is as follows. In Section 2, we introduce the heavy neutrino and briefly review its essential properties. In Section 3, the flux of the heavy neutrino at the near detector is discussed. In Section 4, the decays of the heavy neutrino at the detector and their event numbers are studied. Section 5 is devoted to conclusions. \section{Properties of the heavy neutrino} \label{decay} We consider a heavy (sterile) neutrino $N$ in the mass range $1\,{\rm MeV} \lesssim M_N \lesssim 500\,{\rm MeV}$. The flavor eigenstates of the left-handed neutrinos $\nu_\alpha$ ($\alpha = e,\mu,\tau$) are given by the linear combination of the mass eigenstates as \begin{eqnarray} \nu_\alpha \,=\, U_{\alpha i} \,\nu_i + \Theta_{\alpha}N, \end{eqnarray} where $U_{\alpha i}$ is the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, $\nu_i$ ($i = 1,2,3$) are the mass eigenstates of the active neutrinos. The parameter $\Theta_{\alpha}$ is the mixing between (light) active and sterile neutrinos, through which $N$ interacts with the weak gauge bosons. The extension to the multi-generation case is trivially done by replacing $\Theta_{\alpha}N$ with $\sum_I \Theta_{\alpha I}N_I$. In this paper, we assume $N$ is Dirac particle unless otherwise stated. This is simply because we would like to make a comparison to PS191 in which the same assumption is made. If $N$ is Majorana particle, the decay width is doubled since $N$ decays to charge-conjugated states also. Let us overview the way to find the heavy neutrino $N$ in accelerator experiments, especially in the T2K experiment. In the mass range of $140\,{\rm MeV} \lesssim M_N \lesssim 500\,{\rm MeV}$, the heavy neutrino $N$ is produced by $K$ decay. The main production modes are \begin{eqnarray} K^+ \to \mu^+ N,\quad\quad K^+ \to e^+ N \label{KtoN} \end{eqnarray} for $M_N < 388\,{\rm MeV}$ and $M_N < 493\,{\rm MeV}$, respectively. The decay width $K^+ \to \mu^+ N$ ($K^+ \to e^+ N$) is proportional to $|\Theta_\mu |^2$ ($|\Theta_e |^2$), and $\Theta_\tau$ is irrelevant for the production\footnote{The heavier mesons such as $D$ can decay producing tau so that $\Theta_\tau$ is involved in the production. In this work we neglect the contribution from such heavier mesons.}. We describe further details of the decay modes~(\ref{KtoN}) in Section 3 and Appendix A. Since the magnetic horn focuses positively-charged mesons, the contributions from $K^-$ are small~\cite{D1} and in what follows we neglect the $K^-$ contributions. The heavy neutrinos $N$ produced by the meson decays escape from the decay volume and some of them are injected into the near detector ND280 in the T2K experiment and decay to leave signals. Depending on the mass, $N$ decays to lighter particles in various decay modes; \begin{gather} N \to \gamma \nu,\quad N \to 3\nu,\quad N \to e^- e^+ \nu, \quad N \to \mu^\mp e^\pm \nu,\quad\nonumber\\ N \to \nu \pi^0,\quad\ N \to e^- \pi^+,\quad N \to \mu^- \mu^+ \nu,\quad N \to \mu^- \pi^+. \nonumber \end{gather} With the ND280 detector capable of identifying $e^\pm, \mu^\pm$ and $\pi^\pm$, some of the above channels can be detected as signal events. Due to large branching ratios, the two-body decay modes $N \to \nu \pi^0$, $N \to e^- \pi^+$ and $N \to \mu^- \pi^+$ would be the most frequent events. However, $N \to \nu \pi^0$ does not seems to be as promising as the other two modes since $\pi^0$ are copiously produced by the ordinary neutrino interactions, which lead to large backgrounds. On the other hand, $N \to e^- \pi^+$ and $N \to \mu^- \pi^+$ leave two charged-particle tracks with monochromatic energies in the rest frame of $N$. The invariant mass distribution of the two charged-particle momenta sharply peaks at the heavy neutrino mass. Such a peak signal is definitely better than a slight excess of $\pi^0$ events over huge background. We thus proceed without any further analysis of $N \to \nu \pi^0$, but study $N \to e^- \pi^+$ and $N \to \mu^- \pi^+$ in more detail in Section 4. The decay rates of $N \to e^- \pi^+$ and $N \to \mu^- \pi^+$ are proportional to $|\Theta_e|^2$ and $|\Theta_\mu|^2$, respectively. The radiative decay $N \to \gamma \nu$ is induced at one loop and negligible in this work. In spite of small branching ratios, the three-body decay modes $N \to e^- e^+ \nu$, $N \to \mu^\mp e^\pm \nu$ and $N \to \mu^- \mu^+ \nu$ are also interesting to study. In particular, $N \to \mu^\mp e^\pm \nu$ and $N \to \mu^- \mu^+ \nu$ have smaller backgrounds than $N \to e^- e^+ \nu$ and a few these events may lead to the discovery of the heavy neutrino. Notice that $N \to e^- e^+ \nu$ and $N \to \mu^- \mu^+ \nu$ are conducted not only by the charged-current (CC) interaction but also by the neutral current (NC) interaction, so that these decay rates depend on all types of the mixing parameters $\Theta_{e,\mu,\tau}$. On the other hand, $N \to \mu^\mp e^\pm \nu$ are mediated only by the CC interactions and the decay rate depends on $\Theta_{e,\mu}$. Fig.~\ref{PandD} summarizes the production and decay processes of $N$ to be studied in this work. We do not present the formulas for the decay rates here. A complete list of the decay modes and the decay rates is found in Ref.~\cite{Gorbunov:2007ak}. \begin{figure}[t] \begin{center} \scalebox{0.42}{\includegraphics{table.eps}} \end{center} \caption{Summary of the production and the detection processes of the heavy neutrino $N$. The decay mode $N \to \nu \pi^0$ is abbreviated here because it is not as promising detection channel as the other ones.} \label{PandD} \end{figure} \section{Heavy neutrino flux at the near detector} \label{flux} In the T2K experiment, pions and kaons are produced by the interaction of $31\,{\rm GeV}$ protons with the graphite target. The produced mesons are focused by the magnetic horns and enter the decay volume of $96\,{\rm m}$ long filled with helium gas. The parent mesons decay in flight inside the decay volume. The off-axis detector ND280 is located $280\,{\rm m}$ from the target station. The off-axis angle to ND280 from the target position is $2.04^\circ$. The layout of the secondary beam line and the near detector is sketched in Fig.~\ref{layout}. Details of the experiment setup are found in Ref.~\cite{Abe:2012av}. \begin{figure}[t] \begin{center} \scalebox{0.4}{\includegraphics{schematic.eps}} \end{center} \caption{Schematic of the secondary beam line and the near detector ND280.} \label{layout} \end{figure} Calculation of the neutrino flux at the near detector is a complicated task. The T2K collaboration has simulated the fluxes of the active neutrinos by the Monte-Carlo method. In this work, we do not follow their approach, but estimate the heavy neutrino flux by a semi-analytical method similar to Ref.~\cite{Lipari}. We try to reconstruct a reasonable flux of parent particles from the active neutrino flux of Ref.~\cite{D1}, and then evaluate the heavy neutrino flux from the reconstructed parent flux. In this paper, we focus on $K^+$ meson as the parent of the heavy neutrinos since $K^+$ decay covers a wider range of the heavy neutrino mass than $\pi^+$ decay. The following discussion is, however, easily extended to the $\pi^+$ case. \subsection{Modeling the parent flux} \label{recon} For ND280, the neutrino source is a line-like object rather than a point-like one. Let $\phi(p_K,l)$ denotes the $K^+$ spectrum along the decay volume, where $p_K$ is the magnitude of the $K^+$ momentum and $l$ is the flight length of $K^+$. We set $l=0$ to be the upstream end of the decay volume. In the decay volume filled with helium gas, the decay length of $K^+$ is much shorter than the interaction length. One thus finds \begin{eqnarray} \phi_K(p_K,l) \,=\, \phi_K(p_K)e^{-\frac{l}{\Lambda_K}}, \end{eqnarray} where $\phi_K(p_K)$ is the spectrum at $l=0$ and $\Lambda_K$ is the total decay length $\Lambda_K = 3.7 (p_K/m_K)\,{\rm m}$ with the kaon mass $m_K = 493\,{\rm MeV}$. If the parent spectrum $\phi_K(p_K)$ is known, one can calculate the daughter $\nu_\mu$ flux from $K^+ \to \mu^+ \nu_\mu$ decay. The source term of $\nu_\mu$ is given by \begin{eqnarray} S_\nu(E_\nu, \theta, \phi, l) &=& \int_0^\infty \!\!dp_K \,\frac{\phi_K(p_K,l)}{\beta (1/\Gamma) \gamma} \,\, \frac{1}{\Gamma}\frac{d^3 \Gamma}{dE_\nu d\cos\theta d\phi}\nonumber\\ &=& \int_0^\infty \!\!dp_K \,\phi_K(p_K,l)\left( \frac{m_K}{p_K} \right) \frac{d^3 \Gamma}{dE_\nu d\cos\theta d\phi}, \label{Snu} \end{eqnarray} where $E_\nu$ is $\nu_\mu$ energy, $\theta$ and $\phi$ are the polar and the azimuth angles of the emitted $\nu_\mu$ relative to the $K^+$ momentum directions, and $\Gamma$ is the $K^+ \to \mu^+ \nu_\mu$ decay width. Giving the source term, the $\nu_\mu$ flux $\phi_{\nu_\mu}(E_\nu)$ at ND280 is obtained by integrating the source term along $l$ and the angles $\theta$ and $\phi$ covered by ND280. It reads \begin{eqnarray} \phi_{\nu_\mu}(E_\nu) \,=\, \int_0^{l_f}\!\!\! dl \int_{-1}^{1} \!\!d\!\cos\theta \int_{0}^{2\pi} \!\!\!d\phi \, \frac{1}{A}\,S_\nu(E_\nu, \theta, \phi, l)\,P(\theta, \phi), \label{phi1} \end{eqnarray} where $l_f = 96\,{\rm m}$, $A$ is the effective area of ND280, $P(\theta,\phi)$ is the ``projection'' function which turns unity if $\nu_\mu$ enters ND280 and otherwise zero. The parent kaons carry two more degrees of freedom which are not explicitly mentioned above; the polar angle relative to the beam axis (the central axis of the decay volume) and the radial coordinate from the beam axis, both of them are defined at $l=0$. The location of ND280 relative to each $K^+$ momentum depends on these degrees of freedom and the function $P(\theta,\phi)$ is in fact highly complicated. However, the situation is greatly simplified if kaon momenta are assumed to be parallel to the beam axis. Information about the polar angle and the radial coordinate is then to be represented by an effective off-axis angle $\theta_0$, which is an ``virtual'' off-axis angle to ND280 from the upstream end of the decay volume. This angle $\theta_0$ represents the average off-axis angle to ND280 from each $K^+$ momentum, which angle varies kaon by kaon carrying deferent polar angles and radial coordinates. Furthermore, we neglect $\theta$ dependence of the effective area $A$. Adopting these simple modeling of geometry of the decay volume and the detector, we use \begin{eqnarray} \phi_{\nu_\mu}(E_\nu) \,=\, \frac{\Delta \phi}{A} \int_0^{l_f}\!\!\! dl \int_{-1}^{1} \!\!d\!\cos\theta \,S_\nu(E_\nu, \theta, l)\,P'(\theta,\theta_0), \label{phi2} \end{eqnarray} as a formula to relate the daughter $\nu_\mu$ flux with the parent $K^+$ spectrum. Here $\Delta \phi$ is $\phi$ interval defined by the detector width, $P'(\theta,\theta_0)$ denotes the projection function determined by the detector hight and $K^+$ decay point $l$. Since the explicite expression of Eq.~(\ref{phi2}) is rather lengthy, we put details on Eq.~(\ref{phi2}) in Appendix~\ref{fluxculc} . \begin{figure}[t] \begin{center} \scalebox{0.6}{\includegraphics{phiNuND280.eps}} \end{center} \caption{A comparison between Eq.~(\ref{phi2}) and the $\nu_\mu$ flux simulated in Ref.~\cite{D1}. Dots shows the result of the simulation in Ref.~\cite{D1} and solid curve is Eq.~(\ref{phi2}) with the parameters $\theta_0 = 1.48^\circ$, $a_0 = 4.8\times 10^{19}\,{\rm mb^{-1}}$ and $p_0 = 2.1\,{\rm GeV}$.} \label{fND280} \end{figure} Having Eq.~(\ref{phi2}), our strategy is fitting $\phi_{\nu_\mu}(E_\nu)$ calculated in Ref.~\cite{D1} by adjusting the parameters included in the right-handed side of Eq.~(\ref{phi2}), with a kaon spectrum which is physically well-motivated. The $K^+$ spectrum $d\sigma/dp_K$ in the proton collision with a graphite target is measured by NA61/SHINE Collaboration~\cite{Abgrall:2011ae}, which is customized to improve the flux calculation in T2K. It seems reasonable to expect that the shape of the kaon spectrum is not far from this measured spectrum. However, the effects such as the secondary protons~\cite{D1} and the magnetic horns must deform $d\sigma/dp_K$ to some extent. To take into account this deformation, we allow a shift of the peak of $d\sigma/dp_K$. In summary, in order to model the kaon spectrum $\phi_K(p_K)$, we introduce two free parameter $a_0$ and $p_0$, the overall scale factor and the shift of the peak, respectively. See Appendix~\ref{fluxculc} for more details. With the above ansatz for the kaon spectrum, we have three parameters in the right-hand side of Eq.~(\ref{phi2}); the effective off-axis angle $\theta_0$, the scale factor $a_0$ and the shift of the peak $p_0$. Fig.~\ref{fND280} shows the fit by Eq.~(\ref{phi2}). The dots show the result of the simulation in Ref.~\cite{D1}, while the solid curve shows Eq.~(\ref{phi2}) with the parameter set of $\theta_0 = 1.48^\circ$, $a_0 = 4.8\times 10^{19}\,{\rm mb^{-1}}$ and $p_0 = 2.1\,{\rm GeV}$. It is seen that Eq.~(\ref{phi2}) is well tracing the global behavior of the simulated result of Ref.~\cite{D1}. Notice that the effective off-axis angle is taken as $\theta_0 = 1.48^\circ$, which is smaller than the actual one $2.04^\circ$. If this angle is taken as $\theta_0 = 2.04^\circ$, the flux starts to fall off at $E_\nu \sim 6\,{\rm GeV}$ and does not fit the data points above $6\,{\rm GeV}$. This is simply due to the two-body kinematics. In $K^+ \to \mu^+ \nu_\mu$ decay, the maximum energy of $\nu_\mu$ is \begin{eqnarray} E^{\rm max}_\nu = \frac{m_K^2 - m_\mu^2}{2m_K \sin\theta} \end{eqnarray} for $\theta < 90^\circ$. The larger the off-axis, the smaller the maximum neutrino energy. The abundance of $\nu_\mu$ above $6\,{\rm GeV}$ means that certain fraction of kaons must have momentum directing to the near detector so that $\nu_\mu$ with smaller $\theta$ can contribute to the flux. This is of course expected since the magnetic horns do not make the kaon momenta perfectly parallel to the beam axis. The smaller angle $\theta_0 = 1.48^\circ$ effectively takes this effect into account. \subsection{The heavy neutrino flux} \label{Hflux} \begin{figure}[t] \begin{center} \scalebox{0.6}{\includegraphics{fmu.eps}} \end{center} \caption{Fluxes of the heavy neutrino $\phi_N(E_N)$ from the $K^+ \to \mu^+ N$ mode for several sample values of $M_N$. Black dotted marks show $|\Theta_\mu |^2 \phi_{\nu_\mu}$. The masses are taken as $M_N = 350\,{\rm MeV}$ (red, dotted), $300\,{\rm MeV}$ (orange, dashed), $200\,{\rm MeV}$ (magenda, long-dashed), $100\,{\rm MeV}$ (blue, dashed-dot), $50\,{\rm MeV}$ (green, solid).} \label{Nflux1} \end{figure} With the kaon spectrum $\phi_K(p_K)$ discussed above, let us estimate the heavy neutrino flux $\phi_N(E_N)$. The calculation goes as before. The source term is given by \begin{eqnarray} S_N(E_N, \theta, \phi, l) \,=\, \int_0^\infty \!\!dp_K \,\phi_K(p_K,l)\left( \frac{m_K}{p_K} \right) \sum_{i=1}^2 \frac{d^3 \Gamma_i}{dE_N d\!\cos\theta d\phi}, \label{SN} \end{eqnarray} where $\Gamma_1$ and $\Gamma_2$ are the decay width for $K^+ \to \mu^+ N$ and $K^+ \to e^+ N$, respectively. Provided that the decay length of the heavy neutrino is much larger than the distance between the $K^+$ decay point and the detector, the heavy neutrino flux $\phi_N(E_N)$ is given by just replacing $S_\nu$ with $S_N$ in Eq.~(\ref{phi2}). We will see in Section~\ref{Eventrate} that this is the case for the energy and the parameters of the current interests. Details on the differential decay rates $d\Gamma_i$ are shown in Appendix~\ref{fluxculc}. \begin{figure}[t] \begin{center} \scalebox{0.6}{\includegraphics{fe.eps}} \end{center} \caption{Same as Fig.~\ref{Nflux1} but for the $K^+ \to e^+ N$ mode.} \label{Nflux2} \end{figure} Fig.~\ref{Nflux1} and~\ref{Nflux2} present the heavy neutrino fluxes $\phi_N(E_N)$ for several fixed values of $M_N$. Fig.~\ref{Nflux1} and~\ref{Nflux2} show the cases where $(|\Theta_e| ,|\Theta_\mu|) = (0,10^{-1}) $ and $(10^{-1},0)$, respectively. In the both cases of $K^+ \to \mu^+ N$ and $K^+ \to e^+ N$, the spectral shapes are similar to $\phi_{\nu_\mu}$ for $M_N \lesssim 100\,{\rm MeV}$, whereas they are significantly deviated from $\phi_{\nu_\mu}$ for $M_N \gtrsim 100\,{\rm MeV}$. The most remarkable feature is the enhancement of the flux at lower energies for larger $M_N$. In Fig.~\ref{Nflux1}, for example, the heavy neutrinos of $M_N = 350\,{\rm MeV}$ gather around $2-3\,{\rm GeV}$ and the peak intensity reaches $10^{11}\,{\rm cm^{-2} GeV^{-1}}$. This is nearly two orders of magnitude larger than the naive expectation $|\Theta_\mu|^2 \phi_{\nu_\mu}$. There are two reasons for the enhancement. One is the spin conservation in the $K^+$ rest frame. The matrix elements of the decay processes $K^+ \to \mu^+ N$ and $K^+ \to e^+ N$ scale with $M_N^2$ when $M_N$ is much larger than the mass of the charged lepton in each final state. This accounts for the larger fluxes than $|\Theta_{e,\mu}|^2 \phi_{\nu_\mu}$ for $M_N > m_\mu$. This also accounts for the smaller flux than $|\Theta_e|^2 \phi_{\nu_\mu}$ for $M_N = 50\,{\rm MeV}$ in Fig.~\ref{Nflux2}. The another reason is slower motions of the heavy neutrinos at the rest frame of the parent particle. The smaller the daughter velocities, the easier to boost them into the forward directions. In the current setup, the detector may well be regarded as the object placed in the forward direction of the $K^+$ momenta. In the case of $K^+ \to \mu^+ \nu_\mu$, neutrinos can be emitted not only to the forward directions but also to the backward directions since the neutrino masses are very small and the neutrino velocities at the rest frame are larger than the parent typical velocity at the laboratory frame. On the other hand, in the decay $K^+ \to \mu^+ N$, the heavy neutrinos tend to be emitted to the forward directions. In the rest frame of $K^+$, the gamma factor of $N$ at the $K^+$ rest frame is given by \begin{eqnarray} \gamma_N = \frac{m_K^2 - m_\mu^2 + M_N^2}{2m_K M_N}. \label{eN} \end{eqnarray} This goes to unity as $M_N$ approaches to the threshold value $M_N = m_K - m_\mu = 388\,{\rm MeV}$. On the other hand, most of the kaons carry the momentum around $1-4\,{\rm GeV}$, so that the gamma factor of $K^+$ is typically given by $\gamma_K = 2-8$. Then Eq.~(\ref{eN}) tells us that $\gamma_K > \gamma_N$ for $M_N \gtrsim 120\,{\rm MeV}$. Hence for $M_N \gtrsim 120\,{\rm MeV}$, kaon's velocities overcome $N$'s velocities and $N$ are focused into the forward directions. This agrees with the flux behavior seen in Fig.~\ref{Nflux1} and~\ref{Nflux2}. \section{Event rates and expected sensitivity} \label{Eventrate} As we have seen in Section~\ref{decay}, a fraction of the heavy neutrinos passing through the detector decay inside the detector volume and leave signals via various decay modes. In this section, we calculate the number of signal events in ND280 and estimate the potential sensitivity of ND280. We argue that the sensitivity of ND280 is comparable to that of the PS191 experiment. \subsection{Number of the signal events} The total number of events is given by the difference between the number of the heavy neutrinos at the up and down stream end of the detector. For a particular decay channel, the number of events is given by \begin{eqnarray} {\rm Events} = A \int_{M_N}^\infty \!\! dE_N \,\,\frac{1}{\lambda}\int_{x_0}^{x_1}\!\!\! dx \,\, \phi_N(E_N,x), \label{Eventmaster} \end{eqnarray} where $\lambda$ is the (partial) decay length for the signal decay mode of interest, $A$ is the cross-sectional area of the detector, $x$ is the flight distance of the heavy neutrinos and $(x_0, x_1)$ means the detector segment. The number of the heavy neutrinos decreases by the decays. With the total decay length $\Lambda_N$, the $x$ dependence of $\phi_N(E_N, x)$ is determined as \begin{eqnarray} \phi_N(E_N, x) \,=\, \phi_N(E_N)e^{-\frac{x}{\Lambda_N}}, \label{phitot} \end{eqnarray} where $\phi_N(E_N)$ is the heavy neutrino flux discussed in Section~\ref{Hflux}. Eq.~(\ref{Eventmaster}) is further simplified if the total decay length is much larger than the flight distance of the heavy neutrino. Provided that $\Lambda_N \gg x_0, x_1 - x_0$, the number of events reads \begin{eqnarray} {\rm Events} \,\simeq\, \int_{M_N}^\infty \!\! dE_N \,\, \phi_N(E_N)\,\frac{V}{\lambda}, \label{Eventmaster4} \end{eqnarray} where $V = A(x_1 - x_0)$ is the detector volume. In the T2K setup, $x_0 \approx 300\,{\rm m}$ and $x_1 - x_0 \approx 5\,{\rm m}$. Thus the condition $\Lambda_N \gg x_1 - x_0$ holds if $\Lambda_N \gg x_0 \approx 300\,{\rm m}$. It turns out that the condition $\Lambda_N \gg 300\,{\rm m}$ holds in the most parameter and energy region of interest. \begin{figure}[t] \centerline{ \includegraphics[scale=0.28]{TeTOT2.eps}% \includegraphics[scale=0.28]{TmuTOT2.eps}% }% \caption{Parameter region that satisfy $\Lambda_N < 300\,{\rm m}$ for $\gamma_N = \sqrt{2}$ (The filled region labeled by $\Lambda_N < 300\,{\rm m}$). The filled region labeled by $\tau > 0.1 (\rm s)$ shows the region where heavy neutrino decay may spoil BBN.} \label{tot} \end{figure} Fig.~\ref{tot} highlights a ``strong coupling'' regime where $\Lambda_N < 300\,{\rm m}$. Here the region is showing an example with $\gamma_N = \sqrt{2}$~$(p_N = M_N)$. For $\gamma_N = \sqrt{1.5}$~($p_N = M_N /2$), the boundary is pushed down by factor of two. For the energies $\gamma_N \gtrsim \sqrt{2}$, the total decay is effective only for strong coupling regime $|\Theta_{e,\mu} |^2 \gtrsim 10^{-4}$, which is already ruled out by many experiments. In Fig.~\ref{tot}, we put in passing the region where the lifetime of the heavy neutrino becomes long enough so that late time decay of the heavy neutrinos may spoil the success of Big Bang Nucleosynthesis (BBN). Ref.~\cite{BBN,Ruchayskiy:2012si} has studied such a bound for $10\,{\rm MeV} < M_N <140 \,{\rm MeV}$ in detail. For $M_N > 140\,{\rm MeV}$, however, there is no consensus about the constraint from BBN. Here we simply present the region for $\tau > 0.1\, {\rm s}$~\cite{Ruchayskiy:2012si} where the heavy neutrinos are not cleared away before the onset of BBN. \subsection{Comparison between T2K and PS191} For each mass eigenstate of the heavy neutrinos, four additional parameters are introduced into the Standard Model; $M_N$ and $\Theta_{e,\mu,\tau}$. Experiments impose some constrains on the four dimensional space $(M_N, |\Theta_e|, |\Theta_\mu|,|\Theta_\tau|)$. However, the analysis of experimental constraints on the full four-dimensional space is a complicated task. PS191 has made the following assumptions/simplifications in their analysis. \begin{itemize} \item Heavy neutrinos are Dirac particles. \item The NC contributions to the three-body decays of $N$ are neglected. \item Either $K^+ \to \mu^- N$ or $K^+ \to e^- N$ is dominant in the production. \end{itemize} In the following, we first make the same simplifications, just aiming for a comparison between T2K and PS191. In Section~\ref{tau}, we give a comment on the case where the second simplification is relaxed so that the three-body decay depends on $\Theta_\tau$. \subsubsection{Two body channels} \begin{figure}[t] \centerline{ \includegraphics[scale=0.22]{BoundTe2.eps}% \includegraphics[scale=0.22]{BoundTmue.eps}% \includegraphics[scale=0.22]{BoundTmu2.eps}% }% \caption{Expected sensitivity of T2K and upper limits by PS191. The blue-solid curves show the 90\% CL upper bounds by T2K at $10^{21}\,{\rm POT}$ with the full volume $61.25\,{\rm m^3}$, when no signals are observed. The blue-dashed curves show the same bounds but with the partial TPC volume $9.0\,{\rm m^3}$. The filled regions (with the red-boundary curves) are excluded by PS191 at 90\% CL~\cite{PS191}.} \label{bounds2} \end{figure} Let us first focus on the two-body decays. Fig.~\ref{bounds2} shows the expected sensitivity for the chains $K^+ \to e^+ N \to e^+ (e^- \pi^+)$ (left), $K^+ \to e^+ N \to e^+ (\mu^- \pi^+)$ (middle) and $K^+ \to \mu^+ N \to \mu^+ (\mu^- \pi^+)$ (right), respectively. In the figures, the red curves show 90\% CL upper bound by PS191~\cite{PS191}. The blue solid curves show the contour for $2.44$ events, which corresponds to 90\% CL limit when the measured signal and the expected background are null~\cite{Feldman:1997qc}. Here the fiducial volume of the detector is taken as $V =3.5 \times 3.5 \times 5.0 = 61.25\,{\rm m^3}$~\cite{ND280}. The interactions between the active neutrinos and the nuclei in the detector provide backgrounds for the decay signals $N \to \mu^- \pi^+$ and $N \to e^- \pi^+$. For instance, the reactions \begin{eqnarray} &&\nu_\mu + n \to \mu^- + \pi^+ + n \quad\quad\quad\quad ({\rm CC}- n \pi^+)\nonumber\\ &&\nu_\mu + {}^{16}{\rm O} \to \mu^- + \pi^+ + {}^{16}{\rm O}\quad\quad ({\rm CC- coherent \,\pi^+}) \nonumber \end{eqnarray} may become background for $N \to \mu^- \pi^+$. It is expected that these events account for $4$\% of the whole neutrino events in ND280, resulting $7300\,{\rm events/10^{21}POT/ton}$~\cite{ND280}. However the background can be reduced by taking the invariant mass of $\mu^-$ and $\pi^+$ momenta in the final state. For the heavy neutrino signal, the event distribution sharply peaks at the heavy neutrino mass while the $\nu_\mu$ events provide continuous background. A serious sensitivity should be estimated together with the invariant mass distribution for the $\nu_\mu$ reactions, the energy resolutions, all sort of uncertainties, etc. Such a thorough analysis is interesting but beyond the scope of this work. \begin{figure}[t] \centerline{ \includegraphics[scale=0.22]{BoundTe2three.eps}% \includegraphics[scale=0.22]{BoundTmuethree.eps}% \includegraphics[scale=0.22]{BoundTmu2three.eps}% }% \caption{Same as Fig.~\ref{bounds2} but for the three-body decay channels.} \label{bounds3} \end{figure} We can further reduce the background by selecting the events taking place in the TPC volume which is filled by argon gas. Due to low density of the gas region, the $\nu_\mu$ events are significantly reduced while keeping the signal rates unchanged. Out of the full volume of $61.25\,{\rm m^3}$, $9.0\,{\rm m^3}$ is filled with the argon gas~\cite{ND280} and available for this purpose. According to Ref.~\cite{Karlen}, the total neutrino events taking place in the argon gas are about $2000$ at $10^{21}\,{\rm POT}$. Since $4$\% of them become the background, the number of the background event is expected to be around $80$. These $80$ events will be further reduced in the bin around the heavy neutrino mass. Keeping this in mind, we plot the contour for $2.44$ events with $V = 9.0\,{\rm m^3}$ by the dashed curves. For $N \to e^- \pi^+$, the background processes are produced by the CC interactions of $\nu_e$. However, the $\nu_e$ flux is about two orders of magnitude smaller than that of $\nu_\mu$ around the peak energy $\sim 600\,{\rm MeV}$~\cite{flux,D1,Abe:2012av}. By selecting the events in the gas volume, the background rate is expected to be less than one for $10^{21}\,{\rm POT}$. The decay channel $N \to e^- \pi^+$ is more promising than $N \to \mu^- \pi^+$ in view of signal/background ratio if $|\Theta_e| \sim |\Theta_\mu|$. \subsubsection{Three body channels} Fig.~\ref{bounds3} presents the same plots as Fig.~\ref{bounds2} but for the three-body channels; $K^+ \to e^+ N \to e^+ (e^- e^+ \nu_e)$ (left), $K^+ \to \mu^+ N \to \mu^+ (e^- e^+ \nu_e)$ (middle), $K^+ \to \mu^+ N \to \mu^+ (\mu^- e^+ \nu_e)$ (right). A major obstacle to successful identification of $N \to e^- e^+ \nu$ would be $\pi^0$ that is copiously produced by neutrino interactions. The two photons from $\pi^0$ decay develop to electromagnetic cascades in the detector material and may mimic the signals. The subdominant decay mode $\pi^0 \to e^- e^+ \gamma\, (1.17\%)$ may also contribute to the background when one of the final-state particle is undetected. The invariant mass distribution of the electron pair is useful since it moderately peaks at one-half of the heavy neutrino mass~\cite{ATM}. The analysis needs anyway precise understanding of the background, and the detection via $N \to e^- e^+ \nu$ seems less promising than the two-body modes. As for $N \to \mu^- e^+ \nu$, the charmed-meson production by the neutrino CC interaction~\cite{charm} and successive semi-leptonic decay may become the background. According to Ref.~\cite{charm}, the cross section of the charm production is about $1\%$ ($4\%$) of the total CC cross section for $E_\nu = 5\,{\rm GeV}\,(15\,{\rm GeV})$. Due to the off-axis technic, however, the contributions from such high-energy neutrinos are suppressed in the T2K setup. By selecting the events in argon gas, the neutrino reduction rates can be further reduced, and $N \to \mu^- e^+ \nu$ may become more or less background free. Although PS191 has not studied the signal process $N \to \mu^- \mu^+ \nu$ open for $M_N > 211\,{\rm MeV}$, it should be emphasized that searching for this di-muon signal is also a promising method. The main background for $N \to \mu^- \mu^+ \nu$ may be the charmed-meson production by the neutrino CC interaction~\cite{charm} and successive semi-leptonic decay as in the case of $N \to \mu^- e^+ \nu$. This rate is, however, expected to be small for the neutrino energy in T2K. \subsection{Implications of $\Theta_\tau \neq 0$} \label{tau} \begin{figure}[t] \begin{center} \scalebox{0.3}{\includegraphics{TtauBBN.eps}} \end{center} \caption{Allowed region of $M_N$-$|\Theta_\tau|^2$ plane. The upper-filled regions are excluded by CHARM~\cite{Orloff:2002de} and DELPHI~\cite{Abreu:1996pa} at 90\% and 95\% CL, respectively. The lower-filled region is the regime where the lifetime of the heavy neutrino is longer than $0.1\,{\rm s}$.} \label{Taubbn} \end{figure} \begin{figure}[t] \begin{center} \scalebox{0.5}{\includegraphics{BoundTtau2three.eps}} \end{center} \caption{Expected sensitivity for $N \to \mu^- \mu^+ \nu$. $|\Theta_\mu | =0$ is assumed.} \label{BoundTtau} \end{figure} So far we have focused on the comparison between T2K and PS191 and $\Theta_\tau$ is accordingly neglected. In this subsection, we comment on several implications of the $\Theta_\tau \neq 0$ case. Since $\Theta_\tau$ is not involved in the main production processes such as pion and kaon decays, the experimental constraints of $|\Theta_\tau|^2$ are much wearker than that of $|\Theta_e|^2$ and $|\Theta_\mu|^2$. Fig.~\ref{Taubbn} shows the allowed region of $M_N$-$|\Theta_\tau|^2$ plane. The upper-filled regions are excluded by CHARM~\cite{Orloff:2002de} and DELPHI~\cite{Abreu:1996pa} at 90\% and 95\% CL, respectively. The lower-filled region is the regime where the lifetime of the heavy neutrino is longer than $0.1\,{\rm s}$. By comparing Fig.~\ref{Taubbn} and Fig.~\ref{tot}, it is seen that, unlike $|\Theta_{e,\mu}|^2$, $|\Theta_\tau|^2$ can be large enough so that the BBN constraint is avoided without contradicting with the upper bound from the direct experiments. While $\Theta_\tau$ is not involved in the production processes, the detection processes in general depend on $\Theta_\tau$ since $N \to e^- e^+ \nu$ and $N \to \mu^- \mu^+ \nu$ are induced not only by the CC but also by the NC interactions~\cite{Kusenko:2004qc}. This means the experimental setup discussed in this paper has sensitivities to certain combinations of $|\Theta_{e,\mu}|^2$ and $|\Theta_\tau|^2$. To demonstrate this, let us focus on a simple case where $|\Theta_e|^2, |\Theta_\tau|^2 \gg |\Theta_\mu|^2$ and $M_N > 211\,{\rm MeV}$. In this case, the heavy neutrinos are produced by $K^+ \to e^+ N$ and can be detected by $N \to \mu^- \mu^+ \nu$. Since $|\Theta_\mu|^2$ is small, the decay process $N \to \mu^- \mu^+ \nu$ is conducted only by the NC interactions and the decay width becomes proportional to $|\Theta_e |^2 + |\Theta_\tau |^2$. Therefore by analyzing the dimuon signals in the detector, one is able to constraint $|\Theta_e |\sqrt{|\Theta_e |^2 + |\Theta_\tau |^2}$ for $M_N > 211\,{\rm MeV}$ (or discover the heavy neutrino). Fig.~\ref{BoundTtau} shows the expected sensitivity for $N \to \mu^- \mu^+ \nu$. As in Fig.~\ref{bounds2} and Fig.~\ref{bounds3}, the solid (dashed) curve is the contour for $2.44$ events with $V =61.25\,\, (9.0)\,{\rm m^3}$. From Fig.~\ref{BoundTtau} and Fig.~\ref{Taubbn}, one can see that there exists the parameter regime where the dimuon signal is sizable while the success of BBN is unspoiled. For example, it is seen that the combination $|\Theta_e | = 10^{-4.5}$ and $|\Theta_\tau|^{-2.5}$ is BBN safe but $\mathcal{O}(10)-\mathcal{O}(10^2)$ events of $N \to \mu^- \mu^+ \nu$ are expected for $300\,{\rm MeV} \lesssim M_N \lesssim 400\,{\rm MeV}$. Interestingly, this case also predicts $\mathcal{O}(1)-\mathcal{O}(10)$ events of $N \to e^-\pi^+$ (see the left pannel of Fig.~\ref{bounds2}), so that the heavy neutrino model can be tested in multi-dimensional way. \section{Conclusions} \label{conclusion} In this paper, we have focused on the heavy (sterile) neutrinos produced by kaon decays and explored the feasibility of their detection at the existing facilities of the accelerator-based neutrino experiment. Taking the T2K experiment as a typical example, we have estimated the heavy neutrino fluxes produced in the beam line and calculated the event rates of their decay taking place inside the detector. Due to massive nature of the heavy neutrino, the spectrum of the heavy neutrino is significantly different from that of the ordinary neutrinos. The ordinary neutrinos are emitted to various directions in the laboratory frame due to their tiny masses. On the other hand the heavy neutrinos carrying a large mass tend to be emitted to the forward directions and frequently hit the detector. This is a unique advantage of the experiments in which the parent mesons decay in flight with sufficient gamma factors. Among various decay modes, $N \to e^- \pi^+$ open for $M_N > 140\,{\rm MeV}$ is one of the most promising channels for detection because of its larger rate and lower background. The backgrounds from the active neutrino reactions can be reduced by selecting the events occurring in the regions filled with no material. In the T2K near dector, the TPC volume $9\,{\rm m^3}$ filled with argon gas plays this role. The expected sensitivity for this mode is better than that of PS191, which has placed the most stringent bound on the heavy neutrino mixing. The three body modes $N \to e^- e^+ \nu$, $N \to \mu^- e^+ \nu$, and $N \to \mu^- \mu^+ \nu$ are also interesting signals to search for. In particular, $N \to e^- e^+ \nu$ and $N \to \mu^- \mu^+ \nu$ are conducted not only by the charged current but also the neutral current, so that the tau flavor mixing $\Theta_\tau$ is involved in the detection probabilities. Since $|\Theta_\tau|$ is less constrained than $|\Theta_e|$ and $|\Theta_\mu|$, the above two modes are not necessarily suppressed when $|\Theta_e|$ and $|\Theta_\mu|$ are small such that the two-body modes $N \to e^- \pi^+$ and $N \to \mu^- \pi^+$ are beyond the reach. Finally, we would like to emphasize that two quasi-degenerate heavy neutrinos of $\mathcal{O}(100)\,{\rm MeV} - \mathcal{O}(10)\,{\rm GeV}$ can account for not only the neutrino masses in oscillation experiments but also the baryon asymmetry of the universe~\cite{BAU,Asaka:2005pn,BAU2}. The heavy neutrinos studied in this work are thus quite interesting targets to search for. Furthermore, Ref.~\cite{Fuller:2009zz} reports that the sterile neutrinos with mass $\sim 200\,{\rm MeV}$ could facilitate the energy transport from the supernova core to the schock front, prompting a successful explosion. The needed mixing is either $|\Theta_\tau|^2 > 10^{-8}$ or $10^{-7}-10^{-8}$ for $|\Theta_\mu|^2$. Interestingly, T2K can probe latter case via the $N \to \mu^- e^+ \nu$ mode (see the right pannel of Fig.~\ref{bounds3}). In addition, heavy neutrinos with masses smaller than ${\cal O}(100)$ MeV may give a significant effect on the neutrinoless double beta decays~\cite{AEI}. According to a rough estimation in Table~\ref{t1}, MiniBooNE and SciBooNE have comparable abilities to T2K so that these experiments equally have the chance to probe these interesting possibilities. Serious analyses by these collaborations may lead to the discovery of the heavy neutrinos and revolutionize neutrino physics. \subsection*{Acknowledgments} This work is supported by the Young Researcher Overseas Visits Program for Vitalizing Brain Circulation Japanese in JSPS (No.~R2209). T.A. is supported by KAKENHI (No. 21540260) in JSPS. A.W. would like to thank E.~K.~Akhmedov and T.~Schwetz for useful discussions. We would like to thank Particle and Astroparticle Division of Max-Planck-Institut f\"ur Kernphysik at Heidelberg for hospitality. \bigskip
\section{Introduction} \noindent The study of the properties of hadronic matter under extreme conditions is perhaps the most challenging task in contemporary nuclear physics. Due to its non-perturbative behavior, quantum chromodynamics (QCD), the fundamental theory of strong interactions, cannot be solved using conventional field-theoretical methods, making any prediction of its properties an extremely challenging task. While \textit{ab-initio} lattice calculations and current heavy-ion experiments at the RHIC and LHC shed some light on the properties of strongly interacting matter at high temperatures, the opposite region of the QCD phase diagram, associated to high density and low temperature conditions, is still largely unknown. On the theoretical side, lattice calculations at nonzero chemical potentials are hindered by the sign problem and most of the current predictions rely on phenomenological models, while experimentally none of the current heavy-ion colliders can reach sufficient densities to probe this region. In spite of the current uncertainties, the phase structure of QCD at finite densities is nevertheless expected to be very rich (see e.g. \cite{Fukushima:2013rx}). In particular, a growing consensus has been recently building around the idea that crystalline phases might appear in the intermediate density region (up to a few times nuclear matter density), before the onset of color superconductivity. The formation of such phases could in principle delay the restoration of chiral symmetry, dramatically altering the properties of cold quark matter (for a recent review, see \cite{Buballa:2014tba}). In particular, it has been suggested that the presence of strong background magnetic fields, a natural element in astrophysical scenarios, might significantly enhance the window for inhomogeneous phases \cite{Klimenko2010,Tatsumi:2014}, possibly leading to significant effects in the equation of state (EoS) of dense quark matter, as we discuss below. While waiting for the next generation of heavy-ion colliders such as the FAIR experiment in Darmstadt and NICA in Dubna, which promise to access experimentally this window of the QCD phase diagram, the best laboratory for investigating properties of dense matter is given by compact stellar objects, which provide the only known realization of ultradense systems in nature. Of particular interest are measurements of masses and radii, which indirectly provide numerous hints on the possible EoS for QCD at high densities. In particular, the recent discoveries of stars with masses close to $2M_{\odot}$ ($M_{\odot}$ being the solar mass) \cite{Demorest,Antoniadis} impose rather strong limitations on the possible EoS. It is worth recalling that a lot of modeling of the microscopic physics inside these compact stellar objects is involved when making this kind of prediction, introducing a large number of uncertainties. Nevertheless, while the role of hyperons as a softening ingredient of the nuclear EoS is still under debate \cite{hyper1, *hyper2, *hyper3, *hyper4, *hyper5, *Bednarek, BlaschkeEVA, Baldo}, as well as the presence of quark matter \cite{QM1, *QM2, QM3}, numerous studies seem to indicate that most of the ordinary phases of (confined or deconfined) matter might not be able to support the large stellar masses observed. All these considerations suggest that some fundamental aspect in the physics of dense matter might still be missing from these calculations. Due to the high densities reached in the core of compact stellar objects, it might be reasonable to expect a transition to more exotic phases, whose EoS could be stiff enough to sustain these massive stars. Of course, the issue of the maximum mass is subject to other effects as well, such as high rotation rates (see e.g. \cite{Weber} and references therein) or the existence of strong magnetic fields \cite{mag1, *mag2, H_profile, mag3, Sotani} that affect the EoS and may allow those objects to support higher masses than a static, nonmagnetized star would. The main purpose of this work is to investigate the effects of the formation of inhomogeneous chiral-symmetry-breaking condensates in a magnetic field background on the EoS of cold and dense matter, and whether they can lead to predictions for compact stellar objects which are compatible with current experimental observations. In particular, we aim at building hybrid stars with a crystalline quark matter core, using the resulting EoS as input for the Tolman-Oppenheimer-Volkoff (TOV) equations. In order to build a realistic description of matter for astrophysical scenarios, the models under consideration will include the effects of strong magnetic fields, which are naturally expected to be present in the compact stellar medium. This work is structured as follows: in Secs. II-III, we introduce the phenomenological models employed to describe quark and nuclear matter. In Sec. IV, we consider a medium-dependent magnetic field and give its density profile for various parametrizations. In Sec. V, we build the EoS for a hybrid star with a crystalline quark matter core in the presence of such magnetic field, and obtain the corresponding mass-radius (M-R) plots. Finally in Sec. VI we summarize the results and give our concluding remarks. \section{Models of Neutral Magnetized Quark Matter with Inhomogeneous Condensates} \noindent In this section we introduce the models we are going to use to investigate quark matter with spatially inhomogeneous chiral condensates in a magnetic field background. Since our ultimate goal is to investigate the structure of compact stars, we shall impose the physical conditions of electrical neutrality and $\beta-$equilibrium in all our calculations. Vector interactions will also be included. For our calculations, we will consider both two- and three-flavor models, which will be described in the following. \subsection{Two-flavor model} \noindent To study two-flavor quark matter in a magnetic field background we consider the following Nambu--Jona-Lasinio (NJL)-type Lagrangian density \begin{equation} \label{L-2fl} \mathcal{L}^{(2f)} =\bar\psi \left(i \gamma^\mu D_\mu + \mu \gamma^{0} - m_q\right) \psi +\bar{\psi}_e \left(i \gamma^\mu D^{(e)}_\mu - m_e \right)\psi_e+\mathcal{L}_{int} \,, \end{equation} containing a doublet of quark fields $\psi^T= (\psi_u,\psi_d)$ in flavor space, with current mass matrix $m_q=\textrm{diag} (m_u,m_d)$ and an electron field $\psi_e$ of mass $m_e$. A nonzero baryon density has been introduced via the quark baryon chemical potential $\mu$. The covariant derivative describing the coupling of matter with a static external magnetic field along the $z$-direction is $D_\mu=\partial_\mu+iQA^{ext}_\mu$, with electric charge matrix in flavor space $Q=\textrm{diag}(e_u,e_d) = \textrm{diag}(\frac{2}{3}e,-\frac{1}{3}e)$, $e$ being the unit electric charge and $A^{ext}_\mu$ being the external electromagnetic four-potential taken in the Landau gauge $A^{ext}_\mu= (0,0,Hx,0)$. The quark interaction Lagrangian $\mathcal{L}_{int}$ is given by \begin{equation} \label{Lint} \mathcal{L}_{int} =\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{V} \,, \end{equation} with \begin{equation}\label{l-1} \mathcal{L}_1 =G_1 \left[ (\bar\psi\psi)^2 + (\bar\psi i\gamma^5\psi)^2 + (\bar\psi\tau^a\psi)^2 + (\bar\psi i\gamma^5 \tau^a \psi)^2 \right], \end{equation} \begin{equation}\label{L2} \mathcal{L}_2 =G_2 \left[ (\bar\psi\psi)^2 - (\bar\psi i\gamma^5\psi)^2 - (\bar\psi\tau^a\psi)^2 + (\bar\psi i\gamma^5 \tau^a \psi)^2 \right], \end{equation} and vector channel \begin{equation}\label{l-V} \mathcal{L}_V =-G_V \left[ (\bar\psi \gamma_\mu \psi)^2 + (\bar\psi \gamma^5\gamma_\mu\psi)^2 + (\bar\psi\gamma_\mu\tau^a\psi)^2 + (\bar\psi \gamma^5\gamma_\mu \tau^a \psi)^2 \right] \,. \end{equation} Here, the matrices $\tau_a$ are the generators of the SU(2) flavor group and a sum in the color index is assumed in all the quark terms. For applications to compact stellar objects, one needs to consider electrically neutral and $\beta$-equilibrated matter. To incorporate neutrality, we insert an electric charge term $-\mu_e \mathbb{Q}_e$ in (\ref{L-2fl}), with charge operator \begin{equation}\label{elctricQ2f} \mathbb{Q}_e =\frac{2}{3}\bar\psi_u\gamma_0 \psi_u - \frac{1}{3}\bar\psi_d\gamma_0 \psi_d-\bar\psi_e\gamma_0 \psi_e. \end{equation} The electric chemical potential $\mu_e$ is not an independent parameter; it has to be determined self-consistently from the condition of electrical neutrality. A nonzero $\mu_e$ gives rise to an isospin asymmetry between the quarks, which now have chemical potentials \beq \label{eq:splitmu} \mu_u = \mu - \frac{2}{3} \mu_e \,, \qquad \mu_d = \mu + \frac{1}{3} \mu_e \,. \end{equation} At this point we perform the standard mean-field approximation and introduce an inhomogeneous ansatz for the expectation values $\langle\bar\psi_f\psi_f\rangle$ and $\langle\bar\psi_f i\gamma_5\psi_f\rangle$, $f=u,d$ (in the following we neglect flavor off-diagonal mean fields corresponding to charged pion condensation). We note that the interaction term (\ref{L2}), which corresponds to the instanton contribution $G_2[\det\bar\psi (1+\gamma_5)\psi+\det\bar\psi (1 -\gamma_5)\psi]$ \cite{Asakawa,Klevansky} and is added to the Lagrangian to include the $U(1)_A$ anomaly of QCD, contains flavor mixing terms like $\langle\bar\psi_u\psi_u\bar\psi_d\psi_d\rangle$ that would significantly complicate our calculation when dealing with asymmetric inhomogeneous matter. As a first step, we then choose to avoid dealing with mixing terms by considering a simpler version of our model in which different quark flavors can be completely decoupled by neglecting the instanton term in the interaction Lagrangian (i.e. taking $G_2=0$). We expect the main features of the quark EoS to be qualitatively unaffected by this simplification, and note that at any rate the quark condensates will still influence each other through the neutrality condition. From now on, we will consider $G_1=G_S$ and work in the chiral limit $m_u = m_d = 0$. Let us consider the following plane-wave ansatz \begin{align} \label{condensates2f} -4G_S\langle\bar\psi_u\psi_u\rangle & =\Delta_u \cos (q_u z) \,, \quad -4G_S\langle\bar\psi_u i\gamma_5\psi_u\rangle = \Delta_u \sin (q_u z) \,, \nonumber \\ -4G_S\langle\bar\psi_d\psi_d\rangle & = \Delta_d \cos (q_d z) \,, \quad -4G_S\langle\bar\psi_d i\gamma_5\psi_d\rangle =\Delta_d \sin (q_d z) \,. \end{align} % In the isospin-symmetric case, this ansatz reduces to the so-called ``chiral density wave'' (CDW) \cite{Nakano:2005}, characterized by plane-wave condensates with magnitude $\Delta=\Delta_u=\Delta_d$ for each flavor and equal and opposite wave vectors, $q=q_u=-q_d$. However, in isospin-asymmetric matter, there is no reason why different flavors should have the same condensate, and one should allow for two separate amplitudes $\Delta_f$ and modulations $q_f, f=u,d$. \noindent In an external magnetic field, the only rotational symmetries that survive are the SO(2) group of rotations about the field direction, making that direction special. That is why we have chosen the modulation of the condensates along the magnetic field direction. We note that a modulation along the field direction is known to be energetically favored in the isospin-symmetric case \cite{Nakano:2005}. Since we are working at nonzero baryon density in a theory with vector interactions, we also introduce expectation values of the individual quark number densities for each flavor, \beq \label{quarkdensities} \langle\bar\psi_u\gamma_0\psi_u\rangle= \rho_u, \quad \langle\bar\psi_d\gamma_0\psi_d\rangle= \rho_d, \end{equation} connected to the baryon density through $3 \rho_B= (\rho_u+\rho_d)$. While for an arbitrary spatial dependence of the chiral condensate a proper self-consistent inclusion of the expectation values of the quark number densities could be challenging, as they could be themselves inhomogeneous, in the case of a CDW modulation the procedure is actually straightforward, thanks to the fact that, for this particular ansatz, the quark number density is spatially constant \cite{CNB:2010}. The net effect of including vector interactions then amounts, just like for homogeneous matter, to the introduction of a shifted chemical potential for each flavor, given by \cite{Zhang:2009,Abuki:2009} \beq \label{eq:splitmu+v} \widetilde{\mu}_u = \mu_u -4G_V \rho_u \,, \qquad \widetilde{\mu}_d = \mu_d-4G_V \rho_d \,. \end{equation} Expanding around the expectation values introduced in Eqs. (\ref{condensates2f}) and (\ref{eq:splitmu+v}), we obtain the mean-field Lagrangian \begin{eqnarray} \label{L-2flMF} \mathcal{L}^{(2f)}_{MF} &=&\bar\psi_u \left(i \gamma^\mu D^{(u)}_\mu + \widetilde{\mu}_u \gamma^{0} -\Delta_{u}e^{i\gamma_5 q_uz}\right) \psi_u + \bar\psi_d \left(i \gamma^\mu D^{(d)}_\mu + \widetilde{\mu}_d \gamma^{0} -\Delta_{d}e^{i\gamma_5 q_dz}\right) \psi_d \nonumber \\ &+&\bar{\psi}_e \left(i \gamma^\mu D^{(e)}_\mu+\mu_e\gamma^{0} - m_e \right)\psi_e -\frac{\Delta^2_u}{8G_S}-\frac{\Delta^2_d}{8G_S}+\frac{(\widetilde{\mu}_u-\mu_u)^2}{8G_V} +\frac{(\widetilde{\mu}_d-\mu_d)^2}{8G_V} \,. \end{eqnarray} \noindent Since this Lagrangian is now bilinear in the matter fields, the corresponding thermodynamic potential can be readily obtained. In the following, we neglect thermal effects and work at zero temperature, a reasonable approximation when describing cold and dense stellar matter. The zero-temperature, mean-field thermodynamic potential of the two-flavor theory (\ref{L-2flMF}) is thus given by \beq \label{eq:omega2f} \Omega^{(2f)}=\Omega_e + N_c \sum_{f=u,d} \Omega_f + \sum_{f=u,d}\left[\frac{\Delta^2_f}{8G_S}-\frac{(\widetilde{\mu}_f-\mu_f)^2}{8G_V}\right] \,, \end{equation} where $N_c=3$ is the number of colors and the quark contributions for each flavor are \begin{align} \Omega_f&= \Omega_f^{vac}+\Omega_f^{med} \,, \\ \Omega_f^{vac}&= \frac{1}{4\sqrt{\pi}}\frac{\vert e_f H\vert}{(2\pi)^2}\int^{\infty}_{-\infty} dp_3\int_{1/\Lambda^2}^{\infty}\frac{ds}{s^{3/2}} \left(\sum_{\epsilon}e^{-sE_{f,0}^2}+\sum_{n>0,\zeta,\epsilon}e^{-sE_{f,n}^2}\right) \,, \label{eq:omega2fvac} \\ \Omega_f^{med}&= -\frac{\vert e_f H\vert}{2\pi^2}\widetilde{\mu}_f b_f-\frac{\vert e_f H\vert}{8\pi^2}\int^{\infty}_{-\infty} dp_3\sum_{\epsilon}(|E_{f,0}-\widetilde{\mu}_f|-|E_{f,0}|)|_{reg} \nonumber \label{eq:omega2fmedium} \\ &-\frac{\vert e_f H \vert}{4\pi^2}\int^{\infty}_{-\infty} dp_3\sum_{n>0,\zeta}(\widetilde{\mu}_f-E_{f,n})\Theta( \widetilde{\mu}_f-E_{f,n})|_{\epsilon=1}, \end{align} with quark energies given by \begin{eqnarray} \label{eq:Equarks} E_{f,0} = \epsilon\sqrt{\Delta_f^2+p_3^2}+b_f,&\quad \epsilon=\pm\,,\, n=0 \,, \\ E_{f,n}= \epsilon\sqrt{\left(\zeta\sqrt{\Delta_f^2+p_3^2}+b_f\right)^2+2\vert e_f H\vert n}, & \quad \epsilon=\pm \,,\, \zeta=\pm \,,\, n>0 \,, \end{eqnarray} where $b_f=\frac{q_f}{2}$, and $n=0,1,2,..$ denotes the Landau levels. Notice that this spectrum exhibits a drastic distinction between the modes of the lowest Landau level (LLL), $n=0$, and the rest, $n>0$. The spectrum is asymmetric about zero for the LLL, while it is symmetric for any $n>0$. The index $\zeta$ is connected to the spin projection, while $\epsilon$ labels particle/antiparticle energies for all $n>0$. This last interpretation is not valid however for the LLL, due to the spectral asymmetry. Note that for $n>0$, the presence of the modulation, $b_f\neq 0$, breaks the spin degeneracy, creating a Zeeman effect in the absence of an anomalous magnetic moment term. The first two terms in the r.h.s. of (\ref{eq:omega2fmedium}) were found using the regularization procedure discussed in \cite{Klimenko2010}, and the vacuum term (\ref{eq:omega2fvac}) was regularized with the help of Schwinger's proper time scheme. The electron thermodynamic potential is \beq\label{reg-omega-e} \Omega_e=\Omega_e^{vac}+\Omega_e^{med}=\frac{1}{4\sqrt{\pi}}\frac{\vert eH\vert}{(2\pi)^2}\int^{\infty}_{-\infty} dp_3\sum_{n\epsilon}d(n)\int_{1/\Lambda^2}^{\infty}\frac{ds}{s^{3/2}}e^{-sE_e^2}-\frac{\vert eH\vert }{4\pi^2}\sum_{n}d(n)\int^{\infty}_{-\infty} dp_3(\mu_e-E_e)\Theta( \mu_e-E_e)|_{\epsilon=1}, \end{equation} where the degeneracy factor $d(n)=2-\delta_{n0}$ takes into account the lack of spin degeneracy of the LLL and the modes are given by the well-known spectrum of a free charged fermion in a magnetic field, \begin{equation}\label{eq:electron} E_e= \epsilon\sqrt{m_e^2+p_3^2+ 2\vert eH \vert n}, \quad \epsilon=\pm \,. \end{equation} To find the expectation values of $\Delta_f$, $b_f$, and $\tilde\mu_f$ we must solve the set of equations \beq \label{minimumeq} \frac{\partial\Omega^{(2f)}}{\partial \Delta_f}=0, \qquad \frac{\partial\Omega^{(2f)}}{\partial b_f}=0,\qquad \frac{\partial\Omega^{(2f)}}{\partial \tilde{\mu}_f}=0 \,, \quad f \in \lbrace u,d \rbrace \,, \end{equation} together with the electrical neutrality condition \beq \frac{\partial\Omega^{(2f)}}{\partial \mu_e}=0 \,. \end{equation} The third equation in (\ref{minimumeq}) is equivalent to the equation for the baryon density $3\rho_B=-\partial\Omega^{(2f)}/{\partial \mu}$. Its solution corresponds to a maximum of $\Omega^{(2f)}$ \cite{Koide}. The contribution $-\frac{\vert e_f H\vert}{2\pi^2}\widetilde{\mu}_f b_f$ in (\ref{eq:omega2fmedium}) originates from the asymmetry of the spectrum at the LLL and is directly connected to the baryon charge anomaly \cite{Tatsumi:2014}. Just as in the isospin-symmetric case \cite{Klimenko2010, Tatsumi:2014}, the presence of this term favors nonzero values for $b_f$ for all $\mu > 0$ within the region of validity of the model. Therefore, strictly speaking, for quark matter in the presence of a magnetic field, the formation of the CDW condensate is energetically favored from arbitrarily small to intermediate values of chemical potentials. This is clearly seen in the plots of Fig. \ref{Delta_q_vs_mu} where the behavior of the magnitude and modulation of the condensates as functions of the baryon quark chemical are depicted for $H=2.5 \times 10^{18}G$. The separation of the parameters $b_f ,\Delta_f$ for each quark is a consequence of the neutrality condition that leads to different quark chemical potentials and hence different condensate solutions for different flavors. However, the behavior of the individual condensates with the chemical potential is similar. For small chemical potentials, the inhomogeneity is present, but the modulation is very small and the magnitudes of the up and down condensates coincide and are equal to the magnitude of the homogenous case. In contrast, in the region of interest for star applications, $330$ MeV$<\mu< 550$ MeV, the two inhomogeneous condensates are quite distinct and robust. At very large chemical potential a competition may occur between the CDW solution and some form of color superconductivity, a topic worth being explored, but out of the scope of this paper. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{orderparamsb2_5e18g.pdf} \caption{ Amplitudes $\Delta_f$ ($f \in (u,d))$ and wave numbers $b_f$ for the CDW modulation (Eq. \ref{condensates2f}) as a function of chemical potential in the presence of a constant external magnetic field. \label{Delta_q_vs_mu}} \end{center} \end{figure} \subsection{Three-flavor model} \noindent While the exact densities reached in the core of compact stars are unknown, it is likely that strange quarks might play a role in the thermodynamics of these systems. In particular, when building a hybrid EoS, if the transition from nuclear to quark matter occurs beyond the hyperon onset in the nuclear phase, a two-flavor description of quark matter will clearly lead to inconsistencies and the inclusion of a third flavor in our quark model might therefore be necessary. As in the two-flavor case, imposing the condition of $\beta$-equilibrium leads to a split of the different flavor chemical potentials, which are given by \beq \label{eq:splitmu3f} \mu_u = \mu - \frac{2}{3} \mu_e \,, \qquad \mu_d = \mu + \frac{1}{3} \mu_e \,, \qquad \mu_s = \mu + \frac{1}{3} \mu_e \,. \end{equation} We then consider a three-flavor version \cite{Klevansky} of the neutral NJL model discussed above, with Lagrangian density \begin{equation} \label{L-ENJL} \mathcal{L}^{(3f)} =\mathcal{L}_0 +\mathcal{L}_1+\mathcal{L}_V \,, \end{equation} with \begin{equation}\label{L-0} \mathcal{L}_0 =\bar\psi \left(i \gamma^\mu D_\mu + \hat{\mu} \gamma^{0} - \hat{m}\right) \psi +\bar{\psi}_e \left(i \gamma^\mu D^{(e)}_\mu+\mu_e \gamma^{0}- m_e \right)\psi_e \,, \end{equation} \begin{equation}\label{l-13f} \mathcal{L}_1 =G_S \sum^{8}_{a=0}\left[(\bar\psi\lambda^a\psi)^2 + (\bar\psi i\gamma^5 \lambda^a \psi)^2 \right] \,, \end{equation} \begin{equation}\label{l-V3f} \mathcal{L}_V =-G_V \sum^{8}_{a=0}\left[(\bar\psi\gamma_\mu\lambda^a\psi)^2 + (\bar\psi\gamma^5 \gamma_\mu\lambda^a \psi)^2 \right] \,, \end{equation} Here $\hat{m} = {\rm diag}(m_u,m_d,m_s) $ is the current mass matrix (for our calculations we again neglect the light quark masses, while choosing for the strange current mass a value of $m_s = 150$ MeV), $ \hat{\mu}= {\rm diag}(\mu_u,\mu_d,\mu_s) $ is the flavor chemical potential matrix, $\lambda^a$ are the Gell-Mann matrices in flavor space for $a$ = 1, \dots 8, and $\lambda_0=\sqrt{\frac{2}{3}}\textbf{1}$. The covariant derivatives are defined as before, but with the replacement of the electric charge matrix by $Q = {\rm diag}\lbrace \frac{2}{3}e , -\frac{1}{3}e,-\frac{1}{3}e \rbrace $ for the quarks. When choosing our specific ansatz for the mean fields we follow \cite{Moreira:2013ura} and allow only the light quark condensates to become spatially inhomogeneous with the same plane-wave form of \Eq{condensates2f}, while implementing strange quarks as a homogeneous background of quasiparticles with constituent mass $\Delta_s$. Here again, the introduction of a nonzero $\mu_e$ breaks the isospin symmetry in the system, allowing for different values of the quark condensates of different flavors. As in the two-flavor case, we introduce quark number densities $\rho_f$ for each flavor, and replace each quark chemical potential by the effective one $\tilde{\mu}_f = \mu_f - 4 G_V \rho_f$. We have chosen not to include the six-fermion interaction generated by the instanton contribution in the three-flavor case \cite{Klevansky} because for isospin-asymmetric matter, with three condensates and two spatial modulations, it would give rise to coordinate-dependent terms in the mean-field Lagrangian due to the mixing of different condensates like for instance, $\langle\bar\psi_u\psi_u\rangle\langle\bar\psi_d\psi_d\rangle\bar\psi_s\psi_s$, that would make our numerical calculations quite involved. Nevertheless, even though our simplified model is just a first attempt to tackle a very complicated problem, we do not expect that including the instanton term will change the physical picture significantly, as the mass-radius curves should be less sensitive to the degree of mixing than to the main features of the model, namely the asymmetry of the LLL Hamiltonian of the light quarks, and the existence of condensates with different magnitudes and modulations induced by the neutrality condition, all properties that are present whether there is mixing or not. Working in the mean-field approximation, we readily find that the spectrum of the $u$ and $d$ quarks is still given by Eq. (\ref{eq:Equarks}), thus identical to the two-flavor case. The spectrum of the electrons is the same as before and the energies of the $s$ quarks are given by \begin{equation}\label{eq:Es} E_s= \epsilon\sqrt{\Delta_s^2+p_3^2+ 2\vert e_sH \vert n}, \quad \epsilon=\pm \,. \end{equation} The thermodynamic potential for the three-flavor case is then given by \beq \label{eq:omegaiso} \Omega^{(3f)} = \Omega^{(2f)}+N_c \Omega_s + \frac{(\Delta_s - m_s)^2}{8 G_S}-\frac{(\widetilde{\mu}_s-\mu_s)^2}{8G_V} \,, \end{equation} where $\Omega^{(2f)}$ is given by \Eq{eq:omega2f} and \beq\label{reg-omega-s} \Omega_s=\frac{1}{4\sqrt{\pi}}\frac{\vert e_sH\vert}{(2\pi)^2}\int^{\infty}_{-\infty} dp_3\sum_{n\epsilon}d(n)\int_{1/\Lambda^2}^{\infty}\frac{ds}{s^{3/2}}e^{-sE_s^2}-\frac{\vert e_sH\vert }{4\pi^2}\sum_{n}d(n)\int^{\infty}_{-\infty}dp_3(\widetilde{\mu}_s-E_s)\Theta( \widetilde{\mu}_s-E_s)|_{\epsilon=1}. \end{equation} The dynamical parameters of this model have to be determined from the equations \begin{eqnarray} \label{eq-3f} \frac{\partial\Omega^{(3f)}}{\partial \Delta_f}=0, \qquad \frac{\partial\Omega^{(3f)}}{\partial \tilde{\mu}_f}&=&0 \,,\qquad f=u,d,s \,, \\ \frac{\partial\Omega^{(3f)}}{\partial b_f}&=&0 \,, \qquad f=u,d \,, \\ \frac{\partial\Omega^{(3f)}}{\partial \mu_e}&=&0 \,. \end{eqnarray} Since the s-quark condensate is homogeneous, its role is basically to decrease the number of electrons required for neutrality, but it does not produce any significant qualitative change in the solutions of the inhomogeneous condensates of the light quarks, which display the same behavior with the chemical potential as in the two-flavor case depicted in Fig. \ref{Delta_q_vs_mu}. \section{Nuclear Matter in a Magnetic Field} \noindent For describing nuclear matter we employ the nonlinear Walecka model \cite{Walecka1986}. In the presence of an external magnetic field, it is characterized by the following Lagrangian \cite{Glendenning}: \begin{equation} \label{lag_nuc} \mathcal{L}=\sum_l\mathcal{L}_l+\sum_B \mathcal{L}_B+\mathcal{L}_M \end{equation} \\ where \begin{align} \mathcal{L}_l&=\bar{\psi_l}(i\gamma_\mu\partial^\mu- e\gamma_\mu A^\mu-m_l)\psi_l \,, \\ \mathcal{L}_B&=\bar\psi_B(i\gamma_\mu\partial^\mu-e_B\gamma_\mu A^\mu-m_B+g_{\sigma B}\sigma -g_{\omega B}\gamma_\mu\omega^\mu-g_{\rho B}\vec{ \tau}\cdot\vec{\rho_\mu}\gamma^\mu)\psi_B \,, \\ \mathcal{L}_M&=\frac12(\partial_\mu\sigma\partial^\mu\sigma-m_\sigma^2\sigma^2)-U(\sigma) + \frac12m_\omega^2\omega_\mu\omega^\mu-\frac14\omega_{\mu\nu}\omega^{\mu\nu}+\frac12 m_\rho^2\vec{\rho_\mu}\cdot\vec{\rho_\mu}-\frac14 \mathbf{\rho}_{\mu\nu}\mathbf{\rho}^{\mu\nu} \,. \end{align} The sum is taken over baryons ($B$), considering the nuclear octect (protons, neutrons, and hyperons $\Lambda$, $\Sigma^-$, $\Sigma^+$, $\Sigma^0$, $\Xi^-$, and $\Xi^0$), and leptons ($l$), considering electrons and muons. The mesons ($M$) considered comprise the scalar $\sigma$, isoscalar-vector $\omega_\mu$, and isovector-vector $\vec{\rho_\mu}$. They mediate the interactions between the baryon Dirac fields $\psi_B$. The lepton Dirac field is represented by $\psi_l$. Here, $e_B$ is the electric charge of each baryon, $\vec{ \tau}=(\tau_1,\tau_2,\tau_3)$ denotes the isospin matrices, and $m_B$ is the baryon mass. The field tensors for the mesonic fields are given by $\omega_{\mu\nu}=\partial_\mu\omega_\nu-\partial_\nu\omega_\mu$ and $\mathbf{\rho}_{\mu\nu}=\partial_\mu\vec{\rho_\nu}-\partial_\nu\vec{\rho_\mu}-g_{\rho B}(\vec{\rho_\mu}\times\vec{\rho_\nu})$, while $U(\sigma)=1/3 b m_n(g_{\sigma N} \sigma)^3 + 1/4 c (g_{\sigma N} \sigma)^4$ is the scalar self-interactions, $m_n$ being the nucleon mass. In the mean-field approximation the mesonic fields $\sigma$, $\omega_0$, and $\rho^3_0$ are assumed to acquire nonzero expectation values: $\langle \sigma \rangle=\bar{\sigma}$, $\langle \omega_0 \rangle=\bar{\omega}_0$, and $\langle \rho^3_0 \rangle=\bar{\rho}^3_0$. The mesonic masses are $m_{\sigma}=$ 400 MeV, $m_{\omega}=$ 783 MeV, and $m_{\rho}=$ 770 MeV. The hyperon couplings to the mesonic fields are described as a fraction of that of the nucleons and are taken as $x_{\sigma H}=0.7$ and $x_{\rho H}=x_{\omega H}=0.783$ ($x_{iB}=g_{iB}/g_i$, $i=\sigma, \rho,\omega$). The parameters are chosen to reproduce a binding energy of $-16.3$ MeV and a symmetry energy coefficient of $32.5$ MeV for saturated nuclear matter with compression modulus $K=300$ MeV and effective baryon mass $m_B^{*}=m_B-g_{\sigma}\bar{\sigma}=0.7 m_B$. For this we adopt the following values: $(g_{\sigma}/m_{\sigma})^2$= 11.79 fm$^{-2}$, $(g_{\omega}/m_{\omega})^2$= 7.149 fm$^{-2}$, $(g_{\rho}/m_{\rho})^2$= 4.411 fm$^{-2}$, $b=0.002947$, and $c=-0.001070$ (GM1 parametrization). The thermodynamic potential at zero temperature is therefore: \begin{align} \label{eq:omegaNucl} \Omega & = -\sum_{i=B,l}\Omega_i - \frac12\left(\frac{g_{\omega}}{m_{\omega}}\right)^{-2}\rho_B'^2+\frac12\left(\frac{g_{\sigma}}{m_{\sigma}}\right)^{-2}(g_{\sigma}\bar\sigma)^{2}+\frac13 bm_n(g_{\sigma}\bar\sigma)^3+\frac14 c(g_{\sigma}\bar\sigma)^4-\frac12\left(\frac{g_{\rho}}{m_{\rho}}\right)^{-2}\rho_{I_3}'^2 \,,\\ \rho_{I_3}' & = \sum_{i=B}x_{\rho i}I_{3i}\rho_i \,,\\ \rho_{B}' & = \sum_{i=B}x_{\omega i}\rho_i \,, \end{align} where the terms for charged particles (with dynamics modified by the filling of the Landau levels) and uncharged particles are given by \begin{align} \Omega_i^{neutral}&= -\frac13\frac{\gamma_i}{(2\pi)^3}\int d^3k\frac{k^2}{\sqrt{k^2+m_i^{*2}}} \,,\\ \rho_i^{neutral}&= \frac{k_{fi}^3}{3\pi^2}\,,\\ \Omega_i^{charged}&= -\frac{|eH|}{2\pi^2}\sum_{n=0}^{n_{max}}d(n)\int dk\, \frac{k^2}{\sqrt{k^2+\tilde{m}^{*2}_i}} \,, \\ \rho_i^{charged}&= \frac{|eH|}{2\pi^2}\sum_{n=0}^{n_{max}}d(n)\tilde{k}_{fi} \,, \end{align} where $\gamma_i$ is the degeneracy factor and $\rho_i$ the number density. The spin degeneracy of the Landau levels $n$ is denoted as before by $d(n)=2-\delta_{n0}$. The sum is taken from the LLL to $n_{max}$, where $n_{max}$ is the nearest natural number equal to or less than $[(\widetilde{\mu}_i^2-m^{*2}_i)/2|eH|]$ with $\widetilde{\mu}_i=\mu_i-g_\omega\bar{\omega}_0-g_\rho \tau_{3i}\bar{\rho}^3_0$ denoting the effective chemical potential for the given fermion. If we write the effective mass of charged components as {$\tilde{m}^{*2}_i=m^{*2}_i+2n|eH|$}, the Fermi momentum of the particles becomes {$\tilde{k}_{fi}^2=\widetilde{\mu}_i^2-\tilde{m}^{*2}_i$}. The baryon sum can be limited to protons and neutrons (GM1n case) or include the full baryon octet (GM1nh case). The EoS for the two cases will start to differ at densities around 0.3-0.4 fm$^{-3}$ or $\mu_B \sim 1230$ MeV, which mark the onset of hyperons \cite{Baldo, Veronica}. The thermodynamically favored values for the variational parameters in the model are obtained by solving the field equations describing the coupling of baryons to the mesons while considering chemical equilibrium and charge neutrality: \begin{eqnarray} \left(\frac{m_\sigma}{g_\sigma}\right)^{2}(g_\sigma \bar{\sigma})+bm_n(g_\sigma \bar{\sigma})^2+c(g_\sigma \bar{\sigma})^3 &=& \frac{1}{\pi^2}\sum_{i=B}\frac{g_\sigma}{m_\sigma^2}\int\frac{m^*_B}{\sqrt{k^2+ m^{*2}_B}}k^2 dk\, \\ \bar{\omega}_0 &=&\sum_B \frac{g_\omega}{m_\omega^2}\frac{k_{fB}^3}{3\pi^2}\, , \\ \bar{\rho}_0 &=& \sum_B \frac{g_\rho}{m_\rho^2}\frac{k_{fB}^3}{3\pi^2}I_{3B}\, , \\ \sum_{i=B,l} e_i \rho_i&=&0\,, \\ \mu_i&=&B_i\mu_B+e_i\mu_e\, , \end{eqnarray} \\ where $e_i$ and $B_i$ are the electric and baryonic charges of each component, associated with the corresponding electrical and baryonic chemical potentials, $\mu_e$ and $\mu_B$, respectively. \section{Varying inner magnetic field} \noindent Most compact stellar objects are known to have very high values of surface magnetic fields, with white dwarfs in the range of $10^{6}-10^9$G, typical neutron stars $10^8-10^{12}$G, and magnetars $10^{14}-10^{15}$G \cite{magnetar1, *magnetar2, *magnetar3, *magnetar4, *magnetar5}. While the magnetic field is definitely expected to rise in several orders of magnitude when going from the surface to the core, its actual inner profile is unknown. Estimates based on the virial theorem for stars made of quark matter give upper values of central fields of order $10^{19}-10^{20}$G \cite{Ferrer2010}. Other estimates found by solving the Einstein equations with axisymmetric and poloidal field configurations \cite{Bocquet, Cardall}, or by applying the virial theorem to stars entirely composed of nuclear matter \cite{Dong}, have led to the lower range $0.1 - 4.2 \times 10^{18}$G. In all our derivations, we consider a static background magnetic field $H$ pointing in the $z$-direction. This field influences the EoS both by altering the energy spectrum of the charged particles and by producing a splitting between the parallel and transverse pressures with respect to the field direction \cite {Ferrer2010}. The pressure splitting is mostly due to the Maxwell term $\mathcal{L}_{\rm EM} = -\frac{1}{4}F^{\mu\nu}F_{\mu\nu}$, where $F_{\mu\nu}=\partial_\mu A_\nu - \partial_\nu A_\mu$, contribution to the Lagrangian. We shall work in a region of magnetic field strengths where the pressure splitting is $\leqslant 10\%$, so that the ambiguity associated with using the spherical TOV equations to obtain the mass-radius sequences is also small. Additionally, we neglect the interaction of the magnetic field with the anomalous magnetic moment of the particles because its inclusion has been shown to have a negligible effect on the EoS of the dense medium \cite{Ferrer2015}. In the stellar medium, the electric conductivity is very large; thus the magnetic flux is conserved. Hence, the magnetic field should be stronger as the density increases towards the core. Assuming a constant field magnitude throughout the star would then be a very crude approximation, which might introduce a significant bias in the resulting EoS. To avoid this, we consider a varying magnetic field in the star. A first implementation of a varying magnetic field inside the star was done in Ref. \cite{H_profile} and then used by several authors \cite{Mao03,*Menezes09,*Rabhi09,*Casali14}. There, the neutron star was assumed to be composed entirely of nuclear matter and the magnetic field was expressed as a function of the baryon density, changing from a maximum central value to some lower surface value that was estimated from known magnetars. This ansatz, however, is not convenient to study hybrid stars on which the change from nuclear to quark matter occurs through a first-order transition. The jump that occurs in the baryon density due to the first-order transition would in turn produce an unphysical jump in the magnetic field. In such a case, a better way to mimic the inner varying field is to express it as a function of the baryon chemical potential, as proposed in \cite{Veronica}. Here we adopt the same approach and consider a medium-dependent value of $H$ when calculating the EoSs of the nuclear and quark phases. With this aim, we employ the following ansatz: \begin{equation} \label{varB} H(\mu_B)=H_{S}+H_C\left[1-e^{ -\kappa \left (\frac{\mu_B-\mu_N}{\mu_N} \right )^\gamma} \right ] \,, \end{equation} where $\mu_N = 938$ MeV can be interpreted as the baryon chemical potential for nuclear matter at the crust of the star. The parameters $\kappa$ and $\gamma$ determine how fast the rise of $H$ is with the baryon chemical potential from the surface to the core. $H_{S}$ is the value for the surface magnetic field and $H_C$ an estimate of the value at the core. The uncertainty in the knowledge of the actual inner profile of the magnetic field is expressed in (\ref{varB}) by the arbitrariness of the $(\gamma,\kappa)$ parametrization. While building hybrid stars, in Sec.\ref{sectionV}B we consider different values within acceptable ranges for these parameters, in order to determine the sensitivity of the EoS and the maximum stellar mass to their variation. In Fig.\ref{B-Profile} we show the magnetic field profiles for different sets of the $(\kappa, \gamma)$ parameters. Note that for the selected parametrization, the field decays quicker for densities corresponding to nuclear matter, while in the quark core it is almost constant. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{bprofilecanvas.pdf} \caption{ Magnetic field profiles corresponding to Eq. (\ref{varB}) for different parametrizations ($\kappa, \gamma$) in the chemical potential region under consideration.}\label{B-Profile} \end{center} \end{figure} \section{Inhomogeneous quark matter in the core of compact stars}\label{sectionV} \noindent Since the first indications of the existence of 2$M_{\odot}$ stars \cite{MassStars, MassStars2}, there were claims \cite{Trumper, Ozel} that quark matter might have to be ruled out as a core phase. The problem was motivated by the fact that despite more than a decade of perturbative QCD predicting maximum stellar masses for quark stars larger than 2$M_{\odot}$ \cite{Fraga, Kurkela}, the expected densities of compact stars are not sufficiently high to validate a perturbative approach for the strong interaction. On the other hand, nonperturbative calculations based on simple QCD phenomenological models like NJL with four-fermion channels were found to produce too soft an EOS, incapable to stabilize a 2$M_{\odot}$ star against the gravitational collapse (see for example \cite{Bordbar} and Fig. 8 in \cite{mag3}). However, when other interactions, also present in a dense medium of quarks \cite{Kitazawa2002}, such as the diquark channel \cite{Horvath} and/or the vector channel, were taken into account, the EOS stiffened enough to meet the 2$M_{\odot}$ challenge. Because of these findings, quark matter was back in the competition as a possible core phase of massive stars (see for instance \cite{Orsaria, Menezes:2014, Hell}). One may wonder if this last conclusion could be challenged in turn by the presence of an inhomogeneous quark matter phase in the moderate density region. Exploring the suitability of the CDW phase for the core of neutron stars is particularly important, given that practically all neutron stars have nonzero magnetic fields, and once a magnetic field is present, the CDW phase is known to be energetically favored over the homogeneous chiral phases of quark matter, the chirally broken (at low $\mu$) and the chirally restored (at intermediate $\mu$). As discussed in Sec. II, the reason why the CDW solution is energetically favored is connected to the anomaly of the baryon charge produced by the spectrum asymmetry of the LLL. The charge anomaly leads to the term $-\frac{\vert e_f H\vert}{2\pi^2}\widetilde{\mu}_f b_f$ in the thermodynamic potential that increases the pressure, rendering the EoS stiffer. Moreover, from the behavior of the parameters $b_f$ with the quark baryon chemical potential shown in Fig. \ref{Delta_q_vs_mu}, it is evident that this term will become more and more important with increasing $\mu$. What is unclear however is whether this term alone will be enough to compensate for the softening of the EoS typically associated to a transition to quark matter, or if other common stiffening factors such as vector interactions will also be needed. The only way to find out is through explicit calculations. Consequently, a main question we aim to explore in this section is the following: Can a magnetized hybrid star with a core of quark matter in the CDW phase sustain a star mass consistent with the $2M_\odot$ observations for an acceptable range of the model parameters? \subsection{Pressure splitting and Maxwell construction} \noindent To find the transition point from nuclear to quark matter, we employ the Maxwell construction, prescribing that the transition between two phases $1$ and $2$ occurs at the same baryonic chemical potential, $\mu_{B1}=\mu_{B2}$, temperature, $T_1=T_2$, and pressure, $P_1=P_2$. The transition is then of first order and the density exhibits a discontinuity at the phase transition. It was shown in \cite{Debora} that the Gibbs construction, for which the continuity of the electron chemical potential is also required, leads to very similar results for the macroscopic properties of a compact star, so that the choice of the Maxwell construction should be acceptable. The inclusion of a background magnetic field can in principle introduce a richer scenario in the construction of a hybrid EoS. Indeed, as mentioned in the previous section, when a system is subject to an uniform magnetic field along a specific direction, the pressure of the system develops a splitting in the directions parallel and perpendicular to the applied field. This splitting has to be taken into account in the EoS, which is then modified from the usual form into \cite{Ferrer2010} \begin{align} P^{\parallel}& =-\Omega-\frac{H^2}{2} \,,\\ \label{eq:pperp} P^{\perp} & =-\Omega - H\mathcal{M} + \frac{H^2}{2} \,, \\ \varepsilon & =\Omega+\mu \rho +\frac{H^2}{2} \,, \label{eq:split} \end{align} \\ where $\Omega$ is the matter contribution to the thermodynamical potential evaluated at the physical minimum, $\rho=-\partial\Omega/\partial\mu$ is the particle density, $\varepsilon$ the energy density and ${\cal M}=-\partial\Omega/\partial H$ the magnetization. In light of the pressure anisotropy, the Maxwell construction has to be generalized to require separately the equality of the pressure components of the two phases. Nevertheless, taking into account that the leading term in the thermodynamic potential is $\sim \mu^4$, and that for nonferromagnetic media ${\cal M}<H$, it follows that for the region under consideration, where $H<\mu^2$, neglecting the magnetization energy (${\cal M}H$) in the transverse pressure is a reasonable assumption. As a corroboration of these arguments, a direct calculation of the magnetization term for quark matter shows that for the range of $\mu$ relevant for star applications $330$ MeV$<\mu< 500$ MeV, $H{\cal M}/\Omega$ never exceeds $4\%$ (see Fig. \ref{Magnetization}), while for nuclear matter $H{\cal M}/H^2$ does not exceed $3\%$ \cite{Broderick2000}. Thus, the magnetization contribution can be neglected in the two phases. This, together with the fact that the magnetic field at the interphase is the same, reduces the Maxwell construction to \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{magnetization.pdf} \caption{Magnetization contribution compared to the matter thermodynamic potential (\Eq{eq:omega2f}) as a function of the quark chemical potential for a two-flavor system in a fixed external magnetic field. The Haas-van Alphen oscillations are richer in the isospin-asymmetric matter here considered due to the difference in the maximum Landau levels reachable for the up and down quark.} \label{Magnetization} \end{center} \end{figure} \beq \label{Maxwell-Cond} \Omega_{\rm nuclear}(\mu_{tr}) = \Omega_{\rm quark}(\mu_{tr}) \,, \end{equation} \\ with $\Omega_{\rm nuclear}$ given by \Eq{eq:omegaNucl} and $\Omega_{\rm quark}$ given either by \Eq{eq:omega2f} or \Eq{eq:omegaiso}, depending on whether the quark matter is composed of two or three flavors. Using this procedure, the transition chemical potential $\mu_{tr}$ can be obtained from the solution of Eq. (\ref{Maxwell-Cond}). Upon closer inspection, one can see that this criterion is however not entirely consistent: since the NJL model does not include gluonic degrees of freedom, its pressure will be completely blind to any effect related to confinement, while the nuclear model considered obviously deals exclusively with confined objects. In this sense, the transition occurring at $\mu_{tr}$ should be interpreted as a ``deconfinement'' phase transition, which is not built from fundamental gluon dynamics, but only as an effective construction relying on the two phenomenological models involved. While a proper inclusion of confinement properties and the interplay between chiral and deconfinement phase transitions is clearly beyond the scope of this paper, in the spirit of previous works \cite{QM3,Pagliara} we attempt to incorporate these effects in a crude way through the introduction of a constant shift to the NJL vacuum pressure $\delta\Omega_0$, which will be treated as a free parameter. In the following, as done e.g. in \cite{Debora} for the case of homogeneous condensates, we consider two possible scenarios, one where strange quarks do not contribute to the thermodynamics of the star, and another in which they are included. For the first case (which we refer to as SU(2) case), we neglect hyperons in the nuclear EoS (GM1n case) and consider only the two light flavors for the quark part. In order to have a consistent SU(2) description, the phase transition to quark matter must occur before the onset of hyperons. This limits the maximum value of the vector coupling in our calculations to $G_V \sim 0.02 G_S$ with $\delta\Omega_0=0$. For the SU(3) case, such a limitation is not present, although if $G_V > 0.05 G_S$ the transition chemical potential is so high that a quark matter core is never realized. The hybrid EoS for each case, using the parallel and perpendicular pressures, is shown in Fig. \ref{Fig_eos_hybrid}. We note that the phase transition happening at high chemical potentials would present a much more prominent density jump. It is also noticeable in the curve for GM1nh+SU(3) the softening of the EoS when the strange quark appears, as well as the stiffening of the quark EoS with increasing $G_V$. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{eos_hib_su2_3.pdf} \includegraphics[width=0.49\textwidth]{eos_hib_su2_3parallel.pdf} \caption{Equation of state (considering $P_{\perp}$ on the left panel and $P_{\parallel}$ in the right one) for hybrid stars for different values of the vector repulsion as indicated. The nuclear EoS does not include hyperons when considering quark matter with only quarks up and down (GM1n+SU(2) case) and includes hyperons when the strange quark is included in the quark phase as well (GM1nh+SU(3) case). In the first case, the phase transition must happen at lower values of $\mu_B$. The magnetic field follows \Eq{varB} and is taken with $H_{\rm S}=1\times 10^{15}$ G, $H_{C}=2.5\times 10^{18}$ G, $\gamma=2.5$, and $\kappa=12$. The horizontal lines joining the jumps in the energy density produced by the first-order phase transitions are only graphical artifacts to connect the curves before and after the phase transitions. No shift in the vacuum value of the quark phase was introduced.}\label{Fig_eos_hybrid} \end{center} \end{figure} \subsection{Masses and radii} \label{sec:MR} \noindent We now present numerical results for (M-R) sequences using the hybrid EoS obtained in the previous section, together with the Baym-Pethick-Sutherland \cite{BPS} EoS for the star crust. As discussed above, the presence of strong magnetic fields breaks the star's spherical symmetry and the usual TOV equations for obtaining the star's structure are no longer valid \cite{mag3}. However, one can consider a range of magnetic fields that is physically meaningful for compact stars and yet does not produce a sizable splitting in the pressure. An example of the pressure splitting profile inside the star is given in Fig. \ref{p_anis}. As expected, such splitting of the pressures is more prominent with an increase in the field, which translates to higher densities given the density dependence of the magnetic field inside the star. One can see from Fig. \ref{p_anis} that if the field remains below $ 3 \times 10^{18} G$ the relative error associated with using one of the two pressures as representative of the whole interior of the star remains relatively small ($\leq 10\%$). A similar scale has been found using the EoS of other models of dense quark matter \cite{mag3}. Therefore, we work within the region of fields satisfying $ H \leq 3 \times 10^{18}$ G, so as not to invalidate the use of the spherical TOV equations. As a cross-check, we perform our calculations using both pressures, in order to make sure that the choice of one over the other does not lead to dramatic changes in our results. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{p_anis_star.pdf} \caption{Parallel (lower curves) and perpendicular (upper curves) pressures for a two-flavor hybrid star as a function of the baryonic chemical potential. The surface magnetic field is taken as $1\times 10^{15}$ G and the central field is $2.5\times 10^{18}$ G for the full lines and $6.8\times 10^{18}$ G for the dashed ones with $\gamma=2.5$ and $\kappa=12$. The curves end at the maximum value of the baryon chemical potential achieved in the interior of each configuration.}\label{p_anis} \end{center} \end{figure} Hybrid star M-R sequences with an inhomogeneous quark matter core are shown in Fig. \ref{MR_GM1} for values of surface magnetic field compatible with magnetars, $H_{S} \approx 1 \times 10^{15} $G and $H_C \approx 2.5 \times 10^{18}$ G, using the perpendicular pressure. From the shape of these curves, the shrinking of the quark matter core with increasing $G_V$ is clearly visible. The effective field in the center of the star is usually smaller than $H_C$ as can be seen in Table \ref{table}. As expected, the inclusion of strangeness for both the nuclear and quark phases softens the EoS, so that the maximum mass is substantially reduced. In the case of GM1nh+SU(3) with $G_V=0.05$, the phase transition happens for a very high value of the central density and the quark core turns out to be extremely small. As previously mentioned, we also investigated the effects of including a constant shift $ \delta\Omega_0$ in the NJL model vacuum pressure, which we interpret as a contribution from confining effects (i.e. a bag constant). Following \cite{QM3,Pagliara}, this quantity is treated as a free parameter ranging between $-17 \leq \delta\Omega_0 \leq 0 $ MeV/fm$^3$. It is clear from Fig. \ref{MR_GM1} that the effect of the charge anomaly alone is not large enough to push the mass to the desired $2M_\odot$ value. However, choosing a negative value of $\delta\Omega_0$ results in a slightly stiffer EoS for quarks and pushes the phase transition from nuclear to quark matter to lower chemical potentials (albeit always larger than $\mu = 350$ MeV). This allows the use of a larger value of the vector coupling, which will also help make the quark EoS stiffer. The left panel of Fig. \ref{MR_GM1} shows that with the combination of all these effects, for suitable values of $G_V$ and $\delta\Omega_0$, it is possible to increase the maximum mass achieved in the two-flavor case. Although the mass increases only by a relatively small amount, it is enough to make it compatible with the observations of PSR J1614-2230 and PSR J0348+0432. In the GM1nh+SU(3) case, a lower value of the transition chemical potential and a stiffer quark phase could also be achieved by the use of a negative shift in the vacuum value (see the right panel of Fig. \ref{MR_GM1}). However, in this case, due to the small size of the quark core, the influence of these effects is greatly reduced compared to the two-flavor scenario. Of course, this is limited by the nuclear EoS used here: different models might allow for larger quark cores and make these effects more noticeable. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{mrgm1nshifted.pdf} \includegraphics[width=0.49\textwidth]{mrgm1nhshifted.pdf} \caption{The mass-radius relations for a two-flavor (left) and three-flavor (right) inhomogeneous hybrid star considering $H_{S}=1\times 10^{15}$G, $H_C=2.5\times 10^{18}$G, $\gamma=2.5$, and $\kappa=12$. We employed the perpendicular pressure and the shifted vacuum pressure in units of MeV/fm$^3$ (see text for details) is indicated. The curves for stars composed entirely of nuclear matter in the GM1 parametrization with no magnetic field are also shown for comparison. The mass constraints ($\pm \sigma$) of pulsars PSR J1614-2230 and PSR J0348+0432 are shown as shaded regions.}\label{MR_GM1} \end{center} \end{figure} \begin{table} \setlength{\tabcolsep}{6pt} \begin{tabular}{l| r |r r r r| r| r r} \hline \hline & GM1n ($H$=0) & \multicolumn{4}{c|}{GM1n+SU(2)} & GM1nh ($H$=0) & \multicolumn{2}{c}{GM1nh+SU(3)}\\ \hline $G_V/G_S$ & - & 0 & 0 & 0 & 0.02 & - & 0 & 0.05 \\ $\gamma$ & - & 2.00 & 2.50 & 3.00 & 2.50 & - & 2.50 & 2.50\\ $\kappa$ & - & 4.70 & 11.70 & 35.20 & 11.70 & - & 11.70 & 11.70 \\ $H_{S}$ ($\times 10^{15}$ G) & 0 & 1.00 & 1-00 & 1.00 & 1.00 & 0 & 1.00 & 1.00 \\ $H_C$ ($\times 10^{18}$ G) & 0 & 2.50 & 2.50 &2.50 & 2.50 & 0 & 2.50 & 2.50\\ $M_{max} (M_{\odot})$ & 2.39 & 1.84 & 1.85 & 1.87 & 1.96 & 2.03 & 1.70 & 1.98\\ $H_{max}/H_C$ & - & 0.78 & 0.94 & 1.00 & 0.93 & - & 0.56 & 0.92\\ \hline \hline \end{tabular} \caption{Maximum mass values in units of solar masses obtained using the perpendicular pressure for hybrid stars composed of nuclear matter (GM1) and two-flavor ($GM1n+SU(2)$) and three-flavor ($GM1nh+SU(3)$) quark matter with inhomogeneous condensates for different values of the parameters determining the magnetic field profile inside the star and the quark repulsion strength ($G_V$).}\label{table} \end{table} Table \ref{table} shows the maximum masses obtained using different parametrizations for the magnetic field in \Eq{varB}, as well as different values for the vector coupling. The results are obtained considering the perpendicular pressure. In this case, the maximum masses of the considered magnetized hybrid stars are smaller than those of pure nuclear matter at zero field (GM1n ($H$=0)). In order to test the dependence of the maximum mass with the magnetic field profile inside the star, we have changed the values of $\gamma$ and $\kappa$ so as to have a faster ($\gamma=3.0$ and $\kappa=35.2$) and a slower ($\gamma=2.0$ and $\kappa=4.7$) increase of the field as one moves toward the center of the star (see Fig. \ref{B-Profile}), while keeping the same values for $H_C$ and $H_S$ for the case $G_V=0$ and two-flavor matter. The difference in the corresponding maximum masses can be seen in Table \ref{table}. We note that increasing the rate of decay of the field toward the surface, the maximum mass increases very slightly. For the parametrization used, this only amounts to up to $\sim$ 1\% in comparison to the profile with $\gamma=2.5$ and $\kappa=11.7$ used for all other calculations. Nevertheless, we point out that a more significant difference could be reached by allowing for a very steep increase/decrease of the magnetic field with $\mu$. Whether such a profile would be compatible with the internal structure of compact stars is not known, so in the present work we restricted ourselves to the more conservative parametrizations shown in the table. We also point out that enforcing a larger value of $H_C$ (although excessively strong central fields would lead to large star deformations and invalidate the use of the spherically symmetric TOV equations) is also expected to influence the maximum mass value in a more prominent way. The differences arising from the choice of the parallel or perpendicular pressures in the TOV equations are presented in Table \ref{tabledif} and Fig.\ref{parXperp}. The difference in the maximum masses never exceeds $0.1$ solar masses and the two curves differ from each other only at high densities, where the magnetic field strength is higher. Unfortunately, given the limitations of the method used here to obtain the star structure in light of the anisotropy introduced by the magnetic field, we cannot conclude which of these results could better represent the maximum mass for a highly magnetized compact object. We therefore simply treat the two results as upper and lower limits of an uncertainty band. In this context, it is interesting to point out that the inclusion of a magnetic field using an axisymmetrical geometry has been found to increase the star mass \cite{Cardall}, a result that might indicate that the actual physical situation in our study could be closer to the upper than the lower limit. Nevertheless, it should also be kept in mind that the conclusions of \cite{Cardall} are not exempt of limitations either, as they were found considering a poloidal field configuration and disregarding the modification of the matter part of the EoS by the magnetic field. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{mr_parxperp.pdf} \caption{ Comparison of the mass-radius relations for a two-flavor hybrid star obtained when using the parallel and perpendicular pressures. The magnetic field is modeled with $H_{S}=1\times 10^{15}$G, $H_C=2.5\times 10^{18}$G, $\gamma=2.5$, and $\kappa=12$. The values of $G_V$ and shifted vacuum pressure in units of MeV/fm$^3$ are indicated. We note that the decrease in the maximum mass attainable is less than 5\% when using the parallel pressure over the perpendicular one.}\label{parXperp} \end{center} \end{figure} \begin{table} \setlength{\tabcolsep}{6pt} \begin{tabular}{l|r r r | r r} \hline \hline & \multicolumn{3}{c|}{GM1n+SU(2)} & \multicolumn{2}{c}{GM1nh+SU(3)}\\ \hline $G_V/G_S$ & 0 & 0.02 & 0.05 & 0 & 0.05 \\ $\delta\Omega_0$ (MeV/fm$^3$) & 0 & 0 & -10 & 0 & 0 \\ $M_{max}: P_{\perp}$ & 1.85 & 1.96 & 2.05 & 1.70 & 1.98\\ $M_{max}: P_{\parallel}$ & 1.77 & 1.88 & 1.95 & 1.67 & 1.89\\ \hline \hline \end{tabular} \caption{Maximum mass (in units of solar mass) allowed for hybrid stars composed of nuclear matter (GM1) and inhomogeneous condensate obtained using the perpendicular and parallel pressures. The varying magnetic field profile is given by $H_S=1\times 10^{15}$ G, $H_C=2.5\times 10^{18}$ G, $\gamma=2.5$, and $\kappa=12$.}\label{tabledif} \end{table} Taking all these elements into account, we find that with a relatively low value of $G_V$ and a realistic value for $H_C$, well within the range where the pressure anisotropy is small enough to justify the application of the usual spherically symmetric TOV equations, it is possible to achieve maximum masses around 2 $M_{\odot}$, a result compatible with the precise mass measurements of PSR J1614-2230 ($M=1.97\pm 0.04M_{\odot}$ \cite{Demorest}) and PSR J0348+0432 ($M=2.01\pm 0.04M_{\odot}$ \cite{Antoniadis}). A quark matter core characterized by an inhomogeneous chiral condensate can thus be seen as a viable internal composition of these compact objects. \section{Concluding Remarks} \noindent We studied the effects of the formation of inhomogeneous chiral symmetry breaking phases on the EoS of quark matter in a magnetic field, and the consequences for the masses of hybrid stars. To describe quark matter in the core, we considered a conventional NJL model with scalar and pseudoscalar four-fermion channels and included several elements relevant for astrophysical scenarios, such as electrical neutrality, $\beta-$-equilibrium and vector repulsion. Considering a CDW ansatz for each light flavor, we saw that the spectral asymmetry of the LLL gives rise to a term in the thermodynamic potential that favors the formation of this inhomogeneous ground state and stiffens the pressure. Using this quark model together with the well-established nonlinear Walecka model, we built hybrid EoSs for stars with quark matter core in the CDW phase, and showed that such a configuration can support masses of $\sim 2 M_\odot$. We investigated the sensitivity of these results on the parametrizations involved, and found that masses compatible with the recent PSR J1614-2230 and PSR J0348+0432 measurements can be obtained with reasonable values of the parameters chosen, although in the case where strangeness is allowed, the quark core would end up being significantly reduced in order to achieve higher masses. We find the fact that the realization of inhomogeneous quark matter in the core of compact stellar objects gives compatible results with the current mass observation a very encouraging outcome of our investigation, even when considering the still unsolved issue with the splitting of the pressures in the presence of a magnetic field. In light of this, it would definitely be interesting to investigate other effects of the formation of inhomogeneous condensates on the physics of compact stellar objects, such as transport and cooling properties, with the hope of finding stronger experimental signatures. One may wonder if the CDW condensate of the magnetized quark system could be washed out by fluctuations. Let us recall that in one-dimensional systems, the Mermin-Wagner theorem \cite{MW,*Coleman} forbids the existence of (1+1)-dimensional Nambu-Goldstone bosons, so that in such systems, condensate solutions that break chiral symmetry cannot be stable against quantum fluctuations about the condensate. In addition, single-modulated condensates in (3+1)-dimensional systems can be quite sensitive to temperature fluctuations because their pairing dynamics is often essentially one-dimensional. This possibility was recently investigated in \cite{Lee:2015bva}, where it was found that the Nambu-Goldstone modes associated with the CDW condensate wash out the long-range order at finite temperatures, although they do support algebraically decaying long-range correlations, so the phase can still exhibit a quasi-one-dimensional order as in liquid crystals. While the analysis of our paper has not included the effects of fluctuations, we believe that there are good reasons to expect that in the case with magnetic field, the fluctuations will likely not be as effective in inducing the instability of the CDW condensate. The magnetic field is known to enhance chiral symmetry breaking through the dimensional reduction of the LLL fermions, the well-known mechanism of magnetic catalysis (see \cite{Miransky:2015} for a recent review). The same dimensional reduction has proven to be essential to make the inhomogeneous CDW condensate energetically favored thanks to the asymmetry of the LLL spectrum in the inhomogeneous background. Now, the magnetic catalysis mechanism was recently questioned by claims that the dimensional reduction of the LLL fermions would translate into an effective dimensional reduction of the Nambu-Goldstone bosons, which in turn would hinder the stability of the chiral condensate, the so-called inverse magnetic catalysis \cite{Fuku:2013}. But these claims were later challenged by the results of Ref. \cite{MC-KK} that used a functional renormalization group approach to demonstrate that the constituent quark mass increases with the magnetic field at all temperatures and concluded that despite a strong anisotropy in the meson propagation, their fluctuations do not lead to the inverse magnetic catalysis claimed in \cite{Fuku:2013}. In view of this, we expect that a similar behavior to the one found in \cite{MC-KK} should occur in the case of the inhomogeneous chiral condensate. At this point however, we admit that ours are just hand-waving arguments that can be corroborated only by a thorough study of the fluctuations in the CDW phase in a magnetic field, an interesting task to be undertaken in the future. We remark that in order to obtain a first insight on the effect of inhomogeneous quark matter on the stellar EoS, several simplifying assumptions were made when building our model. One was to ignore the generation of the condensate associated with the anomalous magnetic moment of the chiral pairs. Such a condensate has been found in the presence of a magnetic field for the homogeneous background \cite{Quiroz} and even in the case of color superconductivity in a magnetic field \cite{MCFL-1, LN871, MCFL-2}. There is no reason to expect it is not present also in the inhomogeneous case. However, the magnitude of the magnetic moment condensate is typically small compared to the chiral condensate, except for extremely large magnetic fields, hence, as a first approximation it can be neglected. Another simplification was the omission of color-superconducting phases, which are expected to be the true ground state at asymptotically high densities. While we recall that the presence of a magnetic field has been shown to favor the formation of chiral crystalline phases, it is also known that a magnetic field leads to extra condensates in color superconductivity \cite{MCFL-1, MCFL-2}, and makes the MCFL more stable than the regular CFL, so it is expected that a competition between color superconductivity and CDW may occur at intermediate densities in the presence of a magnetic field. Therefore, it is a pending and important task to explore whether the inhomogeneous phase can push or not the onset of color superconductivity in the presence of a magnetic field to densities higher than the ones considered in the present work. An explicit model calculation to address this question would be of course highly desirable. Additionally, an interesting question to consider is whether the incorporation of gluon effects through an extension of the NJL quark model to a gauged NJL quark model could lead to any sizable softening of the EoS of the system, as was recently found to occur in the case of CFL color superconductivity \cite{Laura15}. Finally, we recall that the inhomogeneous phase considered in this work is just one of the possible exotic phases that could be realized in dense matter. Another plausible candidate for the ground state of strongly interacting matter at intermediate densities, the so-called quarkyonic matter \cite{Mcl}, is also characterized by a spatially varying chiral condensate \cite{Kojo, InceraNJLquark}, although possibly with very different characteristics, particularly in a magnetic field background \cite{Ferrer:2012}, which could result in new unexpected effects on the EoS and on the transport properties of cold and dense quark matter. \begin{acknowledgments} S.C. and L.P. are grateful to M. Chiapparini for very helpful discussions. The work of E.J.F. and V.I. has been supported in part by DOE Nuclear Theory grant DE-SC0002179. L.P. acknowledges the financial support received from the Brazilian funding agencies CNPq, Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico, and Fapesp, Funda\c c\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (2013/26258-4), and the hospitality of the UTEP Physics Department where this work was conducted. \end{acknowledgments}
\section{Introduction} In the presence of spatial inversion or time-reversal symmetry (TRS), Weyl fermions can appear in condensed matter as topologically protected electronic band crossings close to the Fermi level with quantized Chern numbers. First theoretically predicted~\cite{PhysRevB.83.205101,PhysRevX.5.011029,NatCommHuang} and then experimentally discovered~\cite{PhysRevX.5.031013,Xu07082015} in the family of TaAs compounds, they display exotic transport properties and spectroscopic phenomena such as the chiral anomaly, large negative magnetoresistance with values exceeding the ones achieved in semiconductors and metals, and surface disconnected Fermi arcs\cite{Nielsen1983389,PhysRevB.83.205101,Xiong413,PhysRevX.5.031023,PhysRevB.90.155316,PhysRevLett.114.206401,PhysRevB.91.165105,PhysRevB.94.241405}. As promising new platforms for novel applications in spintronics, several families of materials have been proposed as candidate materials for Weyl semimetals (WSMs) \cite{soluyanov2015nature,hasanatcomm,PhysRevB.94.161116,PhysRevB.94.161401,TQC,Schooprev}. However, when time-reversal symmetry is preserved, Weyl nodes appear in multiples of $4$, resulting in complicated band structures challenging their integration into spintronic devices.\\ If TRS is broken, the number of Weyl nodes is reduced by a factor of two, which is why the search for magnetic WSMs is crucial for realizing possible applications. Unfortunately, materials proposed theoretically as magnetic Weyl fermions\cite{xu2011,burkov2011,liuweyl2014,OngNatMat,PhysRevLett.117.236401,FelserWeylsemimetal} have not yet unambiguously been experimentally confirmed, and a good candidate material is still needed. Further, in order to search for material candidates efficiently, it is important to fully understand when Weyl nodes can appear in a material. Recently, it has been proposed that topological band crossings protected by non-symmmorphic symmetries will result in fermions beyond Weyls\cite{PhysRevB.85.155118,PhysRevLett.116.186402,Bradlynaaf5037}; these fermions have also been called {\it New Fermions}. They are based on the appearance of three-, six- or eight-fold band degeneracies that are located at high-symmetry points in several space groups (SGs) and are enforced by group theory, thus appearing in all materials that crystallize in these SGs. The challenge is to find a material where these band crossings appear close to the Fermi level\cite{ToppNJP}. In particular, it was shown that the groups $I2_{1}31'$ (No 199.13) and $I4_{1}321'$ (No. 214.68), host a three dimensional band crossing at the $P$ point, which is a time-reversal (TR), non-invariant point of the Brillouin zone (BZ). These three-fold degenerate new fermions can be viewed as a Weyl nodes with a nominal topological charge of 2 intersected by an additional flat band with no charge. However, the presence of these three-fold fermions in real materials is very rare. The {\it International Crystallographic Database} (ICSD)\cite{ICSD} does not contain many compounds that crystallize in the two parent grey groups of these two groups, and of the known ones, many compounds host the three-fold band crossings far away from the Fermi level. This is the reason why we introduce a different approach for realizing three-fold degenerate fermions in this paper. In the presence of TR symmetry, six space groups can host six-fold degeneracies, which are in general much more abundantly found in real materials, since the amount of materials crytallizing in these space groups is much larger\cite{ICSD}. One space group hosting six-fold band degenercies is $P2_{1}31'$ (198.10)\cite{Bradlynaaf5037}, which is a folded version of space group $I2_{1}31'$. It corresponds to a simple-cubic Bravais lattice, and the six-fold degeneracy occurs at the TR invariant R point , which is located at the corner of the BZ. Moreover, the magnetic $P2_{1}31'$, is one of the Sohncke groups, making materials crystallizing in this space group promising for experimental applications such as the observation of the quantized photogalvanic effect\cite{PhysRevLett.116.077201,FerNatCom,PhysRevLett.119.206401,PhysRevB.94.241105}. It was recently proven that band degeneracies that are a result of non-symmorphic symmetries are affected by magnetic order\cite{Schoopeaar2317,TOPP2017,Canoprep}. In particular, TR symmetry breaking can reduce the band degeneracy. Thus, breaking TR symmetry in a compound containing a six-fold degenerate point, could generate magnetic WSMs based on two or three-fold band crossings. So far, three-fold magnetic Weyl fermions have not been described either theoretically or experimentally, to the best of our knowledge. \begin{figure} \centering \includegraphics[scale=0.3]{tmp610190} \caption{Graph of magnetic subgroups of the gray group $P2_{1}31'$ (No.198.10) compatible with no change of the unit cell and a non-zero component of the magnetic moment at the 4a wyckoff position of the Fe atom in the PtFeSb compound. The subgroups and the graph have been obtained from the tool {\it k-Subgroupsmag}\cite{KSUB1,KSUB2}.} \label{fig0} \end{figure} In this paper, we investigate the compound PtFeSb, which was reported to crystallize in the non-symmorphic SG $P2_{1}31'$\cite{FePtSb} as candidate to host magnetic Weyl nodes. By means of first-principle calculations we will introduce the different topological magnetic phases this compound can display. Symmetry analysis based on magnetic group theory will solidify our $ab-initio$ claims of the existence of magnetic Weyl nodes at high symmetry points. Finally, we will present experimental progress on the characterization of this compound. Unlike previously reported \cite{FePtSb}, we were only able to synthesize a material that crystallizes in space group $Pa\bar 31'$ (No. 205.34), adopting a disordered structure compared to the reported one. We therefore extend our theoretical analysis to $Pa\bar 31'$, finding that similar conditions apply for the enforced band degeneracies, finding that they can split to form two-fold Weyl nodes. \begin{figure*} \centering \includegraphics[scale=0.6]{fig1-3} \caption{(a) Crystal structure of PtFeSb with spins aligned so that it adopts the $P2_{1}3$ (No. 198.9 in the BNS setting) magnetic group. Fe, Pt and Sb atoms are shown in brown, grey and green respectively. (b) Paramagnetic bulk band structure of PtFeSb along high symmetry lines including spin-orbit coupling, six-fold band crossings are highlighted. (c) Bulk band structure of PtFeSb in the magnetic chiral structure $P2_{1}3$, the inset shows three-fold and one-fold degenerate bands. (d) Bulk band structure of PtFeSb in a ferromagnetic phase with the magnetic moments aligned along (001) direction (magnetic group $P2_{1}'2_{1}'2_{1}$ No. 19.27).} \label{fig1} \end{figure*} \section{Method of Calculation} The first-principles calculations have been performed within the DFT\cite{Hohenberg-PR64,Kohn-PR65} as implemented in the Vienna Ab initio Simulation Package (VASP)\cite{Kresse199615,PhysRevB.48.13115}. The interaction between ion cores and valence electrons was treated by the projector augmented-wave method\cite{vaspPaw} and the generalized gradient approximation (GGA) for the exchange-correlation potential with the Perdew-Burke-Ernkzerhof for solids (PBEsol) parametrization~\cite{PBE}. In order to account for the magnetic character of the system we performed spin-polarized calculations. Spin-orbit coupling (SOC) was included in the Hamiltonian by adding scalar relativistic corrections using the second variation method\cite{PhysRevB.62.11556}. A Monkhorst-Pack k-point grid of (4$\times$4$\times$4) for reciprocal space integration and 600 eV energy cutoff of the plane-wave expansion have been used to get a residual error on the forces of less than 1 meV/\AA, resulting in a fully converged electronic structure including SOC. The matrices of the irreducible representations (irreps) of the (double) space groups have been obtained from the tool {\it representations DSG}\cite{Elcorodsg,PhysRevE.96.023310} in the {\it Bilbao Crystallographic Server}\cite{BCS1,BCS2,BCS3}, and the dimensions of the irreducible co-representations of the (double) magnetic groups have been inferred from the dimensions of these irreps. For completeness we have also used {\it ISODISTORT} software\cite{ISODISTORT}. \section{Experimental Details} Samples were prepared by placing elemental Pt, Fe and Sb in a molar ratio of 1:1:1.5 in sealed quartz tubes. The tubes were then heated to 1150 $^\circ$ C for 24 hours. They were subsequently cooled with a rate on 1.5$^\circ$/h to 700$^\circ$ C and then quenched into ice water. Without quenching, multi-phase samples were obtained. Without a large excess of Sb, an impurity phase, which could be identified as a Pt-Fe alloy appeared in the x-ray diffraction pattern. Powder x-ray diffraction measurements were performed at room temperature on a Bruker D8-Advance with Cu-K$\alpha_1$ radiation (Ge(111) monochromator, $\lambda$ = 1.54059 \AA), in reflection geometry, using a Vantec detector. High temperature magnetic measurements were performed on an MPMS from Quantum Design, equipped with a high temperature furnace. \section{Results} \subsection{Theoretical prediction} PtFeSb was originally reported to crystallize in the space group $P2_{1}31'$ (No. 198.10)\cite{FePtSb}. It was expected to crystallize in the half-Heusler crystal structure similarly to related compounds, but was found to exhibit a closely related structure in which Sb and Fe are displaced from the ideal (half-Heusler) positions, breaking all the mirrors and inversion symmtries (see Fig.~\ref{fig1}(a))\cite{xtal-str,PhysRevB.56.13012}. The structure can be viewed as an ordered version of the pyrite crystal structure. Pyrites crystallize in SG $Pa\bar 31'$ (No. 205.34) and have the nominal composition of \textit{MX$_2$}, where the \textit{M} atoms sit on the Wyckoff position 4a and the \textit{X} atoms on 8c. In the ordered version of the structure, the 8c position splits into two 4a positions in SG $P2_{1}31'$ (No. 198.10), thus forming an ordered ternary structure type. SG $P2_{1}31'$ has four time-reversal invariant momenta (TRIMs), $\Gamma$, $R$, $M$ and $X$. We first elucidate the band topology in the absence of magnetism. At the corner of the BZ, at R=($\pi$,$\pi$,$\pi$), 3 generators leave this point invariant, $C_{3,111}^{-1}$ along the (111) axis, $C_{2x}$ and $C_{2y}$; adding TRS the little group can be described by a six- or a two-dimensional irreducible representation (irrep)\cite{Bradlynaaf5037}. The six dimensional irrep is visible as the six-fold degenerate band crossing in Fig.~\ref{fig1}(b)), above and below Fermi level. These chiral six-fold bands near the Fermi level should feature a spin-1 Weyl fermion. In addition, a four-fold-degenerate point appears at $\Gamma$, with a Chern number of $\pm$4\cite{PhysRevLett.115.036806,PhysRevLett.119.206402}. The two TRIMs are unrelated by symmetry and they are therefore able to display gigantic Fermi arcs. These Fermi arcs should be easily detectable with spectroscopic methods. Since we have a non-zero topological charge when TRS is preserved, we expect the six-fold crossing to split in Weyl nodes when TRS breaks. \begin{table}[h!] \begin{center} \caption{Possible dimensions of the irreps at R point of the ferromagnetic and magnetic chiral phases of PtFeSb} \label{posi} \begin{tabular}{ccc} \hline \hline Magnetic Group & K point & Dim \\ \hline $P2_{1}3$ (No. 198.9) & R & 1,3\\ $P2_{1}'2_{1}'2_{1}$ (No. 19.27) & R & 2 \\ \hline \hline \end{tabular} \end{center} \end{table} Breaking TRS can give rise to different magnetic groups (of type I and III) without changing the unit cell. Fig.~\ref{fig0} shows the subgroup-tree compatible with a non-zero magnetic moment at the position occupied by the Fe atoms (4a). Only the magnetic groups $P2_{1}3$ and $P2_{1}'2_{1}'2_{1}$ are maximal subgroups. In this work, we are going to focus on the $P2_{1}3$ (No. 198.9) and $P2_{1}'2_{1}'2_{1}$ (No. 19.27) subgroups, as they can hold three and two dimensional co-representations at point $R$, respectively (see Table~\ref{posi}). In all other magnetic groups, magnetic order will cause the bands to split fully, so that they are no longer degenerate. {\it Possible three-fold magnetic Weyl fermions} The group $P2_{1}3$ (No. 198.9) holds a magnetic phase that breaks TRS while keeping the cubic structure. In this way the six-dimensional irrep ${\bar R_{7}}{\bar R_{7}}$, splits into two ${\bar R_{7}}$ irreps of three dimensions. Accordingly, the two dimensional ${\bar R_{4}}{\bar R_{4}}$ and ${\bar R_{5}}{\bar R_{6}}$ irreps split into 1 dimensional ones (${\bar R_{4}}$,${\bar R_{5}}$ and ${\bar R_{6}}$). In order to maintain the cubic symmetry, there is only one possible magnectic configuration: the direction of the magnetic moments needs to be along (111) cubic diagonals and the sum of the magnetic moment of the Fe atoms needs to be 0 ($\sum{\bf m}_{Fe}=0$). This complex non-collinear magnetic structure (see Fig.~\ref{fig1}(a)) has been reported in other compounds crystallizing in the same SG\cite{PhysRevB.69.054422}. As predicted by magnetic group theory, the band splittings are visible in our ab-initio calculations, shown in Fig.~\ref{fig1}(c). We preformed this calculation for different values of Hubbard-U (0, 1, 2 and 3 eV) obtaining the same value of the magnetic moment m=1.7 $\mu_{B}$. {\it Possible two-fold magnetic Weyl fermions} $P2_{1}'2_{1}'2_{1}$ is another maximal subgroup of the magnetic group $P2_{1}31'$ that can hold a magnetic phase with three different components of the magnetic moment (m$_x$,m$_y$,m$_z$) on the Fe atoms, symmetry imposes that the sum of the 4 magnetic components on the $x$ and $y$ axis needs to be 0 independently ($\sum{\rm m}_{x}=0$ and $\sum{\rm m}_{y}=0$). In this magnetic phase the co-representation ${\bar R_{7}}{\bar R_{7}}$ of the grey SG $P2_{1}3$ (see Fig. \ref{fig0}), splits into three 2-dimensional ones\cite{Miller}. We performed our band structure analysis considering a magnetization along the (001) direction. The band splitting predicted by group theory can again be observed in the calculated band structure; in Fig.~\ref{fig1}(d) the magnetic two-fold Weyls nodes are visible at the $R$ point. \subsection{Experimental data} Motivated by our theoretical analysis, we tried to synthesize the material to explore the magnetic structure, In order to obtain information about the magnetic groups accessible in PtFeSb. Phase pure samples could only been obtained if about 50 \% excess of Sb was used during the synthesis. Fig. \ref{XRD} shows the powder x-ray diffraction pattern of a sample obtained with the starting composition of PtFeSb$_{1.5}$. A very small amount of impurity phase is present, which is marked with an asterisks. If different starting compositions are used, this peak increases significantly in intensity. It's position roughly matches the database entry of a bcc-type Pt-Fe alloy \cite{ICSD}. In comparison to the remaining reflections, the impurity reflection always appeared much broader, indicating that the Pt-Fe alloy side-phase can adopt a range of composition. Chemical analysis of the sample with the smallest amount of impurity revealed a composition of PtFe$_{1.1}$Sb$_{1.33}$. Fig. \ref{XRD} also displays the simulated x-ray patterns for PtFeSb \cite{FePtSb} and PtSb$_2$ \cite{PtSb2}. Both materials exhibit the exact same lattice constant, hence doping between the two end members of PtFe$_{x}$Sb$_{2-x}$ would not cause a peak shift. However, a clear distinction between the two space groups of the ordered ternary or of a disordered pyrite structure can be made, since the former one requires superstructure reflections. These extra reflections are indicated with arrows in Fig. \ref{XRD}. It is very obvious that the sample does not exhibit these reflections, clearly indicating that it is adopting a disordered version of the pyrite crystal structure in space group $Pa\bar 31'$. If we would assume that Fe would exclusively substitute Sb, the measured composition would not match the expected MX$_2$ composition of a pyrite compound. We thus assume that Fe can substitute on both, the Pt and the Sb site. We could assume a nominal composition of (Pt$_{0.9}$Fe$_{0.1})($Sb$_{1.3}$Fe$_{0.7}$) for the material, which would match measured composition in a very disordered pyrite phase, in agreement with our powder x-ray diffraction analysis. We tried to confirm this model with single crystal diffraction. Although we were able to obtain single crystal diffraction data of reasonable quality, we were unable to reliably refine the data in respect to the location of the Fe atoms. Single crystal diffraction did however confirm the space group of the material to be $Pa\bar 31'$. Since no major impurity phases were present and since the reflections, both in powder and single crystal diffraction are sharp, we conclude the we made a highly disordered pyrite phase with the chemical composition PtFe$_{1.1}$Sb$_{1.33}$. Since there is no change in lattice constant between PtSb$_2$ and PtFeSb, we cannot be sure that the material synthesized here has a range of intermediate compositions. We can however certainly say that the pyrite phase PtSb$_2$ can be doped with a significant amount of Fe. \begin{figure}[h!] \centering \includegraphics[scale=0.35]{XRD-fig.pdf} \caption{Powder x-ray diffraction of PtFe$_{1.1}$Sb$_{1.33}$. Measured data are shown in black and simulated patterns for PtFeSb and PtSb$_2$ are shown in red and blue respectively. Peaks that can serve as an indicator for the ordered phase are indicated by arrows. None of these are observed in the experimental data. The impurity phase is marked with an asterisks .} \label{XRD} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.35]{mag-fig.pdf} \caption{Magnetic properties of PtFe$_{1.1}$Sb$_{1.33}$. The Temperature dependent susceptibility is shown in the main panel. While a transition from ferromagnetic to paramagnetic order is observed as roughly 545 K, the susceptibility remains high, indication that a fraction of the sample remains ferromagnetic above T$_C$. The inset shows the field dependent magnetic moment measured at T=300K. The saturated moment is too high for being cause by an impurity phase.} \label{mag} \end{figure} We further analyzed the magnetic properties of our sample. We found that the sample sticks to a simple bar magnet at room temperature, indicating ferromagnetic behavior. Fig. \ref{mag} shows the temperature dependent susceptibility of PtFe$_{1.1}$Sb$_{1.33}$ between 300 K and 630 K. While we observe a drop in the susceptibility at around 545 K, which indicates a transition form ferromagnetic to paramagnetic order, the susceptibility remains relatively large above the transition. This suggests that a portion of the sample remains ferromagnetic above the transition, supporting our previous discussion of a possible range of compositions in the sample. Nonetheless, we can exclude elemental Fe as the cause for the ferromagnetic behavior due to two reasons. For once, the saturated magnetic moment at 300 K is too high for just being caused by an Fe impurity, which is not visible in the powder x-ray diffraction pattern and could thus not be larger than 5 \% of the sample. Secondly, the Curie temperature of Fe is much higher, about 1043 K \cite{Spaldin}.We can therefore conclude that we could synthesize a ferromagnetic pyrite phase in group $Pa\bar 31'$. This is not surprising, since $P2_{1}31'$ is a subspace-group of $Pa\bar 31'$ that lacks inversion symmetry. As in previous section, we analyze the electronic properties of a magnetic phase in space group $Pa\bar 31'$ by means of magnetic group theory. The magnetic symmetry analysis was performed considering only the k=0 propagation vector because of the observation of a spontaneous large ferromagnetic moment in the magnetization data and assuming a single irreducible representation. Since Fe was observed on both, the Pt and Sb sites, both Wyckoff positions (4a and 8c) were allowed to have an ordered magnetic moment in the symmetry analysis (see Fig. \ref{fig3}). The results are summarized in Table \ref{posi2}; there are three different interesting groups that allow for ferromagnetic phases , $Pb'c'a$, $R\bar{3}$ and $P2_{1} '/c'$, all deriving from the mGM4+ irreducible representation with different order parameter directions. These groups also feature two-fold magnetic Weyl nodes in $Pb'c'a$ and $P2_{1} '/c'$. Unfortunately the observed ferromagnetic order does not allow for the existence of three-fold crossings. Three-fold crossings can only appear in non-colinear magnetic structures that maintain the cubic symmetry. \begin{table}[h!] \begin{center} \caption{Possible dimensions of the irreps at R point of the ferromagnetic phases of Fe-doped PtSb$_2$ in the maximal subgroups of $Pa\bar 31'$. High-symmetry labels of the corresponding magnetic group where the band-crossings appear , dimensionality of the irreps and direction of the easy axis in the cubic basis are shown in columns 2, 3 and 4 respectively. } \label{posi2} \begin{tabular}{cccl} \hline \hline Magnetic Group & K point & Dim & FM direction\\ \hline $Pb'c'a$ (No. 61.436) & R & 2 & $\langle100\rangle$ direction\\ $R\bar{3}$ (No. 148.17) & T & 1 & $\langle111\rangle$ direction\\ $P2_{1} '/c'$ (No. 14.79) & C & 2& $\{110\}$ plane\\ \hline \hline \end{tabular} \end{center} \end{table} \begin{figure*}[h!] \centering \includegraphics[scale=0.6,angle=90]{tmp359679} \caption{Graph of the maximal subgroups of space group $Pa{\bar 3}1'$ (205.34) } \label{fig3} \end{figure*} \section{Conclusions} In conclusion we showed theoretically, using magnetic group theory and ab-initio calculations, that the compound PtFeSb in $P2_{1}31'$ has the potential to host new fermionic excitations in the presence of magnetic order. Depending on the exact spin order, two different topological phases can exist, exhibiting either two-fold and three-fold magnetic Weyl nodes. This is the first study providing theoretical evidence for magnetic order being a tool to create three-fold degenerate magnetic new fermions. The implications range far beyond the material candidate introduced here, further magnetic compounds in SG $P2_{1}3$ will potentially show similar features, since our arguments are solely based on group theory and not depend on the individual elements composing the material. This formalism could for example also be applied to Mn$_3$IrSi\cite{PhysRevB.69.054422} reported to crystallize in SG $P2_{1}3$. Our efforts of synthesizing PtFeSb resulted in a different crystallographic phase, a magnetic alloy in SG $Pa\bar 31'$ (No. 205.34). We showed that this phase can also hold two-fold magnetic Weyl fermions, making this material interesting for future investigations as well. At this point, we can't exclude that PtFeSb can also be synthesized in its ordered version. More synthesis methods, maybe at elevated pressures, to avoid formation of the Fe-Pt alloy, might be necessary to synthesize this phase. Nonetheless, both phases, the disordered and ordered version are of high interest for future investigation with spectroscopic methods to confirm the presence of magnetic Weyl nodes. \section{Acknowledgements} MGV was supported by IS2016-75862-P national project of the Spanish MINECO. The work of LE was supported by the Government of the Basque Country (project IT779-13) and the Spanish Ministry of Economy and Competitiveness and FEDER funds (project MAT2015-66441-P). This research was partially supported by NSF through the Princeton Center for Complex Materials, a Materials Research Science and Engineering Center DMR-1420541.
\subsection{Proof of \texorpdfstring{\Cref{MainThm1d}}{Theorem 1}} Let $\delta \in (0, 1)$. Define the sequence $t(n)$ by setting \begin{align}\label{t_n} t(n) = \frac{3.4 \sigma}{\alpha}\sqrt{\frac{\ln \ln 2n + 0.72\ln (\nicefrac{10.4}{\delta})}{n}} \end{align} for any integer $n \geq 1$ and define $n_0=n_0(\alpha,r,\delta)$ to be the smallest integer $n\ge 1$ for which $t(n) \le r$. We intentionally omit the dependence of $t(n)$ in $\delta$ to lighten notations. We only detail the proof for the upper bound of the probability of the event \begin{align} \mathcal{A} \coloneqq \left\{ \exists n \geq n_0 \text{ such that } \widehat{\theta}_n - \theta^* > t(n) \right\}, \end{align} the proof for upper bounding the probability of the event $\mathcal{A}' \coloneqq \big\{ \exists n \geq n_0, \theta^* - \widehat{\theta}_n > t(n) \big\}$ is very similar. Our proof can be decomposed into two steps : first, we show that we can reduce the problem of upper bounding the probability of the event $\mathcal{A}$ to the problem of uniformly bounding a sum of sub-Gaussian random variables ; then we employ a tight uniform concentration inequality for the sum of sub-Gaussian random variables. \begin{lemma}\label{lem:m_estimator_to_SG} Under \Cref{as:convex_phi,as:Phi_strongly_convex,as:SG}, for any integer $n \geq n_0$ and positive real $t \in (0, r]$, there exist $n$ Ni.i.d.\ $\sigma^2$-sub-Gaussian random variables $Z_1(t),\ldots,Z_n(t)$ such that \begin{align} \mathcal{A}_n(t) \coloneqq \big\{\widehat{\theta}_n > \theta^* + t \big\}\subset \mathcal{B}_n (t) = \bigg\{\sum_{i=1}^n Z_i(t) \geq \frac{\alpha}{2} n t\bigg\}. \end{align} \end{lemma} \begin{myproof For any integer $n\geq n_0$ and real $t\in (0,r]$, we set \begin{align} S_n(t) &= n\big(\widehat\Phi_n(\theta^*)-\Phi(\theta^*)\big)- n\big(\widehat\Phi_n(\theta^*+t)-\Phi(\theta^*+t)\big)\\ &= n\big(\widehat\Phi_n(\theta^*)-\widehat\Phi_n(\theta^*+t)\big)+ n\big(\Phi(\theta^*+t)-\Phi(\theta^*)\big).\label{eqq1} \end{align} \Cref{as:convex_phi} ensures that the empirical risk $\widehat\Phi_n$ is convex and coercive, thus, \begin{align} \mathcal{A}_n(t) \subset \big\{\widehat\Phi_n(\theta^*)\geq \widehat\Phi_n(\theta^*+t)\big\}, \end{align} see \Cref{fig:U_shape} for an illustration of this implication. Using \eqref{eqq1} and the lower-boundedness of the population risk $\Phi$ by a quadratic function (\Cref{as:Phi_strongly_convex}), we arrive at \begin{align} \mathcal{A}_n(t) \subset \big\{S_n(t) \geq n\left(\Phi(\theta^* + t) - \Phi(\theta^*)\right)\big\} \subset \big\{S_n(t) \geq \frac{\alpha}{2}nt^2\big\} \subset\bigg\{\frac{S_n(t)}{t} \geq \frac{\alpha}{2}nt^2\bigg\}. \end{align} Finally, using the definition of $\widehat\Phi_n$, we can write $S_n(t)$ as follows \begin{align} \frac{S_n(t)}{t} = \sum_{i=1}^n t^{-1}\big\{\underbrace{ \phi(Y_i,\theta^*) -\phi(Y_i,\theta^*+t)-\mathbb E\big[\phi(Y_i,\theta^*)- \phi(Y_i,\theta^*+t)\big]}_{:=tZ_i(t)}\big\}. \end{align} The random variables $Z_i(t)$ are clearly centered and i.i.d. Furthermore, it follows from \Cref{as:SG} that $Z_i(t)$ is sub-Gaussian variables with variance proxy $\sigma^2$. This completes the proof. \end{myproof} \definecolor{xdxdff}{rgb}{0.,0.,1} \begin{figure} \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm] \begin{axis}[ x=1cm,y=0.5cm, axis lines=middle, xlabel = $\theta$, grid style=dashed, ymajorgrids=false, xmajorgrids=false, xmin=-0.5, xmax=7.435978260869557, ymin=-0.5, ymax=7.539371980676332, xtick={\empty}, ytick={\empty}, extra x ticks={2, 3,4}, extra x tick labels={$\theta^*$, $\theta^* + t$, $\widehat{\theta}_{n}$}], \clip(-1.0509782608695646,-0.6747584541062818) rectangle (12.435978260869557,7.539371980676332); \draw [samples=1000,rotate around={0:(4,1)},xshift=4cm,yshift=0.5cm,line width=2pt,domain=-5:5)] plot (\x,{(\x)^2/2/0.5}); \begin{scriptsize} \draw[color=black] (6.7,6.1410024154589373) node {$\widehat{\Phi}_n$}; \draw [fill=xdxdff] (2,5) circle (2pt); \draw[color=xdxdff] (1.3,5) node {$\widehat{\Phi}_n(\theta^*)$}; \draw [fill=xdxdff] (3,2) circle (2pt); \draw[color=xdxdff] (1.9,2) node {$\widehat{\Phi}_n(\theta^* + t)$}; \draw [fill=xdxdff] (4,1) circle (2pt); \draw[color=xdxdff] (4.05,1.9) node {$\widehat{\Phi}_n(\widehat{\theta}_{n})$}; \end{scriptsize} \end{axis} \end{tikzpicture} \caption{Illustration of the shape of the function $\widehat{\Phi}_n$.} \label{fig:U_shape} \end{figure} \Cref{lem:m_estimator_to_SG} tells us that, in order to bound the probability of the event \begin{align} \mathcal{A} = \bigcup_{n=n_0}^\infty \mathcal{A}_n\big(t(n)\big) \end{align} it suffices to bound the probability of the event \begin{align} \mathcal{B} \coloneqq \bigcup_{n=n_0}^\infty \mathcal{B}_n \big(t(n)\big)= \bigg\{ \exists n \geq n_0\text{ such that } \sum_{i=1}^n Z_i\big(t(n)\big) \geq \frac{\alpha}{2} n t(n) \bigg\}. \end{align} Thus, we need a uniform in sample size upper bound on the sum of sub-Gaussian random variables. We will use a special case of \cite[Theorem 1]{howard2018uniform} which we now state (see Eq.~(7) in the original paper). \begin{theorem}[{\citealp[Theorem 1]{howard2018uniform}}] \label{thm:Howard} Let $Z_1, Z_2, \dots$ be independent, zero-mean, $\sigma^2$-sub-Gaussian random variables. It holds that, for any confidence $\delta \in (0, 1)$, \begin{align} \mathbb{P}\bigg( \exists\, n \geq 1 : \sum_{i=1}^n Z_i \geq 1.7 \sigma \sqrt{n\big( \ln \ln (2n) + 0.72 \ln({5.2}/{\delta}) \big)} \bigg) \leq \delta. \end{align} \end{theorem} Combining \Cref{lem:m_estimator_to_SG} with \Cref{thm:Howard}, and taking into account the definition \eqref{t_n} of $t(n)$, we get \begin{align} \mathbb{P}\left(\mathcal{A}\right) \leq \mathbb{P} \bigg(\exists n \geq n_0 \text{ such that } \sum_{i=1}^n Z_i\big(t(n)\big) \geq \frac{\alpha}{2} n t(n) \bigg) \leq \delta/2. \end{align} One can easily check that an identical upper bound for the probability of the event \begin{align} \mathcal{A}' = \left\{ \exists n \geq n_0 \text{ such that }\theta^* - \widehat{\theta}_n > t(n) \right\} \end{align} can be obtained using the same arguments. \begin{remark} Several uniform bounds on the sum of sub-Gaussian random variables have been proved (see, e.g. \citep{jamieson2014lil,pmlr-v98-maillard19a} and the other theorems from \citep{howard2018uniform}). \Cref{fig:comparison} and \Cref{tab:sum_SG_upper_bound} shows a comparison between those bounds. The bound from \citep{jamieson2014lil} is loosest for any sample size. The bound from \citep{pmlr-v98-maillard19a} is the tightest for small sample size while the one from \cite{howard2018uniform} becomes the tightest when the sample size increases. \end{remark} \begin{table} \caption{Uniform upper bounds for sum of $t$ i.i.d. 1-sub-Gaussian random variables.} \label{tab:sum_SG_upper_bound} \centering \begin{tabular}{lll} \toprule Reference & Bound & Confidence \\ \midrule \cite{jamieson2014lil} & $1.57\left[t \left(\ln\ln(1.01t) + \ln(\nicefrac{1}{\delta})\right)\right]^{\nicefrac{1}{2}}$ & $21154 \delta^{1.01}$\\ \cite{howard2018uniform} & $1.44\left[t\left(1.4 \ln \ln(2t) + \ln\left(\nicefrac{5.19}{\delta}\right) \right)\right]^{\nicefrac{1}{2}}$ & $\delta$\\ \cite{pmlr-v98-maillard19a} & $1.42 \left[ \left(t+1\right)\left(\ln(\sqrt{t+1}) + \ln(\nicefrac{1}{\delta})\right)\right]^{\nicefrac{1}{2}}$ & $\delta$\\ \bottomrule \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{img/upper_bound_vs_UB.pdf} \caption{Comparison of uniform, high-probability, upper tail bounds for the sum of i.i.d. sub-Gaussian random variables scaled by $c_{n, \delta}^{UB}$ bound (see \Cref{sec:2}). \citet[Lemma 3]{jamieson2014lil} with $\varepsilon=0.02$, \citet[Lemma 15]{pmlr-v98-maillard19a} and \citet[Theorem 1]{howard2018uniform} with $ \eta=2.04, s=1.4$. Global confidence is set to $\nu = 0.1$.} \label{fig:comparison} \end{figure}% \subsection{Proof of \texorpdfstring{\Cref{main:thm2}}{Theorem 2}} Without loss of generality, we assume hereafter that $\boldsymbol{a}$ is a unit vector. Let $\beta\in(1,2)$ and $\varepsilon>0$ be two constants that we will choose to be equal to $1.1$ and $0.2$, respectively. Throughout the proof we consider, for $k \in \mathbb{N}$, the sequence of integers, $n_0 = 4$, $n_{k+1} = \lceil\beta n_k\rceil$ and the sequence of integer intervals $I_k = [ n_k, n_{k+1})\cap\mathbb{N}$. We define the sequence $(t(n))_{n \in \mathbb{N}}$ by setting \begin{align} t(n) = \frac{10\varrho_n B}{\sqrt{3}}\sqrt{\frac{(1+\varepsilon)\ln\ln_\beta n + \ln(1/\delta) + 5/8}{n}}, \quad \text{ for } n \geq 1\label{cn}. \end{align} For readability we write $t(n_k) = t_{k}$ for any integer $k$. We wish to upper bound the probability of the event \begin{align} \mathcal{A} = \bigcup_{n=4}^\infty \mathcal{A}_n,\quad \text{where} \quad\mathcal{A}_n = \{ {\boldsymbol a}^\top(\boldsymbol{\theta}^* - \bm{\widehat{\theta}}_n) > t(n)\}. \end{align} Define the set $\mathcal{V} = \big\{{\boldsymbol v} \in \mathbb{R}^d, {\boldsymbol v}^\top{\boldsymbol a} = 1\big\}$ and the random variable \begin{align} S_n({\boldsymbol w}) = n\left(\widehat\Phi_n(\boldsymbol{\theta}^*) -\Phi_n(\boldsymbol{\theta}^*)\right) -n\left(\widehat\Phi_n(\boldsymbol{\theta}^*-\boldsymbol{w}) - \Phi_n(\boldsymbol{\theta}^* - \boldsymbol w) \right). \end{align} We have the following lemma resulting from the convexity assumptions. \begin{lem}\label{lem:1} Under \Cref{as:convex_lipschitz_phi,as:convex_penalty,as:Phi_strongly_convex_everywhere}, for any integers $k \in \mathbb{N}, n \in I_k$, the event $\mathcal{A}_n$ is included in the event \begin{align} \mathcal{B}_{n} \coloneqq \left\{ \sup_{{\boldsymbol w} \in t_{k+1}\mathcal{V}} \left[ S_n(\boldsymbol w) - (\nicefrac{ \alpha_n}{2}) n \| {\boldsymbol w}\|^2\right] \geq 0 \right\}. \end{align} \end{lem} The proofs of the lemmas stated in this section are postponed to \Cref{sec:postponed_proofs}. Combining \Cref{lem:1} with a union bound gives \begin{align} \mathbb{P}\left(\mathcal{A} \right) &\le \mathbb{P}\bigg(\bigcup_{k \geq 0} \bigcup_{n \in I_k}\mathcal{B}_{n} \bigg)\leq \sum_{k \geq 0} \mathbb{P}\bigg(\bigcup_{n \in I_k} \mathcal{B}_{n} \bigg). \end{align} Let k be an integer. Since the sequence $(\alpha_n)_n$ is non-increasing we have, for any integer $n \in I_k$, $\alpha_n \geq \alpha_{n_{k+1}}$. Setting $\beta = 1.1$ we have $n_k/n_{k+1} \geq 4/5$ for $n \geq 4$. Thus, for any positive real $\lambda$, \begin{align} \mathbb{P}\bigg(\bigcup_{n \in I_k} \mathcal{B}_{n} \bigg) &\leq \mathbb{P}\bigg( \sup_{n \in I_k} \sup_{\boldsymbol w \in t_{k+1}\mathcal{V}} \bigg[S_n(\boldsymbol w) - \frac{\alpha_n}{2} n_k \lVert \boldsymbol w \rVert_2^2 \bigg] \geq 0 \bigg)\\ &\leq \mathbb{P}\bigg( \sup_{n \in I_k} \sup_{\boldsymbol w \in t_{k+1}\mathcal{V}} \bigg[S_n(\boldsymbol w) - \frac{2\alpha_{n_{k+1}}}{5} n_{k+1} \lVert \boldsymbol w \rVert_2^2 \bigg] \geq 0 \bigg)\\ &\leq \mathbb{P}\bigg( \sup_{n \in I_k} \sup_{\boldsymbol w \in t_{k+1}\mathcal{V}} \exp\left\{\lambda \left(S_n(\boldsymbol w) - \frac{2\alpha_{n_{k+1}}}{5} n_{k+1} \lVert \boldsymbol w \rVert_2^2 \right) \right\} \geq 1 \bigg). \end{align} The stochastic process $\left(\sup_{\boldsymbol w \in t_{k+1}\mathcal{V}} \exp\left\{\lambda \left(S_n(\boldsymbol w) - 2\alpha_{n_{k+1}} n_{k+1} \lVert \boldsymbol w \rVert_2^2/5 \right) \right\}\right), n \in \mathbb{N}^*$, is a submartingale with respect to its natural filtration, therefore, Doob's maximal inequality for submartingales yields, \begin{align} \mathbb{P}\bigg(\bigcup_{n \in I_k} \mathcal{B}_{n} \bigg) &\leq \inf_{\lambda > 0} \mathbb{E}\left[\sup_{\boldsymbol w \in t_{k+1}\mathcal{V}} \exp\left\{ \lambda \left(S_{n_{k+1}}(\boldsymbol w) - \frac{2\alpha_{n_{k+1}}}{5} n_{k+1} \lVert \boldsymbol w \rVert_2^2 \right)\right\} \right].\label{eq:expectation_exp_sum} \end{align} The next lemma uses classic tools from empirical processes theory such as the symmetrization trick and the contraction principle to bound the expectation from \eqref{eq:expectation_exp_sum}. \begin{lem}\label{lem:symmetrization_contraction} Under \Cref{as:convex_lipschitz_phi}, given a positive integer $m$ and three positive real numbers $t$, $\alpha$ and $\lambda$, letting $t' = (\nicefrac{2m \alpha }{L})t$, we have, \begin{align} \inf_{\lambda > 0} \mathbb{E}\left[\sup_{\boldsymbol{w} \in t \mathcal{V}} \exp\left\{ \lambda \left( S_m(\boldsymbol{w}) - \alpha m \lVert \boldsymbol{w} \rVert_2^2 \right) \right\} \right] \leq \inf_{\lambda > 0} \mathbb{E}\left[\sup_{\boldsymbol{w} \in t' \mathcal{V}} \exp\left\{ \lambda \left(\boldsymbol{w}^\top \mathbf{X} \boldsymbol{\varepsilon} - \lVert \boldsymbol{w} \rVert_2^2/2 \right) \right\} \right]. \end{align} \end{lem} Applying \Cref{lem:symmetrization_contraction} with $m=n_{k+1}, \alpha = 2\alpha_{n_{k+1}}/5$ and $t = t_{k+1}$ gives \begin{align} \mathbb{P}\bigg(\bigcup_{n \in I_k} \mathcal{B}_{n} \bigg) \leq \inf_{\lambda > 0} \mathbb{E}\left[\sup_{\boldsymbol w \in s_{k+1} \mathcal{V}} \exp \left\{ \lambda(\boldsymbol w^\top \mathbf{X} \boldsymbol\varepsilon - \lVert \boldsymbol w \rVert_2^2/2) \right\} \right], \quad s_{k+1} = \frac{4n_{k+1}}{5\varrho_{n_{k+1}}}t_{k+1}.\label{eq:expectation_exp_rademacher} \end{align} For fixed $\mathbf{X}$ and $\boldsymbol\varepsilon$, define the concave quadratic function $G(\boldsymbol w) \coloneqq \boldsymbol w^\top \mathbf{X} \boldsymbol\varepsilon - \lVert \boldsymbol w \rVert_2^2/2 $. The next lemma results from explicitly computing the supremum inside the expectation in \eqref{eq:expectation_exp_rademacher} and bounding the resulting moment generating function. For the next lemma, we denote by $B_{\boldsymbol{a}^\top \boldsymbol{X}}$ the smallest constant $B$ for which $\mathbb{P}(|\boldsymbol{a}^\top\boldsymbol{X}_1|\le B)=1$. It is clear that $B_{\boldsymbol{a}^\top \boldsymbol{X}}\le B_{\|\boldsymbol{X}\|}$. Nevertheless, we prefer to use the constant $B_{\boldsymbol{a}^\top \boldsymbol{X}}$ for this lemma in order to keep the inequality as tight as possible. \begin{lem}\label{lem:first_expectation} Let $I$ be a finite set of cardinality $m \in \mathbb{N}$. Let $(\boldsymbol{X}_i)_{i \in I}$ be i.i.d.\ random vectors in $\mathbb{R}^d$ satisfying \Cref{as:boundedness} and let $(\varepsilon_i)_{i \in I}$ be i.i.d.\ Rademacher variables, independent of $(\boldsymbol{X}_i)_{i \in I}$. Then, for any positive constants $s, \mu$ such that $ 8 \mu m B^2 \leq 1$, \begin{align} \mathbb{E}\left[\sup_{w \in s \mathcal{V}} e^{\mu G(\boldsymbol w)} \right] \le \exp\big\{(ms^2 B_{{\boldsymbol a}^\top \boldsymbol{X}}^2)\mu^2 + (5 m B^2 - s^2/2)\mu \big\}.\label{exp:1} \end{align} \end{lem} Applying \Cref{lem:first_expectation} with $m=n_{k+1}$, $\mu = \lambda = \frac{1}{8n_{k+1}B^2}$ and $s=s_{k+1}$ gives \begin{align} \mathbb{E}\left[\sup_{\boldsymbol w \in s_{k+1} \mathcal{V}} e^{\lambda G(\boldsymbol w)} \right] &\le \exp\left\{ - \frac{3s_{k+1}^2 - 40n_{k+1}B^2}{64n_{k+1}B^2} \right\}. \end{align} The choice of $t_{k+1}$ ensures that $\frac{3s_{k+1}^2 - 40n_{k+1}B^2}{64n_{k+1}B^2} \geq (1+\varepsilon) \ln \ln_\beta n_{k+1} + \ln(1/\delta)$. It follows that \begin{align}\label{eq:expectation1} \mathbb{E}\left[\sup_{\boldsymbol w \in s_{k+1} \mathcal{V}} e^{\lambda G(\boldsymbol w)} \right] \leq \frac{\delta}{(k+15)^{1+\varepsilon}}. \end{align} Finally, summing over all integer $k \geq 0$ and setting $\varepsilon=0.2$, we get \begin{align} \mathbb{P}(\mathcal{A}) \leq \delta \sum_{k\geq 0} \frac{1}{(k+15)^{1+\varepsilon}} \leq 3\delta. \end{align} \subsection{Proof of \texorpdfstring{\Cref{thm:UBforBandits}}{Theorem 3}} In this section, we provide the proof of the upper bound established for the proposed algorithm in the problem of the best arm identification in the multi-armed bandit problem. We start with two technical lemmas, then we provide two other lemmas that constitute the core technical part of the proof of \Cref{thm:UBforBandits}. Finally, in \Cref{ssec6.3.3}, we put all the pieces together and present the proof of the theorem. \subsubsection{Preliminary lemmas} We state and prove two elementary lemmas which we will need for the proof of \Cref{thm:UBforBandits}. \begin{lem}\label{lem:first_bandit_inequality} For $t\geq 1, c>0$ and $0 < \omega \leq 0.15$, we have \begin{align} \frac{1}{t}\ln\left(\frac{\ln(2t)}{\omega} \right) \geq c \implies t \leq \frac{1}{c} \ln\left(\frac{2\ln(1/(c \omega))}{\omega} \right). \end{align} \end{lem} \begin{proof} Let $f(t) = \frac{1}{t}\ln\big(\frac{\ln(2t)}{\omega} \big)$, defined for any $t\geq1$ and $t_* = \frac{1}{c} \ln\big(\frac{2\ln(1/(c\omega))}{\omega} \big)$. It suffices to show that $f(t_*) \leq c$. Indeed, since the function $f$ is decreasing, it implies that $f(t) < c$ for any $t > t_*$ which is the contrapositive of the claimed implication. Using the definition of $f$ and $t_*$ we have, \begin{align} f(t_*) \leq c &\iff \ln\left(\frac{\ln(2t_*)}{\omega} \right) \leq t_* c\\ &\iff t_* \leq \frac{1}{2(c\omega)^2}\\ &\iff \ln\left(\frac{2\ln(1/(c\omega))}{\omega} \right) \leq \frac{1}{2c\omega^2} \end{align} The last inequality is clearly true since $\ln(x) \leq \frac{x}{2}$ on $(0, \infty)$ and this proves our claim. \end{proof} \begin{lem}\label{lem:second_bandit_inequality} For $t\geq 1$, $s \geq e$, $c \in (0, 1]$, $0 < \omega \leq \delta \leq e^{-e}$, we have, \begin{align} \frac{1}{t}\ln\left(\frac{\ln(2t)}{\omega}\right) \geq \frac{c}{s}\ln\left(\frac{\ln (s)}{\delta}\right) \implies t \leq \frac{s}{c}\frac{\ln(\nicefrac{2}{\omega}) + \ln \ln (\nicefrac{1}{c\omega})}{\ln(\nicefrac{1}{\delta})}. \end{align} \end{lem} \begin{proof} \Cref{lem:first_bandit_inequality} immediately implies that \begin{align} \frac{c t}{s} &\leq \frac{\ln(\nicefrac{2}{\omega}) + \ln\left[\ln(s) + \ln(\nicefrac{1}{c\omega}) - \ln\ln(\nicefrac{\ln(s)}{\delta})\right]}{\ln(\nicefrac{1}{\delta}) + \ln \ln (s)}. \end{align} Using the fact that $\ln\ln(\nicefrac{\ln(s)}{\delta}) \geq 1$ and the following fact \begin{align} s \geq e & \implies \ln s -1 \geq 0 \\ & \implies \ln s -1 \le e(\ln s-1)\\ &\implies \ln s -1 \le (\ln s-1)\ln(\nicefrac{1}{c\omega}) \\ &\implies \ln s + \ln(\nicefrac{1}{c\omega}) -1 \le \ln s\ln(\nicefrac{1}{c\omega})\\ &\implies \ln s + \ln(\nicefrac{1}{c\omega}) -\ln\ln(\nicefrac{\ln(s)}{\delta}) \le \ln s\ln(\nicefrac{1}{c\omega}), \end{align} we have \begin{align} \frac{ct}{s} \leq \frac{\ln(\nicefrac{2}{\omega}) + \ln\ln(\nicefrac{1}{c\omega}) + \ln\ln s}{\ln(\nicefrac{1}{\delta}) + \ln \ln s} \end{align} We conclude by applying the inequality $a\geq b, x > 0 \implies \frac{x+a}{x+b} \leq \nicefrac{a}{b}$ with $a = \ln(\nicefrac{2}{\omega}) + \ln\ln(\nicefrac{1}{c\omega})$, $b = \ln(\nicefrac{1}{\delta})$ and $x = \ln \ln s$. \end{proof} \subsubsection{Main lemmas} Without loss of generality, we assume hereafter that the arms' parameters are ranked in decreasing order : $\theta_1 \geq \theta_2 \geq \dots, \theta_K$. We define the function\begin{align} U(n, \omega) = \frac{3.4\sigma}{\alpha}\sqrt{\frac1n \ln \left(\frac{\ln (2n)}{\omega}\right)}, \quad n \in \mathbb{N}^*,\ \omega \in (0, 1), \end{align} and the events \begin{align} \mathcal{E}_k(\omega) = \{ \forall n \geq n_0(\omega) \text{ it holds that } \lvert \widehat{\theta}_{k, n} - \theta_k \rvert \leq U(n, \omega) \}. \end{align} Note that, according to \Cref{MainThm1d}, $\mathbb{P}\big(\mathcal{E}_k(\omega)^\complement \big) = {O}(\omega)$. The proof of \Cref{thm:UBforBandits} is essentially the combination of two lemmas. The first lemma states that with high probability the number of times each sub-optimal arm is pulled is not too large. The second lemma shows that the algorithm indeed stops at some time and returns the best arm with high probability. \begin{lem}\label{lem:bandit_1} Let $\beta\in (0,\frac{2}{\sqrt{2}-1})$, $\delta \in (0, e^{-e})$ and $\varkappa = (2+\beta)^2(\nicefrac{3.4 \sigma}{\alpha})^2$. Then we have, with probability at least $1 - 11 \delta$ and any integer $n \geq 1$, \begin{align} \sum_{k=2}^K T_k(n) \leq n_0(\delta) (K-1) + 104\varkappa \mathbf{H}_1 \ln(\nicefrac{1}{\delta}) + \sum_{k=2}^K \varkappa \frac{\ln(2\max\{1, \ln(\varkappa/(\Delta_k^2\delta))\})}{\Delta_k^2} \end{align} \end{lem} \begin{proof} The proof is carried out in two steps. In the first step, we upper bound the number of pulls on events for which the rewards are well behaved. In the second step we resort on standard concentration arguments to show that the events considered in the first step happen with high probability. \textbf{Step 1.} Let $k > 1$. Assuming that $\mathcal{E}_1(\delta)$ and $\mathcal{E}_k(\omega)$ hold true and $I_n = k$, one has, for $n \geq K n_0(\delta)$ (i.e. after warm-up stage), \begin{align} \theta_k + U(T_k(n), \omega) + (1+\beta)U(T_k(n), \delta) &\geq \widehat{\theta}_{k, T_k(n)} + (1+\beta) U(T_k(n), \delta) &\text{($\mathcal{E}_k(\omega)$ holds)}\\ &\geq \widehat{\theta}_{1, T_1(n)} + (1+\beta)U(T_1(n), \delta) &\text{($I_n = k$)}\\ &\geq \theta_1. &\text{($\mathcal{E}_1(\delta)$ holds)} \end{align} Since the function $U$ is decreasing in its second argument, we have \begin{align} (2+\beta)U(T_k(n), \min(\omega, \delta)) \geq \Delta_k \coloneqq \theta_1 - \theta_k. \end{align} Setting $\varkappa = (2+\beta)^2(\nicefrac{3.4 \sigma}{\alpha})^2$ and using Lemma~\ref{lem:first_bandit_inequality} with $c = \frac{\Delta_k^2}{\varkappa}$, one obtains that, for $n \geq K n_0(\delta)$, if $\mathcal{E}_1(\delta)$ and $\mathcal{E}_i(\omega)$ hold true and $I_n = k$ then \begin{align} T_k(n) &\leq \frac{\varkappa}{\Delta_k^2}\ln\left(\frac{2\ln(\nicefrac{\varkappa}{\left(\Delta_k^2 \min(\omega, \delta)\right)})}{\min(\omega, \delta)} \right)\\ &\leq \tau_k + \frac{\varkappa}{\Delta_k^2} \ln\left(\frac{\ln(\nicefrac{e}{\omega})}{\omega} \right)\\ &\leq \tau_k + \frac{2\varkappa}{\Delta_k^2}\ln\left({1}/{\omega}\right). \end{align} with $\tau_k = \frac{\varkappa}{\Delta_k^2} \ln\left( ({2/\delta})\max\{1, \ln(\nicefrac{\varkappa}{\Delta_k^2\delta}) \} \right)$. Since $T_k(n)$ increases only when $k$ is pulled, the above argument shows that the following inequality is true for any time $n \geq 1$ : \begin{align}\label{eq:Tkn} T_k(n)\mathds{1}\{\mathcal{E}_1(\delta) \cap \mathcal{E}_k(\omega)\} \leq n_0(\delta) + \tau_k + \frac{2\varkappa}{\Delta_k^2}\ln\left({1}/{\omega}\right). \end{align} \begin{remark} Indeed, if arm $k$ is pulled at time $n \geq Kn_0(\delta)$ then \begin{align} T_k(n+1) - 1 = T_k(n) \leq \tau_k + \frac{2\varkappa}{\Delta_k^2}\ln({1}/{\omega}), \end{align} and if arm $k$ is pulled before time $Kn_0(\delta)$, i.e. during the warm-up stage, then \begin{align} T_k(n) \leq n_0(\delta) \leq n_0(\delta) + \tau_k + \frac{2\varkappa}{\Delta_k^2}\ln\left({1}/{\omega}\right). \end{align} \end{remark} \textbf{Step 2.} We define the random variable $\Omega_k \coloneqq \max\{\omega \in [0, 1] : \mathcal{E}_k(\omega) \text{ holds true}\}$. \Cref{MainThm1d} guarantees that it is well defined and that $\mathbb{P}(\Omega_k < \omega) \leq c \omega$ with $c=10.4$\footnote{\Cref{MainThm1d} gives a slightly tighter bound but we chose to loosen it for simplicity of the proof.}. Furthermore, one can rewrite \cref{eq:Tkn} as \begin{align} T_k(n) \mathds{1}\{\mathcal{E}_1(\delta)\} \leq n_0(\delta) + \tau_k + \frac{2\varkappa}{\Delta_k^2}\ln\left({1}/{\Omega_k}\right) \end{align} Therefore, for any $x>0$, \begin{align} \mathbb{P}\left( \sum_{k=2}^K T_k(n) > x + \sum_{k=2}^K (\tau_k + n_0(\delta)) \right) &\leq \mathbb{P}\left(\mathcal{E}_1(\delta)^\complement \right) \\ &\quad+ \mathbb{P}\left(\left\{\sum_{k=2}^K T_k(n) > x + \sum_{k=2}^K (\tau_k + n_0(\delta))\right\} \bigcap \mathcal{E}_1(\delta) \right)\\ &\leq c\delta +\mathbb{P}\left( \sum_{k=2}^K \frac{2\varkappa}{\Delta_k^2}\ln\left({1}/{\Omega_k}\right) > x \right) \end{align} Define the random variables $Z_k = \frac{2\varkappa}{\Delta_k^2} \ln\left({1}/{\Omega_k}\right)$, for $k \in [K]\backslash \{1\}$. Observe that these are independent non-negative random variables and since $\mathbb{P}(\Omega_k < \omega) \leq c \omega$, it holds that $\mathbb{P}(Z_k > x) \leq c\exp(-x/a_k)$ with $a_k = 2\varkappa/\Delta_k^2$. Observing that \begin{align} \mathbb{E}Z_k = \int_{0}^{+\infty} \mathbb{P}\left(Z_k > x\right)dx \leq c\int_{0}^{+\infty} e^{-x/a_k} = c a_k \end{align} and applying a basic concentration inequality for the sum of sub-exponential random variables (see \Cref{lem:sub_exponential_bound}), we have, \begin{align} \mathbb{P}\left(\sum_{k=2}^K(Z_k-ca_k)>z\right) &\leq \mathbb{P}\left(\sum_{k=2}^K(Z_k-\mathbb{E}Z_k)>z\right)\\ &\leq \exp\left(-\min\left\{\frac{z^2}{8 c \lVert a \rVert_2^2}, \frac{z}{4\lVert a \rVert_\infty}\right\} \right)\\ &\leq \exp\left(-\min\left\{\frac{z^2}{8c\lVert a \rVert_1^2}, \frac{z}{4\lVert a \rVert_1}\right\} \right). \end{align} Putting everything together with $z = 4 c \lVert a \rVert_1 \ln(1/\delta)$, $x = z + c\lVert a \rVert_1$ one obtains, for $n \geq 1$ \begin{align} \mathbb{P}\left(\sum_{k=2}^K T_k(n) > \sum_{k=2}^K\left(\frac{10\varkappa c\ln(1/\delta)}{\Delta_k^2} + \tau_k + n_0(\delta) \right) \right) \leq 11\delta \end{align} and the claim of the lemma follows. \end{proof} \begin{lem}\label{lem:bandit_2} Let $\beta \in (0, (\nicefrac{2}{\sqrt{2}-1})), \delta \in (0, 0.01)$ and $c_\beta = \big(\frac{2 + \beta}{\beta}\big)^2$. If \begin{align} \lambda \geq \frac{\varrho}{1- 10.4\delta - {\textstyle\sqrt{\delta^{\nicefrac14} \ln(1/\delta)}}},\quad \text{ with } \quad \varrho = c_\beta \frac{\ln\left( 2\ln(\nicefrac{c_\beta}{2\delta})/ \delta\right)}{\ln(\nicefrac{1}{\delta})}, \end{align} then, for all $k=2, \dots, K$ and $n=1, 2, \dots$ we have $T_k(n) < n_0(\delta) + \lambda \sum_{\ell \neq k} T_\ell(n)$ with probability at least $1 - 6 \sqrt{\delta}$. \end{lem} \begin{proof} Let $k > \ell$. Assuming that $\mathcal{E}_k(\omega)$ and $\mathcal{E}_\ell(\delta)$ hold true and that $I_n = k$, one has, for $n \geq Kn_0(\delta)$, \begin{align} \theta_k + U(T_k(n), \omega) + (1+\beta)U(T_k(n), \delta) &\geq \widehat{\theta}_{k, T_k(n)} + (1+\beta)U(T_k(n), \delta)\\ &\geq \widehat{\theta}_{\ell, T_\ell(n)} + (1+\beta)U(T_\ell(n), \delta)\\ &\geq \theta_\ell + \beta U(T_\ell(n), \delta) \end{align} This implies $(2+\beta)U(T_k(n), \min(\omega, \delta)) \geq \beta U(T_\ell(n), \delta)$. Applying \Cref{lem:second_bandit_inequality} with $c=2c_\beta^{-1}$ one obtains that if $\mathcal{E}_k(\omega)$ and $\mathcal{E}_\ell(\delta)$ hold true and $I_n = k$ then \begin{align}\label{eq:(7)} T_k(n) \leq c_\beta \frac{\ln\left( 2\ln(\nicefrac{c_\beta}{2\min(\omega, \delta)}) /\min(\omega, \delta)\right)}{\ln(\nicefrac{1}{\delta})} T_\ell(n). \end{align} Since $T_k(n)$ only increases when $k$ is played, then, for all $n\geq 1$, \begin{align} (T_k(n) - n_0(\delta))\mathds{1}\left(\mathcal{E}_k(\omega) \cap \mathcal{E}_\ell(\delta) \right) \leq c_\beta \frac{\ln\left( 2\ln(\nicefrac{c_\beta}{2\min(\omega, \delta)}) /\min(\omega, \delta)\right)}{\ln(\nicefrac{1}{\delta})} T_\ell(n). \end{align} Using \eqref{eq:(7)} with $\omega = \delta^{k-1}$ we see that \begin{align} \mathds{1}\{\mathcal{E}_k(\delta^{k-1})\} \frac{1}{k-1} \sum_{\ell=1}^{k-1} \mathds{1}\{\mathcal{E}_\ell(\delta)\} > 1-\alpha \implies (1-\alpha)(T_k(n) - n_0(\delta)) \leq \varrho \sum_{\ell \neq k}T_\ell(n). \end{align} The above implication leads to the following inequalities \begin{align} &\mathbb{P}\bigg(\exists (k, n) \in \{2, \dots, K\}\times \mathbb{N}^* : (1-\alpha)(T_k(n) - n_0(\delta)) \geq \varrho \sum_{\ell \neq k} T_\ell(n) \bigg) \\ & \qquad \leq \mathbb{P}\bigg( \exists k \in \{2, \dots, K\} : \mathds{1}\{\mathcal{E}_k(\delta^{k-1})\} \frac{1}{k-1} \sum_{\ell=1}^{k-1} \mathds{1}\{\mathcal{E}_\ell(\delta)\} \leq 1 - \alpha \bigg)\\ & \qquad \leq \sum_{k=2}^K \mathbb{P}\Big(\mathcal{E}_k(\delta^{k-1})^\complement \Big) + \sum_{k=2}^K \mathbb{P}\bigg(\frac{1}{k-1} \sum_{\ell=1}^{k-1} \mathds{1}\left(\mathcal{E}_\ell(\delta) \right) \leq 1 - c\delta - (\alpha - c\delta) \bigg). \end{align} Since $\mathbb{E}\mathds{1}\left(\mathcal{E}_\ell(\delta) \right) \geq 1 - c\delta$ with $c=10.4$, using \emph{separately} a union bound and Hoeffding's inequality, we get \begin{align} \mathbb{P}\bigg(\frac{1}{k-1} \sum_{\ell=1}^{k-1} \mathds{1}\left(\mathcal{E}_\ell(\delta) \right) \leq 1 - c \delta - (\alpha - c\delta) \bigg) \leq \min\big(c(k-1)\delta, \exp(-2(k-1)(\alpha - c\delta)^2\big). \end{align} Define $R=e^{-2\delta^{1/4}\ln(1/\delta)}$ and $j= \lceil \ln \{2\delta^{3/4}(1-R)\}/\ln R \rceil$. One can check that $1-R = 1-e^{2\delta^{1/4}\ln \delta} \ge 0.64\delta^{1/4} \ln(1/\delta)$, which leads to \begin{align} j -1 &\leq - \frac{\ln \{2 \delta^{3/4}(1-R)\}} {2\delta^{1/4}\ln(1/\delta)} \le -\frac{\ln \{1.28\delta \ln(1/\delta)\}} {2\delta^{1/4}\ln(1/\delta)} \le (1/2)\delta^{-1/4}. \end{align} Setting $\alpha = c\delta + \sqrt{\delta^{\nicefrac{1}{4}}\ln(1/\delta)}$, we have \begin{align} & \mathbb{P}\bigg(\exists (k, n) \in \{2, \dots, K\}\times \mathbb{N}^* : \big(1- c\delta - {\textstyle\sqrt{\delta^{\nicefrac14}\ln(1/\delta)}}\big) \big(T_k(n) - n_0(\delta)\big) \geq \varrho \sum_{\ell \neq k} T_\ell(n) \bigg) \\ &\leq \sum_{k=2}^K \left\{ c\delta^{k-1} + \min\big(c(k-1)\delta,e^{-2(k-1)\delta^{1/4}\ln(1/\delta)} \big)\right\}\\ &\leq c \frac{\delta}{1-\delta} + \frac{c\delta}{2}j^2 + \frac{R^j}{1-R} \leq 10.6 \delta + 5.2 \delta j^2 + 2 \delta^{3/4} \leq 6\sqrt{\delta}. \end{align} This completes the proof of the lemma. \end{proof} \subsubsection{Putting all lemmas together}\label{ssec6.3.3} Let $\nu$ be the confidence level from \Cref{thm:UBforBandits} and let $\delta$ satisfy the relation $\nu = 11\delta + 6\sqrt{\delta}$. Note that this implies $\sqrt{\delta} = (\sqrt{11\nu +9}-3)/11$, which is the value of $\delta$ given in \Cref{algo}. On the one hand, \Cref{lem:bandit_1} states that, with probability at least $1-11\delta$, the total number of times the suboptimal arms are sampled does not exceed $(K-1)n_0(\delta) + \varkappa\left(104\mathbf{H}_1 \ln(\nicefrac{1}{\delta}) + \mathbf{H}_2\right)$ where $\varkappa= ((2+\beta)3.4\sigma/\alpha)^2$. On the other hand, \Cref{lem:bandit_2} states that with probability at least $1-6\sqrt{\delta}$, if the parameter $\lambda$ is large enough, only the optimal arm will meet the stopping criterion and therefore, the number of pulls from the optimal arm is equal to $n_0(\delta) + \lambda \sum_{k \geq 2}T_k(n)$. Combining those two lemmas, we have that with probability at least $1-11\delta-6\sqrt{\delta}$, the optimal arm meets the stopping criterion and the total number of pulls does not exceed $(1+\lambda)K n_0(\delta) + (1+\lambda)\varkappa \left( 104\mathbf{H}_1 \ln(\nicefrac{1}{\delta}) + \mathbf{H}_2\right)$. \subsection{Proof of Theorem \ref{thm:LBforBandits}} Since $\tilde\phi$ is symmetric, the means of the two arms $\theta_1$ and $\theta_2$ coincide with the parameters of interest and so, the gap $\Delta$ coincides with the difference in means, i.e., $\Delta=|\theta_1-\theta_2|$. Therefore, finding the best arm amounts to finding the arm with the best mean and the result follows from \cite[Corollary 1]{jamieson2014lil}, which in turn is a consequence of \cite[Theorem 1]{farrell1964asymptotic} which we recall here for completeness \begin{theorem} \citealp[Theorem 1]{farrell1964asymptotic} Let $X_1, X_2,...$ be i.i.d. Gaussian random variables with unknown mean $\Delta\neq0$ and variance $1$. Consider testing whether $\Delta > 0$ or $\Delta < 0$. Let $Y \in \{-1, 1\}$ be the decision of any such test based on $T$ samples (possibly a random number) and let $\delta \in (0, 1/2)$. If $\sup_{\Delta \neq 0}\mathbb{P}\left(Y\neq sign(\Delta)\right)\leq \delta$, then \begin{align} \limsup\limits_{\Delta \xrightarrow{} 0} \frac{\mathbb{E}_\delta[T]}{\delta^{-2}\ln \ln \Delta^{-2}} \geq 2 - 4 \delta. \end{align} \end{theorem} \section{Introduction}\label{sec:1} \input{1.Introduction.tex} \section{Uniform law of iterated logarithm for \texorpdfstring{$M$}{M}-estimators}\label{sec:2} \input{2.UnivariateLIL.tex} \section{Uniform LIL for \texorpdfstring{$M$}{M}-estimators of a multidimensional parameter} \input{3.MultivariateLIL.tex}\label{sec:3} \section{Application to Bandits} \input{4.Bandits.tex}\label{sec:4} \section{Conclusion and further work}\label{sec:5} \input{5.Conclusion} \section{Proofs of postponed lemmas}\label{sec:postponed_proofs} \begin{myproof}[Proof of \Cref{lem:1}] Let $k$ be a positive integer and let $n \in I_k$. We define the vectors \begin{align} \boldsymbol{v}_n^* =\frac{\boldsymbol{\theta}^* - \bm{\widehat{\theta}}_n}{{\boldsymbol a}^\top(\boldsymbol{\theta}^* - \bm{\widehat{\theta}}_n)} \in \mathcal{V} \quad\text{and}\quad \bm{\Bar{\theta}}_n = \boldsymbol{\theta}^* - t_{k+1} \boldsymbol{v}^*_n. \end{align} Since the sequence $(t(n))_n$ is non-increasing, if $\mathcal{A}_n$ is realized then $p_n=\frac{t_{k+1}}{{\boldsymbol a}^\top(\boldsymbol{\theta}^* - \bm{\widehat{\theta}}_n)} \in (0, 1)$. Furthermore, since $\widehat{\Phi}_n$ is a convex function (\Cref{as:convex_lipschitz_phi,as:convex_penalty}) we have, \begin{align} \label{eq:bar_theta} \inf_{w\in t_{k+1}\mathcal{V}} \widehat{\Phi}_n(\boldsymbol{\theta}^*-\boldsymbol{w}) &\le \widehat{\Phi}_n(\bm{\Bar{\theta}}_n)=\widehat{\Phi}_n(p_n\bm{\widehat{\theta}}_n+(1-p_n)\boldsymbol{\theta}^*)\\ &\leq p_n \widehat{\Phi}_n(\bm{\widehat{\theta}}_n) + \left(1 - p_n\right) \widehat{\Phi}_n(\boldsymbol{\theta}^*)\le \widehat{\Phi}_n(\boldsymbol{\theta}^*). \end{align} Therefore, on the event $\mathcal{A}_n$, \begin{align} \sup_{w\in t_{k+1} \mathcal{V}} \left[\widehat{\Phi}_n(\boldsymbol{\theta}^*) - \widehat{\Phi}_n(\boldsymbol{\theta}^*-\boldsymbol{w})\right] \geq 0. \end{align} We conclude the proof by noting that the curvature of the population risk (\Cref{as:Phi_strongly_convex_everywhere}) implies that for any vector ${\boldsymbol w} \in \mathbb{R}^d$, \begin{align} \mathbb{E}\left[\widehat{\Phi}_n(\boldsymbol{\theta}^*) - \widehat{\Phi}_n(\boldsymbol{\theta}^*-\boldsymbol{w})\right] &= \Phi_n(\boldsymbol{\theta}^*) - \Phi_n(\boldsymbol{\theta}^* - \boldsymbol w) \le - \frac{\alpha_n \lVert {\boldsymbol w} \rVert^2}{2}. \end{align} \end{myproof} \begin{myproof}[Proof of \Cref{lem:symmetrization_contraction}] A modified version\footnote{The version we use here can be found, for instance, in \citep[Eq.\ (2.3)]{lecue2014}.} of the symmetrization inequality yields \begin{align} \mathbb{E}\left[\sup_{w \in t \mathcal{V}} \exp\left\{ \lambda \left( S_m(w) - \alpha m \lVert w \rVert_2^2 \right) \right\} \right] \leq \mathbb{E} \left[\sup_{{\boldsymbol w} \in t\mathcal{V}}\exp\left\{2\lambda(S'_{m}({\boldsymbol w})- \alpha m\| {\boldsymbol w}\|_2^2 )\right\}\right], \end{align} where $S'_{m}({\boldsymbol w})$ is the symmetrized version of $S_{m}({\boldsymbol w})$, defined by \begin{align} S'_{m}({\boldsymbol w}) =\sum_{i=1}^{m} \varepsilon_i \left\{ \phi(Y_i,\boldsymbol{X}_i^\top \boldsymbol{\theta}^*) - \phi(Y_i,\boldsymbol{X}_i^\top (\boldsymbol{\theta}^*-\boldsymbol{w})) \right\}. \end{align} We define the set $R = \left\{ t \mathbf{X}^\top\boldsymbol{v}: {\boldsymbol v} \in \mathcal{V} \right\}\subset \mathbb{R}^m$ and the functions $\varphi_i:\mathbb{R}\to\mathbb{R}$ by \begin{align} \varphi_i : r \mapsto \left[ \phi(Y_i, \boldsymbol{X}_i^\top \boldsymbol{\theta}^*) - \phi(Y_i, \boldsymbol{X}_i^\top \boldsymbol{\theta}^* - r) \right]/L, \quad i=1, \dots, m. \end{align} These functions $\varphi_i$ are contractions (\Cref{as:convex_lipschitz_phi}) such that $\varphi_i(0)=0$. The contraction principle \cite[Theorem 2.2]{koltchinskii} gives \begin{align} \mathbb{E} \left[\sup_{{\boldsymbol w} \in t\mathcal{V}}\exp\left\{2\lambda(S'_{m}({\boldsymbol w})- \alpha m\| {\boldsymbol w}\|_2^2 )\right\}\right]\le \mathbb{E} \left[\sup_{{\boldsymbol w} \in t\mathcal{V}}\exp\left\{2\lambda(L \boldsymbol{w}^\top \mathbf{X}\boldsymbol{\varepsilon}- \alpha m\| {\boldsymbol w}\|_2^2 )\right\}\right]. \end{align} Setting $t'= (\nicefrac{2m\alpha}{L})t$ and $\lambda'=(\nicefrac{ L^2}{m\alpha})\lambda$, we arrive at \begin{align}\label{exp:5} \mathbb{E} \left[\sup_{{\boldsymbol w} \in t\mathcal{V}}\exp\left\{2\lambda(S'_{m}({\boldsymbol w})- \alpha m\| {\boldsymbol w}\|_2^2 )\right\}\right]\le \mathbb{E} \left[\sup_{w \in t'\mathcal{V}}\exp\left\{\lambda'( \boldsymbol{w}^\top \mathbf{X}\boldsymbol{\varepsilon}- \lVert {\boldsymbol w}\rVert_2^2/2 )\right\}\right]. \end{align} Finally, since the positive real numbers $\lambda$ and $\lambda'$ are positively proportional, taking the infimum over all positive $\lambda$ is exactly the same as taking the infimum over all positive $\lambda'$. \end{myproof} \begin{lem}\label{lem:Xepsilon_exp_ineq} Let $\mathbf{X}$ be a deterministic $d \times m$ matrix and $\boldsymbol{\varepsilon} = (\varepsilon_1, \dots, \varepsilon_m)$ a $m$-dimensional vector with i.i.d.\ Rademacher entries. As soon as $\lVert \mathbf{X} \rVert_F^2 \leq \nicefrac{1}{8}$, we have \begin{align} \mathbb{E}\left[ \exp\left\{ \lVert \mathbf{X} \boldsymbol\varepsilon \rVert_2^2 \right\} \right] \leq \exp\left\{ 10 \lVert \mathbf{X} \rVert_F^2 \right\}. \end{align} \end{lem} \begin{myproof}[Proof of \Cref{lem:Xepsilon_exp_ineq}] Using the fact that for any positive random variable $\eta$, its expectation can be written as $\mathbb{E}[\eta] = \int_0^\infty \mathbb{P}(\eta >z )\,dz$, we get \begin{align} \mathbb{E}\left[e^{\lVert \mathbf{X} \boldsymbol{\varepsilon} \rVert_2^2} \right] &\leq e^{2 \lVert \mathbf{X} \rVert_F^2} \mathbb{E}\left[e^{2\left( \lVert \mathbf{X} \boldsymbol{\varepsilon} \rVert_2 - \lVert \mathbf{X} \rVert_F \right)_{+}^2} \right] \\ &\leq e^{2 \lVert \boldsymbol{X} \rVert_F^2} \bigg(1 + \int_{0}^{+\infty} \mathbb{P}\Big( \lVert \mathbf{X} \boldsymbol{\varepsilon} \rVert_2 \geq \lVert \mathbf{X} \rVert_F + \sqrt{(\nicefrac{1}{2}) \ln(1+z)} \Big)dz \bigg) \end{align} We apply the result from \cite[Example 6.3]{boucheron2013concentration} on the variables $\varepsilon_1 \boldsymbol{X}_1, \dots, \varepsilon_m \boldsymbol{X}_m$ which are independent zero-mean random variable : setting $c_i = 2 \lVert \boldsymbol{X}_i \rVert_2$, we have $\nu = \lVert \mathbf{X} \rVert_F^2$ and therefore, for any $z >0$, \begin{align} \mathbb{P}\Big( \lVert \mathbf{X} \boldsymbol{\varepsilon} \rVert_2 \geq \lVert \mathbf{X} \rVert_F + \sqrt{(\nicefrac{1}{2}) \ln(1+z)} \Big) \leq \exp\left\{-\frac{\ln(1+z)}{4\lVert \mathbf{X}\rVert_F^2}\right\} = (1+z)^{-\nicefrac{1}{4\lVert \mathbf{X} \rVert_F^2}}\,.\label{exp:3} \end{align} Assuming that $\lVert\textbf{X} \rVert_F^2 < \nicefrac{1}{4}$, we can plug this in inequality \eqref{exp:3} to get \begin{align} \mathbb{E}\left[e^{\lVert \mathbf{X} \boldsymbol{\varepsilon} \rVert_2^2} \right] &\leq e^{2 \lVert \mathbf{X} \rVert_F^2} \left(1 + \frac{4 \lVert \mathbf{X} \rVert_F^2}{1 - 4 \lVert \mathbf{X} \rVert_F^2}\right)\\ &\leq \exp\left\{ 2 \lVert \mathbf{X} \rVert_F^2 + \frac{4 \lVert \mathbf{X} \rVert_F^2}{1 - 4 \lVert \mathbf{X} \rVert_F^2} \right\} \end{align} The RHS of the inequality can be large when $\lVert \mathbf{X} \rVert_F^2$ is close to $\nicefrac{1}{4}$. Restricting $\lVert \mathbf{X} \rVert_F^2 \leq \nicefrac{1}{8}$ we arrive at the desired inequality $\mathbb{E}\left[e^{\lVert \mathbf{X} \boldsymbol{\varepsilon} \rVert_2^2} \right] \leq \exp\left\{ 10 \lVert \mathbf{X} \rVert_F^2 \right\}$. \end{myproof} \begin{myproof}[Proof of \Cref{lem:first_expectation}] Let us define $\Pi_{{\boldsymbol a}^\perp} = \mathbf{I}_d - \boldsymbol{a} {\boldsymbol a}^\top$ to be the projection matrix onto the orthogonal complement of the vector $a$ and set \begin{align} \boldsymbol{w}_* = \Pi_{{\boldsymbol a}^\perp} \mathbf{X}\boldsymbol{\varepsilon} + s \boldsymbol{a}. \end{align} One checks that $\boldsymbol{w}_*\in s\mathcal{V}$ is the maximizer of the quadratic function $G({\boldsymbol w}) = \boldsymbol{w}^\top \mathbf{X}\boldsymbol{\varepsilon} - \lVert {\boldsymbol w} \rVert^2/2$ over the set $s\mathcal{V}$. In addition, \begin{align} G(\boldsymbol{w}_*) = \boldsymbol{w}_*^\top\mathbf{X}\boldsymbol{\varepsilon}- \| {\boldsymbol w}_*\|_2^2/2 = \frac{1}{2}\left(\big\| \Pi_{{\boldsymbol a}^\perp} \mathbf{X}\boldsymbol{\varepsilon}\big\|_2^2 + 2s {\boldsymbol a}^\top \mathbf{X}\boldsymbol{\varepsilon}- s^2\right). \end{align} Denoting by $T(\mu)$ the left hand side of \eqref{exp:1}, we arrive at \begin{align} T(\mu) &\le e^{-\mu s^2/2} \mathbb{E}\big[\exp\big\{(\mu \big\|\Pi_{{\boldsymbol a}^\perp} \mathbf{X}\boldsymbol\varepsilon\big\|_2^2/2 + \mu s {\boldsymbol a}^\top\mathbf{X}\boldsymbol\varepsilon \big\}\big]. \end{align} The fact that $\Pi_{{\boldsymbol a}^\perp}$ is a contraction and the Cauchy-Schwarz inequality imply \begin{align} T(\mu) &\le e^{-\mu s^2/2}\big( \mathbb{E}\big[\exp\big\{\mu \|\mathbf{X}\boldsymbol\varepsilon\|_2^2 \big\}\big] \mathbb{E}\big[\exp\big\{2\mu s {\boldsymbol a}^\top\mathbf{X}\boldsymbol\varepsilon \big\}\big]\big)^{1/2}. \label{exp:2} \end{align} We bound separately the two last expectations. For the first one, since ${\mu}\lVert \mathbf{X}\rVert_F^2 \leq {\mu} m B^2 \leq \nicefrac{1}{8}$, we can apply~\Cref{lem:Xepsilon_exp_ineq}, conditionally to $\mathbf{X}$ and then integrate w.r.t.\ $\mathbf{X}$, to get \begin{align} \mathbb{E}\big[\exp\big\{{\mu}\lVert \mathbf{X}\boldsymbol\varepsilon \rVert_2^2 \big\} \big] \leq \mathbb{E} \big[ \exp\big\{ 10\mu \lVert \mathbf{X} \rVert_F^2 \big\} \big] \leq \exp\big\{ 10 m \mu B^2 \big\}. \end{align} We now bound the second expectation in the right-hand side of \eqref{exp:2}. Using the fact that $\varepsilon_{1:m}$ are i.i.d. Rademacher random variables independent from $\mathbf{X}$, as well as the inequality $\cosh(x)\le e^{x^2/2}$, we arrive at \begin{align} \mathbb{E}\big[\exp\big\{(2\mu s) {\boldsymbol a}^\top \mathbf{X}\boldsymbol\varepsilon \big\}\big] \le \mathbb{E}\big[\exp\big\{2(\mu s)^2 \|\mathbf{X}^\top{\boldsymbol a}\|_2^2\big\}\big] \le \exp\big\{ 2(\mu s)^2 m B_{{\boldsymbol a}^\top \boldsymbol{X}}^2 \big\}. \end{align} Grouping the bounds on these two expectations we obtain the stated inequality. \end{myproof} \paragraph{Bounding the sum of random variables with sub-exponential right tails}\label{par:sub_exponential} \begin{lem}\label{lem:sub_exponential_bound} Let $X_1, \dots, X_n$ be independent, non-negative, random variables such that there exists positives constants $c$ and $a_1, \dots, a_n$ such that \begin{align} \mathbb{P}\left(X_i > x\right) \leq c e^{-\nicefrac{x}{a_i}}, \qquad x > 0, i=1, \dots, n. \end{align} Then, for any real positive $t$, \begin{align} \mathbb{P}\left(\sum_{i=1}^n (X_i - \mathbb{E}X_i) > t\right) &\leq \exp\left(- \min\left(\frac{t^2}{8\lVert a\rVert_2^2}, \frac{t}{4\lVert a \rVert_{\infty}}\right) \right).\\ \end{align} \end{lem} \begin{myproof} Defining $\psi_i(\lambda) \coloneqq \log \mathbb{E}e^{\lambda\left(X_i - \mathbb{E}X_i\right)}, i=1,\dots,n$, Markov inequality and the independence hypothesis give \begin{align}\label{eq:chernoff} \mathbb{P}\left(\sum_{i=1}^n (X_i - \mathbb{E}X_i) > t\right) \leq \inf_{\lambda > 0} e^{-\lambda t}\prod_{i=1}^n e^{\psi_i(\lambda)}. \end{align} Using the inequality $\ln u \leq u - 1$ valid for any positive real $u$, we have \begin{align} \psi_i(\lambda) \coloneqq \ln \mathbb{E}e^{\lambda X_i} - \lambda \mathbb{E}X_i \leq \mathbb{E}\left[e^{\lambda X_i} - \lambda X_i - 1 \right]. \end{align} Let $\phi(u) = e^u - u -1$. The monotone convergence theorem guarantees that for any $\lambda > 0$, \begin{align} \mathbb{E}\phi(\lambda X_i) = \sum_{p \geq 2} \frac{\lambda^p}{p!}\mathbb{E}X_i^p. \end{align} Since the $X_i$'s are non-negative, we have, for any integer $p\geq2$ and for any index $i=1,\dots,n$, \begin{align} \mathbb{E}X_i^p &= \int_{0}^{+\infty} \mathbb{P}\left( X_i > t^{1/p}\right) dt \leq c p \int_0^{+\infty} t^{p-1} e^{-\nicefrac{t}{a_i}}dt = c a_i^p p!. \end{align} Therefore, for any $\lambda \in (0, \nicefrac{1}{2a_i})$ \begin{align}\label{eq:log_MFG} \psi_i(\lambda) \leq \mathbb{E}\phi(\lambda X_i) \leq 2c(\lambda a_i)^2 \end{align} Plugging \eqref{eq:log_MFG} into \eqref{eq:chernoff} yields \begin{align} \mathbb{P}\left(\sum_{i=1}^n (X_i - \mathbb{E}X_i) > t\right) \leq \inf_{\lambda \in (0, \nicefrac{1}{2a_i})} \exp\left( 2c\lVert a\rVert_2^2\lambda^2 - \lambda t \right). \end{align} The minimum is attained in $\lambda^* = \min\left( \frac{t}{4c \lVert a \rVert_2^2}, \frac{1}{2 \lVert a\rVert_{\infty}} \right)$ and yields the stated upper bound \begin{align} \mathbb{P}\left(\sum_{i=1}^n (X_i - \mathbb{E}X_i) > t\right) &\leq \exp\left(- \min\left(\frac{t^2}{8\lVert a\rVert_2^2}, \frac{t}{4\lVert a \rVert_{\infty}}\right) \right).\\ \end{align} \end{myproof}
\section{Introduction} X-ray Computed Tomography (CT) is one of the most powerful clinical imaging imaging tools, delivering high-quality images in a fast and cost effective manner. However, the X-ray is harmful to the human body, so many studies has been conducted to develop methods that reduce the X-ray dose. Specifically, X-ray doses can be reduced by reducing the number of photons, projection views or the size of the field-of-view of X-rays. Among these, the CT technique for reducing the field-of-view of X-ray is called interior tomography. Interior tomography is useful when the region-of-interest (ROI) within a patient's body is small (such as heart), because interior tomography aims to obtain an ROI image by irradiating only the ROI with x-rays. Interior tomography not only can dramatically reduce the X-ray dose, but also has cost benefits by using a small-sized detector. However, the use of an analytic CT reconstruction algorithm generally produces images with severe artifacts due to the transverse directional projection truncation. Sinogram extrapolation is a simple approximation method to reduce the artifacts. However, sinogram extrapolation method still generates biased CT number in the reconstructed image. Recently, Katsevich et al \cite{katsevich2012stability} proved the general uniqueness results for the interior problem and provided stability estimates. Using the total variation (TV) penalty, the authors in \cite{yu2009compressed} showed that a unique reconstruction is possible if the images are piecewise smooth. In a series of papers \cite{ward2015interior,lee2015interior}, our group has shown that a generalized L-spline along a collection of chord lines passing through the ROI can be uniquely recovered \cite{ward2015interior}; and we further substantiated that the high frequency signal can be recovered analytically thanks to the Bedrosian identify, whereas the computationally expensive iterative reconstruction need only be performed to reconstruct the low frequency part of the signal after downsampling \cite{lee2015interior}. While this approach significantly reduces the computational complexity of the interior reconstruction, the computational complexity of existing iterative reconstruction algorithms prohibits their routine clinical use. In recent years, deep learning algorithms using convolutional neural network (CNN) have been successfully used for low-dose CT \cite{kang2017deep, chen2017low}, sparse view CT \cite{han2016deep,jin2017deep}, etc. However, the more we have observed impressive empirical results in CT problems, the more unanswered questions we encounter. In particular, one of the most critical questions for biomedical applications is whether a deep learning-based CT does create any artificial structures that may mislead radiologists in their clinical decision. Fortunately, in a recent theory of {\em deep convolutional framelets} \cite{ye2017deep}, we showed that the success of deep learning is not from a magical power of a black-box, but rather comes from the power of a novel signal representation using non-local basis combined with data-driven local basis. Thus, the deep network is indeed a natural extension of classical signal representation theory such as wavelets, frames, etc; so rather than creating new informations, it attempts to extract the most information out of the the input data using the optimal signal representation. Inspired these findings, here we propose a deep learning framework for interior tomography problem. Specifically, we demonstrate that the interior tomography problem can be formulated as a reconstruction problem in an end-to-end manner under the constraints that remove the null space signal components of the truncated Radon transform. Numerical results confirmed the proposed deep learning architecture outperforms the existing interior tomography methods in image quality and reconstruction time. \section{Theory} \subsection{Problem Formulation} Here, we consider 2-D interior tomography problem and follow the notation in \cite{ward2015interior}. The variable ${\boldsymbol{\theta}}$ denotes a vector on the unit sphere ${\mathbb S}\in {\mathbb R}^2$. The collection of vectors that are orthogonal to ${\boldsymbol{\theta}}$ is denoted as $${\boldsymbol{\theta}}^\perp=\{{\mathbf y} \in {\mathbb R}^2~:~{\mathbf y}\cdot{\boldsymbol{\theta}} = 0 \}.$$ We refer to real-valued functions in the spatial domain as images and denote them as $f({\mathbf x})$ for ${\mathbf x} \in {\mathbb R}^2$. We denote the Radon transform of an image $f$ as \begin{eqnarray} {\mathcal R} f({\boldsymbol{\theta}},s):= \int_{{\boldsymbol{\theta}}^\perp} f(s{\boldsymbol{\theta}}+{\mathbf y}) d{\mathbf y} \end{eqnarray} where $s\in {\mathbb R}$ and ${\boldsymbol{\theta}} \in {\mathbb S}$. The local Radon transform for the truncated field-of-view is the restriction of ${\mathcal R} f$ to the region $\{({\boldsymbol{\theta}},s)~:~|s|<\mu \} ,$ which is denoted as ${\mathcal T} _\mu {\mathcal R} f$. Then, the interior reconstruction is to find the unknown $f({\mathbf x})$ within the ROI from ${\mathcal T}_\mu{\mathcal R} f$. \begin{figure}[!hbt] \centering \includegraphics[width=5cm]{coordinate.png} \caption{The coordinate system for interior tomography.} \label{fig:coordinate} \end{figure} \subsection{Null Space of Truncated Radon Transform} The main technical difficulty of the interior reconstruction is the existence of the null space \cite{ward2015interior, katsevich2012finite}. To analyze the null space, we follow the mathematical analysis in \cite{ward2015interior}. Specifically, the analytic inversion of ${\mathcal T}_\mu{\mathcal R} f$ can be equivalently represented using the differentiated backprojection followed by the truncated Hilbert transform along the chord lines, so we analyze the interior reconstruction problem to take advantages of this. More specifically, if the unit vector ${\mathbf e}\in{\mathbb R}^2$ along the chord line is set as a coordinate axis, then we can find the unit vector ${\mathbf e}^\perp\in {\mathbb R}^2$ such that $V=[{\mathbf e},{\mathbf e}^\perp]$ consists of the basis for the local coordinate system and $(u,v) \in {\mathbb R}^2$ denotes its coordinate value (see Fig.~\ref{fig:coordinate}). We further define 1-D index set parameterized by the $v$: $$I_\mu(v) := \{u' \in {\mathbb R} ~|~\sqrt{(u')^2+v^2} \leq \mu \}.$$ Then, the null space of the ${\mathcal T}_\mu {\mathcal R} f$ is given by \cite{ward2015interior,lee2015interior}: \begin{equation}\label{eq:null} {\mathcal N}_\mu:= \left\{ g ~|~ g(u,v)= -\int_{u'\notin I_\mu(v)} \frac{du'}{\pi(u-u')}\psi(u',v)\right\} \notag\\ \end{equation} for some functions $\psi(u,v)$. A typical example of the null space image $g$ is illustrated in Fig.~\ref{fig:null}. This is often called as the cupping artifact. The cupping artifacts reduce contrast and interfere with clinical diagnosis. \begin{figure}[!bt] \centering \includegraphics[width=7cm]{cupping_artifact.png} \caption{Decomposition of the analytic reconstruction into null space component and the true image.} \label{fig:null} \end{figure} Note that the null space signal $g\in {\mathcal N}_\mu$ is differentiable in any order due to the removal of the origin in the integrand. Accordingly, an interior reconstruction algorithm needs an appropriate regularization term that suppresses $g\in {\mathcal N}_\mu$ by exploiting this. Specifically, one could find an analysis transform $\mathrm{L}$ such that its null space ${\mathcal N}_{\mathrm{L}}$ is composed of entire function, and use it for an analysis-based regularization term. For example, the regularization using TV \cite{yu2009compressed} and L-spline model \cite{ward2015interior,lee2015interior} correspond to this. The main result on the perfect reconstruction in \cite{ward2015interior} is then stated as follows. If the null space component $g \in {\mathcal N}_\mu$ is equivalent to a signal $h\in {\mathcal N}_{\mathrm{L}}$ within the ROI, then $g$ is identically zero due to the characterization of Hilbert transform pairs as boundary values of analytic functions on the upper half of the complex plane \cite{ward2015interior}; so TV or L-spline regularization provides the unique solution. \subsection{CNN-based Null Space Removal} Instead of designing a linear operator ${\mathrm{L}}$ such that the common null space of ${\mathcal N}_\mu$ and ${\mathcal N}_{\mathrm{L}}$ to be zero, we can design a frame ${\mathcal W}$ and its dual $\tilde {\mathcal W}$ such that $\tilde {\mathcal W}^\top {\mathcal W}=I$ and $\tilde {\mathcal W}^\top S_\lambda {\mathcal W} (f^*+g) = f^*$ for all $g \in {\mathcal N}_\mu$ and the ground-truth image $f^*$. This frame-based regularization is also an active field of research for image denoising, inpainting, etc \cite{cai2008framelet}. One of the most important contributions of the deep convolutional framelet theory \cite{ye2017deep} is that ${\mathcal W}$ and $\tilde {\mathcal W}^\top$ correspond to the encoder and decoder structure of a CNN, respectively, and the shrinkage operator $S_\lambda$ emerges by controlling the number of filter channels and nonlinearities. Accordingly, a convolutional neural network represented by ${\mathcal Q} =\tilde {\mathcal W}^\top S_\lambda {\mathcal W}$ can be designed such that \begin{eqnarray}\label{eq:Qc2} {\mathcal Q}(f^*+g) = f^* ,\quad \forall g \in {\mathcal N}_\mu. \end{eqnarray} Then, our interior tomography algorithm is formulated to find the solution $f$ for the following problem: \begin{eqnarray}\label{eq:constraint} y = {\mathcal T}_\mu{\mathcal R} f ,\quad {\mathcal Q} f = f^* \ , \end{eqnarray} where $f^*$ denotes the ground-truth data available for training data, and ${\mathcal Q}$ denotes the CNN satisfying \eqref{eq:Qc2}. Now, by defining ${\mathcal M}$ as a right-inverse of ${\mathcal T}_\mu {\mathcal R}$, i.e. $({\mathcal T}_\mu{\mathcal R}){\mathcal M} y =y, \forall y$, we have $${\mathcal M} y = f^*+g$$ for some $g\in {\mathcal N}_\mu$, since the right inverse is not unique due to the existence of the null space. See Fig.~\ref{fig:null} for the decomposition of ${\mathcal M} y$. Thus, ${\mathcal M} y$ is a feasible solution for \eqref{eq:constraint}, since \begin{eqnarray}\label{eq:Q2} {\mathcal Q} {\mathcal M} y = {\mathcal Q} \left(f^*+g\right) = f^*, \end{eqnarray} and the data fidelity constraint is automatically satisfied due to the definition of the right inverse. Therefore, the neural network training problem to satisfy \eqref{eq:Q2} can be equivalently represented by \begin{eqnarray}\label{eq:opt} \min_{{\mathcal Q}} \sum_{i=1}^N\|f_i^* - {\mathcal Q} {\mathcal M} y_i \|^2 \end{eqnarray} where $\{(f_i^*,y_i)\}_{i=1}^N$ denotes the training data set composed of ground-truth image an its truncated projection. A typical example of the right inverse for the truncated Radon transform is the inverse Radon transform, which can be implemented by the filtered backprojection (FBP) algorithm. Thus, ${\mathcal M} y_i$ in \eqref{eq:opt} can be implemented using the FBP. After the neural network ${\mathcal Q}$ is learned, the inference can be done simply by processing FBP reconstruction image from a truncated radon data $y_t$ using the neural network ${\mathcal Q}$, i.e. $\hat f ={\mathcal Q}{\mathcal M} y_t$. The details of the network ${\mathcal Q}$ and the training procedure will be discussed in the following section. \section{Method} \subsection{Data Set} Ten subject data sets from AAPM Low-Dose CT Grand Challenge were used in this paper. Out of ten sets, eight sets were used for network training. The other two sets were used for validation and test, respectively. The provided data sets were originally acquired in helical CT, and were rebinned from the helical CT to $360^{\circ}$ angular scan fan-beam CT. The $512 \times 512$ size artifact free CT images are reconstructed from the rebinned fan-beam CT data using FBP algorithm. From the CT image, sinogram is numerically obtained using a forward projection operator. The number of detector in numerical experiment is 736. Only 350 detectors in the middle of 736 detectors are used to simulate the truncated projection data. Using this, we reconstruct $256 \times 256$ ROI images. \subsection{Network Architecture} \begin{figure}[t] \centering \includegraphics[width=8.5cm]{network_architecture.png} \caption{The proposed deep learning architecture for inteior tomography.} \label{fig:architecture} \end{figure} The proposed network is shown in Fig. \ref{fig:architecture}. The first layer is the FBP layer that reconstructs the cupping-artifact corrupted images from the truncated projection data, which is followed by a modified architecture of U-Net \cite{ronneberger2015u}. A yellow arrow in Fig. \ref{fig:architecture} is the basic operator and consists of $3 \times 3$ convolutions followed by a rectified linear unit and batch normalization. The yellow arrows between the seperate blocks at every stage are omitted. A red arrow is a $2 \times 2$ average pooling operator and is located between the stages. The average pooling operator doubles the number of channels and reduces the size of the layers by four. Conversely, a blue arrow is $2 \times 2$ average unpooling operator, reducing the number of channels by half and increasing the size of the layer by four. A violet arrow is the skip and concatenation operator. A green arrow is the simple $1 \times 1$ convolution operator generating the final reconstruction image. \subsection{Network training} The proposed network was implemented using MatConvNet toolbox in MATLAB R2015a environment. Processing units used in this research are Intel Core i7-7700 central processing unit and GTX 1080-Ti graphics processing unit. Stochastic gradient reduction was used to train the network. As shown in Fig.~\ref{fig:architecture}, the inputs of the network are the truncated projection data, i.e. $y_i$. The target data $f_i$ corresponds to the 256 $\times$ 256 size center ROI image cropped from the ground-truth data. The number of epochs was 300. The initial learning rate was $10^{-3}$, which gradually dropped to $10^{-5}$. The regularization parameter was $10^{-4}$. Training time lasted about 24 hours. \section{Results} We compared the proposed method with existing iterative methods such as the TV penalized reconstruction \cite{yu2009compressed} and the L-spline based multi-scale regularization method by Lee et al \cite{lee2015interior}. Fig. \ref{recon_results} shows the ground-truth images and reconstruction results by FBP, TV, Lee method \cite{lee2015interior} and the proposed method. The graphs in the bottom row in Fig. \ref{recon_results} are the cross-section view along the white lines on the each images. Fig. \ref{difference} shows the magnitude of difference images between the ground truth image and reconstruction results of each method. The reconstructed images and the cut-view graphs in Fig. \ref{recon_results} show that the proposed method results have more fine details than the other methods. The error images in Fig. \ref{difference} confirm that the high frequency components such as edges and textures are better restored in the proposed method than other method. We also calculated the average values of the peak signal-to-noise ratio (PSNR) and the normalized mean square error (NMSE) in Table ~\ref{tab_quality}. The proposed method achieved the highest value in PSNR and the lowest value in NMSE with about $7\sim 10$ dB improvement. The computational times for TV, Lee method \cite{lee2015interior} and the proposed method were 1.8272s, 0.3438s, and 0.0532s, respectively, for each slice reconstruction. The processing speed of the proposed method is about 34 times faster than the TV method and 6 times faster than Lee method \cite{lee2015interior}. \begin{figure}[t] \centering \includegraphics[width=8.cm]{recon_results.png} \caption{Reconstruction images by the cone-beam simulation. The last row shows the cut-view plots of the white lines on the images. The number written in the images is the PSNR value in dB.} \label{recon_results} \end{figure} \begin{figure}[t] \centering \includegraphics[width=7.cm]{difference_image.png} \caption{Error images from (ii) TV, (iii) Lee method \cite{lee2015interior}, and (iv) the proposed method. (i) is ground truth image. The number written in the images is the NMSE value.} \label{difference} \end{figure} \begin{table}[h!] \caption{Quantitative comparison of various methods.} \vspace*{-0.5cm} \label{tab_quality} \begin{center} \begin{tabular}{c|cccc} \hlin & FBP & TV & Lee method \cite{lee2015interior} & Proposed\\ \hline\hline PSNR [dB] & 9.4099 & 30.2004 & 27.0344 & 37.4600 \\ NMSE & 8.2941e-1 & 6.9137e-3 & 1.4332e-2 & 1.2994e-3 \\ \hlin \end{tabular} \end{center} \end{table} \section{Conclusion} In this paper, we proposed a deep learning network for interior tomography problem. The reconstruction problem was formulated as a constraint optimization problem under data fidelity and null space constraints. Based on the theory of deep convolutional framelet, the null space constraint was implemented using the convolutional neural network with encoder and decoder architecture. Numerical results showed that the proposed method has the highest value in PSNR and the lowest value in NMSE and the fastest computational time. \section*{Acknowledgment} The authors would like to thanks Dr. Cynthia McCollough, the Mayo Clinic, the American Association of Physicists in Medicine (AAPM), and grant EB01705 and EB01785 from the National Institute of Biomedical Imaging and Bioengineering for providing the Low-Dose CT Grand Challenge data set. This work is supported by National Research Foundation of Korea, Grant number NRF-2016R1A2B3008104. Yoseob Han and Jawook Gu contributed equally to this work.
\section{Introduction} \setcounter{footnote}{0} The past decade has been witness to a highly productive back-and-forth relationship between X-ray and radio studies of supernova remnants (SNRs). Their nonthermal radio emissions, serving as tracers of particle acceleration, have motivated follow-up X-ray observations, most notably with the \chandra\ telescope. In many instances, X-ray imaging has revealed point-like sources surrounded by extended synchrotron structures, typically axisymmetric; these pulsar wind nebulae (PWNe) arise from the shocked outflows of relativistic particle winds driven by a central engine, the powerful time-varying fields in a neutron star magnetosphere. Targeted radio periodicity searches of PWNe have then uncovered previously unseen pulsations, providing the spin periods and, eventually, estimated ages, magnetic field strengths, and spin-down luminosities of the neutron stars, without which a meaningful physical understanding of the surrounding phenomena would be beyond reach. A few specific examples of this general picture of radio/X-ray symbiosis have been the discoveries of the pulsars in the remnants 3C58, G54.1+0.3, G292.0+1.8, and in the ``Mouse'' \citep[see, e.g.,][]{2004IAUS..218...97C}. The Crab Nebula has been viewed for decades as a prototype, but there is growing evidence of PWNe with substantially different properties. At least three objects, among them the Vela SNR, have nonthermal radio spectra that are steep in comparison to the Crab (i.e., with spectral indices $\alpha\approx0.6$, where flux $S_{\nu}\propto\nu^{-\alpha}$ at frequency $\nu$) and morphologies characterized by radio emission lobes straddling a central depression. While the nature of one such candidate PWN, DA~495, was once in question, X-ray imaging observations led to the discovery of an X-ray nebula and a presumed neutron star, both residing near the radio ``hole'' \citep[hereafter, ASL+08]{2008ApJ...687..505A}. Radio and X-ray pulsation searches have not, however, detected a pulsar signature, so a key ingredient is lacking from our understanding of DA~495. In this paper, we describe a similar series of observations directed toward understanding the third member of this group of unusual SNRs, \snr. \begin{figure*}[t] \centerline{ \subfigure{\includegraphics[width=\columnwidth,angle=270]{j2022_acis_composite_v2.ps}}% \hfill \subfigure{\includegraphics[angle=270,width=\columnwidth]{g769.vla21_color.ps}}% } \caption{{\it Left:\/} \chandra\ ACIS-S3 0.5--7~keV X-ray image of the center of \snr, smoothed, exposure-corrected, and scaled to emphasize the field point sources. The $1\arcmin\times1\arcmin$ box is centered on \cxo, while the circled sources were used to register the image. {\it Lower inset:\/} The unsmoothed count-rate map within the box containing the candidate neutron star and PWN. {\it Upper inset:\/} The result of adaptive smoothing of the same image. Overplotted are the regions used to extract spectral information for the unresolved source ({\em circle}) and diffuse emission ({\em ellipse}). {\it Right:\/} $1.49$~GHz map of SNR \snr\ obtained using the VLA (see \citealp{lhw93} for details) with the X-ray pulsar and nebula regions superposed ({\em contours}). The pulsar lies along the ridge of emission connecting the two lobes, adjacent to the central ``hole.'' The arrangement is strikingly similar to the radio/X-ray configuration of the Vela pulsar and PWN (see Figure~10 of \citealp{dlm+03}) as well as of DA~495 (Figure~1 of ASL+08). } \label{fig:chandraimage} \end{figure*} SNR \snr\ was discovered in a 408~MHz DRAO survey of the Cygnus~X region and initially characterized as a ``possible galactic object'' \citep{whl91}. Follow-up three-band radio observations using the Very Large Array (VLA) by \cite{lhw93} resolved a two-lobed structure $4'$ in size, with a connecting bridge of emission, embedded in a faint, roughly circular emitting region $9'\times12'$ across. Based on its steep, nonthermal spectrum, polarization, and morphological similarities to center-filled SNRs (in important respects such as the lack of an outer boundary and radially decreasing flux away from the central depression), these authors favored an SNR interpretation. In this picture, the radio lobes trace the Crab-like synchrotron emission of an evolved pulsar wind nebula. Here we present imaging, spectral, and timing studies of \snr\ in X-rays and radio, culminating in the discovery of \psr\ and of its associated X-ray PWN. In \S\ref{sec:chandra}, we describe analysis of a \chandra\ observation that yielded the first identification of the SNR's central engine, a neutron star with its surrounding nonthermal nebula. In \S\ref{sec:radio}, we describe the discovery of the radio pulsar J2022+3842 and follow-up radio timing observations with the Green Bank Telescope (GBT) that have provided its spin-down rate and pulse properties. In \S\ref{sec:xte}, we present the results of a deep {\it Rossi X-ray Timing Explorer} (\xte) observation: we show that the pulsed X-ray flux is consistent with the \chandra\ point-source flux, cementing the pulsar's association with the SNR. For reasons developed below, we adopt a distance to \snr\ of 10~kpc throughout this work. \section{\chandra\ Observations and Results} \label{sec:chandra} SNR \snr\ was observed for 54~ks by the \chandra\ observatory on 2005 August 01 UT using the Advanced CCD Imaging Spectrometer \citep[ACIS;][]{bur97} operating in the full-frame TIMED/VFAINT exposure mode (ObsID \#5586). This detector is sensitive to X-rays in the 0.3--12~keV energy range with a resolution of $\Delta E /E \sim 0.06$ FWHM at 1~keV. The CCD pixel scale is $0.5\arcsec$, comparable to the telescope's on-axis spatial resolution. Low-level data were reprocessed with the latest calibrations using the CIAO script {\tt chandra\_repro}. The data analysis made use of the CIAO (V4.3), FTOOLS (V6.10), CALDB (V4.4.2), and XSPEC (V12.6.0q) software packages. A total of 53.7~ks of live-time was accumulated. No additional time filtering was necessary as the background rate was stable over the course of the observation. We followed the CIAO online science threads to create ACIS spectra and images. To allow for imaging with the best available angular resolution, we used the {\tt EDSER} reprocessing option, applying an energy-dependent sub-pixel event-repositioning algorithm to each photon \citep{2004ApJ...610.1204L}. The left-hand panel of Figure~\ref{fig:chandraimage} presents the exposure-corrected image, in the 0.5--7~keV band, acquired by the ACIS-S3 and S2 CCDs, which together covered the entire extent of the radio SNR. No morphological evidence of \snr\ itself is apparent nor is there any hint of resolved diffuse emission in images created in the soft ($<2$~keV) or hard ($>2$~keV) energy bands; instead, several faint point-like sources are detected. The brightest of these, enclosed within a box in the figure, is located close to the optical axis and contains $N=1190$ photons. This source is not associated with any known stellar counterpart or object in the 2MASS image of the field. Its identification as a possible neutron star associated with \snr\ is reinforced by the presence of faint but distinct X-ray nebulosity surrounding a brighter, unresolved core. To highlight the diffuse emission, we show as insets in Figure~\ref{fig:chandraimage} both the raw and smoothed 0.5--7 keV count-rate maps for the boxed $1\arcmin\times1\arcmin$ region centered on the source. The smoothed image was produced using a circular convolution kernel of varying size, requiring a minimum of 5 counts within the kernel diameter. Although the putative nebula is faint, there is a clear excess of emission elongated on a line with position angle $\simeq 26^{\circ}$ North through East. This and the relative faintness of the point source argue against identification of the diffuse emission as a dust-scattering halo; these are normally circular and found around much brighter sources. The $1.49$~GHz Very Large Array (VLA) radio image shown in the right-hand panel of Figure~\ref{fig:chandraimage} places the X-ray nebula in the context of the double-lobed radio structure. The unambiguous alignment of the long axis of the diffuse X-ray emission with the radio ridge connecting the lobes suggests that the radiation processes underlying the emissions in both bands are related, such as in the canonical picture of synchrotron emission from relativistic particles originating in the pulsar's magnetosphere. To refine the location of the putative pulsar, the X-ray image was registered using the coordinates of eight 2MASS infrared counterparts obtained from the NASA/IPAC Infrared Science Archive. The updated centroid is located at (J2000.0) R.A. = $20^{\rm h}22^{\rm m}21\fs689$, Decl.\ = $+38^{\circ}42^{\prime}14\farcs82$ with a $1\sigma$ uncertainty of ($0\farcs09,0\farcs07$) in the two coordinates, respectively. The required shift in R.A. and Decl.\ was ($-0\farcs012,-0\farcs162$). This source, \cxo, is hereafter referred to by the shortened name \cxos. We extracted a spectrum for \cxos, and generated an appropriate response matrix, with the {\tt specextract} CIAO script, using a circular $2\farcs5$-radius aperture (representing $>90$\% encircled energy, depending on photon energy) centered on the source. The diffuse emission and cosmic and detector backgrounds contribute negligibly ($\sim 10$ counts) in this region. With a maximum count rate in a pixel of $< 4\times10^{-3}$~s$^{-1}$, photon pile-up could safely be ignored. The 0.5--10 keV spectrum was grouped with a minimum of 15 counts per spectral channel and fitted using XSPEC to an absorbed power-law model, a common property of young, rotation-powered pulsars. (Blackbody models are formally allowed, but produce unnaturally high temperatures.) An excellent fit, with $\chi^2=69.5$ for 70 degrees of freedom (DoF) was obtained for an absorbing column $N_{\rm H} = (1.6\pm0.3) \times 10^{22}$~cm$^{-2}$ and photon index $\Gamma = 1.0\pm0.2$ (90\% confidence intervals are used throughout). The absorbed 2--10~keV source flux is $F_{\rm PSR} = 5.3 \times 10^{-13}$~erg~cm$^{-2}$~s$^{-1}$, implying an isotropic luminosity of $L_{\rm X} = 7.0 \times 10^{33}$~erg~s$^{-1}$ at 10~kpc distance. As described in \S\ref{sec:xtespec}, this \chandra\ spectrum was also used in joint fits with \xte\ data. To investigate the spectrum of the nebula, we used an elliptical extraction region (Figure~\ref{fig:chandraimage}), with semi-major and -minor axes of length $8\farcs3 \times 5\farcs1$, centered on the pulsar, with the point-source region ($r<2\farcs5$) excluded. For the local background, we extracted photons from a concentric annular region well outside the nebular extent ($20\arcsec < r < 40\arcsec$). We also performed a {\tt MARX} simulation to determine the small (2.6\%) flux contribution within the nebular region from point-source photons scattered into the point-spread function wings; this simulation took into account the measured pulsar spectrum. The tally of background-subtracted counts from the nebula is then $N=76\pm13$. The faintness of the nebula does not allow a well-constrained spectral fit; instead we fixed the column density to the best-fit value for the unresolved source and assumed a nominal absorbed power-law model with $\Gamma = 1.4$, a value derived from the empirical trend relating the spectral indices of energetic pulsars and their nebulae \citep{got04}. The absorbed 2--10~keV flux for the putative PWN is $F_{\rm PWN} \approx 4 \times 10^{-14}$~erg~cm$^{-2}$~s$^{-1}$. The flux ratio $F_{\rm PWN}/F_{\rm PSR} \approx 0.08$ is well below that expected for similarly energetic pulsars \citep{got04}. The uncertainty in this result is dominated by the small number of nebula photons and not the choice of $\Gamma$. \section{Radio Observations and Results} \label{sec:radio} Motivated by the \chandra\ discovery of the point source and possible PWN at the heart of SNR \snr, we searched for radio pulsations with the GBT. The Spectrometer SPIGOT data-acquisition backend \citep{2005PASP..117..643K} was used for 5 hours on each of two days, with 600 MHz of useful bandwidth centered at 1.95 GHz. The instrument provided 768 channels with 16-bit sampling of summed polarizations at 81.92~$\mu$s intervals. The PRESTO software \citep{2002AJ....124.1788R} was used to search for pulsations, and a candidate 24-ms periodicity was detected on both days with formal significances of 7$\sigma$ on MJD 54397 and 12$\sigma$ on MJD 54400. Confirmation observations were carried out on 2009 May 6 (MJD 54957) using the GUPPI data-acquisition system configured similarly to the discovery instrumentation, but with the addition of full-Stokes sensitivity. The difference in the pulse periods derived at these epochs gave the first indication of the pulsar's spin-down rate and thence its canonical age, magnetic field strength, and spin-down power (see below). A search for single ``giant'' pulses yielded no compelling candidates, despite the fact that the pulsar's estimated magnetic field strength at the light cylinder, suggested to be relevant in the production of giant pulses \citep{1996ApJ...457L..81C}, is approximately 70\% that of both the Crab pulsar and PSR~B1937+21, and comparable to that of PSR~B1821$-$24, all of which exhibit giant pulses. \begin{figure}[t] \hspace{0.1in} \begin{center} \includegraphics[height=0.9\linewidth,angle=270,clip=true]{radio_profile_pol_v2.ps} \includegraphics[width=0.9\linewidth,angle=0,clip=true]{PvsT.eps} \end{center} \caption{{\it Top:\/} Radio pulse properties of PSR J2022+3842 averaged over a 700~MHz band centered at 1.95~GHz, and representing a total integration time of approximately 24 hours. The evolution with pulse phase of total ({\em black line}), linearly polarized ({\em red}), and circularly polarized ({\em blue}) flux are shown. Phase zero is arbitrary. {\it Bottom:\/} Radio pulse period evolution ({\em red squares}) with 1$\sigma$ uncertainty error bars; the blue cross corresponds to the \xte\ observation (uncertainty much smaller than the plotted symbol). Residuals are relative to the timing model shown in Table~\ref{tab:ephem}. The pulse period at the discovery epoch of the radio pulsations, around MJD~54400, was significantly longer than at later times, indicating that a spin-up glitch occurred at some point before the confirmation observation near MJD~54950.} \label{fig:radiotiming} \end{figure} Additional radio observations, also with the GUPPI system, were carried out at roughly 3-month intervals to constrain the pulsar's long-term timing behavior. Notably, the pulse periods measured during and after the confirmation observation differed substantially from those detected during the two discovery observations made 1.5 years earlier, implying that a spin ``glitch'' of magnitude $\Delta P/P \simeq 1.9\times 10^{-6}$ had occurred at an unknown epoch in this interval. The spacing of the radio timing observations together with apparent ``timing noise'' instability in the pulsar's rotation on similar timescales precluded derivation of a phase-connected timing solution. We therefore determined the long-term average spin-down rate, $4.3192(27)\times10^{-14}$, through a least-squares fit to the multi-epoch period measurements (Figure~\ref{fig:radiotiming}), a result consistent with the short-term X-ray-derived ephemeris shown in Table~\ref{tab:ephem} (see \S4.1). The pulsar's flux density at 2 GHz, $S_{\rm 2\,GHz}$, is 60~$\mu$Jy; its 4.8~GHz flux density of $\approx 45$~$\mu$Jy suggests that \psr\ has an unusually flat spectrum, $\alpha \simeq -0.33$. The implied radio pseudo-luminosity $S_{\rm 1.4\,GHz}d^2 \simeq 7\,D^2_{10}$~mJy~kpc$^2$ is unremarkable, low in comparison to the majority of known pulsars, but an order of magnitude higher than the luminosities of other young, faint pulsars discovered in deep searches of SNRs \citep{2004IAUS..218...97C}. Even with a long cumulative exposure, our best average pulse profile (Figure~\ref{fig:radiotiming}) retains significant statistical noise; despite this faintness, a number of relevant pulse shape and polarization results are available. The radio pulsations of \psr\ are significantly affected by dispersion and scattering due to propagation in the interstellar medium, which are typically important at frequencies $\lesssim 1$~GHz, even in our 2~GHz observations. To estimate dispersion measure (DM) and the pulse broadening timescale $\tau^{\rm scatt}$, we formed a pulse-shape model assuming a gaussian-shaped intrinsic profile, a one-sided exponential-decay impulse response function for the scatter-broadening, and a $\nu^{-4}$ frequency dependence for the width of the exponential. We fit the S-band (i.e., 2\,GHz) data to this model in four evenly-spaced frequency sub-bands, allowing the width of the intrinsic gaussian profile to vary and also accounting for a DM bias caused by the frequency-dependent pulse broadening due to scattering. This simple model fit the data well, but we caution that the assumptions inherent in the model may introduce significant systematic errors, perhaps as large as twice the statistical uncertainties quoted below. The NE2001 Galactic electron distribution model \citep{2002astro.ph..7156C} fails to accommodate the derived ${\rm DM} = 429.1\pm0.5$~pc~cm$^{-3}$ in the direction of \psr---formally, the model suggests a lower bound on the distance to the pulsar of 50 kpc. Instead, a likely over-density of free electrons in the Cygnus region, along the line of sight, accounts for the higher-than-expected dispersion---a similar explanation has been advanced for the unexpectedly large DM of the nearby PSR J2021+3651 \citep{2002ApJ...577L..19R}. In this direction, the Perseus Arm lies at a distance of approximately 6~kpc, and the Outer Arm at $\gtrsim10$ kpc. For convenience in scaling distance-dependent parameters and because the larger distance brings X-ray efficiencies more approximately in line with those of other young pulsars (see~\S\ref{sec:discuss}), we adopt a distance $d = 10\, D_{10}$~kpc. Independent supporting evidence for the large distance derives from H\,{\sc i} absorption and X-ray studies of the PWN CTB 87 \citep{2003ApJ...588..852K}, roughly $2\degr$ from \snr\ on the sky. At a distance of 6.1 kpc, CTB 87 lies in the Perseus arm and its X-ray-derived absorbing column is $N_{\rm H} = (1.4\pm0.2)\times10^{22}$~cm$^{-2}$ at 90\% confidence (Safi-Harb, Matheson \& Kothes, in preparation); although their uncertainties are large, the nominal H\,{\sc i} column for \snr\ is somewhat larger than that for CTB 87. A radio bright extragalactic point source adjacent to CTB 87 exhibits a deep absorption component at a velocity of about $-85$~km~s$^{-1}$ not seen in the absorption profile of CTB 87. This feature could account for the larger foreground absorption toward \snr\ if the latter is located at a significantly larger distance, such as the Outer Arm or beyond it. The NE2001 model also predicts a timescale for pulse broadening due to interstellar scattering of 0.3 ms at 1 GHz; instead, we find $\tau^{\rm scatt}_{\rm 1 GHz} = 55 \pm 7$~ms, which scales to $3.8$~ms at the center of our observing band but varies strongly across it. The long decay of the pulse, here averaged over the 700 MHz band, can be seen in the upper half of Figure~\ref{fig:radiotiming}. Consistent with this result, an exploratory observation centered at 800 MHz sky frequency failed to detect the pulsar; similar attempts at 1.4 GHz and 5 GHz yielded weak detections. At 5 GHz, the pulse was unexpectedly broad, consistent with the results of the frequency-subband fit to the pulse shape. Thus, in addition to the importance of scatter-broadening at the lower frequencies, the pulse appears to be intrinsically broad, roughly 30\% of the pulse period, full width at half maximum. The well-established trend of pulse widths increasing as $P^{-0.5}$ underestimates this result by a factor of 2--3 (e.g., for the outermost conal component of \citealp{1999A&A...346..906M}). Polarization calibration and analysis using the PSRCHIVE software package \citep{2004PASA...21..302H} reveals that the radio pulse is substantially linearly polarized, despite the depolarizing effects of multipath propagation in the scattering tail, incomplete removal of Faraday rotation (due to the low signal-to-noise ratio), and other effects. The true polarized fraction in the pulse is likely to be significantly larger than we have observed. The variation of the polarization vector's position angle with rotational phase, shown in the top-most panel of Figure~\ref{fig:radiotiming}, is found to be fairly flat, inconsistent with the canonical rotating-vector model \citep{1969ApL.....3..225R}, but similar to behavior seen in the Crab pulsar (especially its high-frequency components; \citealp{1999ApJ...522.1046M}) and several millisecond-period ``recycled'' pulsars \citep{2004MNRAS.352..804O}. Perhaps not coincidentally, many of these same pulsars exhibit nonthermal magnetospheric pulsations in X-rays. Finally, we note that the pulsar's rotation measure is ${\rm RM} \simeq +270$~rad~m$^{-2}$. \section{\xte\ Observations and Results} \label{sec:xte} The field containing \cxos\ was observed by \xte\ for a total of 99~ks over the 8-day span 2010 January 27--February 4 UT (observation P95316). The data were collected with the Proportional Counter Array \citep[PCA;][]{jah96} in the GoodXenon mode with an average of 1.8 of the 5 proportional counter units (PCUs) active. In this mode, photons are time-tagged to $0.9$ $\mu$s and have an absolute uncertainty better than 100 $\mu$s \citep{rot98}. The effective area of five combined detectors is approximately $6500$ cm$^{2}$ at 10~keV with a roughly circular field-of-view of $\sim 1^{\circ}$ FWHM. Spectral information is available in the 2--60~keV energy band with a resolution of $\sim 16\%$ at 6~keV. Production PCA data for this observation were obtained from NASA's HEASARC archive and time-filtered using the standard criteria. This rejected intervals of South Atlantic Anomaly passages, Earth occultations, and other periods of high particle activity, resulting in a total of 94.2~ks of useful data. The photon arrival times were transformed to the solar system barycenter in Barycentric Dynamical Time (TDB) using the JPL DE200 ephemeris and the \chandra-derived coordinates of \cxos. \subsection{Timing Analysis} \label{sec:xtetim} The timing analysis was restricted to PCA channels 2--50, corresponding approximately to the 2--20~keV energy range, and from the top Xenon layer of each PCU, to optimize the signal-to-noise for a typical pulsar spectrum. Because of the long observation span, photon detection times were accumulated in 4-ms bins and a $2^{28}$-point accelerated Fast Fourier Transform (FFT) was performed to search for a periodic signal, testing a range of plausible frequency derivatives. This immediately revealed a signal at 24~ms of sufficient strength to analyze short intervals of data and build up a fully phase-coherent timing solution using the time of arrival (TOA) method, as follows. \begin{figure}[t] \hspace{0.1in} \begin{center} \includegraphics[height=0.9\linewidth,angle=270,clip=true]{psr2022_pca_toa_v3.ps} \hfill \includegraphics[height=0.9\linewidth,angle=270,clip=true]{psr2022_pca_fold_v3.ps} \end{center} \caption{\xte\ timing results for \psr\ in the 2--20~keV X-ray band. {\em Top:\/} Phase residuals after fitting the phase-connected quadratic ephemeris given in Table~\ref{tab:ephem} to the set of PCA observations as described in \S\ref{sec:xtetim}. On average, individual TOA measurements were obtained in 17 ks-PCU of exposure. {\em Bottom:\/} The pulse profile of the \xte\ data folded on the best-fit ephemeris. Two cycles are shown for clarity.} \label{fig:xtepulse} \end{figure} The \xte\ observations of \cxos\ distributed over the 8~days were clustered into 10 groups of two, three, or four adjacent 96-min orbits, with large gaps in between. For each of these groups, we were able to extract the period and phase of the signal with sufficiently small uncertainties to maintain cycle counts between fitted epochs. Pulse profiles were generated using the optimum period derived from the $Z^2_n$ test \citep{buc83} with $n=10$ harmonics to allow for the sharp profile. The resulting profiles were cross correlated, shifted, and summed to generate a master pulse profile template. Individual profiles were then cross-correlated with the template to determine the arrival time (time of phase zero) and its uncertainty at each epoch. The derived TOAs were iteratively fitted to a quadratic ephemeris using the TEMPO software. We started by fitting TOAs from the first three most closely spaced TOAs to a linear solution, and then iteratively added the next TOA. At each step we found that the new TOA would match to $<0.02$ cycles the predicted phase derived from the previous set. The resulting ephemeris referenced to the mid-point of the observation is presented in Table~\ref{tab:ephem} and the phase residuals are shown in Figure~\ref{fig:xtepulse}. The residuals are all less than 0.02 cycles and appear to be random within their statistical uncertainties, with the exception of a single $3\sigma$ data point ($\Delta \phi =0.025\pm0.01$) which an investigation finds no reason to exclude. \begin{deluxetable}{ll} \tabletypesize{\small} \tablecaption{\label{tab:ephem}Properties of \psr } \tablehead{ \colhead{Parameter} & \colhead{Value} } \startdata \multicolumn{2}{c}{\em X-ray Measurements} \\ R.A. (J2000)\tablenotemark{a}\dotfill & $20^{\rm h}22^{\rm m}21\fs689(6)$\\ Decl. (J2000)\tablenotemark{a}\dotfill & $+38\arcdeg42'14\farcs82(7)$ \\ Epoch (MJD TDB)\tablenotemark{b}\dotfill & 55227.00000027 \\ Period, $P$ (ms)\tablenotemark{b}\dotfill & 24.2877561082(84) \\ Period derivative, $\dot P$\tablenotemark{b}\dotfill & $4.3064(93)\times10^{-14}$ \\ Range of timing solution (MJD)\tablenotemark{b}\dotfill & 55223--55231 \\ \multicolumn{2}{c}{\em Radio Measurements} \\ Long-term average $\dot P$\dotfill & $4.3192(27)\times10^{-14}$ \\ Range of $\dot P$ fit (MJD)\dotfill & 54957--55469 \\ Dispersion measure, DM (pc-cm$^{-3}$)\dotfill & $429.1\pm0.5$ \\ Scatter-braodening timescale, $\tau^{\rm scatt}_{\rm 1\,GHz}$ (ms)\dotfill & 55(7) \\ Flux density, $S_{\rm 2\,GHz}$ ($\mu$Jy)\dotfill & 60 \\ \multicolumn{2}{c}{\em Derived Parameters} \\ Characteristic age, $\tau_c$ (kyr)\dotfill & 8.9 \\ Spin-down luminosity, $\dot E$ (erg\,s$^{-1}$)\dotfill & $1.19\times10^{38}$ \\ Surface dipole magnetic field, $B_s$ (G)\dotfill & $1.03\times10^{12}$ \enddata \tablecomments{\footnotesize $1\sigma$ uncertainties given in parentheses.} \tablenotetext{a}{\footnotesize \chandra\ ACIS-S3 position registered using 2MASS objects (see text).} \tablenotetext{b}{\footnotesize \xte\ phase-connected ephemeris.} \end{deluxetable} Figure~\ref{fig:xtepulse} displays the pulse profile using all of the 2--20~keV data folded on the final ephemeris. It has a single symmetric peak that is triangular in shape and narrow, with a FWHM of 0.06 of a full cycle. The measured pulsed emission corresponds to 0.91\% of the total PCU countrate; however, as will be shown below, the intrinsic signal is nearly 100\% pulsed compared to the flux derived from the spectrum of \cxos. We see no energy dependence of the pulse profile when subdividing the 2--20~keV \xte\ energy range. \subsection{\xte\ Spectral Analysis} \label{sec:xtespec} The pulsed-flux spectrum of \psr\ can be isolated through phase-resolved spectroscopy. We used the FTOOLS {\tt fasebin} software to construct phase-dependent spectra based on the ephemeris of Table~\ref{tab:ephem}. We combined spectra from all observation intervals using only data recorded in the top Xenon layer of PCU\#2; this PCU was (uniquely) used for all observation intervals over the 8~days. Similarly, we averaged the standard PCA responses generated for each epoch. In fitting the pulsed flux, the unpulsed emission provides a near perfect background estimate. The spectra were accumulated in two phase intervals representing the off-peak ($0.1<\phi_{\rm off}\leq0.88$) and on-peak ($0.88<\phi_{\rm on}\leq1.1$) emission and fitted using XSPEC. The fits were performed in the 2--16~keV range, above which the background dominates. \begin{figure}[t] \centerline{ \includegraphics[height=0.95\linewidth,angle=270,clip=true]{j2022_acis_pca_spec_pl_v2.ps} } \caption{The \chandra\ ACIS spectrum of \cxos\ ({\it black}) and the \xte\ PCA spectrum of the pulsed emission from \psr\ ({\it red}), fitted jointly to an absorbed power-law model, with independent normalizations. The solid line shows the best-fit model (see text \S\ref{sec:xtespec}). The pulsed \xte\ flux is obtained by subtracting the off-peak spectrum from the on-peak spectrum. Residuals from the best fit are shown in units of standard deviation.} \label{fig:chandraspectra} \end{figure} Based on the \chandra\ spectrum, we fitted an absorbed power-law model with the interstellar absorption held fixed at $N_{\rm H} = 1.6\times10^{22}$~cm$^{-2}$; leaving $N_{\rm H}$ unconstrained results in a larger uncertainty in $\Gamma$. The resulting best-fit photon index is $\Gamma = 1.1\pm0.2$ with $\chi^2 = 27.9$ for 30 DoF. The absorbed 2--10~keV flux for the pulsed emission is $5.4 \times 10^{-13}$~erg~cm$^{-2}$~s$^{-1}$, which represents all of the point-source flux measured from \cxos, indicating that its intrinsic pulsed fraction is essentially 100\%. In Figure~\ref{fig:chandraspectra}, we show the result of a joint fit to the \xte\ pulsed emission from \psr\ and \cxos\ in the 0.5--16~keV band, leaving only their normalizations free. The best fit parameters are $N_{\rm H} = (1.7\pm0.3)\times 10^{22}$~cm$^{-2}$ and $\Gamma = 1.0\pm0.2$ with $\chi^2 = 99.3$ for 100 DoF. There is no indication of spectral curvature in the fitted energy range. The independently measured absorbed 2--10~keV flux is the same for both \cxos\ and the pulsed emission from \psr, namely $F_{X} = 5.3 \times 10^{-13}$~erg~cm$^{-2}$~s$^{-1}$. We adopt the results of the joint spectral fit as our final reported values. \section{Discussion} \label{sec:discuss} There can be little doubt that \cxos\ and \psr\ are one and the same: the non-thermal X-ray source lies at the center of \snr, lacks an optical counterpart, and anchors nebular emission consistent with a PWN. Most importantly, the pulsed flux detected using \xte\ accounts for all of the \chandra\ emission and no more. These properties are consistent with pulsars producing broadband magnetospheric emission powered by rotational energy losses. \chandra\ astrometry thus locates the pulsar to sub-arcsec precision. \psr\ is outstanding in several respects. It is the second-most energetic Galactic pulsar known, after the Crab pulsar, and the fourth overall, taking into account two LMC pulsars, in N157B (PSR~J0537$-$6910, with $\dot E = 4.9\times 10^{38}$~erg~s$^{-1}$ and $P=16$~ms) and in SNR 0540$-$69 (PSR~J0540$-$6919, with $\dot E = 1.5\times 10^{38}$~erg~s$^{-1}$ and $P=50$~ms). However, it is among the least efficient at converting its spin-down luminosity into X-rays\footnote{For ease of comparison with the compilation of \citet{2008AIPC..983..171K}, the flux, luminosity, and efficiency in the 0.5--8 keV band are $F_{\rm PSR} = 4.0\times10^{-13}$~ergs~cm$^{-2}$~s$^{-1}$, $L_{\rm X} = 6.5\times10^{33}\,D^2_{10}$~ergs~s$^{-1}$, and $L_{\rm X}/\dot E = 5.5\times 10^{-5}\,D^2_{10}$. }, with $\eta \equiv L_{\rm 2-10\,keV} /\dot E = 5.9\times10^{-5}\, D^2_{10}$. The flat pulsed spectrum also sets it apart from the typical young, energetic pulsar, deviating significantly from the observed trend $\Gamma \propto 1/\sqrt{\dot E}$ \citep[][2--10~keV]{got03}, which predicts an index of $\Gamma=1.8$. Flat power laws have primarily been found among pulsars with \hess\ counterparts \citep[e.g., PSR~J1838$-$0655;][]{gh08}. \psr\ is also the second-most rapidly rotating young pulsar after the LMC pulsar J0537$-$6910, and the shortest-period radio-bright pulsar known. Its radio properties are thus potentially interesting probes of the elusive radio emission mechanism in pulsars, but detailed studies will be hampered by this object's faintness and the interstellar propagation effects imposed by its great distance. The short rotation period also engenders uncertainty in the pulsar's (and thus the supernova remnant's) age: the characteristic age $\tau_c = P/2\dot P = 8.9$~kyr approximates the true age $\tau$ only when $P_0 \ll P$, where $P_0$ is the spin period at birth, \begin{equation} \label{eq:psrage} \tau = \frac{P}{(n-1)\dot{P}} \left[1-\left(\frac{P_0}{P}\right)^{n-1}\right], \end{equation} \noindent where $n$ is the so-called ``braking index.'' For a birth period of 16 ms, the shortest known among young pulsars, the true age decreases to $\approx 5$~kyr, assuming spin-down due to magnetic dipole braking ($n=3$). A countering effect, however, is spindown that deviates from the standard dipole assumption: the measured braking indices for several young pulsars are found to lie within the range $2.0 \lesssim n < 3.0$ (see, e.g., \citealp{1997MNRAS.288.1049M}). For example, as shown in Figure~\ref{fig:psrage}, $P_0 = 10$~ms for $n = 2$ would imply an age of 11 kyr. For any assumed value of $n$, an upper limit to the age of the pulsar, and thus the remnant, is provided in the limit $P_0 \rightarrow 0$. A braking index well below $n = 1.5$ would be highly unusual, so that $\sim 40$~kyr is a reasonable upper limit on the age of \snr. \begin{figure}[t] \hspace{0.1in} \includegraphics[height=0.9\linewidth,angle=270,clip=true]{psrage_final.ps} \caption{The true age $\tau$ of \psr\ as a function of its spin period at birth, $P_0$, according to Equation~(\ref{eq:psrage}); $n$ is the braking index. The curves correspond, from top to bottom, to $n=1.5$, 2.0, 2.5, 3.0 (dashed), 3.5, and 4.0, converging at the present-day period, $P = 24.28$~ms. The associated upper-limits on the pulsar age (for $P_0$ much smaller than $P$) are $\sim 36$, 18, 12, 9, 7, and 6 kyr, respectively. } \label{fig:psrage} \end{figure} \psr\ lacks the bright X-ray PWN expected from an energetic pulsar: it is only the second example (of some 20) of a pulsar with $\dot E \lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 4\times 10^{36}$~erg~s$^{-1}$ unaccompanied by a PWN of comparable brightness, $F_{\rm PWN}/F_{\rm PSR}\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 1$, defying the trend presented in Gotthelf (2004). As such, it is similar to \psrb\ (Torii {et~al.}\ 1998), the 69~ms pulsar with $\dot E = 1.6 \times 10^{37}$~erg~s$^{-1}$, $B_s = 3.1 \times 10^{12}$~G, and $\tau_c = 8.1$~kyr. The underluminous PWNe in these cases remain unexplained. In other respects, however, the X-ray PWN around \psr\ may not be so unusual. The semi-major axis of the elliptical elongation around the unresolved source is approximately $6\arcsec$ in length, representing a physical dimension of $9\times10^{17} \ D_{10}$~cm. For comparison, the Vela PWN's X-ray ``outer arc'' lies at a distance $1\times10^{17}$~cm from the pulsar in the geometric model of \citet{2001ApJ...556..380H}. If the extended emission around \psr\ does arise from axisymmetric features, the \chandra\ image constrains---in analogy with the Crab, Vela, and others---the orientation of the pulsar's spin axis: in inclination, to $\simeq 50^\circ$, and on the sky, to the symmetry axis at position angle $-64^\circ$ North through East, orthogonal to the long direction of the diffuse X-ray emission and the ridge linking the radio lobes. The apparent alignment of spin axes and proper motion vectors in some of these same pulsars offers a testable prediction for the motion of \psr. The spin inclination relative to the line of sight, meanwhile, may help elucidate the role of viewing geometry (e.g., relative to a narrow magnetospheric emission beam) in {\em i)\/} producing simultaneously a narrow X-ray pulse but an unexpectedly broad radio pulse, a reversal of their typical relationship, and {\em ii)\/} determining whether the observed neutron star spectrum is primarily thermal or nonthermal: the flat spectrum of \psr\ departs markedly from the dominant thermal emissions of the central stars in the Vela and DA~495 SNRs. We have already alluded to some of the notable similarities between \snr\ and DA~495, both of which can now unambiguously be characterized as radio PWNe. Both are very bright radio sources without a well-defined outer boundary. Both manifest a bipolar structure and have a steep radio spectrum, unusual for a PWN. DA 495 has a break in its radio spectrum at roughly 1 GHz, and \snr\ may have one below that as well. Both show very faint X-ray emission compared to their radio emission, and a small X-ray nebula compared to the radio nebula. The conclusions arrived at by ASL+08 for DA~495 may thus also apply to \snr. For example, based on the radio/X-ray PWN size ratio of DA~495, $\gtrsim 25$, and other lines of argument \citep[see also][]{2008ApJ...687..516K}, ASL+08 suggested that the pulsar wind energizing DA~495 has a high magnetization factor---i.e., it carries electromagnetic flux and particle flux at roughly comparable levels---in contrast with the Crab, which has a strongly particle-dominated wind, but similar to independent assessments of the wind properties of the Vela pulsar. Moreover, ASL+08 speculated that, because particle-dominated winds are necessary for efficient conversion of wind luminosity to synchrotron luminosity, PWNe in which Poynting flux is an important wind component may be those with dim X-ray PWNe. The PWN size ratio for \snr\ is $\simeq 20$, comparable to DA~495 and nearly two orders of magnitude greater than that for the Crab Nebula. (At a distance of 10 kpc, G76.9+1.0 may be the largest PWN known in our Galaxy, with a physical size of $29 \times 35$~pc, bigger than or comparable to MSH 15$-$57, which is at least 25 pc in diameter, \citealp{2000ApJ...542..380G}, and so far believed to be the largest Galactic PWN.) If our prior conclusions hold, the wind from \psr\ should be highly magnetized well beyond its termination shock, contributing to the low X-ray conversion efficiency of its PWN. Why this might be so is an open question. The overarching conclusion of ASL+08 and \citet{2008ApJ...687..516K} was that DA~495 is likely to be a PWN of advanced age ($\sim 20$~kyr), but having evolved without significant interaction with the ambient medium. All of the characteristics we find for \snr\ similarly indicate a rather old object, yet \psr\ has all the characteristics of a young object: a spin-down age of 20~kyr would require both $n < 2$ and very high spin at birth, $P_0 \lesssim 5$~ms. We do not have a ready explanation for this apparent discrepancy, which highlights the importance of having uncovered the spin and energetic properties of the central pulsar. \snr\ and its pulsar, J2022+3842, are clearly unusual and require further investigation. Additional multi-wavelength observations (e.g., an X-ray search for the SNR shell, to aid in constraining the system's age) may be fruitful in providing important components for their understanding. Based on its spin-down luminosity, and given the sub-arcsec localization and available timing information, \psr\ is a good candidate for a search for gamma-ray pulsations using {\em Fermi\/} data. The absence of a bright PWN, however, makes it an unlikely TeV target, even though the spectrum of the pulsed X-ray emission suggests otherwise. \acknowledgements Support for this work was provided by the National Aeronautics and Space Administration through \chandra\ Award Number GO5-6077Z issued by the \chandra\ X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We have also made use of \xte\ data provided by the High Energy Astrophysics Archive at NASA's Goddard Space Flight Center, as well as data products from the Two Micron All Sky Survey, a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/Caltech, funded by NASA and the National Science Foundation. SSH acknowledges support by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada Research Chairs program.
\section{Introduction} The falling dynamics of an oscillating drop is essential to many natural phenomena and industrial applications, such as rain drops \cite{Feng_1991a} and inkjet printing \cite{Basaran_2013a}. For drops that are formed by a nozzle, the drop characteristics can be controlled through the inflow rate. When the inflow rate is large, the injected liquid inertia dominates and the drop formation is in the jetting regime; when the inflow rate is small, then the gravity plays the dominant role, placing the drop formation in the dripping regime \cite{Clanet_1999a}. In the present study, we focus on one specific case in the dripping regime. The purpose of the study here is to provide a comprehensive description of the short-term oscillation and falling dynamics for the dripping drop. The initial conditions for the drop fall are determined by the drop formation process. Since the shape oscillation of the falling drop is triggered by the non-equilibrium shape and the velocity field when the drop is just formed. The shape oscillation will in turn impact the falling dynamics of the drop and the development of the transient flow around the drop. Nevertheless, despite its importance, the effect of drop formation on the subsequent oscillation and falling dynamics have not received enough attention in former studies. Instead of using the precise post-drop-formation state, \textit{ad hoc} initial conditions (such as simple spheroid shape), are often used in simulations \citep{Lalanne_2013a,Agrawal_2017a,Bergeles_2018a}. To fully incorporate the effect of drop formation, the whole process starting from drop growth, continuing with detachment, and eventually fall, is considered in the present simulation. Another important advantage of simulating the whole process is that an experiment with exact conditions can be done to validate the simulation results. This is hard to achieve if \textit{ad hoc} initial conditions are specified like in former simulations. \subsection{Drop formation} The dripping drop first develops as a pendant drop, hanging at the nozzle exit. When the drop volume is smaller than the critical volume, the surface tension is strong enough to resist gravity and to keep the drop stably attached to the nozzle \citep{Padday_1973a, Sumesh_2010a}. As the volume of the pendant drop reaches the critical value, the drop becomes unstable and a neck is formed between the nozzle and the main body of the drop \citep{Schulkes_1994a,Coullet_2005a}. The minimum radius of the neck rapidly decreases, giving rise to an increasingly large capillary pressure in the neck. This high pressure drives the liquid away from the neck toward the nozzle and the main body of the drop, further accelerating the pinching process. The pinching of the liquid neck will eventually detach the drop \tcbl{from the nozzle} and the pinching dynamics has been studied extensively in the past. The overall pinching process is dictated by surface tension, inertial, and viscous forces \citep{Castrejon-Pita_2015a}. The pinching process exhibits a finite-time singularity and a universal self-similar behavior near the singularity \tcbl{\citep{Eggers_1993a, Eggers_1994a, Papageorgiou_1995a, Day_1998a, Zeff_2000a, Chen_2002a, Doshi_2003a, Castrejon-Pita_2012a}}. For low-viscosity liquids like water, inertia of the liquid flow toward \tcbl{the} main body of the drop results in the shift of the local minimum of bridge radius toward the top of the drop, where the interface overturns before pinching eventually occurs. To obtain details of the flow field in the drop formation process, advanced experimental diagnostics and high-resolution simulations are required \citep{Wilkes_1999a, Bos_2014a, Borthakur_2017a}. By recording two consecutive images of the same drop with a small time delay, \citet{Bos_2014a} extracted the \tcbl{longitudinal} velocity profile during drop formation. \tcbl{ For the present problem, the viscosity and density of the surrounding air are small compared to those for water, and the effect of the surrounding air on drop formation is small. When the surrounding fluid has similar density or viscosity as the drop fluid, the surrounding fluid can have a significant impact on the drop formation dynamics \citep{Zhang_1999b}. } \subsection{\tcbl{Oscillation of a free drop}} Following the formation, the drop falls under the action of gravity. Since the shape of the drop just after detachment is out of equilibrium, the capillary force will cause the drop to oscillate when it falls. Drop oscillation is a classic fluid mechanics problem, and the early investigation on the oscillation of a free drop can be traced back to the pioneering work of \citet{Rayleigh_1879a}. \tcbl{(A free drop here is referred to a drop that is located in an unbounded domain without gravity and falling motion.)} For the infinitesimal amplitude oscillation of a free and inviscid liquid drop, Rayleigh decomposed the shape of the drop into spherical harmonic modes and calculated the corresponding frequency for each mode \citep{Rayleigh_1879a}. The original work of Rayleigh is based on a free-surface approximation. The extension to incorporate the effect of ambient fluid and the viscous effect was made by \citet{Lamb_1932a}, and later followed by others \citep{Reid_1960a, Miller_1968a,Prosperetti_1980a}. \tcbl{ Lamb's theory is generally valid for low-viscosity fluids. Yet \citet{Miller_1968a} showed that even if the viscosities of the drop and surrounding fluid are both small, the viscous effect cannot be ignored since the oscillation damping rate is controlled by the boundary layer developed near the interface. The transient effect on the oscillation frequency and the damping rate was investigated by \citet{Prosperetti_1980a} and it is shown that the predictions based on normal mode analysis by \citet{Lamb_1932a} are strictly valid only asymptotically. When the oscillation amplitude is finite, the nonlinear effect on drop oscillation becomes important. Typical nonlinear effects include decrease of oscillation frequency with oscillation amplitude, asymmetry in oscillation amplitude, and coupling between different oscillation modes \citep{Tsamopoulos_1983a, Natarajan_1987a, Becker_1991a, Basaran_1992a, Becker_1994a}. } \subsection{Dynamics of a falling drop} For a falling drop, the oscillation dynamics and the \tcbl{transient} flow around the drop become more complicated. Extensive numerical and experimental studies have been performed to understand the long-term falling dynamics of liquid drops after the terminal velocity is reached \tcbl{(see for example \citet{Gunn_1978a, Feng_1991a, Helenbrook_2002a, Feng_2010a})}. Those research efforts were usually motivated by the interest in rain drops in atmospheric science. The present study has a different focus, that is, on the short-term dynamics of the falling drop. \tcbl{Here, the short-term and long-term are defined with respect to the response time required for the drop to reach the terminal velocity.} The interest on the short-term behavior is motivated by the fact that, for many application of falling drops, such as inkjet printing, the drop will reach a substrate or a liquid film far before reaching the quasi-steady state. Furthermore, the oscillation dynamics of a falling drop in the short term has also lead to new technology to measure liquid properties, \textit{e.g.}, \citet{Staat_2017a} recently proposed new methods to measure surface tension and drop viscosity based on the short-term oscillation frequency and damping rate. In the short term, the drop velocity and Reynolds number increase over time and the viscous flow around the drop is transient. Furthermore, due to the falling motion and the induced shear stress, the equilibrium shape of the oscillating drop is not spherical in general \citep{Feng_2010a}. Because of these additional complexities, there is no general analytical solution for the problem and numerical approaches are required to solve the governing equations \citep{Lalanne_2013a, Tripathi_2014a, Agrawal_2017a, Bergeles_2018a}. Owing to the similar dynamics between a falling drop and a rising bubble, these two cases are often discussed together (see for example \citet{Ern_2012a}), although fundamental difference between these two cases exists \citep{Tripathi_2014a}. It is challenging to accurately measure the three-dimensional flow inside a small drop in experiments. By seeding tracer particles of an average size of 10 \textmu m, \citet{Chung_2000a} obtained instantaneous velocity maps inside an oscillating drop which is electrostatically levitated. \subsection{Numerical Simulation} Thanks to the development of advanced interface capturing techniques in the past decades, direct numerical simulation is now capable of capturing interfacial flows that exhibit topology changes \citep{Tryggvason_2011a} and can also provide high-level details of the flow field that are difficult to measure in experiments. Extensive numerical studies have been conducted to simulate the drop formation process by the volume-of-fluid (VOF) method, see for example \citet{Zhang_1999a} and \citet{Gueyffier_1999a}. The recent simulations by \citet{Agrawal_2017a} have used the VOF method to resolve the oscillation of a falling drop with a non-spherical initial shape. It is shown that the oscillation only arises in the longitudinal direction and no azimuthal variation was observed even when vortex shedding occurs in the wake of the drop. Another recent work by \citet{Bergeles_2018a} presented high-resolution three-dimensional simulation results for a falling drop of millimeter class. The detailed flow structure was well captured and in particular, the roller vortex that is required to link the circulation in the wake of the drop with a Hill vortex inside the drop was clearly unveiled. For a similar problem, \citet{Lalanne_2013a} have performed axisymmetric simulations using the level-set method for the oscillation of rising drops and bubbles. It was found that the oscillation frequency decreases slightly with the rising velocity while the damping rate of the drop oscillation is significantly magnified due to the rising motion. \subsection{Goal of this study} In spite of of the extensive studies discussed above, a comprehensive understanding of the short-term oscillation and falling dynamics for a dripping drop remains to be established. In particular, the effect of drop formation on the oscillation dynamics and the transient flow around the falling drop are still not fully understood. To the knowledge of the authors, there is no previous study that considers the effect of the drop formation on the oscillation dynamics of a falling drop. The oscillation of a drop is dictated by the initial conditions which are in turn set by the drop formation process. Former numerical studies generally assumed the initial drop shape to be ellipsoidal or spherical with a constant initial velocity within the drop \citep{Lalanne_2013a,Agrawal_2017a,Bergeles_2018a}. However, the shape of the drop when it is just formed is far more complex than an ellipsoid, \tcbl{and furthermore, the velocity field in the just-formed drop is highly non-uniform due to the pinching dynamics.} The former simulations with simplified initial conditions are useful to understand the general physics of oscillation of a falling drop. Nevertheless, in order to precisely predict the shape and dynamics of the falling drop, which are critical to many applications of drops, \textit{e.g.}, the impact of a falling drop on a deep pool \citep{Deka_2017a}, the effect of drop formation on the subsequent drop oscillation and falling dynamics must be faithfully incorporated. The goal of the present study is therefore to investigate the dynamics of a water drop dripping in quiescent air through simulation and experiment. Particular focus will be placed on the drop oscillation dynamics and the development of the transient flow around the drop. To achieve this goal, one specific case is considered in the present study. The flow rate at the nozzle inlet is chosen to be sufficiently small, so that the drop formation is in the dripping regime and the drop growth is quasi-static. Furthermore, we focus on only the short term of the drop fall, during which the drop shape and the flow remain axisymmetric. The key questions that the present study aims to address include: \begin{itemize} \item \tcbl{Are the ``initial conditions" set by the drop formation process important to the drop oscillation and falling dynamics? } \item \tcbl{How do the nonlinear dynamics and falling motion influence the drop oscillation dynamics, such as the oscillation frequency and damping rate?} \item How do the drop oscillation and the falling motion contribute to the development of the transient flow around the drop? Is the flow structure within the drop similar to the classic Hill vortex? \end{itemize} To address these questions, axisymmetric simulations are carried out with the adaptive multiphase flow solver, \emph{Gerris}. An experiment with the same conditions has also been conducted to validate the simulation results. The simulation and experimental approaches are described in section \ref{sec:methods}. The results for drop formation, shape oscillation, and transient flow around the drop, will be presented and discussed in sequence in sections \ref{sec:results_formation}, \ref{sec:results_oscillation} and \ref{sec:results_transient}, respectively. Finally, concluding remarks will be given in section \ref{sec:conclusions}. \section{Methodology} \label{sec:methods} \subsection{Key parameters} The process of drop formation are controlled by physical parameters listed in table \ref{tab:parameter}, including the liquid and gas properties, the nozzle radius, the gravity acceleration, and the inlet flow rate. The mean inflow velocity, $u_0=Q/\pi R_0^2=0.265$ mm/s can serve as an alternative for the inflow rate $Q$. The key dimensionless parameters can be derived and the values are given in table \ref{tab:dimension1}. Since the gas-to-liquid density and viscosity ratios, $r$ and $m$, are both very small, the effect of the gas phase on drop formation is small. The Weber, Ohnesorge, and Bond numbers \tcbl{are} measures of the relative importance of the fluid inertia, liquid viscosity, and gravity with respect to surface tension. For the small $Q$ used in the present problem, the drop formation process is quasi-static and $We=8.17\times10^{-7}\ll 1$. The effect of inflow inertia is thus negligible. The variation of $We$ does not influence the drop formation \citep{Wilkes_1999a} and the value of $Q$ is immaterial to the results to be presented, as along as $Q$ remains to be small. Due to the relatively low viscosity of water, $Oh=0.00426$, is also very small, suggesting that the viscous effect is generally small in the drop formation process. Finally, the Bond number is the primary dimensionless parameter to determine the sizes of the detached primary and secondary drop. \begin{table} \centering \begin{tabular}{ccccccccc} \hline $\rho_l$ &$\rho_g$ &$\mu_l$ &$\mu_g$ &$\sigma$ &$R_0$ &$g$ &$Q$ \\ (kg/m$^3$) &(kg/m$^3$) &(Pa s) &(Pa s) &(N m) &(m) &(m/s$^2$) &($\mu$L/min) \\[0.5em] \hline 1000 &1.2 &$0.85 \times 10^{-3}$ &$1.8 \times 10^{-5}$ &0.0688 &$8 \times 10^{-4}$ &9.81 &32 \\ \hline \end{tabular} \caption{Physical parameters for the formation of a dripping drop.} \label{tab:parameter} \end{table} \begin{table} \centering \begin{tabular}{ccccc} \hline $r$ & $m$ & $We$ & $Oh $ & $Bo$ \\ $\rho_g/\rho_l$ & $\mu_g/\mu_l$ & $\rho_l u_0^2 R_0/\sigma$ &$\mu_l/\sqrt{\rho_l\sigma R_0}$ & $\rho_lgR_0^2/\sigma$ \\ \hline 0.0012 & 0.021 & $8.17\times 10^{-7}$ & 0.00426 & 0.091 \\ \hline \end{tabular} \caption{\tcbl{Key dimensionless parameters for the drop formation. }} \label{tab:dimension1} \end{table} After the drop detaches from the nozzle, the drop radius is measured to be $R_d=1.86$ mm. The oscillation and falling dynamics of the drop can be fully determined by the Reynolds and Weber numbers based on the drop diameter ($D_d=2R_d$), namely $Re_d\equiv D_d u_d\rho_g/\mu_g$ and $We_d\equiv D_d u_d ^2\rho_g/\sigma$, along with the post-formation state of the drop as the initial conditions. As the drop velocity, $u_d$, increases over time, $Re_d$ and $We_d$ rise accordingly. In the time range considered in the present study, the drop velocity increases from 0.07 m/s (just after detachment) to about 1.70 m/s. The corresponding range of drop Reynolds and Weber numbers are $25.9 \lesssim Re_d \lesssim 633$ and $2.62 \times 10^{-4} \lesssim We_d \lesssim 0.156$. For this range of $Re_d$, the flow was observed to remain approximately axisymmetric in the experiment. It was measured that the deviation of the drop centroid from the nozzle axis is smaller than 0.3\% of the falling distance in the time range considered. Furthermore, as $We_d$ is small, the surface tension will be sufficient to avoid an aerobreakup. According to the experiment of \citet{Gunn_1949a}, the terminal velocity for this drop size is about 6.2 m/s. The Reynolds and Weber numbers corresponding to the terminal falling velocity will then be about $Re_{d,t=\infty}\approx1600$ and $We_{d,t=\infty}\approx2.4$. It is clear that the drop velocity in the present study remains far from the terminal state. The drop oscillation Ohnesorge number, $Oh_{osc}=\mu_l/(\rho_l\sigma R_d)^{1/2}$, is often used to characterize the viscous effect on the oscillation of a free drop, which can be expressed as $Oh_{osc} = \sqrt{2We_d} /(m Re_d)$. (Alternatively, the oscillation Reynolds number, $Re_{osc}=1/Oh_{osc}$, can be used.) Here $Oh_{osc}=0.00278$, is very small, therefore, it is expected the viscous effect on the drop oscillation is small. \tcbl{Due to the rich flow physics involved in the present problem, we have focused on only one specific case, instead of a parametric study. If the key dimensionless parameters listed in Table \ref{tab:dimension1} vary, the specific values in the results to be shown later will change. However, the case selected here well represents millimeter-size low-viscosity droplets in the dripping regime. The conclusions with regard to the droplet formation, oscillation, and falling dynamics will remain valid as long as both of the Ohnesorge and Bond numbers are significantly smaller than unity. Parametric study for wider ranges of $Oh$ and $Bo$ is of interest but will be relegated to future work.} \subsection{Modeling and Simulation} \subsubsection{Governing equations} The one-fluid approach is employed to resolve the two-phase flow, where the phases corresponding to the water drop and the ambient air are treated as one fluid with material properties that change abruptly across the interface. The incompressible Navier-Stokes equations with surface tension can be written as \begin{align} \rho (\partial_t\bs{u} + \bs{u} \cdot \nabla \bs{u}) = -\nabla p + \nabla \cdot (2\mu \bs{D}) + \sigma \kappa \delta_s \bs{n}\, , \label{eq:mom} \\ \nabla \cdot \bs{u} = 0 \, , \label{eq:divfree} \end{align} where $\rho$, $\mu$, $\bs{u}$, and $p$ represent density and viscosity, velocity and pressure, respectively. The strain-rate tensor is denoted by $\bs{D}$ with components $D_{ij}=(\partial_i u_j + \partial_j u_i)/2$. The third term on the right hand side of Eq.\ \eqr{mom} is a singular term, with a Dirac distribution function $\delta_s$ localized on the interface, and it represents the surface tension. The surface tension coefficient is $\sigma$, and $\kappa$ and $\bs{n}$ are the local curvature and unit normal of the interface. The liquid volume fraction $C$ is introduced to distinguish the different phases, in particular $C=0$ in the computational cells with only air (respectively $C=1$ in cells containing only water), and its time evolution satisfies the advection equation \begin{align} \partial_t C + \bs{u} \cdot \nabla C =0 \, . \label{eq:color_func} \end{align} The fluid density and viscosity are then determined by \begin{align} \rho & = C \rho_l + (1-C) \rho_g \, , \label{eq:density} \\ \mu & = C \mu_l + (1-C) \mu_g \, , \label{eq:viscosity} \end{align} where the subscripts $g$ and $l$ represent the gas phase (air) and the liquid phase (water), respectively. \subsubsection{Numerical methods} \label{sec:numerics} The Navier-Stokes equations (Eqs.\ \eqr{mom} and \eqr{divfree}) are solved by the open-source solver \emph{Gerris} \citep{Popinet_2003a,Popinet_2009a}. In \emph{Gerris}, a finite-volume approach based on a projection method is employed. A staggered-in-time discretization of the volume-fraction/density and pressure leads to a formally second-order accurate time discretization. The interface between the different fluids are tracked by solving the advection equation (Eq.\ \eqr{color_func}) using a Volume-of-Fluid (VOF) method \citep{Scardovelli_1999a}. A quadtree spatial discretization is used, which gives a very important flexibility allowing dynamic grid refinement into user-defined regions. Finally the height-function (HF) method is used to calculate the local interface curvature, and a balanced-force surface tension discretization is used \citep{Francois_2006a, Popinet_2009a}. \subsubsection{Simulation setup} \begin{figure} \begin{center} \includegraphics [trim=0 0.5in 0 0.5in,clip, width=1.0\columnwidth]{simulation} \caption{Simulation setup. } \label{fig:simulation} \end{center} \end{figure} In the numerical model, the flow is assumed to be axisymmetric. The 2D computational domain is shown in figure \ref{fig:simulation}. The gravitational acceleration is along the $z$ direction. The water is injected into the domain from the left and the inlet flow rate $Q$ is kept the same as in the experiment. The thickness of the nozzle wall is ignored in the model. The ratio between the inner and outer radii of the nozzle in the experiment is 0.75. It has been shown by \citet{Ambravaneswaran_2002a} that the nozzle wall thickness can affect the drop formation dynamics when the flow rate is high. For the present problem, a very small flow rate has been used. According to the experimental results of \citet{Zhang_1995a} for similar small flow rates, the effect of the wall thickness becomes negligible if the ratio of the inner to the outer radii of the nozzle exceeds 0.2. The ratio in the present experiment is significantly larger than the critical value and thus the effect of nozzle wall thickness on the drop formation can be ignored. Furthermore, a solid block is added above the nozzle, see figure \ref{fig:simulation}. The boundary condition of the volume fraction $C$ at the solid boundary is $\partial C/\partial n=0$. \tcbl{The reason for adding the solid block is to pin the contact line, where water, air, and solid meet, at the block corner. In the experiment, the interface is pinned at the outer perimeter of the nozzle. By setting the distance between this pinned point to the $z$--axis as $R_0$, the model and the experiment exhibit the same $Bo$. In both experiment and simulation, the contact angle varies slightly when pinching occurs, and the contact line remains pinned during the drop formation process. } Thanks to the adaptive mesh, a computational domain that is significantly larger than the drop size can be used. As a result, the effect of boundaries on the drop can be eliminated. The length of the domain is $L_z=200\ \mathrm{mm}=250 R_0$ and the height is $L_x=6.4\ \mathrm{mm}=8 R_0$. The axisymmetric boundary condition is invoked at the bottom of the domain. The inflow \tcbl{(Dirichlet velocity and Neumann pressure conditions)} and outflow \tcbl{(Dirichlet pressure and Neumann velocity conditions}) BC's are applied to the left and the right of the domain. The top boundary is considered as a slip wall. The minimum cell size used in the simulation is determined by the maximum mesh refinement level, $L$, namely $\Delta_{\min}=L_x/2^L$. Different refinement levels have been tested and the grid-refinement results are to be shown in the next section. The time step is computed based on the restriction from the advection, viscous, and surface tension terms in the governing equations. For the present problem, the time step restriction is mainly from the surface tension due to the small capillary number \cite{Ling_2016b}. \subsection{Experiment} \begin{figure} \begin{center} \includegraphics [width=8.6 cm]{setup} \caption{Experimental setup.} \label{fig:setup} \end{center} \end{figure} Figure \ref{fig:setup} shows the experimental setup to investigate the formation and the fall of the water drop using high-speed imaging. A stainless steel nozzle with sharp-edged exit surface was used, and its inner and outer radii are 0.6 mm and 0.8 mm, respectively. Water drops were then generated from the nozzle either by the pressure from a constant-height reservoir, or by the pressure from a syringe pump (KDS210, KD Scientific). High-speed camera (NAC Memrecam GX-1) with frame rates varying from 100 to 5,000 fps (frame per second) have been used to capture the shape of the drop. The spatial resolution and exposure time varies in the range of 20-70 $\mu$m/pixel and 20-200 $\mu$s, respectively. To minimize the influence of vibrational disturbances and temperature variations in the pinching of the drop, all experiments were conducted on an anti-vibration table in the isolated corner of a basement with air-conditioning. For better visualization, uniform illumination was achieved by placing a diffuser in front of the 100W white light LED lamp. \tcbl{To avoid the heating effect, the LED light was placed 1.5 meter away from the observation area and the LED light was turned on only during recording.} The images obtained by high-speed camera were post-processed by Matlab code to measure the geometric properties of the drop before and after detachment, such as the volume and height of the pendant drop, the radius of the neck, the eccentricity of the falling drop. Surface tension was measured by Du No\"uy ring method. Temperature (25 $^{\circ}$C) and density of the test liquid were measured by a temperature recording device (Chino AH3760 with Pt100 sensor) and a mass-volume method, respectively. Liquid viscosity was determined using a rotational viscometer (Brookfield DV-II). \section{Results for drop formation} \label{sec:results_formation} The focus of the present study is on the oscillation and falling dynamics after the drop is detached from the nozzle. Nevertheless, since we aim at unveiling the effect of drop formation on the subsequent shape oscillation, the results for the drop formation will be first presented and validated against theory and experiment. \subsection{General process and time scales} A sequence of images of the drop obtained from high-speed imaging are shown in figure \ref{fig:evolution} to depict the process of drop formation and subsequent fall in quiescent air. The overall process can be generally divided into three phases: growth, pinch-off, and fall. When the drop falls, it deforms in an oscillatory manner. It should be noted that the time scales for different phases in the process are different. (The time differences between the images shown in figure \ref{fig:evolution} are not even.) The growth of the drop is very slow compared to the other two phases, simply due to small flow rate at the nozzle inlet. It takes about one minute for the pendant drop to grow to the critical volume. In contrast, when the drop volume reaches the critical volume, the developing and pinching of the neck of the pendant drop evolve at a very fast speed, taking about a millisecond. When the detached drop falls in air, the dominant oscillation period is about $\tau_{osc}=21.5$ ms. This multiple time-scale nature makes the investigation challenging for both experiment and simulation if one aims at capturing the whole process from drop formation to fall. To overcome this challenge, \tcbl{multiple experiments with different frame rates were conducted} to capture different phases. For the growth of the drop, a low frame rate, 100 fps was used. For the pinching and oscillation, a high frame rate, 5000 fps was used. The theoretical solution of a static pendant drop that is close to the critical volume is used to initialize the simulation. The initial velocity throughout the domain is taken to be zero since the pendant drop is quasi-static. For the most refined simulation ($L=11$), the simulation starts at the time that is 394 ms before the drop detaches, namely $t_d-t=394$ ms. \begin{figure} \begin{center} \includegraphics [width=1.0\columnwidth]{evolution.jpg} \caption{Overall process of a drop dripping from a nozzle: growth, pinch-off, and fall shown by high-speed camera images.} \label{fig:evolution} \end{center} \end{figure} \subsection{Drop growth as a pendant drop} \label{sec:growth} \tcbl{Due to the small Weber and Ohnesorge numbers} in the present problem, the effects of liquid inertia and viscosity on the drop formation are negligible compared to that of the surface tension. As a consequence, the drop grows quasi-statically and follows the static pendant drop theory \citep{Padday_1973a}. For a static pendant drop, its shape is axisymmetric and the surface tension and the gravitational force are in equilibrium. The shape of the drop can then be obtained by solving a set of ordinary differential equations, which are given in Appendix \ref{sec:pendant_drop_theory}. The integration of the equations is from the bottom of the pendant drop as shown in figure \ref{fig:static}, (a new coordinate $(x',z')$ is used,) with the curvature at the drop bottom $\kappa_b$ as the boundary condition. For each $\kappa_b$, there are multiple solutions that satisfy a given Bond number \citep{Coullet_2005a}. Here only the two solutions which give drop volumes which are close to the critical volume are relevant. The two solutions are schematically shown in figure \ref{fig:static}. While for solution A the angle between the interface and the nozzle exit is less than 90\textdegree, for solution B the angle is larger than 90\textdegree. \begin{figure} \begin{center} \includegraphics [width=0.5\columnwidth]{static} \caption{Sketch of the axisymmetric quasi-static pendant drop profile.} \label{fig:static} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics [width=1.0\columnwidth]{volume} \caption{Comparison of static pendant drop theory with experimental and simulation results: (\textit{a}) drop volume $V$ versus drop height $Z_{\max}$; (\textit{b}) drop contours at different times. \tcbl{The critical volume shown in (a) is $V_{crit} = 27.05$ mm$^3$.} } \label{fig:volume} \end{center} \end{figure} The volume ($V$) and the height ($Z_{\max}$) of the pendant drop can be measured from the experimental and numerical results, which are shown along with the pendant-drop theoretical predictions in figure \ref{fig:volume}(a). It can be observed the experimental and theoretical results agree very well before the drop volume reaches the critical volume. The critical volumes measured from the experiment and simulation are both about 27.10 mm$^3$, which is very close to the value predicted by the pendant-drop theory, \textit{i.e.}, $V_{crit}=27.05$ mm$^3$. The $V$-$Z_{max}$ curves obtained in the experiment and simulation appear to be flat when pinching occurs. During the pinching process, the rapid increase of $Z_{max}$ is due to the redistribution of volume within the drop, as a result, the drop volume increase in the fast pinching process is negligibly small. The initial conditions for the simulation are taken from the theoretical result for $V=26$ mm$^3$. At the time, the angle between the interface and the nozzle exit is less than 90\textdegree\ (case A in figure \ref{fig:static}). If the inflow at the nozzle is stopped, the pendant drop will remain stable. The simulation results of the $V$-$Z_{\max}$ curve at later times match very well with both the experiment and theory, see figure \ref{fig:volume}(a). This validates the present simulation setup in capturing the drop growth. \tcbl{The experimental and numerical results deviate from the theoretical solution beyond the critical $z_{\max}$, since the latter represents unstable static solution which will not be observed in reality. } The excellent agreement between the experimental and theoretical results are also achieved in the contours of the drop at different times, as shown in figure \ref{fig:volume}(b). The experimental results are shown to match very well with the theoretical predictions at 39.4s, 29.4s and 9.4s before pinching occurs. The simulation is started at $t_d-t=394$ ms ($V=26$ mm$^3$). The simulation result at $t_d-t=200$ ms (after the simulation has been run for a physical time of 194 ms) is compared to the theoretical and experimental results. The theoretical, numerical and experiment curves all collapse perfectly, which again validates the present experimental and simulation approaches. \subsection{Pinching and drop detachment} \label{sec:pinching} As the pendant drop reaches the critical volume, it becomes unstable. The interface evolution during the pinching process for both simulation and experiment is shown in figure \ref{fig:dynamic}. The numerical and experimental results generally agree very well for the formation of the neck and the liquid bridge, the detachment of the primary drop, and finally the formation of the secondary drop. In figures \ref{fig:dynamic}(c)--(d), there exists a small discrepancy in the drop contours between experiment and simulation. This is due to the concave shape at the top of the drop, which cannot be seen from the experimental images taken from the lateral side. To better elucidate the pinching dynamics and the formations of the primary and secondary drops, temporal evolutions of the pressure and velocity fields are plotted in figure \ref{fig:pinching}. As the drop reaches the critical volume, a ``neck" develops between the nozzle and the pendant drop. The minimum radius of the neck ($x_{\min}$) decreases rapidly over time. As a consequence, the pressure in the neck, which is inversely proportional to the neck radius, also increases rapidly. The pressure difference between the neck and the regions above and below the neck expels the liquid away from the neck with increasing velocity, see figures \ref{fig:pinching}(a)--(d). The thinning process of the neck contributes to the elongation of the pendant drop and the neck turns into a thin liquid bridge. The minimum radius is initially located at about the center of the liquid bridge. The stagnation point is slightly higher than the location for the minimum neck radius. As the liquid accelerates from the stagnation point toward the attached liquid and the primary drop, see \textit{e.g.}, figure \ref{fig:pinching}(c), the radius near the top and bottom of the bridge decreases faster than that near the center. The local radius minimum then shifts from the center to the bottom of the liquid bridge, see figure \ref{fig:pinching}(d). Pinching first occurs at the location for the new minimum radius near the bottom of the bridge, detaching the primary drop. After the pinch-off, the liquid filament rapidly retracts upward from the pinch-off location. Due to similar effect of the inertia of the upward fluid motion, a new local minimum of radius develops at the top of the liquid bridge, see figure \ref{fig:pinching}(g), where soon another pinching happens. At the end, the liquid bridge is separated from the attached liquid and the primary drop, forming the secondary drop (see figure \ref{fig:pinching}(h)). A closeup of the secondary drop is also provided to show the high-resolution mesh used to resolve the pinching process. The dynamics of drop formation shown in the present experiment and simulation are consistent with former studies of drop formation \citep{Zhang_1995a, Wilkes_1999a, Popinet_2009a} and filament breakup \citep{Castrejon-Pita_2015a}. \begin{figure} \begin{center} \includegraphics [width=1.0\columnwidth]{dynamic} \caption{Comparison between the numerical (solid lines) and experimental (dashed lines) results for the process of drop detachment.} \label{fig:dynamic} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics [width=0.9\columnwidth]{pinching.jpg} \caption{Evolution of the velocity (left) and pressure (right) fields for the formation of primary and secondary drops. Skewed color scales have been used for better visualization. } \label{fig:pinching} \end{center} \end{figure} \tcbl{ Since for the present problem $Oh\ll1$, the pinching process is mainly in the inertial regime where the temporal evolution of the minimum radius follows the 2/3 power law: $x_{\min}\sim (t_d-t)^{2/3}$. As the new minimum radius shifts from near the center toward the two ends of the liquid bridge, the downward flow from the neck to the primary drop slows down, reducing the local Reynolds number and bringing the pinching dynamics into the viscous regime \citep{Castrejon-Pita_2015a}, where $x_{\min}\sim (t_d-t)$. The temporal evolution of $x_{\min}$ for both experiment and simulation is plotted in figure \ref{fig:thread}(a), where the two power-law scalings and the transition from the inertial to viscous regimes can be clearly identified. As the viscous regime cannot sustain to the eventual breakup, another transition from the viscous regime to the inertial-viscous regime will occur in the pinching process at an even smaller time scale. Nevertheless, that time scale for the present problem with such a small $Oh$ is hard to resolve by simulation. Yet ignoring the inertial-viscous regime seems to introduce little effect on the formation of the primary drop. } \tcbl{ The elongation of the drop due to the pinching process is measured and shown in figure \ref{fig:thread}(b). Again the numerical and experimental results agree very well. When the primary drop detaches from the liquid bridge, the drop height is about $z_{\max}/R_0=7.35$. Similar experiments of dripping water drops by \citet{Zhang_1995a} showed that $z_{\max}/R_0=9.92$ and 5.58 for nozzle radius $R_0=0.4$ and 1.6 mm, respectively. In the present study, $R_0=0.8$ mm, so the drop height at the detachment time is in a good agreement with the experimental results. } \tcbl{Due to the low liquid viscosity in the present problem, when the liquid rushes from the neck toward the to-be-formed drop, the interface overturns before pinch-off occurs \citep{Day_1998a,Chen_2002a}. The overturning of the interface at the bottom of the liquid bridge can be identified with a careful look at figure \ref{fig:pinching}(e). A closeup of the interface near the pinch-off location is presented in figure \ref{fig:thread}(c) to better show the overturning interface. The simulation results are shown to approach the self-similar solution given by \citet{Day_1998a}. For the same minimum radius $x_{min}/R_0=0.0028$, the overturning interface obtained in the present simulation agrees well with the inviscid flow result \citep{Day_1998a}. } The excellent agreement between the simulation, experiment, and theoretical results for both drop growth and detachment fully affirms that the drop formation is well captured and its effect on the subsequent fall of the drop has been faithfully incorporated in the present study. \begin{figure} \begin{center} \includegraphics [width=0.99\columnwidth]{thread} \caption{Temporal evolution of (a) the minimum radius $x_{min}$, (b) the drop height $z_{max}$, and (c) the interface profiles near the pinching location prior to drop breakup. The \tcbl{dotted and dash-dotted} lines in (a) indicate the $(t_d-t)^{2/3}$ and $(t_d-t)$ power laws for the inertial and viscous regimes, respectively. The error bars on the experimental data in (b) are smaller than the line thickness and thus are not plotted. The simulation results shown in (c) approach the inviscid self-similar solution provided by \citet{Day_1998a}.} \label{fig:thread} \end{center} \end{figure} \section{Results for shape oscillation} \label{sec:results_oscillation} \subsection{Validation studies for oscillation and falling dynamics} When the drop is detached from the nozzle, the drop shape is elongated and out of equilibrium. Under the action of surface tension, the drop starts to deform and oscillate. The eccentricity of the drop, defined as the ratio between the height ($b$) and the width ($a$) of the drop, $e=b/a$, is a common parameter to characterize the shape deformation of an oscillating drop. \tcbl{The height $b$ is defined as the difference between the minimum and maximum z-coordinates of the droplet surface and thus does not account for the concave shape near the top of the drop shown in Fig.\ \ref{fig:dynamic}.} The temporal evolutions of $e$ obtained from simulations with different meshes are compared with the experimental measurement in figure \ref{fig:oscillation}. It is observed that the second mode dominates the oscillation of $e$ and the time period agrees well with that for the second mode of Lamb, $\tau_{2,Lamb}$. Therefore, $\tau_{2,Lamb}$ is taken to be the reference time scale for drop oscillation, namely $\tau_{osc}=\tau_{2,Lamb}$, and in figure \ref{fig:oscillation} time is normalized by $\tau_{osc}$. This indicates that the falling drop retains similar dominant frequency (or periods) as for a free drop. This observation is consistent with the former studies \citep{Lalanne_2013a, Staat_2017a, Bergeles_2018a}. The angular frequency for the $n^{\mathrm{th}}$ spherical harmonic mode for small-amplitude oscillations of a free, viscous, and incompressible drop was derived by \citet{Lamb_1932a}, which is given as \begin{align} \omega_{n,Lamb}^2 = \frac{(n-1)n(n+1)(n+2)\sigma}{[(n+1)\rho_l + n \rho_f] R_d^3 }\,. \label{eq:Lamb_freq} \end{align} The frequency is $f_{n,Lamb}=\omega_{n,Lamb}/(2\pi)$ (for convenience $\omega$ is simply referred to as ``frequency" in the rest of the paper) and the time period is $\tau_{n,Lamb} = 1/f_{n,Lamb}=(2\pi)/\omega_{n,Lamb}$. For the second mode, the angular frequency is $\omega_{2,Lamb}=292$ s$^{-1}$, and the oscillation period $\tau_{2,Lamb} = 21.5$ ms. The Lamb frequencies for other modes, $\omega_{n,Lamb}$, for the present drop size are listed in table \ref{tab:sph_mode}. The simulation results for all the three mesh refinement levels agree well with the experimental data at early times as shown in figure \ref{fig:oscillation}(a), though the results for the coarser meshes deviate from the experimental data at later times. For example, the curve for $L=9$ becomes different from the experimental data at about \tcbl{$(t-t_d)/\tau_{osc}>4.6 $}. For the most refined case $L=11$ ($\Delta_{\min}\approx 3$ \textmu m and $R_d/\Delta_{\min} \approx 595$), the numerical and experimental results match remarkably well in the time range ($(t-t_d)/\tau_{osc} \lesssim 8$) considered in the present study, indicating that $L=11$ is necessary and adequate to resolve the present problem. \begin{figure} \begin{center} \includegraphics [width=1.0\columnwidth]{oscillation} \caption{Temporal evolution of the drop eccentricity for experiment and simulation. Here, the eccentricity is defined as $e = b/a$, where $b$ and $a$, as indicated, represent the height and width of the drop, respectively. The simulation results for different maximum mesh refinement levels ($L$) are compared to the experimental data in figure (a) and a closeup for $0<(t-t_d)/\tau_{osc}<1$ is given in figure (b). The corresponding minimum cell size $\Delta_{\min}$ for $L=11$, 10, and 9 are 3.12, 6.25, and 12.5 \textmu m, respectively. } \label{fig:oscillation} \end{center} \end{figure} A closeup of the eccentricity evolution for $0<(t-t_d)/\tau_{osc}<1$ is presented in figure \ref{fig:oscillation}(b), from which it can be observed that the simulation results agree with experiment not only for the large-scale variation set by the dominant second mode, but also for the small-scale variations induced by the high-order oscillation modes. The temporal evolution of the drop centroid position is shown in figure \ref{fig:Reynolds}(a). The simulation and experiment results again match very well. Since the falling motion of the drop is coupled with the shape oscillation, the excellent agreement in high-level details between simulation and experiment for both eccentricity and drop trajectories fully validates the simulation results for both falling and oscillation dynamics of the drop. It also confirms that the axisymmetric approximation made in the present simulation is valid up to the time range considered. The evolution of the drop velocity, plotted in dimensionless form as the drop Reynolds number, is shown in figure \ref{fig:Reynolds}(b). A dashed line is given to indicate the evolution of Re when the drop falls with no aerodynamic drag, namely undergoes a constant acceleration. In the short term, it is clear that the aerodynamic drag is small compared to the gravity force. The Reynolds number increases almost linearly, though small discrepancy can be identified for $(t-t_d)/\tau_{osc}>5$. The oscillation Reynolds number is $Re_{osc}=1/Oh_{osc}=360$. Initially $Re_d$ is smaller than $Re_{osc}$ but later overtakes and becomes larger than $Re_{osc}$. At $(t-t_d)/t_{osc}=7.9$, the drop Reynolds number, $Re_d=633$, is about 75\% larger than $Re_{osc}$. Nevertheless, it is observed that the dominant oscillation frequency for the falling drop is still well predicted by Lamb's linear theory for a free drop. \begin{figure} \begin{center} \includegraphics [width=1.0\columnwidth]{Reynolds} \caption{Temporal evolutions of the drop centroid $x$-position and Reynolds number $Re_d$.} \label{fig:Reynolds} \end{center} \end{figure} \begin{table} \centering \begin{tabular}{lccccccccc} \hline n & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ [.5em] $\omega_{n,Lamb}$ (s$^{-1}$) & 292.3 & 566.1 & 877.0 & 1223 & 1601 & 2009 & 2446 & 2909 & 3397\\ [.5em] $\omega_{n,sim}$ (s$^{-1}$) & 306.8 & 552.2 & 859.0 & 1227 & 1595 & 2024 & 2454 & 2883 & 3436\\ [.5em] $\beta_{n,Lamb}$ (s$^{-1}$) & 1.44 & 4.05 & 7.80 & 12.7 & 18.8 & 26.0 & 34.4 & 43.9 & 54.6 \\[.5em] $A_{n,0}$ & 0.10 & 0.037 & 0.022 & 0.012 & 0.0066 & 0.0031 & 0.0011 & -0.00045 & -0.0014\\[.5em] $\alpha_{n,0}$ & 0.144 & 0.0672 & 0.0393 & 0.0280 & 0.0210 & 0.0165 & 0.0135 & 0.0110 & 0.0092\\[.5em] $\phi_{n}/\tau_2$ & 0.12 & 0.08 & 0.06 & 0.04 & 0.031 & 0.024 & 0.020 & 0.017 & 0.015\\ $\beta_{n,peak}$ (s$^{-1}$) & 1.35 & 4.12 & - & - & - & - & - & - & - \\[.5em] $\beta_{n,valley}$ (s$^{-1}$) & 1.67 & 3.45 & - & - & - & - & - & - & - \\[.5em] $\alpha_{n,0,peak}$ & 0.148 & 0.0673 & - & - & - & - & - & - & - \\[.5em] $\alpha_{n,0,valley}$ & 0.141 & 0.0652 & - & - & - & - & - & - & - \\ \hline \end{tabular} \caption{Results for the spherical harmonic mode analysis for the oscillation of the falling drop. \tcbl{The frequency $\omega_{n,Lamb}$ and damping rate $\beta_{n,Lamb}$ are calculated following the linear theory of \citet{Lamb_1932a}. The primary frequency $\omega_{n,sim}$ is measured through the frequency spectrum of the computed Fourier-Legendre coefficients $A_n$. The value of $A_n$ at $t=t_d$ is denoted by $A_{n,0}$, while the amplitude ($\alpha_n$) of the oscillation of $A_n$ at $t=t_d$ is represented by $\alpha_{n,0}$. The initial phase of the oscillation of $A_n$ is denoted by $\phi_{n}$. The values of $A_{n,0}$, $\alpha_{n,0}$ and $\phi_n$ are obtained from simulation results for drop formation. Exponential functions are used to fit the peaks and valleys of the temporal evolution of $A_n$ for $n=2,3$. The fitted initial oscillation amplitudes and damping rates for peaks and valleys are represented by $\alpha_{n,0, peak}$ and $\alpha_{n,0,valley}$, and $\beta_{n,0, peak}$ and $\beta_{n,0,valley}$, respectively. }} \label{tab:sph_mode} \end{table} \subsection{Spherical harmonic mode decomposition} To better understand the shape oscillation of the falling drop, the instantaneous shape of the drop is decomposed into spherical harmonic modes \citep{Basaran_1992a, Lalanne_2013a}. \tcbl{The temporal evolution and frequency spectra of the mode amplitudes will be presented to analyze the effects of the drop formation, the nonlinear dynamics, and the falling motion on the shape oscillation. } The shape of an axisymmetric drop can be described by the radius of the drop contour with respect to the centroid $R$ as a function of the colatitude $\theta$ (which is taken to be zero at the top of the drop), as shown in figure \ref{fig:oscillation}. For an oscillating drop, $R=R(\theta,t)$, and can be expanded as the superposition of spherical harmonic modes as \tcbl{ \begin{align} \frac{R(\theta,t)}{R_d} = \sum_{n=0}^{\infty}A_n(t) P_n(\cos (\theta))\, , \end{align} where $P_n$ is the Legendre polynomial of degree $n$ and $A_n$ is the corresponding Fourier-Legendre coefficient, which represents the amplitude of the $n^{\mathrm{th}}$ spherical harmonic mode. Assuming incompressibility, the drop volume is fixed and $A_0=1$. Furthermore, for the analysis of the falling drop, a reference frame moving with the drop velocity is used and the origin is set as the centroid of the drop. As a result, $A_1=0$. } The temporal evolutions of $A_2$ to $A_{10}$ for the simulation results are shown in figure \ref{fig:mode}. \tcbl{A grid refinement study has been performed to confirm that the results presented are mesh independent, see appendix \ref{sec:mode_grid}. } \begin{figure} \begin{center} \includegraphics [width=0.98\columnwidth]{mode} \caption{Temporal evolutions of the Fourier-Legendre coefficients, $A_n$, for different spherical harmonic modes, comparing the simulation results with the linear free-drop model based on the theory of \citet{Lamb_1932a}, with and without the initial kinetic energy. The exponential decay of the oscillation amplitudes for the peaks and valleys are also indicated in (a) and (b) for $n=2$ and 3 modes. } \label{fig:mode} \end{center} \end{figure} The Fourier-Legendre coefficients at $t=t_d$ are denoted as $A_{n,0}$ and the values are listed in table \ref{tab:sph_mode}. The initial amplitudes for spherical harmonic modes generally decrease with the mode number $n$. The amplitudes of higher order modes ($n>2$) are finite and cannot be ignored. For example, $A_{5,0}$ and $A_{7,0}$ are about 11\% and 3\% of $A_{2,0}$. The small-scale spatial variations in the drop contours near the top of the drop (see figure \ref{fig:dynamic}(c)), which are in turn induced by the pinching process, contribute to the finite amplitudes of the high order oscillation modes. The frequency spectra of $A_n$ are shown in figure \ref{fig:mode_coupling}, from which the primary frequency for each mode can be identified. The values of the primary frequencies for simulation, $\omega_{n,sim}$, are given in table \ref{tab:sph_mode}. It can be seen that the oscillation frequency agrees well with the Lamb frequency. This conclusion is valid not only for the dominant $n=2$ mode (as already shown in figure \ref{fig:oscillation}) but also for other modes up to $n=10$. It can be observed from figure \ref{fig:mode} that, at the end of the simulation, $(t-t_d)/t_{osc}\approx7.9$, the drop Reynolds number, $Re_d=633$, is about 75\% larger than $Re_{osc}$, yet the agreement with the Lamb frequency is still very good. According to the nonlinear analysis of \citet{Tsamopoulos_1983a}, the leading term in the decrease of oscillation frequency due to finite amplitude is second order. For the dominant second mode, the initial amplitude $A_{2,0}$ is about 10\%. The correction of frequency due to nonlinear effects is about 1\%, which is quite small. This explains why the linear theory of \citet{Lamb_1932a} remains a very good approximation for present case, even though the mode amplitudes are finite. \begin{figure} \begin{center} \includegraphics [width=0.99\columnwidth]{mode_coupling} \caption{Frequency spectra of Fourier-Legendre coefficients for (a) even and (b) odd spherical harmonic modes, indicating the effect of mode coupling.} \label{fig:mode_coupling} \end{center} \end{figure} \subsection{Linear oscillation of a free viscous drop} The short-term oscillation of the drop is mainly controlled by the capillary effect, however, it is also significantly affected \tcbl{by} the drop formation, the nonlinear dynamics due to finite oscillation amplitudes, and the falling motion. To better understand these effects on the oscillation dynamics, the simulation results are compared to the linear theory of \citet{Lamb_1932a} for the linear oscillation of a free viscous drop. The Fourier-Legendre coefficients for the n$^{th}$ Lamb mode, $A_{n,Lamb}$, are given as \begin{align} A_{n,{Lamb}}(t) = \alpha_n \cos[ \omega_{n,Lamb}(t +\phi_{n})] \, . \label{eq:mode_amp} \end{align} For a viscous drop, the oscillation amplitude $\alpha_n$ decreases in time due to viscous dissipation. For small $Oh_{osc}$, the viscous damping effect causes an exponential decay of $\alpha_{n}$, \begin{align} \alpha_n(t) = \alpha_{n,0} \exp(-\beta_{n,Lamb} t) \, , \label{eq:damping} \end{align} where \tcbl{$\beta_{n,Lamb}$ is the damping rate, given by \cite{Lamb_1932a} as} \begin{align} \beta_{n,Lamb} = \frac{(n-1)(2n+1)\nu_l}{R_d^2}\, . \label{eq:damping_Lamb} \end{align} Then Eq.\ \eqr{mode_amp} can be rewritten as \begin{align} A_{n,{Lamb}}(t) = \alpha_{n,0} \exp(-\beta_{n,Lamb} t) \cos[ \omega_{n,Lamb}(t +\phi_{n})] \, . \label{eq:Lamb_model} \end{align} The viscous damping influences the oscillation frequency as $\omega_{n}^{*2} = {\omega_{n,Lamb}^{2} - \beta_{n,Lamb}^2}$. For the present problem $\beta_{n,Lamb} \ll \omega_{n,Lamb}$ (see table \ref{tab:sph_mode}), as a result, the decrease of frequency due to viscous effect is negligible. This also explains why the dominant oscillation frequency agrees so well with the Lamb frequency (Eq.\ \eqr{Lamb_freq}) as already shown in figure \ref{fig:oscillation}. In Eq.\ \eqr{Lamb_model}, there are in total four parameters, $\omega_{n,Lamb},\beta_{n,Lamb},\alpha_{n,0}, \phi_{n}$. The frequency $\omega_{n,Lamb}$ and damping rate $\beta_{n,Lamb}$, as shown in Eqs.\ \eqr{Lamb_freq} and \eqr{damping}, depend only on the fluid properties. In contrast, the initial oscillation amplitude of the Fourier-Legendre coefficient, $\alpha_{n,0}$, and is the initial phase, $\phi_{n}$, are determined by the drop formation process and the resultant post-formation state, including both the shape (surface energy) and the velocity field (kinetic energy). \tcbl{ \subsection{Effect of the initial kinetic energy in the drop} Conventionally, the surface energy contained in the initial shape is assumed to dominate the initial state of drop oscillation and the initial kinetic energy (velocity field) is usually ignored. The present study that covers both the drop formation and subsequent oscillation provides an opportunity to reexamine this assumption. } If the kinetic energy in the initial condition is ignored, \textit{i.e.}, the velocity field is zero everywhere, or a static drop with the same shape as the post-formation drop is released, then $\alpha_{n,0}=A_{n,0}$ and $\phi_n=0$ and Eq.\ \eqr{Lamb_model} becomes \begin{align} A_{n,{Lamb,surf}}(t) = A_{n,0} \exp(-\beta_{n,Lamb} t) \cos[ \omega_{n,Lamb}(t)] \, . \label{eq:Lamb_model_shapeOnly} \end{align} The results of Eq.\ \eqr{Lamb_model_shapeOnly} for the first four modes ($n=2$ to 5) are plotted in figure \ref{fig:mode}. It is clear that the model including only the surface energy in the initial state yields results that are very different from the simulation results, even though the Fourier-Legendre coefficients for the exact initial shape of the drop, $A_{n,0}$, have been used. A close examination of figure \ref{fig:mode}(a) indicates that the deviation starts right at $t-t_d=0$. The computed $A_2$ decreases faster and to a lower minimum than that predicted by the model. The decrease of $A_2$ represents that the drop deforms from the prolate (elongated) to the oblate (flattened) shapes. Therefore, the drop in simulation is flattened faster and to a larger extent compared to the model prediction. The discrepancy is due to the remaining effect of pinching dynamics and the non-uniformly distributed kinetic energy in the post-formation drop. As discussed above in section \ref{sec:pinching}, the high pressure in the liquid bridge expels fluid toward the drop (which even induces overturning of the interface at the top of drop). As a consequence, when the drop is just detached from the liquid bridge, the top portion of the drop retains a significant downward velocity, which contribute to strengthening the prolate-to-oblate deformation, in addition to the capillary effect. The results clearly lead to the conclusion that the initial kinetic energy is as important as the initial surface energy to the shape oscillation and should not be ignored. \tcbl{The key contributions of the initial kinetic energy to the shape oscillation are the amplification of $\alpha_{n,0}$ and the non-zero initial phase angle $\phi_{n}$. The values of $\alpha_{n,0}$ and $\phi_{n}$ for different modes can be obtained by fitting Eq.\ \eqr{Lamb_model} with the simulation results near $t-t_d=0$. As shown in table \ref{tab:sph_mode}, $\alpha_{n,0}>|A_{n,0}|$ and $\phi_{n}\neq 0$ are true for all the modes considered here. The amplification of $\alpha_{n,0}$ and the non-zero initial phase angle due to drop formation were also observed in the experiments of \citet{Becker_1991a}, though the physics behind them was not discussed. With the corrected $\alpha_{n,0}$ and $\phi_{n}$, Eq.\ \eqr{Lamb_model} yields a much better agreement with the simulation results for the whole time range considered, see figure \ref{fig:mode}. (Hereafter, Eq.\ \eqr{Lamb_model} with corrected values of $\alpha_{n,0}$ and $\phi_{n}$ is referred to as the linear free-drop model.) Considering the fact that the linear free-drop model still ignores the effects of falling motion and nonlinear dynamics, the agreement between the model and the simulation is quite impressive for the $n=2$ and 3 modes. } In spite of the apparent good agreement between the linear free-drop model and the simulation results for the lower-order modes ($n=2,3$), significant differences exist in the higher-order modes ($n \ge 4$). At early time (\tcbl{$t\lesssim 5\tau_{osc}$}) the falling velocity is small and thus the effect of the falling motion is negligible, the discrepancy is thus mainly due to the nonlinear effects, which are in turn triggered by the finite mode-amplitudes when the drop is formed. At later time, when the drop velocity becomes large, $Re_d > Re_{osc}$, the contribution of the falling motion to the discrepancy becomes significant. These two effects are discussed in sequence in the following sections. \subsection{\tcbl{Effect of mode coupling and energy transfer}} As summarized by \citet{Becker_1991a}, typical nonlinear effects in shape oscillation include a) the dependence of the oscillation frequency on the amplitude, b) the asymmetry of the oscillation amplitude, and c) the coupling between modes. As shown in figure \ref{fig:oscillation} the variation in frequency is small for the present case, however, the other two nonlinear effects can be clearly identified. A close look at figure \ref{fig:mode} shows that the oscillation amplitude of $A_n$ is generally asymmetric, namely the oscillation amplitudes corresponding to the peaks and valleys are different. The asymmetry of oscillation amplitude is most profound for the $n=4$ mode: the temporal evolution of $A_4$ is clearly shifted upward, see figure \ref{fig:mode}(c). (A physical explanation for the strong nonlinear effect for the $n=4$ mode is to be given later.) Similar but less obvious upward shifting in the mode amplitude evolution can also be identified for the $n=6$ and 8 modes. The asymmetric behavior is less obvious for the lower-order modes ($n=2$ and 3). To better illustrate the asymmetric behavior, the exponential function (Eq.\ \eqr{damping}) is used to fit the peaks and valleys of the temporal evolutions of $A_2$ and $A_3$. The fitted initial amplitudes and damping rates for the peaks and valleys are different as shown in table \ref{tab:sph_mode}. It is shown that $\alpha_{n,0,peak}>\alpha_{n,0,valley}$ for both $n=2$ and 3 modes. For the damping rate, $\beta_{2,peak}<\beta_{2,valley}$ while $\beta_{3,peak}>\beta_{3,valley}$. The damping rate prediction of Lamb (Eq.\ \eqr{damping_Lamb}) lies in between the damping rates for the peaks and valleys. Due to the strong non-monotonicity in the decay of the oscillation amplitude for the higher-order modes, it is infeasible to fit the amplitude with an exponential function. Another important nonlinear effect on drop oscillation is the interaction between different spherical harmonic modes through energy transfer. When energy is added or extracted from a specific mode, the oscillation amplitude of that mode will be amplified or suppressed, respectively. As a result, the decay of oscillation amplitude becomes non-monotonic, see figures \ref{fig:mode} (d--i). It is conventionally considered that the nonlinear effects arise due to a large amplitude, however, it is observed here that the nonlinear effect is stronger for the higher-order modes ($n\ge 4$) than the lower-order modes ($n=2,3$) while the amplitudes of the former are actually smaller than of the latter. This interesting behavior has also been observed in experiments and can be explained through mode coupling \citep{Becker_1991a}. For the present problem, the energy stored in the lower-order modes is significantly larger than that in the higher-order modes, see $\alpha_{n,0}$ values in table \ref{tab:sph_mode}. Therefore, when a small energy transfer between the lower-order and higher-order modes, its effect on the lower-order mode amplitude is small but it can modify the higher-order mode amplitude significantly. \tcbl{ Due to the large water-to-air density ratio in the present problem, the Lamb frequency is similar to the Rayleigh frequency \begin{align} \omega_{n,Rayleigh}^2 = \frac{(n-1)n(n+2)\sigma}{\rho_l R_d^3 }\,. \label{eq:Rayleigh_freq} \end{align} An important feature of the Rayleigh frequency is that $\omega_2$ and $\omega_4$ are commensurate ($\omega_4=3\omega_2$), see table \ref{tab:sph_mode}. As a result, there exist a resonant effect in the coupling between the $n=2$ and $n=4$ modes \citep{Tsamopoulos_1983a,Natarajan_1987a}. As the $n=2$ mode is the dominant mode that contains the most of the oscillation energy, the $n=4$ mode is modulated significantly due to the resonant energy transfer between the two modes. This explains why the nonlinear effect is always the most intense for the $n=4$ mode. } \tcbl{ The effect of mode coupling is also shown in the frequency spectra of the spherical harmonic mode amplitudes $A_n$, see figure \ref{fig:mode_coupling}. While the linear free-drop model yields a single frequency for each mode, $\omega_{n,Lamb}$ (indicated by the vertical lines), the spectra of computed $A_n$ show multiple frequencies for modes $n>2$. For the fundamental $n=2$ mode, only the primary frequency $\omega_2$ is observed. (Other smaller peaks in the $A_2$ spectrum correspond to the even number times of the primary frequency, such as $2\omega_2$, $4\omega_2$, $6\omega_2$.) For a given mode $n>2$, the spectrum shows a primary frequency that agrees well with $\omega_{n,Lamb}$, and also multiple secondary frequencies corresponding to other modes ($\omega_m$ with $m\ne n$) which interact with the $n^{th}$ mode. In the spectrum of $A_4$, secondary frequencies $2\omega_2$ and $4\omega_2$ are observed, (note the small difference between $4\omega_2$ and $\omega_5$,) which is another evident for its strong coupling with the $n=2$ mode. A close look indicates that the $A_6$ spectrum also shows similar secondary frequencies at $2\omega_2$ and $4\omega_2$. Former studies have shown that an initial second-mode deformation will excite even modes due to mode coupling \citep{Tsamopoulos_1983a, Basaran_1992a}. Therefore, though the coupling between the dominant $n=2$ mode and other higher-order even modes like $n=6$ is not as strong as with the $n=4$ mode, their spectra also show the influence from the second mode. } Furthermore, a drop with initial finite-amplitude deformation of odd modes will transfer energy to the fundamental $n=2$ mode and excite the oscillation of the latter \citep{Basaran_1992a}. In figure \ref{fig:mode_coupling}, a secondary frequency of $\omega_2$ is observed in the spectra of the odd modes $n=3,5,7,9$. Due to the resonant coupling between the $n=2$ and 4 modes, the oscillation energy from the odd modes can also be transfered to the $n=4$ mode through the intermediary $n=2$ mode. As results, the spectra of the odd modes also show a secondary frequency $\omega_4$. Finally, another commensurate relation exists between the $n=5$ and 8 modes, namely $\omega_8=2\omega_5$, and therefore, there exists a resonant coupling between the two. That explains why a secondary frequency $\omega_8$ arises in the spectrum of $A_5$. It can be seen from figure \ref{fig:mode} that, although the decay in oscillation amplitude is non-monotonic due to mode-coupling, the viscous damping rates are generally consistent with Lamb's prediction (see the results for the linear free-drop model). However, the $n=8$ mode seems to be an exception, the decay of oscillation amplitude is slower than the linear free-drop model, which is due to the resonant coupling between the $n=5$ and 8 modes. \subsection{Effect of falling motion} The effect of the drop fall on the shape oscillation is initially small, yet as the drop falling velocity increases in time, its impact on drop oscillation is enhanced. The influence of the falling motion on the shape oscillation can be identified through the asymmetric oscillation amplitude. The asymmetric amplitude in $A_n$ (such as $n=4$) for $(t-t_d)/\tau_{osc}\lesssim 4$ is due to the nonlinear effect. If the drop does not fall, then as the oscillation amplitude decreases with time, the nonlinear effect will become weaker and the level of asymmetry will also decrease over time. For the falling drop considered here, it is observed in figure \ref{fig:mode}(c) that the difference between the peak and valley amplitudes decreases initially but then remains at a similar level for $(t-t_d)/\tau_{osc}\gtrsim 4$, which is due to the interaction between the drop and the external flow induced by the falling motion. In the long term when the drop reaches its terminal falling velocity, (the drop can reach a fixed shape \citep{Feng_2010a} or still oscillate \citep{Helenbrook_2002a} depending on $Re_{d,\infty}$ and $We_{d,\infty}$), the balance between surface tension and shear stress induced by the external flow results in a non-spherical equilibrium drop shape which exhibit non-zero mode amplitudes ($A_{n,eq}\ne 0$). Although the time period considered here is far from the equilibrium state, the shear stress induced by falling motion already has an impact on the drop shape and enhance the asymmetry in oscillation amplitudes. The asymmetric effect is reflected as an upward shit of $A_n$ for the higher-order even modes ($n=4,6,8,10$) and is negligibly small for higher-order odd modes ($n=5,7,9$). For the lower-order modes ($n=2,3$), the oscillation amplitudes are large and thus the capillary effect dominates. Therefore, the effect of falling motion is less profound. There also exists an energy transfer between the falling motion and the shape oscillation. It can be observed from figure \ref{fig:mode}(c) that for $(t-t_d)/\tau_{osc}\gtrsim 4$, the oscillation amplitude decays much slower than the linear free-drop model. The energy dissipated by viscosity is compensated by the energy from the falling motion. Similar slower decay in oscillation amplitude for $(t-t_d)/\tau_{osc}\gtrsim 4$ can also be observed in figures \ref{fig:mode}(d--i) for other high-order modes. \section{Results for the transient flow field } \label{sec:results_transient} The multi-mode oscillation of the falling drop is accompanied by a complex transient velocity field around the drop, see figure \ref{fig:multimode}. The snapshot shown here is taken soon after the drop is formed, at $(t-t_d)/\tau_{osc}=0.5$, \tcbl{from the simulation results}. The inward and outward motions of the interface can be observed from the velocity vector field. The oscillating motion of the interface induces swirling motion of the fluid near the drop, which can be visualized by the vorticity ($\Omega$) field, as shown in the right half of figure \ref{fig:multimode}. \begin{figure} \begin{center} \includegraphics [width=0.75\columnwidth]{multimode} \caption{\tcbl{Simulation results for }the velocity (left) and vorticity (right) fields around the drop at $(t-t_d)/\tau_{osc}=0.56$. } \label{fig:multimode} \end{center} \end{figure} \tcbl{ \subsection{Asymptotic limits} } To better understand the development of the flow field for the falling and oscillating drop, we first look at the two asymptotic limits: 1) the case when the drop is freely oscillating but not falling, and 2) the case when the drop is falling but without oscillation. \begin{figure} \begin{center} \includegraphics [width=0.9\columnwidth]{streamline_sketch} \caption{Schematics of the flow field for (a) a drop that is oscillating without falling motion, and (b) a drop that is falling without oscillation. Figure (a) is adapted from our simulation of a free drop undergoing only second mode oscillation. In figure (b) the streamlines are sketched based on the simulation results by \citet{Feng_2010a} for $Re_d=200$ and $We_d=1$.} \label{fig:streamline_sketch} \end{center} \end{figure} A representative flow field around an freely oscillating drop is shown in figure \ref{fig:streamline_sketch}(a). The simple case shown here contains only the second mode. As a response to the oscillation, two vortices are formed outside the drop with opposite rotation directions. The directions of the two vortices change within the oscillation cycle. When higher-order modes exist, more vortices will arise as can be seen in figure \ref{fig:multimode}. As the drop falls, it accelerates and the relative velocity between the drop and the surrounding air increases in time until the terminal velocity is reached. When the drop Reynolds and Weber numbers are small, the drop will eventually reach a steady state. For this limiting case where the drop is falling without oscillation, the internal flow pattern is dictated by the external shear. In the Stokes limit, the drop shape will remain spherical and the flow circulation inside the drop is known as Hill vortex \citep{Hill_1894a}. For finite but small Reynolds and Weber numbers, the drop will not be perfectly spherical but the internal flow remains similar to Hill vortex \citep{Feng_2010a}. A representative flow field for a falling drop without oscillation is shown in figure \ref{fig:streamline_sketch} (b), which is sketched based on the simulation results of \citet{Feng_2010a} \tcbl{for $Re_d=200$ and $We_d=1$. There exist only one vortex, similar to the Hill vortex, inside the drop.} \tcbl{ \subsection{Flow patterns during one oscillation cycle} } The interplay between the falling motion and the shape oscillation creates a complicated transient flow which is different from either of the two limiting cases. The evolution of the flow is illustrated with streamlines in the drop reference frame in figure \ref{fig:streamline}. Since the second mode is dominant, the temporal variation of the flow pattern generally follows the cycle of the second-mode oscillation. The time range covered in figure \ref{fig:streamline} is $(t-t_d)/\tau_{osc}\approx 5$ to 6, namely representing the sixth oscillation according to the second mode. The drop deforms from its prolate (elongated in $z$-direction) to oblate (flattened in $z$--direction) shapes in figures (a)-(c), reaching the most oblate shape at $(t-t_d)/\tau_{osc}\approx 5.30$. Then the drop returns back to the prolate shape from (c)--(g), until a new cycle starts. \begin{figure} \begin{center} \includegraphics [trim=0cm 0 0cm 0,clip, width=1.0\columnwidth]{streamline} \caption{Flow field near the oscillating and falling drop for $(t-t_d)/\tau_{osc}$ from 5 to 6. The characteristic length scales for the wake geometry, including the wake length $l_1$, the distance between the wake-vortex center and the axis $l_2$, and the distance between the wake-vortex center and the top of the drop $l_3$, are measured. } \label{fig:streamline} \end{center} \end{figure} When the drop deforms from the prolate to the oblate shapes, see figures \ref{fig:streamline}(a)-(b), the streamlines inside the drop are quite similar to those for the second-mode free oscillation. A stagnation point is formed when the fluid moves from the two poles toward the center. In the ground reference frame, the fluid velocity at that stagnation point is identical to the mean falling velocity of the drop. A close examination further shows that the stagnation point does not generally overlap with the centroid. In this time range, the external flow going over the drop is already strong enough to overcome the rotational flow induced by drop oscillations, therefore, the vortices outside the drop that are seen in the free oscillation (see figure \ref{fig:streamline_sketch}(a)) become invisible. At the colatitude $\theta$ about 45 and 135 degrees, the streamlines inside the drop align well with those outside. The internal flow corresponding to the prolate-to-oblate oscillation is enhanced by the external flow. After the drop reaches the most oblate shape and starts to deform back (figures \ref{fig:streamline}(c)-(g)), the internal flow field becomes very different from that for the free oscillation shown in figure \ref{fig:streamline_sketch}(a). For the freely-oscillating drop, while the drop deforms from the oblate to the prolate shapes, the flow moves from the lateral side to the the stagnation point and then bifurcates toward the two poles (see figure \ref{fig:streamline_sketch}(a)). However, for the falling drop, as the original internal flow due to prolate-to-oblate oscillation is strengthened by the external flow, the oblate-to-prolate oscillation fails to reverse flow direction near the stagnation point. Indeed, the flow direction near the stagnation point does not change through the oscillation cycle. While the interface at the lateral side of the drop retracts toward the axis, the flow near the stagnation point still tries to move toward the lateral side. As a consequence, a saddle point (a saddle curve due to the axisymmetric geometry) is formed, which in turn induces two vortices (vortex tubes in the axisymmetric geometry) within the drop, see figure \ref{fig:streamline}(c). As the drop continues to deform towards the prolate shape, the saddle point is further pushed toward the $z$--axis, so are the two vortices. Furthermore, as the internal circulations near the top and bottom of the drop are not aligned with the wake and the external flow, see figure \ref{fig:streamline}(d), roller vortices are formed outside the drop \citep{Bergeles_2018a}. When the drop becomes more prolate, the two vortices inside are further flattened. At a certain point, see figure \ref{fig:streamline}(e), the internal vortex near the top of the drop splits into two. After reaching the most prolate shape, the drop starts to deform back toward the oblate shape. In this process, as shown in figure \ref{fig:streamline}(g), the two vortices inside the drop near the axis become invisible. However, they still exist, as will be shown later with vortex-identification techniques. It is just that the potential flow induced by the drop oscillation is so strong that, the local swirling motion cannot be shown by streamlines. Two new transient vortices are formed inside the drop near the lateral side, which vanish very soon. Then the internal flow pattern returns to the form similar to the beginning of the cycle. Within the time range considered the drop oscillation is still quite strong, \textit{e.g.}, the second-harmonic-mode amplitude remains larger than 0.1 as shown in figure \ref{fig:mode}. As a result, the fluid inertia due to oscillation plays a significant role in the transient flow inside the drop. It is important to note that the internal flow pattern observed here is substantially different from Hill vortex, which corresponds to the long-term behavior when the drop oscillations are damped. In particular, the two vortices formed during the oblate-to-prolate process rotate in opposite directions compared to the corresponding external flows. Roller vortices are then formed in between the internal and external flows to satisfy the fluid kinematics. \tcbl{ The formation of the saddle point during the oblate-to-prolate deformation is an important feature, which is due to the different directions of the flows induced by the external shear and the shape oscillation. Therefore, the Strouhal number, $Sr=u_{osc}/u_{ic}$, can be defined to characterize the formation of the saddle point, where $u_{osc}$ and $u_{ic}$ represent the characteristic velocities for the internal flows induced by the shape oscillation and by the external flow, respectively. While $u_{osc}$ can be estimated as $u_{osc}\approx a_2 \omega_2$, where $a_2=A_2 R_d$ and $\omega_2$ are the oscillation amplitude and frequency corresponding to the dominant second mode, $u_{ic}$ can be approximated as $u_{ic}\approx u_d \nu_{ic}$, where $\nu_{ic}$ is the internal circulation intensity \cite{Feng_2010a}. The Strouhal number can be rewritten as $Sr=a_2 \omega_2/(u_d \nu_{ic})$. When $Sr \to 0$, the droplet falls without oscillation (see Fig.\ \ref{fig:streamline_sketch}(b)). When $Sr \to \infty$ the drop oscillates without translational motion (see Fig.\ \ref{fig:streamline_sketch}(a)). For both these asymptotic limits, there is no saddle point in the flow. The saddle point will arise only when $Sr \sim O(1)$, namely when $u_{osc}$ and $u_{ic}$ are comparable. } \subsection{Wake topology evolution} The characteristic length scales for the wake geometry, including the wake length $l_1$, the distance between the wake-vortex center and the axis $l_2$, and the distance between the wake-vortex center and the top of the drop $l_3$, are measured over an oscillation cycle $(t-t_d)/\tau_{osc}=5$ to 6 and are shown in figure \ref{fig:streamline}. The simulation results show that the wake length $l_1$ generally increases over time, which is consistent from former observations by \citet{Bergeles_2018a}. At $(t-t_d)/\tau_{osc}=5$ and 6, the drop eccentricity, $e$, are the same, while the wake length increases from $l_1/R_0=2.74$ to 3.11 due to increasing $Re_d$. The values here are larger than those obtained by \citet{Bergeles_2018a} because of the larger $Re_d$. At $(t-t_d)/\tau_{osc}=5$, $Re_d=416$ and $l_1/R_0=2.74$, compared to $l_1/R_d=2.2$ for the maximum $Re_d=273$ in the former study \citep{Bergeles_2018a}. Furthermore, due to the higher resolution in the present simulation, variation of $l_1$ following the dominant second mode oscillation is observed, which was not shown in the former study \citep{Bergeles_2018a}. It can be shown that $l_1$ decreases when the drop deforms from prolate to oblate shapes, and increases when the drop returns back to the prolate shape. There exits a small time lag between the temporal variation of $l_1$ and $e$ due to the inertial effect. Here $e$ reaches the local minimum at about $(t-t_d)/\tau_{osc}=5.44$ while $l_1$ does not get to the local minimum until about $(t-t_d)/\tau_{osc}=5.58$. The distance between wake-vortex center and the top of the drop $l_3$ also generally increases over time similar to $l_1$, though the increase is more gradual. As a result, its variation within the time range shown in figure \ref{fig:streamline} is mainly dictated by the drop oscillation. The amplitude increase of $l_2$ over a cycle is also small, similar to $l_3$. The difference between $l_2$ and $l_3$ is that $l_2$ is large when the drop is oblate and is reduced when the drop turns back to the prolate shape. This is because when the drop deforms toward the oblate shape, the wake-vortex center is also pulled toward the lateral side. \subsection{Vortex dynamics} It is well known that streamlines are insufficient to fully identify vortices. Galilean invariant flow properties must be used instead. The swirling-strength vortex-identification criterion \citep{Zhou_1999a}, also known as $\lambda_{ci}$-criterion, is employed here to illustrate the evolution of vortices, see figure \ref{fig:lci_vort}. The $\lambda_{ci}$-criterion has been shown to be an effective way to visualize vortices \citep{Zhou_1999a,Chakraborty_2005a}. The vorticity, though which cannot fully identify the vortices as $\lambda_{ci}$ (since $\lambda_{ci}$ excludes the contribution from strain), is also plotted here to indicate the rotation directions of vortices. The vortex rotation directions are clockwise and counter-clockwise for $\Omega<0$ (purple color) and $\Omega>0$ (green color) on the right half of the drop, respectively. \begin{figure} \begin{center} \includegraphics [trim=0 0.1in 0 0.1in,clip, width=0.85\columnwidth]{lci_vort.jpg} \caption{Evolution of $\lambda_{ci}$ (left) and vorticity (right) for the dripping drop. The vortices are visualized by the $\lambda_{ci}$ criterion. } \label{fig:lci_vort} \end{center} \end{figure} The figures are organized in such a way that the six rows represent the first, third, fourth, fifth, sixth and seventh oscillations based on the dominant second mode (see figure \ref{fig:oscillation}), as reflected by the time normalized by the dominant second-mode period, $\tau_{osc}=\tau_2$. For the first row of the figure, the drop relative velocity is small and the effect of the falling motion is negligible. The multiple small vortices outside the drop are generated due to higher-order oscillation modes (see also the velocity field in figure \ref{fig:multimode}). As time elapses, the amplitudes of the oscillations decrease in time due to viscous dissipation of the internal flow. It is shown in figure \ref{fig:mode} that the decay rate is faster for the higher-order modes. As a result, the small vortices outside the drop disappear in the second row of figure \ref{fig:lci_vort}. Only the larger vortices corresponding to the lower-order modes (\textit{e.g.}, $n\le 3$) survive. In the first two rows (the first three second-mode oscillations), there is no vortex seen inside the drop. As the falling velocity continues to increase, the influence of the external flow becomes stronger and vortices inside the drop start to arise, at about the middle of third row of figure \ref{fig:lci_vort}, ($(t-t_d)/\tau_{osc} \approx 3.5$. As explained above the formation of vortices inside the drop occurs when the drop deforms from oblate to prolate shapes and is the outcome of the interaction between drop shape oscillation and the external flow. The two internal vortices near the top and the bottom rotate in different directions, as indicated by the different colors in the vorticity plots. It can be seen from figure \ref{fig:Reynolds} that the drop Reynolds number reaches 190 at $(t-t_d)/\tau_{osc} \approx 2$ and the wake developing at the downstream side of the drop can be seen from the second row of figure \ref{fig:lci_vort}. From the subsequent rows of the figure, it can be observed that the shape and relative location of the wake vortex change periodically following the dominant second-mode oscillation. An important observation from the $\lambda_{ci}$ plots is that the vortices inside the drop indeed remain even when the drop shape changes from prolate to oblate, even though they are invisible in the streamline plots as shown in figure \ref{fig:streamline}. The potential flow induced by the prolate-to-oblate oscillation is strong and dominates the streamline pattern. Therefore, though local swirling motions exist, they can only be shown by Galilean-invariant vortex-identification scalars like $\lambda_{ci}$. From the vorticity plots, it is learned that the rotation directions of the internal vortices do not change over an oscillation cycle, even though the potential flow direction changes in the second-mode oscillation cycle, see figure \ref{fig:streamline}. On the right half of the figure the top vortex always rotates in counter-clockwise direction, while the bottom one swirls in the clockwise direction all the time. Closeups of the vortices with annotations are shown in figure \ref{fig:lci_vort_skem}. The topology of the vortices inside the drop changes within an oscillation cycle. When the drop deforms toward the prolate shape, the vortices are stretched and can even split into two pieces. During the oblate-to-prolate deformation, the vortices at the lateral side are pushed toward the axis and will eventually merge with the ones which are already there. \begin{figure} \begin{center} \includegraphics [width=0.9\columnwidth]{lci_vort_skem} \caption{Closeup of the vortices formed around the drop. Annotations are added to indicate the rotation direction.} \label{fig:lci_vort_skem} \end{center} \end{figure} \subsection{A summary of transient flow development inside the drop} With the assistance of both the streamlines and contours of $\lambda_{ci}$ and vorticity, the development of the transient flow and the vortices interaction inside the drop can be described as follows: \begin{enumerate} \item The internal flow induced by prolate-to-oblate oscillation is aligned and enhanced by the external flow. \item As the falling velocity increases, at a certain point, the oblate-to-prolate deformation fails to fully reverse the internal flow induced by its prolate-to-oblate counterpart. \item Then a saddle point (curve) arises inside the drop when the drop deforms from its most oblate shape toward the prolate shape. \item The saddle point induces two vortices rotating in different directions inside the drop. \item As the internal circulations are different from the external flows, roller vortices are formed to satisfy kinematics. \item As the drop continues to deform toward its most prolate shape, the two vortices are pushed toward the axis. (If there are vortices already near the axis, the new ones will merge with the old ones.) \item The vortices near the axis will be stretched and may split when the drop deforms toward the prolate shape. \item When the drop deforms back to the prolate shape, the two vortices remain present and the rotation directions do not change. \item Going back to (iii) and a new cycle starts. \end{enumerate} \subsection{Passive scalar transport within the drop} \begin{figure} \begin{center} \includegraphics [width=1.0\columnwidth]{T1_iso} \caption{Evolution of the tracer function distribution.} \label{fig:T1_iso} \end{center} \end{figure} It is of interest for many drop applications to know the influence of the transient flow within a oscillating drop on scalar transport inside the drop. The question of interest is whether mixing will occur if inhomogeneous fluids are injected into the drop through the nozzle. Although mixing of different fluids inside the drop is not the focus of the present study, here a passive tracer function is introduced in the simulation to illustrate the transport process within the drop. The initial value of the tracer function is set as the streamwise coordinate $z$. The evolution of the tracer field serves to reveal the accumulation effect of the transient internal flow development described above on scalar transport. The advection equation of the tracer function is only solved within the liquid phase. The Godunov advection scheme with the second-order centered estimate for the velocity gradient was used. There exists a small numerical diffusion, but due to the fine mesh used, the numerical diffusion effect on the advection process is small. The results of the tracer function at different times are shown in figure \ref{fig:T1_iso}. Before the drop detaches from the nozzle, the tracer function only varies with $z$. The tracer function here can be considered to mimic an imaginary experiment in which the fluid fed in the nozzle is dyed sequentially with blue, white, and red colors. When the neck of the pendant drop develops, the tracer function is redistributed by the vortex ring created by the Ventruri jet through the neck \citep{Hoepffner_2013a}. The tracer function in the lower part of the drop remains unchanged. The snapshots of the drop after detachment are chosen to exhibit similar eccentricity, namely similar phases in the second-mode oscillation. When the drop simply oscillates at early time ($0<(t-t_d)/\tau_{osc} <3$), the tracer function distribution varies only in $z$, similar to the initial distribution. \tcbl{The shape oscillation by itself may introduce longitudinal motion (for example by the odd modes), but will not lead to net longitudinal transport of the tracer function. This is simply because the fluid motion induced by small-amplitude oscillation is symmetric and after one oscillation cycle the scalar function distribution will return to its original state.} As the falling velocity increases, the external flow develops and interacts with the drop oscillation. Vortices arise inside the drop and they translate and interact following the drop oscillation cycle. Then stretching and folding of the fluids of different tracer function values are observed. As the top and bottom circulations are of different directions, the folding directions of the red and blue fluids are different. Though the fluids are ``mixed" inside the top and bottom portions of the drop, the two portions remain segregated most of the time. At later time, however, more complex distorted patterns of the tracer function arise, which is due to the unsteady motion of the saddle point (see figure \ref{fig:streamline}(c--f)). If the simulation was run for a longer time to allow more oscillation cycles, chaotic mixing \citep{Aref_1986a, Angilella_2003a} of inhomogeneous fluids may arise. More detailed investigation of transport phenomena will be left for our future work. \section{Conclusions} \label{sec:conclusions} The short-term transient falling dynamics of a dripping water drop has been studied. One specific case with a low inflow rate in the dripping regime is considered. The focus is on the short term behavior and the time range considered covers about eight dominant second-mode oscillations of the drop after it is formed. A high-resolution numerical simulation has been performed to investigate the oscillation and falling dynamics. Experiment under the same conditions was also conducted for validation purpose. The grid-refinement study and the excellent agreement between simulation and experiment/theory verify and validate the simulation results. Despite the low fluid inertia, the post-formation state of the drop still triggers a nonlinear oscillation. To rigorously account for the effect of drop formation on shape oscillation, the overall process including the drop growth, pinch-off, and fall, is studied. The interaction between the shape oscillation and the falling motion introduces complex oscillation dynamics and transient flow around the drop. \paragraph{Drop formation} The experimental results for the growing pendant drop, such as the relation between drop height and volume, agree well with the static pendant drop theory, which confirms that the drop development process is quasi-static and can be fully described by the static theory. This justifies the way the simulation setup by using the static pendant drop solution slightly ahead of the pinch-off time as the initial condition in the simulation. The computed drop contours for the drop growth and formation match very well with the experimental results, validating the setup of the numerical model. Though pinching dynamics is not the focus of the present study, evolutions of the velocity and pressure fields are presented to illustrate important features for low-viscosity liquid drop formation, including the shifting of the minimum radius to the two ends of the liquid bridge, the interface overturning before pinch-off occurs, and the formation of the secondary drop. The temporal evolution of the liquid bridge minimum radius shows an initial inertial regime ($(t_d-t)^{2/3}$ power law) which later transitions to the viscous regime ($(t_d-t)^{1}$ linear law). The results affirm that the drop formation is precisely captured. \paragraph{Effect of drop formation on drop oscillation} The post-formation state serves as the initial condition for the subsequent oscillation of the drop. The initial shape of the drop when it is just formed is decomposed into spherical harmonic modes. The initial mode amplitudes, characterized by the Fourier-Legendre coefficients, are found to be finite for the modes $n\le10$ considered. The pinching dynamics such as interface overturning introduces small-scale variation on the drop contour, which in turn contributes to the finite amplitudes of the higher-order modes. Furthermore, during the pinching process the high pressure in the neck expels fluids toward the to-be-formed drop, which leads to a significant downward velocity in the top region of the drop when it is just detached. The initial kinetic energy is as important as the initial surface energy contained in the drop shape, and is found to amplify the initial oscillation amplitude and to induce a phase shift in the oscillation of all the modes. By incorporating both the initial surface and kinetic energy, the linear model for a free drop oscillation yields very good predictions for the second and third modes. \paragraph{Effect of nonlinear dynamics on drop oscillation} The post-formation state of the drop triggers a moderately nonlinear drop oscillation. The oscillation amplitude for the dominant second mode is about 10\%, so the influence of finite amplitude on oscillation frequency is small for all the modes considered here. Nevertheless, typical nonlinear effects including asymmetry in oscillation amplitude and interaction between different modes are identified. The nonlinear effects are more profound for higher-order modes ($n\ge 4$) than lower-order modes ($n=2,3$). Since the majority of energy is stored in the lower-order modes, the small energy transfer between modes may be significant for the higher-order modes but will have little impact on the lower-order modes. Mode coupling is clearly reflected in the frequency spectra of the Fourier-Legendre coefficients. In the spectrum of a given mode $n$, a primary frequency that is very similar to the Lamb frequency can be identified. Furthermore, the spectrum shows secondary frequencies corresponding to different modes due to mode coupling. Due to the low viscosity of water, there exists a commensurate relation between the $n=2$ and $4$ modes, which explains why nonlinear effects are always strongest for the $n=4$ mode. \paragraph{Effect of falling motion on drop oscillation} The present results indicate that the effect of the fall on the oscillation frequency is little for the time range considered here. The oscillation frequency for the falling drop agrees well with Lamb's prediction even when the drop Reynolds number exceeds the oscillation Reynolds number for 75\%. This conclusion is true for both lower and higher order modes. The effect of the drop fall on shape oscillation lies mainly in the time evolution of the amplitudes of the various shape oscillation modes. The increasing shear stress induced by the falling motion changes the force balance with surface tension, resulting in a strengthened upward shift in oscillation amplitude for the higher-order even modes. The drop falling motion also seems to provide energy to the oscillations, and as a result, the damping in amplitude is slowed down for $(t-t_d)/\tau_{osc}\gtrsim 4$. \paragraph{Effect of drop oscillation on transient flow development} When the drop falls without oscillation, the external shear induced by the falling motion will induce the Hill vortex within the drop. For the present case, nonlinear shape oscillation interacts with the external flow induced by the falling motion, resulting in a complicated transient flow around the drop. When the drop oscillates from prolate to oblate shapes, the flow induced by the oscillation is aligned with the external flow. In contrast, for a oblate-to-prolate deformation, the flow goes against the external flow. As a result, a saddle point (curve for the axisymmetric geometry) arises in the drop, which gives rise to two counterrotating vortices. The rotating directions of the vortices remain unchanged, while the potential flow directions vary due to the dominant second-mode oscillation. The drop oscillation also influences the wake geometry. The swirling-strength vortex-identification criterion ($\lambda_{ci}$) and the vorticity are employed to better elucidate the vortex dynamics. When the drop oscillates, the vortices inside can be stretched and even split. Finally, a tracer function is introduced to demonstrate the scalar transport within the drop. Pure shape oscillation does not induce net longitudinal transport of the tracer function. Stretching and folding of the scalar function contours are only observed after vortices arise within the drop. The unsteady motion of the saddle point creates a more distorted tracer function field, which may result in a chaotic mixing of inhomogeneous fluids inside the drop. Yet a longer simulation than the present one will be required to fully verify this. \section*{Acknowledgement} This work was initiated with the support of the MOST-CNRS project. The subsequent investigation was supported by the startup fund at Baylor University. YL was also supported by National Science Foundation (NSF \#1853193). The simulations were performed on the Baylor cluster \emph{Kodiak} and the simulation results are visualized by the software, \emph{VisIt}, developed by Lawrence Livermore National Laboratory. YL would also acknowledge Dr.~S.~Balachandar for helpful discussions on vortex identification and chaotic mixing.
\section{Introduction} Scene graph generation (SGG) is a structured prediction task aiming to explicitly model objects and their relationships in an image by constructing a corresponding visually-grounded scene graph. Its uses can be found in computer vision tasks such as image captioning [1-3] and visual question answering [4-6]. Currently, the variational Bayesian (VB) [7,8] methodology is generally employed to solve the SGG tasks, in which the variational inference step aims to infer the optimum interpretations $z^*$ from the input images $x$ based on the max aposteriori (MAP) estimation principle, i.e. $z^*=\argmax_{z}p(z|x)$, while the classical cross entropy loss is usually applied to fit the underlying posterior with the ground-truth training samples. Due to the exponential dependencies among the output variables, a computationally tractable variational distribution $q(z)$ is generally used to approximate the underlying computationally intractable posterior $p(z|x)$. For tractability, $q(z)$ in SGG models [9-16] is often assumed to be fully decomposed, and the resulting VB framework is also known as the mean field variational Bayesian (MFVB) [7,8]. The associated inference procedure is also known as the mean field variational inference (MFVI) [7,8]. To leverage the superior feature representation learning capability of modern deep neural networks, the above MFVI step is often formulated using message passing neural network (MPNN) models [15-19], in which two fundamental modules are required: i) visual perception and ii) visual context reasoning [20]. The former aims to locate and instantiate objects and predicates within the input images, while the latter tries to infer their consistent interpretation. In the above formulation, due to the nature of the message passing optimization method, a classical evidence lower bound (ELBO) is often implicitly employed as the variational inference objective. However, the variational approximation inferred from such loose ELBO objective generally underestimates the underlying complex posterior [21], which often leads to inferior generation performance. To address the above issue, in this paper, we propose a novel doubly reparameterized importance weighted structure learning (DR-IWSL) method, in which a tighter importance weighted lower bound [21] is employed to replace the ELBO as the variational inference objective. A reparameterizable Gumbel-Softmax sampler [22] is applied to draw $i.i.d.$ samples from the associated distribution to compute the above lower bound. To reduce the gradient variance, we adopt a doubly reparameterized gradient estimator [23] in this paper. The resulting constrained variational inference task is solved by a generic entropic mirror descent algorithm. The proposed DR-IWSL method achieves the state-of-the-art performance on two popular SGG benchmarks: Visual Genome and Open Images V6. \section{Related Works} There are two main research directions that are currently investigated in the current SGG literature: 1) designing a feature extracting structure based on novel MPNN models [3,11,17,25,26], or different mechanisms of embedding the contextual information into the current MPNN models [12,13,15,19,25]; 2) implementing an unbiased relationship prediction via the following debiasing techniques: instance-level resampling [30], dataset resampling [27-29], bi-level data resampling [16], loss reweighting based on instance frequency [35,36] and knowledge transfer learning [32-34]. Besides the above traditional debiasing methodologies, another approach involves the use of a causal inference model [37] to remove the harmful bias from the good context bias based on the counterfactual causality. Most of the above SGG models [10,13,16,19,26,38] follow a unified MPNN-based MFVB formulation, in which the classical ELBO is implicitly employed as the variational inference objective. However, the resulting variational approximation derived from the ELBO objective generally underestimates the underlying complex posterior. Unlike the previous SGG models, the proposed DR-IWSL method applies a tighter importance weighted lower bound [21] as the variational inference objective, which is computed from multiple samples drawn from a reparameterizable Gumbel-Softmax sampler [22]. Moreover, we employ a doubly reparameterized gradient estimator [23] to reduce the variance of the associated derivatives. Instead of relying on the traditional message passing technique, a generic entropic mirror descent algorithm is used to solve the resulting constrained variational inference task. \section{Proposed Methodology} To convey effectively the innovative features of the proposed methodology, in this section, we first formulate the problem, and define the applied scoring function. The presentation then proceeds by motivating the employed Gumbel-Softmax sampler and describing the proposed doubly reparameterized importance weighted structure learning method. The adopted entropic mirror descent method is discussed in the last subsection. \subsection{Problem Formulation} In the current SGG approaches, the output scene graph consists of a list of intertwined semantic triplet structures, each of which constrains three key graph building components: object, subject, and predicate. In particular, the relationship between two interacting instances (object and subject) is referred as a predicate. Due to the exponential dependencies among the structured output variables in SGG tasks, a direct computation of the underlying posterior is generally computationally intractable. For this reason, the classical variational inference (VI) technique is often applied to approximate the above posterior. For tractability, the mean field variational inference (MFVI) [7,8] is commonly used in such SGG tasks, in which the variational distribution is often assumed to be fully decomposable. Equipped with the classical cross entropy loss in the associated variational learning step, current SGG models can be formulated in a corresponding mean field variational Bayesian (MFVB) framework [7,8]. The above MFVI step is predominantly modelled by a message passing neural network (MPNN) [15-19], consisting of two fundamental modules: visual perception and visual context reasoning. In fact, a MPNN-based MFVB framework has became the de facto state-of-the-art method for SGG. Specifically, given an input image $x$, the visual perception module aims to generate a set of instance region proposals $b_i^o\in \mathbb{R}^4,i=1,...,m$, and a set of predicate region proposals $b_j^p\in \mathbb{R}^4,j=1,...,n$, where $m$ and $n$ represent the number of instances/predicates detected in the input image. By applying a ROI pooling on the feature maps generated from the visual perception module, one can extract the associated fixed-sized latent feature representation sets $y_i^o\in \mathbb{R}^d,i=1,...,m$ and $y_j^p \in \mathbb{R}^d, j=1,...,n$ from the corresponding input image patch sets $x_i^o,i=1,...,m$ and $x_j^p,j=1,...,n$. Given a set of object classes $\mathcal{C}$ and a set of relationship categories $\mathcal{R}$, a visual context reasoning module is required to infer the resulting instance/predicate interpretation sets $z_i^o\in \mathcal{C},i=1,...,m$ and $z_j^p\in \mathcal{R},j=1,...,n$ from the above latent feature representation sets. Traditionally, ELBO is routinely applied as the variational inference objective in the above MPNN-based MFVB models. However, the oversimplified variational approximation inferred from the ELBO objective generally underestimates the underlying complex posterior [21], which often leads to inferior detection performance. To this end, in this paper, we propose a novel doubly reparameterized importance weighted structure learning method, which employs a tighter importance weighted lower bound [21] as the variational inference objective, and utilizes a doubly reparameterized gradient estimator [23] to approximate the associated derivatives. aiming to reduce the estimator variance. \subsection{The Scoring Function} Generally, SGG tasks can be formulated using probabilistic graphical models, e.g. a conditional random field (CRF) [40]. A non-negative scoring function $s_{\theta}(x,z)$ is often applied to measure the similarity or compatibility between the input variable $x$ and the output variable $z$, where $\theta$ is used to parameterize the scoring function. The associated log scoring function is often computed as follows: \begin{equation} logs_{\theta}(x,z)=-\sum_{r\in R}\psi_r(x_r,z_r) \end{equation} where $r$ represents a clique within a clique set $R$ (defined by the associated graph structure), $\psi_r$ is a corresponding potential function. Two types of potential functions are commonly used in current SGG models: the unary potential function $\psi_u$ and the pairwise or binary potential function $\psi_b$. However, the above formulation often ignores the informative global contextual information. To this end, we compute a latent global feature representation $y^g\in \mathbb{R}^d$ from the global region proposal $b^g$, where $b^g$ is obtained by the union of all the associated instance/predicate region proposals in the input image. Correspondingly, $x^g$ is the relevant global image patch of $b^g$, and $z^g$ is its interpretation. With the above definitions, by adding two types of pairwise potential terms $\psi_{b}^p(x_j^p, x^g,z_j^p, z^g)$ and $\psi_{b}^o(x_i^o, x^g,z_i^o, z^g)$, one can incorporate the global contextual information into the following applied log scoring function: \begin{equation} \begin{split} logs_{\theta}(x,z)= -\displaystyle\sum_{j=1}^{n}[\psi_u^p(x_j^p, z_j^p)+ \displaystyle\sum_{i\in N(j)}\psi_{b}^p(x_i^o, x_j^p,z_i^o, z_j^p) +\psi_{b}^p(x_j^p, x^g,z_j^p, z^g)] -\\ \displaystyle\sum_{i=1}^{m}[\psi_u^o(x_i^o, z_i^o)+ \displaystyle\sum_{j\in N(i)}\psi_{b}^o(x_i^o, x_j^p,z_i^o, z_j^p) +\displaystyle\sum_{l\in N(i)}\psi_{b}^o(x_i^o, x_l^o,z_i^o, z_l^o)+ \psi_{b}^o(x_i^o, x^g,z_i^o, z^g)] \end{split} \end{equation} where the superscripts $o$, $p$, $g$ represent the object, the predicate and the global context, respectively. $N(i)$ is the set of neighbouring nodes around the target $i$, and the latent feature representations $y$ are implicitly embedded in the above formulation. \subsection{Gumbel-Softmax Sampler} Due to the inability to backpropagate the corresponding gradients, the output discrete variables are rarely applied in stochastic neural networks. To this end, rather than producing non-differentiable samples from a categorical distribution, we employ a reparameterizable Gumbel-Softmax sampler [22] to generate differentiable samples. Suppose $z$ is the interpretation of a potential region proposal defined in terms of a categorical variable with the class probabilities $\pi^1,..., \pi^v$ (where $v$ is the vocabulary size). It is essentially encoded as $v$-dimensional one-hot vector taking values on the corner of the $(v-1)$-dimensional simplex, $\Delta^{v-1}$. Given a $v$-dimensional Gumbel noise $\sigma$, the corresponding $i$-th element of the output variable $z^i$ is computed as follows: \begin{equation} z^i=g_{\pi}(\sigma^i)=\frac{exp((log(\pi^i)+\sigma^i)/\tau)}{\sum_{j=1}^{v}exp((log(\pi^j)+\sigma^j)/\tau)},\; for\; i=1,...,v \end{equation} where $g_{\pi}$ represents the reparameterization function and $\tau$ is the softmax temperature. With the Gumbel-Softmax sampler, the output samples become one-hot vectors when annealing $\tau$ to zero. To avoid exploding gradients, $\tau$ is often annealed to a relatively low temperature instead of zero. \subsection{Doubly Reparameterized Importance Weighted Structure Learning} To avoid underestimating the underlying complex posterior, in this paper, we employ a tighter lower bound $\mathcal{L}_s$ based on $s$-sample importance weighting [21] to replace the classical ELBO as the variational inference objective. Such importance weighted lower bound $\mathcal{L}_s$ is essentially an unbiased estimator of the log partition function $logs_{\theta}(x)$ (when $s$ reaches infinity), which is defined as follows: \begin{equation} \mathcal{L}_{s}=\mathbb{E}_{z_1,...,z_s \sim q(z)}[log\frac{1}{s}\sum_{i=1}^{s}\frac{s_{\theta}(x,z_i)}{q(z_i)}] \leq logs_{\theta}(x) \end{equation} where $s$ represents the number of samples, in which each $z_i$ is an $i.i.d.$ random sample drawn from $q(z)$. $w_i=\frac{s_{\theta}(x,z_i)}{q(z_i)}$ is also known as the importance weight. $\mathcal{L}_{s}$ is at least as tight as the ELBO, and its tightness improves with the number of samples [21]. For tractability, the applied variational distribution $q(z)$ is generally assumed to be fully decomposed as: \begin{equation} \begin{split} q(z)=\displaystyle\prod_{i=1}^{m}q^o_i(z_i^o)\displaystyle\prod_{j=1}^{n}q^p_j(z_j^p) \end{split} \end{equation} where $q^o_i(z_i^o) \in \Delta ^{v_o-1}$ and $q^p_j(z_j^p) \in \Delta ^{v_p-1}$ are local variational approximations of the objects and predicates in the output scene graph, respectively. $v_o$ and $v_p$ are the sizes of vocabularies for the objects and predicates, respectively. In such MFVI scenario, the original MAP inference can be transferred into a corresponding marginal inference task, which may not be the case in general. Given a potential region proposal $b_i$, its corresponding local log marginal posterior $logp_{\theta}(z_i|x_i)=logs_{\theta}(x_i,z_i)-logs_{\theta}(x_i)$ requires us to compute the local log marginal scoring function $logs_{\theta}(x_i,z_i)=\sum_{z\backslash z_i}logs_{\theta}(x_i,z)$ and the computationally intractable log partition function $logs_{\theta}(x_i)$. In this paper, variable elimination techniques are applied to approximate $logs_{\theta}(x_i,z_i)$. Specifically, for a potential instance region proposal $b_i^o$, it is computed as follows: \begin{equation} \label{con1} \begin{split} logs_{\theta}(x_i^o,z_i^o)\propto -[\psi_u^o(x_i^o, z_i^o)+\sum_{j\in N(i)} m^{op}_{j\to i}+\sum_{l\in N(i)} m^{oo}_{l\to i}+m^{og}_{g\to i}] \\ \psi_u^o(x_i, z_i^o) = h^o_{\theta}(x_i)\cdot z_i^o,\; m^{op}_{j\to i}=\sum_{z_j^p\in \mathcal{R}}\psi^o_b(x_i^o, x_j^p, z_i^o, z_j^p)=g^{op}_{\theta}(x_i^o, x_j^p)\cdot z_i^o\\ m^{oo}_{l\to i}=\sum_{z_l^o\in \mathcal{C}}\psi^o_b(x_i^o, x_l^o, z_i^o, z_l^o)=g^{oo}_{\theta}(x_i^o, x_l^o)\cdot z_i^o\\ m^{og}_{g\to i}=\sum_{z^g\in \mathcal{G}}\psi^o_b(x_i^o, x^g, z_i^o, z^g)=g^{og}_{\theta}(x_i^o, x^g)\cdot z_i^o\\ \end{split} \end{equation} while for a potential predicate region proposal $b_j^p$, it is computed as follows: \begin{equation} \label{con2} \begin{split} logs_{\theta}(x_j^p,z_j^p)\propto -[\psi_u^p(x_j^p, z_j^p)+\sum_{i\in N(j)} m^{po}_{i\to j}+m^{pg}_{g\to j}]\\ \psi_u^p(x_j, z_j^p) = h^p_{\theta}(x_j)\cdot z_j^p,\; m^{po}_{i\to j}=\sum_{z_i^o\in \mathcal{C}}\psi^p_b(x_i^o, x_j^p, z_i^o, z_j^p)=g^{po}_{\theta}(x_i^o, x_j^p)\cdot z_j^p\\ m^{pg}_{g\to j}=\sum_{z^g\in \mathcal{G}}\psi^p_b(x^g, x_j^p, z^g, z_j^p)=g^{pg}_{\theta}(x_j^p, x^g)\cdot z_j^p \end{split} \end{equation} where $\cdot$ means an inner product, $z_i^o$ and $z_j^p$ are the output variables for an instance and a predicate, which are generated by a Gumbel-Softmax sampler. In Equations (6) and (7), $\mathcal{G}$ is a relevant global region proposal interpretation set. By combing the visual perception module outputs using multi-layer perceptrons (MLPs), one can construct the relevant feature representation learning functions $h^o_{\theta}$, $h^p_{\theta}$, $g^{op}_{\theta}$, $g^{oo}_{\theta}$, $g^{og}_{\theta}$, $g^{po}_{\theta}$, $g^{pg}_{\theta}$, which are parameterized by $\theta$. Essentially, these functions would first map the input image patches $x$ into the corresponding feature representations $y\in \mathbb{R}^d$ using the visual perception module, and then obtain the resulting $\mathbb{R}^v$ dimensional feature vector by feeding the relevant $y$ into the corresponding MLP. Most importantly, the MLPs implicitly perform the potential function marginalizations prescribed in Equations (6) and (7). The resulting log score is essentially the inner product of the above $\mathbb{R}^v$ dimensional feature vector and the corresponding $v$-dimensional vector $z$. More specifically, to approximate the computationally intractable $logs_{\theta}(x_i)$, $s$-sample importance weighted lower bound $\mathcal{L}_s^i$ is employed in this paper to construct the following constrained variational inference task: \begin{equation} \begin{split} logs_{\theta}(x_i)\triangleq \max_{\pi_i} \mathcal{L}_s^i&=\max_{\pi_i}\mathbb{E}_{z_{i1},...,z_{is}\sim q_{\pi_i}(z_i)}[log\frac{1}{s}\sum_{j=1}^s\frac{s_{\theta}(x_i,z_{ij})}{q_{\pi_i}(z_{ij})}]\\ &=\max_{\pi_i}\mathbb{E}_{\sigma_{i1},...,\sigma_{is}\sim u(\sigma_i)}[log\frac{1}{s}\sum_{j=1}^s\frac{s_{\theta}(x_i,z_{ij})}{q_{\pi_i}(z_{ij})}]|_{z_{ij}=g_{\pi_i}(\sigma_{ij})}\\ &\triangleq \max_{\pi_i}[log\frac{1}{s}\sum_{j=1}^s\frac{s_{\theta}(x_i,z_{ij})}{q_{\pi_i}(z_{ij})}]|_{z_{ij}=g_{\pi_i}(\sigma_{ij}),\; \sigma_{ij \sim u(\sigma_i)}} \;\;\;s.t. \;\;\; \pi_i \in \Delta^{v-1} \end{split} \end{equation} where the local variational approximation $q_i(z_i)$ is set to a Gumbel-Softmax distribution with a categorical probability $\pi_i\in \Delta^{v-1}$. and $z_{i1},...,z_{is}$ represent the $s$ $i.i.d$ samples drawn from $q_{\pi_i}(z_i)$. $\sigma_{ij}$ is $v$-dimensional Gumbel noise drawn from the Gumbel distribution $u(\sigma_i)$, which is fed into the Gumbel-Softmax reparameterization function $g_{\pi_i}$ to explicitly compute the corresponding output sample $z_{ij}$. A Monte Carlo estimator is applied to approximate the expectation $\mathcal{L}_s^i$. Accordingly, the above log probability $logq_{\pi_i}(z_{ij})$ is approximated as follows: \begin{equation} logq_{\pi_i}(z_{ij})\triangleq \lVert \pi_i\cdot z_{ij} \rVert_1 -max(\pi_i)-log \lVert e^{\pi_i-max(\pi_i)}\rVert_1 \end{equation} where $\lVert. \rVert_1$ represents the $\mathbb{L}_1$ norm while $max(\pi_i)$ is the maximum value of $\pi_i$. Generally, naively computing the derivatives $\triangledown_{\pi_i} \mathcal{L}_s^i$ would generate a major problem, as the relevant gradient estimator for the above importance weighted lower bound performs poorly as the number of samples increases [23]. To this end, in this paper, we employ a doubly raparameterized gradient estimator [23] to reduce the variance of the associated derivatives. The estimator is expressed as follows: \begin{equation} \triangledown_{\pi_i} \mathcal{L}_s^i \triangleq [\sum_{j=1}^s (\frac{w_{ij}}{\sum_{l=1}^s w_{il}})^2 \frac{\partial {logw_{ij}}}{\partial{z_{ij}}} \frac{\partial {z_{ij}}}{\partial{\pi_i}}]|_{z_{ij}=g_{\pi_i}(\sigma_{ij}),\; \sigma_{ij \sim u(\sigma_i)}} \end{equation} where $w_{ij}=\frac{s_{\theta}(x_i,z_{ij})}{q(z_{ij})}$ represents the associated importance weight of the $j$-th sample in the $i$-th region proposal. Such doubly reparameterized gradient estimator has the property that when $q_{\pi_i}(z_i)$ is optimal (exactly the same as the underlying posterior), the estimator vanishes and has zero variance. This property does not hold for the naive gradient estimator [23]. Furthermore, a surrogate logit $\phi$ is constructed to compute the target log marginal posterior $logp_{\theta}(z_i|x_i)$: \begin{equation} \begin{split} logp_{\theta}(z_i|x_i)\triangleq \phi+C,\;\;\; \phi=logs_{\theta}(x_i,z_i)-\max_{\pi_i}\mathcal{L}_s^i \end{split} \end{equation} where $C$ is a relevant constant w.r.t. $x_i$ and $z_i$. One can compute $logp_{\theta}(z_i|x_i)$ by ignoring the above constant $C$ based on the $LogSumExp$ trick: \begin{equation} \begin{split} logp_{\theta}(z_i|x_i)\triangleq \phi - log{\lVert e^{\phi} \rVert_1} \end{split} \end{equation} where the optimum interpretation $z_i^*$ of the input region proposal $b_i$ is computed as $z_i^*=\argmax_{z_i}logp_{\theta}(z_i|x_i)$. \begin{algorithm}[ht] \caption{Doubly Raparameterized Importance Weighted Structure Learning}\label{euclid1} \textbf{Input} region proposal $b$, categorical probability $\pi$, number of samples $s$, Gumbel noise distribution $u(\sigma)$, Gumbel-Softmax reparameterization function $g_{\pi}$, learning rate $\alpha$, softmax temperature $\tau$, minimum temperature $\tau_{min}$, temperature annealing rate $\beta$, number of iterations $T$ \\ \textbf{Output} $\theta$, $\tau$ \begin{algorithmic}[1] \STATE randomly initialize $\theta$ \FOR{iteration $t=1$ to $T$} \STATE randomly initialize $\pi$ for $b$ \STATE draw $s$ Gumbel noise samples $\sigma_{1},...,\sigma_{s}$ from $u(\sigma)$ \STATE compute $s$ output samples $z_{1},...,z_{s}$ by feeding $\sigma_{1},...,\sigma_{s}$ into $g_{\pi}$ \STATE compute log importance weight $log\frac{s_{\theta}(x,z)}{q_{\pi}(z)}$ and approximate $\mathcal{L}_s$ via Monte Carlo estimation \STATE employ EMD to solve the resulting constrained variational inference task, in which the derivative $\triangledown_{\pi} \mathcal{L}_s$ is approximated via a doubly reparameterized gradient estimator \STATE apply the updated $\pi$ to compute the surrogate logit $\phi$ as well as the resulting $logp_{\theta}(z|x)$ \STATE compute $\mathbb{L}(\theta)$ and update $\theta \leftarrow \theta - \alpha \cdot \bigtriangledown_{\theta} \mathbb{L}(\theta)$ \STATE update $\tau \leftarrow \max(\tau \cdot e^{-\beta \cdot t}, \tau_{min})$ \ENDFOR \end{algorithmic} \end{algorithm} Finally, in the following variational learning step, we employ the classical cross-entropy loss to fit the above $p_{\theta}(z|x)$ with the ground-truth training samples: \begin{equation} \theta^*=\argmin_{\theta}\mathbb{L}(\theta)=\argmin_{\theta}-[\frac{1}{c}\sum_{k=1}^{c}logp_{\theta}(\hat{z_k}|\hat{x_k})] \end{equation} where $\mathbb{L}(\theta)$ represents the variational learning objective, $c$ is the number of training images in a mini-batch, $\hat{z_k}$ is the ground-truth scene graph of the input image $\hat{x_k}$. To better illustrate the proposed DR-IWSL method, we summarize its learning steps in Algorithm 1. \begin{algorithm}[ht] \caption{Entropic Mirror Descent}\label{euclid2} \textbf{Input} variational distribution $\pi$, importance weighted lower bound $\mathcal{L}_s$, number of iterations $M$, an initial learning rate $\gamma$, a predefined objective $\mathcal{L}_s^p$, a small positive value $\epsilon$\\ \textbf{Output} optimum $\pi^*$ \begin{algorithmic}[1] \FOR{iteration $i=1$ to $M$} \STATE compute the derivative $\bigtriangledown_{\pi}\mathcal{L}_s$ via a doubly reparameterized gradient estimator \STATE set learning rate $\gamma = \frac{\gamma}{\sqrt{i}}$ \STATE end the loop if {$\abs{\mathcal{L}_s-\mathcal{L}_s^p}<\epsilon$} \STATE set $\mathcal{L}_s^p=\mathcal{L}_s$ \STATE compute $r=\gamma\cdot\bigtriangledown_{\pi}\mathcal{L}_s$ \STATE compute $r=\pi\cdot e^{r-\max(r)}$ \STATE set $\pi=\frac{r}{\lVert r \rVert_1}$ \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Entropic Mirror Descent} Unlike the previous SGG models, the proposed DR-IWSL method requires us to solve a constrained variational inference task, that is to maximize the $s$-sample importance weighted lower bound $\mathcal{L}_s^i$, subject to the constraint that the categorical probability $\pi_i$ resides in a $(v-1)$-simplex, as demonstrated in Equation (8). Since the above constraint is a probability simplex, entropic mirror descent (EMD) [24] is chosen to solve the above constrained variational inference problem. Specifically, the negative entropy is applied as a specific function to construct the required Bregman distance [45]. Compared with the traditional projected gradient descent methods [47], EMD generally converges faster due to the utilization of the geometry of the optimization problem [46]. To intuitively illustrate the applied EMD strategy, we summarize the specific training steps in Algorithm 2. \section{Experiments} \subsection{Visual Genome} \textbf{Benchmark:} As the most popular SGG benchmark, Visual Genome [48] consists of 108,077 images with an average of 38 objects and 22 relationships per image. Following the data split protocol [9], the most frequent 150 object categories and 50 predicate classes are selected in this experiment. Furthermore, we split the Visual Genome into a training set ($70\%$) and a test set ($30\%$). For validation, an evaluation set ($5k$) is randomly selected from the training set. Following [50], according to the number of instances in the training split, the relevant categories are divided into three disjoint sets: $head$ (more than $10k$), $body$ ($0.5k\sim 10k$) and $tail$ (less than $0.5k$). \noindent \textbf{Evaluation Metrics:} Instead of the common Recall$@K$, the mean Recall$@K$ ($mR@K$) is chosen as the evaluation metric in this experiment, since it focuses on the informative predicate categories (e.g. $painted\;on$) with much less training samples compared to the common ones (e.g. $on$). We validate the proposed method on three tasks, namely, Predicate Classification (PredCls), Scene Graph Classification (SGCls) and Scene Graph Detection (SGDet). In particular, given an input image with the ground-truth bounding boxes and object labels, PredCls predicts the predicate labels; SGCls predicts the labels for instances and predicates; SGDet generates the resulting scene graph from the input image. \noindent \textbf{Implementation Details:} Like [37], we choose RexNeXt-101-FPN [51] and Faster-RCNN [39] as the backbone and the object detector, respectively. Following the previous methods, we adopt a step training strategy. Accordingly, we freeze the visual perception module and only train the relevant visual context reasoning module. A bi-level data resampling strategy [16] is applied in this experiment. Specifically, we set the repeat factor $t=0.07$ and the instance drop rate $\gamma_{d}=0.7$. The batch size $bs$ is set to 12, and the learning rate of the SGD optimizer is $0.008\times bs$. The number of samples $s$ is set to $20$ and $5000$ in the variational inference and learning steps, respectively. The experiment is conducted on 4 GeForce RTX 3090 GPU cards. \begin{table}[!t] \begin{threeparttable} \renewcommand{\arraystretch}{1.0} \caption{A performance comparison on Visual Genome dataset.} \centering \begin{tabular}{@{\extracolsep{4pt}}*7c@{}} \toprule {} & \multicolumn{2}{c}{PredCls} & \multicolumn{2}{c}{SGCls} & \multicolumn{2}{c}{SGDet}\\ \cmidrule{2-3} \cmidrule{4-5} \cmidrule{6-7} {Method} & {mR@50} & {mR@100} & {mR@50} & {mR@100} & {mR@50} & {mR@100}\\ \midrule RelDN$^{\dagger}$[38] & $15.8$ & $17.2$ & $9.3$ & $9.6$ & $6.0$ & $7.3$\\ Motifs[26] & $14.6$ & $15.8$ & $8.0$ & $8.5$ & $5.5$ & $6.8$\\ Motifs*[26] & $18.5$ & $20.0$ & $11.1$ & $11.8$ & $8.2$ & $9.7$\\ G-RCNN$^{\dagger}$[13] & $16.4$ & $17.2$ & $9.0$ & $9.5$ & $5.8$ & $6.6$\\ MSDN$^{\dagger}$[10] & $15.9$ & $17.5$ & $9.3$ & $9.7$ & $6.1$ & $7.2$\\ GPS-Net$^{\dagger}$[19] & $15.2$ & $16.6$ & $8.5$ & $9.1$ & $6.7$ & $8.6$\\ GPS-Net$^{\dagger *}$[19] & $19.2$ & $21.4$ & $11.7$ & $12.5$ & $7.4$ & $9.5$\\ VCTree-TDE[37] & $25.4$ & $28.7$ & $12.2$ & $14.0$ & $9.3$ & $11.1$\\ BGNN[16] & $30.4$ & $32.9$ & $14.3$ & $16.5$ & $10.7$ & $12.6$\\ \textbf{DR-IWSL} & $\mathbf{30.4}$ & $\mathbf{32.3}$ & $\mathbf{17.4}$ & $\mathbf{19.0}$ & $\mathbf{14.6}$ & $\mathbf{16.7}$\\ \bottomrule \end{tabular} \begin{tablenotes} \item [\textbullet] Note: All the above methods apply ResNeXt-101-FPN as the backbone. $*$ means the re-sampling strategy [30] is applied in this method, and $\dagger$ depicts the results reproduced with the latest code from the authors. A bold font marks the results with the proposed method. \end{tablenotes} \end{threeparttable} \end{table} \noindent \textbf{Comparisons with State-of-the-art Methods:} As shown in Table 1, the proposed DR-IWSL method outperforms the previous state-of-the-art SGG models by a large margin in the SGCls abd SGDet tasks, and achieves comparable performance with the latest BGNN algorithm in the PredCls task. Since the EMD algorithm converges faster than the traditional message passing technique, the above performance is achieved using a much fewer training iterations. Furthermore, the proposed method focuses on detecting informative predicates ($body$ and $tail$) rather than the common ones ($head$). In order to improve the above informative predicate detection capability further, we enhance the proposed method by a generic balance adjustment (BA) strategy. The resulting novel algorithm, referred to as DR-IWSL+BA, is compared with three baseline models as shown in Table 2. The BA strategy aims to overcome two types of imbalance, namely, the semantic space imbalance and the training sample imbalance. It involves two procedures: semantic adjustment and balanced predicate learning. The former induces the predictions made by the DR-IWSL method to be more informative by building a relevant transition matrix, while the latter aims to extend the sampling space for the informative predicates. We observe that the proposed DR-IWSL+BA method outperforms the previous state-of-the-art models by a large margin, especially for the PredCls task. \begin{table}[!t] \begin{threeparttable} \renewcommand{\arraystretch}{1.0} \caption{A comparison of the impact of the balance strategy measured on Visual Genome.} \centering \begin{tabular}{@{\extracolsep{4pt}}*7c@{}} \toprule {} & \multicolumn{2}{c}{PredCls} & \multicolumn{2}{c}{SGCls} & \multicolumn{2}{c}{SGDet}\\ \cmidrule{2-3} \cmidrule{4-5} \cmidrule{6-7} {Method} & {mR@50} & {mR@100} & {mR@50} & {mR@100} & {mR@50} & {mR@100}\\ \midrule Motifs+BA[34] & $29.7$ & $31.7$ & $16.5$ & $17.5$ & $13.5$ & $15.6$\\ VCTree+BA[34] & $30.6$ & $32.6$ & $20.1$ & $21.2$ & $13.5$ & $15.7$\\ Transformer+BA[34] & $31.9$ & $34.2$ & $18.5$ & $19.4$ & $14.8$ & $17.1$\\ \textbf{DR-IWSL+BA} & $\mathbf{37.7}$ & $\mathbf{40.0}$ & $\mathbf{21.5}$ & $\mathbf{22.7}$ & $\mathbf{16.5}$ & $\mathbf{18.7}$\\ \bottomrule \end{tabular} \begin{tablenotes} \item [\textbullet] Note: All the above methods apply the same balance adjustment strategy as in [34]. Using bold to identify the proposed method. \end{tablenotes} \end{threeparttable} \end{table} \subsection{Open Images V6} \textbf{Benchmark:} Open Images V6 [49] is another popular SGG benchmark, which has a superior annotation quality and includes 126,368 training images, 5322 test images and 1813 validation images. The same data processing protocols in [19,38,49] are selected in this experiment. \noindent \textbf{Evaluation Metrics:} Similar to the evaluation protocols in [19,38,49], we choose the following evaluation metrics in this experiment: the mean Recall$@50$ ($mR@50$), the regular Recall$@50$ ($R@50$), the weighted mean AP of relationships ($wmAP_{rel}$) and the weighted mean AP of phrase ($wmAP_{phr}$). Specifically, as in [19,38,49], the weight metric score is defined as: $score_{wtd}=0.2\times R@50 + 0.4\times wmAP_{rel} + 0.4\times wmAP_{phr}$. \noindent \textbf{Implementation Details:} Like the experiment in Visual Genome, we choose the same backbone and object detector in this experiment. Moreover, we employ the same step training strategy and bi-level data resampling technique. The batch size $bs$ is set to 12 and an Adam optimizer with the learning rate of $0.0001$ is utilized. The number of samples $s$ is set to $20$ and $5000$ in the variational inference and learning steps, respectively. \noindent \textbf{Comparisons with State-of-the-art Methods:} To verify the merits of the proposed DR-IWSL method further, we compare it with various state-of-the-art SGG models in Table 3. For a fair comparison, whenever possible, we reproduce the results of the methods with the author's latest code and, over and above that, we investigate the impact of using the additional re-sampling strategy proposed in [30] on two of those methods. As shown in Table 3, the proposed DR-IWSL method achieves the state-of-the-art performance on all evaluation metrics for the Open Images V6 dataset. \begin{table}[!t] \begin{threeparttable} \renewcommand{\arraystretch}{1.0} \caption{A performance comparison on the Open Images V6 dataset.} \centering \begin{tabular}{@{\extracolsep{4pt}}*6c@{}} \toprule {Method} & {mR@50} & {R@50} & {wmAP\_rel} & {wmAP\_phr} & {score\_wtd} \\ \midrule RelDN$^{ \dagger}$[38] & $33.98$ & $73.08$ & $32.16$ & $33.39$ & $40.84$\\ RelDN$^{\dagger*}$[38] & $37.20$ & $75.34$ & $33.21$ & $34.31$ & $41.97$ \\ VCTree$^{\dagger}$[15] & $33.91$ & $74.08$ & $34.16$ & $33.11$ & $40.21$ \\ G-RCNN$^{\dagger}$[13] & $34.04$ & $74.51$ & $33.15$ & $34.21$ & $41.84$ \\ Motifs$^{\dagger}$[26] & $32.68$ & $71.63$ & $29.91$ & $31.59$ & $38.93$ \\ VCTree-TDE$^{\dagger}$[37] & $35.47$ & $69.30$ & $30.74$ & $32.80$ & $39.27$ \\ GPS-Net$^{\dagger}$[19] & $35.26$ & $74.81$ & $32.85$ & $33.98$ & $41.69$ \\ GPS-Net$^{\dagger *}$[19] & $38.93$ & $74.74$ & $32.77$ & $33.87$ & $41.60$ \\ BGNN[16] & $40.45$ & $74.98$ & $33.51$ & $34.15$ & $42.06$ \\ \textbf{DR-IWSL} & $\mathbf{41.00}$ & $\mathbf{75.21}$ & $\mathbf{34.20}$ & $\mathbf{35.28}$ & $\mathbf{42.76}$ \\ \bottomrule \end{tabular} \begin{tablenotes} \item [\textbullet] Note: All the above methods apply ResNeXt-101-FPN as the backbone. $*$ means the re-sampling strategy [30] is applied in this method, and $\dagger$ depicts the results reproduced with the latest code from the authors. Using bold to represent the proposed method. \end{tablenotes} \end{threeparttable} \end{table} \section{Conclusion} To avoid underestimating the underlying complex posterior, in this paper, we propose a novel doubly reparameterized importance weighted structure learning (DR-IWSL) method, which replaces the classical ELBO with a tighter importance weighted lower bound as the variational inference objective. Such lower bound is computed via multiple samples drawn from a reparameterizable Gumbel-Softmax sampler. More importantly, we use a doubly reparameterized gradient estimator to reduce the variance of the associated derivatives, and employ a generic entropic mirror descent method, rather than the traditional message passing technique, to solve the resulting constrained variational inference task. The proposed DR-IWSL method is validated on two popular SGG benchmarks: Visual Genome and Open Images V6. It achieves the state-of-the-art detection performance on both benchmarks. Currently, we only employ a mean field variational Bayes framework to solve the SGG task, which implies no structural dependencies are considered within the variational distribution. Relaxing this issue would be our next target. \begin{ack} This work was supported in part by the U.K. Defence Science and Technology Laboratory, and in part by the Engineering and Physical Research Council (collaboration between U.S. DOD, U.K. MOD, and U.K. EPSRC through the Multidisciplinary University Research Initiative) under Grant EP/R018456/1. \end{ack} \section*{References} \medskip { \small [1] You, Q., Jin, H., Wang, Z., Fang, C.,\ Luo, J. \ (2016). Image captioning with semantic attention. {\it In Proceedings of the IEEE conference on computer vision and pattern recognition}, pp. 4651-4659. [2] Rennie, S. J., Marcheret, E., Mroueh, Y., Ross, J.,\ Goel, V. \ (2017). Self-critical sequence training for image captioning. {\it In Proceedings of the IEEE conference on computer vision and pattern recognition}, pp. 7008-7024. [3] Yang, X., Tang, K., Zhang, H.,\ Cai, J. \ (2019). Auto-encoding scene graphs for image captioning. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 10685-10694. [4] Teney, D., Liu, L.,\ van Den Hengel, A. \ (2017). Graph-structured representations for visual question answering. {\it In Proceedings of the IEEE conference on computer vision and pattern recognition}, pp. 1-9. [5] Anderson, P., He, X., Buehler, C., Teney, D., Johnson, M., Gould, S.,\ Zhang, L. \ (2018). Bottom-up and top-down attention for image captioning and visual question answering. {\it In Proceedings of the IEEE conference on computer vision and pattern recognition}, pp. 6077-6086. [6] Shi, J., Zhang, H.,\ Li, J. \ (2019). Explainable and explicit visual reasoning over scene graphs. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 8376-8384. [7] Wainwright, M. J., \ Jordan, M. I. \ (2008). Graphical models, exponential families, and variational inference. {\it Foundations and Trends® in Machine Learning}, 1(1–2), 1-305. [8] Fox, C. W., \ Roberts, S. J. \ (2012). A tutorial on variational Bayesian inference. {\it Artificial intelligence review}, 38(2), 85-95. [9] Xu, D., Zhu, Y., Choy, C. B., \ Fei-Fei, L. \ (2017). Scene graph generation by iterative message passing. {\it In Proceedings of the IEEE conference on computer vision and pattern recognition}, pp. 5410-5419. [10] Li, Y., Ouyang, W., Zhou, B., Wang, K., \ Wang, X. \ (2017). Scene graph generation from objects, phrases and region captions. {\it In Proceedings of the IEEE international conference on computer vision}, pp. 1261-1270. [11] Dai, B., Zhang, Y., \ Lin, D. \ (2017). Detecting visual relationships with deep relational networks. {\it In Proceedings of the IEEE conference on computer vision and Pattern recognition}, pp. 3076-3086. [12] Woo, S., Kim, D., Cho, D., \ Kweon, I. S. \ (2018). Linknet: Relational embedding for scene graph. {\it Advances in Neural Information Processing Systems}, 31. [13] Yang, J., Lu, J., Lee, S., Batra, D., \ Parikh, D. \ (2018). Graph r-cnn for scene graph generation. {\it In Proceedings of the European conference on computer vision (ECCV)}, pp. 670-685. [14] Wang, W., Wang, R., Shan, S., \ Chen, X. \ (2019). Exploring context and visual pattern of relationship for scene graph generation. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 8188-8197. [15] Tang, K., Zhang, H., Wu, B., Luo, W., \ Liu, W. \ (2019). Learning to compose dynamic tree structures for visual contexts. {\it In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition}, pp. 6619-6628. [16] Li, R., Zhang, S., Wan, B., \ He, X. \ (2021). Bipartite graph network with adaptive message passing for unbiased scene graph generation. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 11109-11119. [17] Li, Y., Ouyang, W., Zhou, B., Shi, J., Zhang, C., \ Wang, X. \ (2018). Factorizable net: an efficient subgraph-based framework for scene graph generation. {\it In Proceedings of the European Conference on Computer Vision (ECCV)}, pp. 335-351. [18] Chen, T., Yu, W., Chen, R., \ Lin, L. \ (2019). Knowledge-embedded routing network for scene graph generation. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 6163-6171. [19] Lin, X., Ding, C., Zeng, J., \ Tao, D. \ (2020). Gps-net: Graph property sensing network for scene graph generation. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 3746-3753. [20] Liu, D., Bober, M., \ Kittler, J. \ (2021). Visual semantic information pursuit: A survey. {\it IEEE transactions on pattern analysis and machine intelligence}, 43(4), 1404-1422. [21] Burda, Y., Grosse, R. B., \ Salakhutdinov, R. \ (2016). Importance Weighted Autoencoders. {\it In 4th International Conference on Learning Representations (ICLR)}. [22] Jang, E., Gu, S., \ Poole, B. \ (2017). Categorical reparameterization with gumbel-softmax. {\it In 5th International Conference on Learning Representations (ICLR)}. [23] Tucker, G., Lawson, D., Gu, S., \ Maddison, C. J. \ (2018). Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives. {\it In 6th International Conference on Learning Representations (ICLR)}. [24] Beck, A., \ Teboulle, M. \ (2003). Mirror descent and nonlinear projected subgradient methods for convex optimization. {\it Operations Research Letters}, 31(3), 167-175. [25] Qi, M., Li, W., Yang, Z., Wang, Y., \ Luo, J. \ (2019). Attentive relational networks for mapping images to scene graphs. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 3957-3966. [26] Zellers, R., Yatskar, M., Thomson, S., \ Choi, Y. \ (2018). Neural motifs: Scene graph parsing with global context. {\it In Proceedings of the IEEE conference on computer vision and pattern recognition}, pp. 5831-5840. [27] Chawla, N. V., Bowyer, K. W., Hall, L. O., \ Kegelmeyer, W. P. \ (2002). SMOTE: synthetic minority over-sampling technique. {\it Journal of artificial intelligence research}, 16, 321-357. [28] Shen, L., Lin, Z., \ Huang, Q. \ (2016). Relay backpropagation for effective learning of deep convolutional neural networks. {\it In European conference on computer vision (ECCV)}, pp. 467-482. [29] Mahajan, D., Girshick, R., Ramanathan, V., He, K., Paluri, M., Li, Y., ... \ Van Der Maaten, L. \ (2018). Exploring the limits of weakly supervised pretraining. {\it In Proceedings of the European conference on computer vision (ECCV)}, pp. 181-196. [30] Gupta, A., Dollar, P., \ Girshick, R. \ (2019). LVIS: A dataset for large vocabulary instance segmentation. {\it In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition}, pp. 5356-5364. [31] Hu, X., Jiang, Y., Tang, K., Chen, J., Miao, C., \ Zhang, H. \ (2020). Learning to segment the tail. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 14045-14054. [32] Gidaris, S., \ Komodakis, N. \ (2018). Dynamic few-shot visual learning without forgetting. {\it In Proceedings of the IEEE conference on computer vision and pattern recognition}, pp. 4367-4375. [33] Zhou, B., Cui, Q., Wei, X. S., \ Chen, Z. M. \ (2020). Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition. {\it In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition}, pp. 9719-9728. [34] Guo, Y., Gao, L., Wang, X., Hu, Y., Xu, X., Lu, X., ... \ Song, J. \ (2021). From general to specific: Informative scene graph generation via balance adjustment. {\it In Proceedings of the IEEE/CVF International Conference on Computer Vision}, pp. 16383-16392. [35] Cao, K., Wei, C., Gaidon, A., Arechiga, N., \ Ma, T. \ (2019). Learning imbalanced datasets with label-distribution-aware margin loss. {\it Advances in neural information processing systems}, 32. [36] Cui, Y., Jia, M., Lin, T. Y., Song, Y., \ Belongie, S. \ (2019). Class-balanced loss based on effective number of samples. {\it In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition}, pp. 9268-9277. [37] Tang, K., Niu, Y., Huang, J., Shi, J., \ Zhang, H. \ (2020). Unbiased scene graph generation from biased training. {\it In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition}, pp. 3716-3725. [38] Zhang, J., Shih, K. J., Elgammal, A., Tao, A., \ Catanzaro, B. \ (2019). Graphical contrastive losses for scene graph parsing. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 11535-11543. [39] Ren, S., He, K., Girshick, R., \ Sun, J. \ (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. {\it Advances in neural information processing systems}, 28. [40] Sutton, C., \ McCallum, A. \ (2012). An introduction to conditional random fields. {\it Foundations and Trends® in Machine Learning}, 4(4), 267-373. [41] Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., \ Monfardini, G. \ (2008). The graph neural network model. {\it IEEE transactions on neural networks}, 20(1), 61-80. [42] Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., \ Dahl, G. E. \ (2017). Neural message passing for quantum chemistry. {\it In International conference on machine learning}, pp. 1263-1272. [43] Wang, X., Girshick, R., Gupta, A., \ He, K. \ (2018). Non-local neural networks. {\it In Proceedings of the IEEE conference on computer vision and pattern recognition}, pp. 7794-7803. [44] Zhou, J., Cui, G., Hu, S., Zhang, Z., Yang, C., Liu, Z., ... \ Sun, M. \ (2020). Graph neural networks: A review of methods and applications. {\it AI Open}, 1, 57-81. [45] Teboulle, M. \ (1992). Entropic proximal mappings with applications to nonlinear programming. {\it Mathematics of Operations Research}, 17(3), 670-690. [46] Raskutti, G., \ Mukherjee, S. \ (2015). The information geometry of mirror descent. {\it IEEE Transactions on Information Theory}, 61(3), 1451-1457. [47] Eicke, B. \ (1992). Iteration methods for convexly constrained ill-posed problems in Hilbert space. {\it Numerical Functional Analysis and Optimization}, 13(5-6), 413-429. [48] Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., ... \ Fei-Fei, L. \ (2017). Visual genome: Connecting language and vision using crowdsourced dense image annotations. {\it International journal of computer vision}, 123(1), 32-73. [49] Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., ... \ Ferrari, V. \ (2020). The open images dataset v4. {\it International Journal of Computer Vision}, 128(7), 1956-1981. [50] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., \ Yu, S. X. \ (2019). Large-scale long-tailed recognition in an open world. {\it In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pp. 2537-2546. [51] He, K., Zhang, X., Ren, S., \ Sun, J. \ (2016). Deep residual learning for image recognition. {\it In Proceedings of the IEEE conference on computer vision and pattern recognition}, pp. 770-778. } \section*{Checklist} The checklist follows the references. Please read the checklist guidelines carefully for information on how to answer these questions. For each question, change the default \answerTODO{} to \answerYes{}, \answerNo{}, or \answerNA{}. You are strongly encouraged to include a {\bf justification to your answer}, either by referencing the appropriate section of your paper or providing a brief inline description. For example: \begin{itemize} \item Did you include the license to the code and datasets? \answerYes{See Section~\ref{gen_inst}.} \item Did you include the license to the code and datasets? \answerNo{The code and the data are proprietary.} \item Did you include the license to the code and datasets? \answerNA{} \end{itemize} Please do not modify the questions and only use the provided macros for your answers. Note that the Checklist section does not count towards the page limit. In your paper, please delete this instructions block and only keep the Checklist section heading above along with the questions/answers below. \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{} \item Did you discuss any potential negative societal impacts of your work? \answerNA{} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{} \item Did you include complete proofs of all theoretical results? \answerYes{} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerNo{} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerNA{} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? \answerNA{} \item Did you include any new assets either in the supplemental material or as a URL? \answerNA{} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \end{document}
\section{Introduction} \label{sec:intro} High Mass X-ray binary (HMXB) systems, in which a compact object accretes matter from a more massive companion, often exhibit variability on multiple timescales ranging from less than a second to several months. For most HMXBs, accretion onto the compact object is fed by the stellar winds of the companion. However, in some cases, the objects may orbit at a small enough distance that the companion fills its Roche lobe, resulting in higher accretion rates and higher luminosities. In the case of accreting neutron stars, matter is funnelled along the magnetic field lines onto the surface of the star, resulting in a column of accreted material at the magnetic poles. Because the magnetic poles and the spin axes are not perfectly aligned, the accretion column revolves at the same rate as the neutron star, resulting in emission which appears pulsed. One such HMXB which has been well-studied is {SMC~X-1}. This system, residing in the Small Magellanic Cloud, was first detected by \citet{Price1971}. It was later resolved as a discrete source by \citet{Leong1971}, who reported significant variability in both the intensity and spectrum of the source. The binary nature of {SMC~X-1}\ was soon confirmed by \citet{Schreier1972} who discovered periodic occultations with an orbital period of around 3.9 days. {SMC~X-1}\ also exhibits pulsations with a period of about 0.7 seconds, and the pulse fraction and shape are known to vary significantly over time \citep{Lucke1976}. The existence of X-ray pulsations confirms that the accreting compact object is a neutron star. Accretion onto the neutron star has been attributed to Roche lobe overflow \citep{Hutchings1977,VanParadijs1977} of its companion, {Sk~160}, which has been spectrographically classified as a B0 I supergiant \citep{Webster1972}. This classification places {SMC~X-1}\ in a subcategory of HMXBs known as supergiant X-ray binaries (SGXB). Finally, the source exhibits super-orbital variability on a timescale of 45 to 60 days, which has been attributed to obscuration by a precessing tilted accretion disk \citep{Wojdowski1998}. Throughout this paper, we assume a distance to {SMC~X-1}\ of 60.6\,kpc as reported by \citet{Hilditch2005}. \begin{figure*}[htp] \gridline{\fig{SMC_X-1_BAT_lc_2012_clean-eps-converted-to.pdf}{0.7\textwidth}{}} \caption{{\em Swift} BAT light curve of {SMC~X-1}\ during 2012. The moving average of the BAT flux is shown in gold. A super-orbital period of around $60$\,days is clearly visible. The red vertical bars indicate the duration of each {\em NuSTAR}\ observation presented here. The first observation (10002013001) took place near the end of the low state, while the second observation (10002013003) took place as the source was growing fainter shortly after the high state.} \label{fig:BATcurve} \end{figure*} \begin{figure*}[htp] \gridline{\fig{nu10002013001A01_filt_nustar_fpma_E3-79_lc.pdf}{0.49\textwidth}{(a)} \fig{nu10002013003A01_filt_nustar_fpma_E3-79_lc.pdf}{0.49\textwidth}{(b)}} \caption{(a) {\em NuSTAR}\ FPMA count rate of the source during the first observation (10002013001). The gap between 30000 and 50000 seconds is due to a failed data downlink and is not inherent to the source. We have split this observation into two epochs. (b) {\em NuSTAR}\ FPMA count rate of the source during the second observation (10002013003). We define the third epoch as the entirety of this observation. The apparent variability on timescales of $\sim5000$\,s during Epoch III can be attributed to movement of the source between detectors. Both light curves are binned into intervals of 40\,s. The orbital phase, which is defined by full eclipse of the source at $\phi=0$ and $\phi=1$, is included along the horizontal axis as well as the time in seconds since the beginning of the each observation.} \label{fig:lightcurves} \end{figure*} One of only a handful of SGXBs known to accrete via Roche lobe overflow, {SMC~X-1}\ exhibits persistent emission near or above its isotropic Eddington luminosity of $L_{\rm Edd}\sim 1.3\times10^{38}\,\mathrm{erg\,s^{-1}}$ (for a mass of $\sim 1.1\, M_{\odot}$, as reported by \citealt{VanderMeer2007}) varying between $L_{\rm X}(2-12\,{\rm keV}) \sim 10^{37}\ \mathrm{erg}\ \mathrm{s}^{-1}$ in the low state and luminosities in excess of $5\times10^{38}\ \mathrm{erg}\ \mathrm{s}^{-1}(2-12\,{\rm keV})$, more than three times its Eddington luminosity, in the high state \citep{Bonnet-Bidaud1981}. In addition to this persistent emission, {SMC~X-1}\ has been shown to exhibit type II X-ray bursts with durations of tens of seconds \citep{Angelini1991THEL,2018RAA....18..148R}. Its near- to super-Eddington luminosity places the source in a middle ground between less luminous Be/X-ray binaries (BeXB), which exhibit a range of persistent X-ray luminosities from $10^{32}\ \mathrm{erg}\ \mathrm{s}^{-1}$ \citep{Tomsick2011} up to $10^{35}\ \mathrm{erg}\ \mathrm{s}^{-1}$ \citep{Reig1999}, and brighter ultraluminous X-ray pulsars (ULXPs). ULXPs, the known examples of which are M82~X-2 \citep{Bachetti2014}, NGC~7793~P13 \citep{Furst2016, Israel2017}, NGC~300~ULX1 (also SN~2010da, \citeauthor{Carpano2018DiscoveryEvolution} \citeyear{Carpano2018DiscoveryEvolution}) and NGC~5907~ULX-1 \citep{Israel2017a}, vary between bright pulsing states during which the luminosity can reach $10^{41}\ \mathrm{erg}\ \mathrm{s}^{-1}$ --- several orders of magnitude above the Eddington limit of a typical neutron star --- and faint states when the luminosity drops to $10^{37-38}\ \mathrm{erg}\ \mathrm{s}^{-1}$ \citep{Kaaret2017}. Similar to {SMC~X-1}\ \citep[]{Inam2010}, these sources exhibit pulsations with periods on the order of one second and spin-up rates of $|\dot{P}|=10^{-11}-10^{-9}\,\mathrm{s\,s^{-1}}$, with the exception of NGC~300~ULX1, which has a much longer pulse period of 32\,s and a faster spin-up rate on the order of $10^{-7}\,\mathrm{s\,s^{-1}}$. A given ULXP may not exhibit detectable pulsations at all times, and when they are detected, the fraction of their flux which is pulsed is variable. Pulse transience has been attributed to the propeller effect in which rotation of the neutron star's magnetosphere halts accretion by flinging accreting material out of the system before it can reach the corotation radius \citep{Illarionov1975}. In contrast to the flux variability of {SMC~X-1}\ which occurs quasi-periodically with a continuous transition between high and low states, the propeller effect is associated with changes of more than a factor of $40$ in luminosity on shorter timescales which results in a bimodal flux distribution in ULXPs \citep{Tsygankov2016}. In terms of pulse fraction, this bimodality corresponds to distinct pulsed and non-pulsed states. However, continuous variability in pulse fraction has also been observed in ULXPs. In particular, the pulse fraction of M82~X-2 was shown to gradually increase from $8$\% to $23$\% in the $10-30$\,keV range over an interval of around 10 days \citep{Bachetti2014}. Periodic variability on timescales of 60-80 days has also been measured for the ULXPs NGC 7793 P13, NGC 5907 ULX-1, and M82 X-2 \citep[][respectively]{Motch2014, Walton2016, 2019arXiv190110491B}. While the ~64 day period observed in NGC~7793~P13 has been attributed to the orbital motion of the binary \citep{Furst2018AP13}, this variability has been classified as super-orbital in the case of M82 X-2. It is still uncertain whether the 78\,day period observed in NGC 5907 ULX-1 is orbital or super-orbital in nature. Given that {SMC~X-1}\ displays super-orbital modulations on similar timescales, as well as its persistent near- to super-Eddington luminosity and variable pulsations, {SMC~X-1}\ may provide a link between ULXPs and classes of X-ray binaries which have been studied in more detail. In this paper, we present timing and spectral analyses of two observations of {SMC~X-1}. In Section \ref{sec:data}, we describe the observations of {SMC~X-1}\ and our data reduction methods, including data extraction and corrections. In Section \ref{sec:timing}, we describe the methods and results of our timing analysis, and in Section \ref{sec:spectral}, we describe the spectral analysis of {SMC~X-1}. In Section \ref{sec:discussion}, we discuss the results of our analyses and offer a physical interpretation. Finally, in Section \ref{sec:conclusions}, we list our conclusions and discuss possible applications of our analysis to studies of ULXPs. \section{Observations and Data Reduction} \label{sec:data} {SMC~X-1}\ was observed twice by the {\em Nuclear Spectroscopic Telescope Array} ({\em NuSTAR}) \citep{Harrison2013} in 2012, in the first two months after the launch of the satellite for the purpose of calibration. {\em NuSTAR}\ consists of two focal plane modules, FPMA and FPMB, each of which is made up of four pixelated detectors (DET0-DET3). Each module has a field of view of about 12\,arcminutes, and, combined with focusing optics at a focal length of 10\,m, achieves an angular resolution of 18\,arcseconds, full width at half maximum (FWHM). The energy resolution, given by the FWHM, is 400\,eV at 10\,keV and 900\,eV at 68\,keV, and the full energy range is 3-79\,keV. The timing resolution of the onboard clock is 2\,$\mu$s with a dead time of 2.5\,ms, leading to a maximum count rate of around 400\,events\,$\mathrm{s^{-1}}$. The first observation took place on 2012 July 5 (OBSID 10002013001) and the second took place on 2012 August 6 (OBSID 10002013003) with exposure times of 27\,ks and 15\,ks, respectively. Figure \ref{fig:BATcurve} shows the light curve of the source as observed by the {\em Neil Gehrels Swift Observatory} Burst Alert Telescope (BAT) during a 250\,day interval bracketing the {\em NuSTAR}\ observations in 2012. The super-orbital period of around 60\,days is clearly visible, and the red bars show the location and duration of each observation in the super-orbital cycle. The first {\em NuSTAR}\ observation occurred at the end of the low state, when the luminosity was just beginning to rise, while the second observation occurred near the end of the high state, when the source was growing fainter. The observations were planned such that they avoided obscuration effects due to the donor star. We reduced the data using version 1.8.0 of the NuSTARDAS pipeline and {\em NuSTAR}\ CALDB v20170817. We used DS9 \citep{ds9} to select a circular source region with radius 55 arcseconds centered on the position of the source determined by automatic centroid detection. We also selected a circular background region with radius 80 arcseconds located on the same detector as the source, taking care to choose a region free of other sources and outside the source distribution. We corrected the photon arrival times to the solar system barycenter using the position of the source used for data extraction. Before analysis, the photon arrival times were also corrected for the orbital motion of the source using parameters reported by \citet{Falanga2015} and \citet{Inam2010}. We define three epochs of observation, labeled Epochs I, II, and III. The {\em NuSTAR}\ light curve for each observation is shown in Figure \ref{fig:lightcurves}. Epoch I is defined as the first 40\,ks (13\,ks of exposure time) of observation 10002013001, while the latter half (14\,ks of exposure) of observation 10002013001 makes up Epoch II. The whole of observation 10002013003 makes up Epoch III, which has an exposure time of 15\,ks. During observation 10002013001, the source was positioned on DET0, while the source was positioned on DET3 near the gap between DET3 and DET0 during observation 10002013003. Movement of the source between the two detectors accounts for the $\sim5000$\,s variability apparent in Figure \ref{fig:lightcurves}b. The background count rate did not vary significantly between observations, and for all three epochs, the background rate remained below $10\%$ of the total count rate for energies up to $\sim50$\,keV. To avoid background contamination, we performed spectral analysis for energies between 3\,keV and 40\,keV, resulting in $5.2\times10^4$, $6.3\times10^4$, and $6.3\times10^5$ spectral counts (combined FPMA and FPMB) for Epochs I, II, and III, respectively. For the purpose of spectral analysis, we binned the data such that there are at least 50 counts in each energy bin in Epoch I and Epoch II, and at least 100 counts in each energy bin in Epoch III. We chose to bin Epoch III with more events per bin due to the significantly higher count rate during that epoch. \section{Timing Analysis} \label{sec:timing} \begin{figure*}[htb] \gridline{\fig{nu10002013001_filt_nustar_E3-79_freq_search_epochI-eps-converted-to.pdf}{0.33\textwidth}{(a) Epoch I dynamic folding search} \fig{nu10002013001_filt_nustar_E3-79_pulse_search_part1-eps-converted-to.pdf}{0.33\textwidth}{(b) Epoch I period search} \fig{nu10002013001_filt_nustar_E3-79_pulse_profile_part1.pdf}{0.33\textwidth}{(c) Epoch I pulse profile}\label{fig:epochIsearches}} \gridline{\fig{nu10002013001_filt_nustar_E3-79_freq_search_epochII-eps-converted-to.pdf}{0.33\textwidth}{(d) Epoch II dynamic folding search} \fig{nu10002013001_filt_nustar_E3-79_pulse_search_part2-eps-converted-to.pdf}{0.33\textwidth}{(e) Epoch II period search} \fig{nu10002013001_filt_nustar_E3-79_pulse_profile_part2.pdf}{0.33\textwidth}{(f) Epoch II pulse profile}\label{fig:epochIIsearches}} \gridline{\fig{nu10002013003_filt_nustar_E3-79_freq_search-eps-converted-to.pdf}{0.33\textwidth}{(g) Epoch III dynamic folding search} \fig{nu10002013003_filt_nustar_E3-79_pulse_search-eps-converted-to.pdf}{0.33\textwidth}{(h) Epoch III period search} \fig{nu10002013003_filt_nustar_E3-79_pulse_profile.pdf}{0.33\textwidth}{(i) Epoch III pulse profile}\label{fig:epochIIIsearches}} \caption{Results of pulsation searches applied to each epoch. The left column shows the results of a dynamic folding search. Pulsations are not detected during Epoch I but seem to appear and gradually increase in strength after observation continues during Epoch II. Pulsations are clearly detected in Epoch III and do not appear to vary significantly throughout the epoch. The middle column shows the results of folding searches over both the pulse period and its first derivative. The results of the dynamic searches allowed us to search over a narrower period range. The resulting $Z^2_4$ distribution (b, e, and h) for each epoch is fitted to a 2-d Gaussian distribution. The mean of the fitted Gaussian is indicated by a black cross (\ding{58}) while the white contours represent the 1- and 2-sigma confidence regions. The apparent correlation between $P$ and $\dot{P}$ is an artifact of the search itself and is not intrinsic to the source. The maximum $Z^2_4$ value achieved by each search is indicated by a blue cross (\textcolor{blue}{\ding{54}}) and was used to produce pulse profiles shown in blue in panels b, d, and f. In gold are the 90\% confidence regions determined by the Monte Carlo procedure described in Section \ref{sec:intro}. When applied to Epoch I, the search produces multiple maxima of relatively low detection probability, resulting in a poor fit which cannot constrain the pulse period and first derivative to within the search bounds. We therefore do not show the fitted Gaussian, and we choose to fold the pulse profile using the maximum nearest the values measured for Epochs II and III. The result is a profile with weak pulsations which are not detected when the last 5000\,s of Epoch I are omitted. During Epochs II and III, however, the pulse period is well-constrained, resulting in distinctive pulse profiles, shown in the right column. Note that the scale of the y-axis in panel (c) is narrower than those of (f) and (i) in order to better illustrate the pulse profile during Epoch I.} \label{fig:Pdotsearches} \end{figure*} \quad \begin{deluxetable*}{ccccc} \tablecolumns{5} \tabletypesize{\scriptsize} \tablecaption{Results of the folding pulsation search for each epoch.\label{table:pulsedata}} \tablehead{ \colhead{Epoch} & \colhead{$T_{ref}$ (MJD)} & \colhead{$P$ (s)} & \colhead{$|\dot{P}|\ (10^{-8}\,\mathrm{s\,s^{-1}})$} & \colhead{Pulse Fraction (\%)} } \startdata I & $56113.28661210$ & \nodata & \nodata & $<4.5$ \\ II & $56113.92279551$ & $0.70121(20)$ & $<1.2$ & $21.5 \pm 1.5$ \\ III & $56145.10372569$ & $0.70117(9)$ & $<0.77$ & $40.9 \pm 0.5$ \\ \enddata \end{deluxetable*} We performed a timing analysis of both observations using the Stingray \citep{Huppenkothen2016Stingray:Software} and HENDRICS \citep{Bachetti} software packages in order to determine the pulse fraction, pulse period, and spin-up rate during each epoch. The results of this analysis are shown in Table \ref{table:pulsedata}. The pulse fraction, $PF$, is defined as follows \begin{equation} PF = \frac{F_{\rm max} - F_{\rm min}}{F_{\rm max} + F_{\rm min}} \end{equation} where $F_{\rm max}$ and $F_{\rm min}$ are the maximum and minimum fluxes in the pulse profile, respectively. All pulse fractions and corresponding errors quoted were calculated using a Monte Carlo analysis. Given a measured pulse period and derivative, we folded the observed events into a pulse profile with sixteen phase bins per cycle. The uncertainty in flux for each phase bin is given by a Poisson distribution. We sampled this distribution for each phase bin to produce a large number of simulated pulse profiles and passed these profiles through a Savitzky-Golay filter \citep{Savitsky1964}. We thus arrived at a distribution of smoothed profiles from which we extracted the mean pulse fraction and corresponding confidence regions. All uncertainties and upper limits quoted in this section and following sections correspond to 90\% confidence ranges unless otherwise indicated. Before searching for and analyzing pulsations, we first determined the orbital phase of the observations. Using the orbital parameters reported by \cite{Falanga2015}, we determined the mid-eclipse times which occurred immediately before and after the observations according to the quadratic orbital change function \begin{equation} T_n = T_0 + nP_{\rm orb} + \frac{1}{2}n^2P_{\rm orb}\dot{P}_{\rm orb} \end{equation} where $T_0$ is the reference epoch (MJD 52846.6888), \textit{n} is the number of elapsed orbits, $P_{\rm orb}$ is the orbital period measured at $T_0$ (3.8919232\,days), and $\dot{P}_{\rm orb}$ is the time derivative of the orbital period measured at $T_0$ ($\mathrm{-3.77\times10^{-8}\,day\,day^{-1}}$). Defining the mid-eclipse times preceding and following each observation as orbital phases 0 and 1, respectively, we found that the first observation occurred between orbital phases $0.34$ and $0.61$, while the second observation occurred between phases $0.52$ and $0.61$. These orbital phases are determined to better than $10^{-5}$ and lie far from the eclipse ingress and egress times. Therefore we can be confident that there were no obscuration effects due to the supergiant companion. We performed pulsation searches on the combined filtered and calibrated FPMA and FPMB events for each of the three epochs. When combining the FPMA and FPMB events for each epoch, we produced common good-time intervals (GTIs) in order to avoid introducing artificial variability due to non-simultaneous observation and differences in sensitivity between the two focal plane modules. We began our pulsation search by performing a dynamic search using the HENDRICS function \texttt{dyn\_folding\_search}. This function steps over time and pulse period, folds the events into a pulse profile with that period, and calculates the $Z_{4}^{2}$ statistic \citep{Buccheri1983}, a measure of the probability of pulsation detection, of the profile produced at each step. The probability density function of the $Z_{4}^{2}$ statistic is equivalent to that of a $\chi^{2}$ distribution with 8 degrees of freedom. Therefore, one can use the $\chi^{2}$ cumulative distribution function to determine the probability that a pulse profile with a given value of $Z_{4}^{2}$ has been produced by noise. For example, a profile with $Z_{4}^{2}=13$ has a 10\% probability of being produced by noise, therefore this can be considered a detection at 90\% confidence. A 5-sigma detection, corresponding to a probability of $5.7\times10^{-7}$ that a signal has been produced by noise, would yield a $Z_{4}^{2}$ statistic of 44. The results of the dynamic pulsation search applied to Epoch III (Figure \ref{fig:Pdotsearches}g) confirm the presence of strong pulsations. The pulsations appear to remain persistent throughout the observation with a period around $0.701\,\mathrm{s}$ and without a large period derivative. The results of this test are less striking upon application to Epochs I and II (Figures \ref{fig:Pdotsearches}a and \ref{fig:Pdotsearches}d). The $Z_{4}^{2}$ statistic reaches only a fraction of the maximum value measured during Epoch III, and during Epoch I, there is no sign of pulsations. However, during Epoch II, pulsations appear to have begun with a period similar to that observed in Epoch III, reaching a maximum detection probability at the end of the observation. We next simultaneously searched for the period and first period derivative of the pulsations for each of the three epochs using the HENDRICS function \texttt{folding\_search}. The results of this search are shown in the second column of Figure \ref{fig:Pdotsearches}. We were able to measure the pulse period and to put upper limits on the first derivative during Epochs II and III by fitting the resulting $Z_{4}^{2}$ distributions to 2-dimensional Gaussian distributions\footnote{The uncertainties reported for the pulse periods and the upper limits of the spin-up rates were determined using the widths of the fitted Gaussian distributions.} with mean pulse periods at $P_{\mathrm{II}}=0.70121(20)\,\mathrm{s}$ and $P_{\mathrm{III}}=0.70117(9)\,\mathrm{s}$, respectively. We have obtained upper limits on the instantaneous spin-up rates of $|\dot{P}_\mathrm{II}|<1.2\times10^{-8}\,\mathrm{s\,s^{-1}}$ and $|\dot{P}_\mathrm{III}|<7.7\times10^{-9}\,\mathrm{s\,s^{-1}}$. Note that there is a correlation between $P$ and $\dot{P}$ apparent in Figure \ref{fig:Pdotsearches}. This is not intrinsic to the source itself but is an artifact introduced by the search procedure. In addition to measuring the pulse periods and constraining the instantaneous spin-up rates, we have also placed an upper limit on the secular spin-up rate between Epoch II and Epoch III of $|\dot{P}_{\rm sec}|<10^{-10}\mathrm{\,s\,s^{-1}}$. After determining the pulse periods and the spin-up rates, we then folded the events into pulse profiles at the $Z_{4}^{2}$ maxima produced by the pulsation searches. These pulse profiles are shown in Figures \ref{fig:Pdotsearches}f and \ref{fig:Pdotsearches}i. We observe distinct pulsations in the pulse profiles of Epochs II and III. The probability that these profiles were produced by noise is vanishingly small, being less than $10^{-37}$ in both cases. In stark contrast, we were completely unable to constrain the pulse period during Epoch I. There are multiple local maxima of comparable amplitude in the $Z_{4}^{2}$ distribution. We therefore chose to fold the events into a pulse profile (see Figure \ref{fig:Pdotsearches}c) using the maximum nearest the values measured during Epochs II and III. This corresponds to a pulse period of $P_{\rm fold}=0.70113161\,\mathrm{s}$ and a first derivative of $\dot{P}_{\rm fold}=2.69\times10^{-9}\,\mathrm{s\,s^{-1}}$. The resulting profile has a pulse fraction of $<4.5\%$. This is relatively small compared to the pulse fractions of $21.5\% \pm 1.5\%$ during Epoch II and $40.9\% \pm 0.5\%$ during Epoch III. In addition to the small pulse fraction during Epoch I, the $Z_{4}^{2}$ value of the calculated pulse profile is less than 15 and corresponds to a probability of 7\% that the detection is due to noise. Furthermore, when the last 5000\,s of Epoch I are omitted from the pulsation search even this weak detection disappears, indicating that pulsations were absent until the very end of Epoch I. Therefore, we refer to Epoch I as the non-pulsing state. The pulse periods that we have measured during Epochs II and III and the resulting pulse profiles are in line with previous measurements \citep[cf.][]{Moon2003, Naik2004, Raichur2010, Inam2010}. In particular, we have extrapolated previous results by applying an orthogonal distance regression to the pulse frequencies reported by \citeauthor{Inam2010}. We arrived at a spin up of $\dot{f}_{pulse}=2.589(8)\times10^{-11}\mathrm{\,Hz\,s}^{-1}$ during the interval 50093-52988\,MJD. When propagated forward to the beginning of Epoch III, a pulse period of $0.70093(2)\,\mathrm{s}$ is predicted. The discrepancy of $2.39(96)\times10^{-4}$\,s is small but nonzero. This is consistent with a piece-wise spin-up evolution, reported by \citeauthor{Inam2010}, in which the spin-up rate is variable. We also note that, although the pulse fraction increases with energy, the shapes of the pulse profiles during Epochs II and III do not appear to vary significantly with energy. \section{Spectral Analysis} \label{sec:spectral} \begin{deluxetable*}{ c c r r r } \tablecolumns{5} \tabletypesize{\scriptsize} \tablecaption{Values of spectral parameters determined by $\chi^2$ fitting of observed spectra. Two models are shown: a fully covered power law with a Fermi-Dirac-like cutoff modeled by \texttt{fdcut} (top), and a partially covered power law with an exponential cutoff, modeled by \texttt{cutoffpl} (bottom).\label{table:spectralPar}} \tablewidth{0pt} \tablehead{ \colhead{Component} & \colhead{Parameter} & \colhead{Epoch I} & \colhead{Epoch II} & \colhead{Epoch III} } \startdata \noalign{\smallskip} \texttt{tbabs} & $N_{\rm H}$ ($10^{22}\,\mathrm{cm}^{-2}$) & $16 \pm 5$ & $24^{+5}_{-4}$ & $1.9^{+1.3}_{-0.9}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{4}{*}{\texttt{fdcut}} & $\Gamma$ & $1.0$\tablenotemark{$\dagger$} & $1.0$\tablenotemark{$\dagger$} & $1.0$\tablenotemark{$\dagger$} \\ \noalign{\smallskip} & $E_{\rm cut}$ (keV) & $17.3^{+1.6}_{-2.3}$ & $11.0^{+3.2}_{-5.1}$ & $9.1^{+2.5}_{-3.0}$ \\ \noalign{\smallskip} & $E_{\rm fold}$ (keV) & $6.7^{+0.8}_{-0.6}$ & $8.7^{+1.0}_{-0.8}$ & $9.6 \pm 0.4$ \\ \noalign{\smallskip} & Norm ($10^{-3}$) & $8.0^{+1.4}_{-0.9}$ & $13.8^{+3.2}_{-2.5}$ & $107^{+29}_{-16}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{3}{*}{\texttt{gauss}} & $E_{\rm 6.4}$ (keV) & $6.36 \pm 0.04$ & $6.36 \pm 0.06$ & $6.51^{+0.09}_{-0.07}$ \\ \noalign{\smallskip} & $\sigma_{\rm 6.4}$ (keV) & $0.24 \pm 0.06$ & $0.21\pm 0.10$ & $0$\tablenotemark{$\dagger$} \\ \noalign{\smallskip} & Norm ($10^{-4}$) & $3.3 \pm 0.5$ & $2.3^{+0.6}_{-0.5}$ & $1.9 \pm 0.7$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{3}{*}{\texttt{gauss}} & $E_{\rm 13.5}$ (keV) & $13.5$\tablenotemark{$\dagger$} & $13.5$\tablenotemark{$\dagger$} & \multirow{3}{*}{\nodata} \\ \noalign{\smallskip} & $\sigma_{\rm 13.5}$ (keV) & $2.2^{+0.8}_{-0.7}$ & $1.7^{+1.8}_{-0.9}$ & \\ \noalign{\smallskip} & Norm ($10^{-4}$) & $2.3^{+1.6}_{-1.1}$ & $1.1^{+1.7}_{-0.7}$ & \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{2}{*}{\texttt{bbody}} & $kT$ (keV) & $0.36^{+0.04}_{-0.06}$ & $0.31^{+0.03}_{-0.04}$ & $0.26 \pm 0.09$ \\ \noalign{\smallskip} & Norm ($10^{-3}$) & $3.6^{+3.5}_{-1.9}$ & $19.6^{+24.6}_{-9.3}$ & $56^{+195}_{-45}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{2}{*}{\texttt{bbody}} & $kT$ (keV) & \multirow{2}{*}{\nodata} & \multirow{2}{*}{\nodata} & $1.46 \pm 0.07$ \\ \noalign{\smallskip} & Norm ($10^{-3}$) & & & $2.3 \pm 0.4$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} & Absorbed Flux\tablenotemark{$a$} ($10^{-11}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$) & $6.26^{+0.15}_{-0.32}$ & $6.67^{+0.06}_{-0.49}$ & $99.3^{+0.8}_{-15.4}$ \\ \noalign{\smallskip} & Unabsorbed Flux\tablenotemark{$a$} ($10^{-10}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$) & $1.46^{+0.56}_{-0.33}$ & $2.82^{+1.16}_{-0.69}$ & $11.6^{+4.0}_{-0.9}$ \\ \noalign{\smallskip} & Unabsorbed Luminosity ($10^{37}\,\mathrm{erg}\,\mathrm{s}^{-1}$) & $6.43^{+2.48}_{-1.44}$ & $12.4^{+5.1}_{-3.0}$ & $51.0^{+17.7}_{-4.1}$ \\ \noalign{\smallskip} & $\chi^2/\mathrm{d.o.f.}$ & 693/630 (1.10) & 681/679 (1.00) & 1091/1019 (1.07) \\ \noalign{\smallskip} \hline \hline \noalign{\smallskip} \multirow{2}{*}{\texttt{tbpcf}} & $N_{\rm H}$ ($10^{23}\,\mathrm{cm}^{-2}$) & $7.8^{+1.9}_{-2.1}$ & $5.6^{+0.9}_{-1.2}$ & $26^{+18}_{-10}$ \\ \noalign{\smallskip} & $f_{\rm covering}$ (\%) & $51 ^{+7}_{-8}$ & $61^{+2}_{-4}$ & $15^{+7}_{-6}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{3}{*}{\texttt{cutoffpl}} & $\Gamma$ & $0.5$\tablenotemark{$\dagger$} & $0.5$\tablenotemark{$\dagger$} & $0.5$\tablenotemark{$\dagger$} \\ \noalign{\smallskip} & $E_{\rm cut}$ (keV) & $8.4 \pm 0.6$ & $8.9^{+0.4}_{-0.3}$ & $9.1 \pm 0.2$ \\ \noalign{\smallskip} & Norm ($10^{-3}$) & $6.7^{+1.5}_{-1.2}$ & $7.5^{+0.7}_{-0.9}$ & $53 \pm 4$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{3}{*}{\texttt{gauss}} & $E_{\rm 6.4}$ (keV) & $6.35^{+0.04}_{-0.05}$ & $6.34^{+0.06}_{-0.07}$ & $6.52 \pm 0.08$ \\ \noalign{\smallskip} & $\sigma_{\rm 6.4}$ (keV) & $0.22 \pm 0.07$ & $0.24^{+0.10}_{-0.09}$ & $0$\tablenotemark{$\dagger$} \\ \noalign{\smallskip} & Norm ($10^{-4}$) & $3.6^{+0.7}_{-0.6}$ & $2.6^{+0.7}_{-0.5}$ & $2.0^{+0.8}_{-0.9}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{3}{*}{\texttt{gauss}} & $E_{\rm 13.5}$ (keV) & $13.5$\tablenotemark{$\dagger$} & $13.5$\tablenotemark{$\dagger$} & \multirow{3}{*}{\nodata} \\ \noalign{\smallskip} & $\sigma_{\rm 13.5}$ (keV) & $4.9^{+1.3}_{-1.1}$ & $1.7^{+3.3}_{-1.1}$ & \\ \noalign{\smallskip} & Norm ($10^{-4}$) & $7.8 \pm 3.1$ & $1.0^{+3.0}_{-0.7}$ & \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{2}{*}{\texttt{bbody}} & $kT$ (keV) & \multirow{2}{*}{\nodata} & \multirow{2}{*}{\nodata} & $0.23^{+0.08}_{-0.07}$ \\ \noalign{\smallskip} & Norm ($10^{-1}$) & & & $1.1^{+72.3}_{-1.0}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{2}{*}{\texttt{bbody}} & $kT$ (keV) & \multirow{2}{*}{\nodata} & \multirow{2}{*}{\nodata} & $1.46^{+0.08}_{-0.09}$ \\ \noalign{\smallskip} & Norm ($10^{-3}$) & & & $3.0^{+0.8}_{-0.7}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} & Absorbed Flux\tablenotemark{$a$} ($10^{-11}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$) & $6.49^{+0.06}_{-0.24}$ & $6.90^{+0.08}_{-0.09}$ & $105^{+2}_{-18}$ \\ \noalign{\smallskip} & Unabsorbed Flux\tablenotemark{$a$} ($10^{-10}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$) & $1.06^{+0.17}_{-0.14}$ & $1.18^{+0.08}_{-0.11}$ & $12.7^{+1.1}_{-0.6}$ \\ \noalign{\smallskip} & Unabsorbed Luminosity ($10^{37}\,\mathrm{erg}\,\mathrm{s}^{-1}$) & $4.65^{+0.73}_{-0.62}$ & $5.20^{+0.34}_{-0.47}$ & $55.8^{+4.8}_{-2.8}$ \\ \noalign{\smallskip} & $\chi^2/\mathrm{d.o.f.}$ & 705/632 (1.11) & 681/681 (1.00) & 1090/1019 (1.07) \\ \noalign{\smallskip} \enddata \tablenotetext{a}{Fluxes are reported for FPMA in the energy range $2-10$ keV.} \tablenotetext{\dagger}{Values marked with a dagger were frozen during fitting and therefore have no error estimates.} \end{deluxetable*} We also performed a spectral analysis of each of the three epochs using Xspec \citep[v.12.10.0][]{Arnaud1996}. We simultaneously fit the spectra measured by FPMA and FPMB while including a relative constant to account for small ($<10\%$) differences in flux between the two focal plane modules. In addition, for all spectral models described in this section, we have included an absorber in the form of the \texttt{tbabs} component. This component compensates for absorption due to Galactic material. We fixed the equivalent HI column density of this component at $N_{\rm H} = 4.58\times10^{21}\ \mathrm{cm}^{-2}$, determined from the full-sky HI survey, HI4PI \citep{BenBekhti2016}. The spectral fits were performed using interstellar medium abundances reported by \cite{Wilms2000}. The spectra for each epoch are shown in Figure \ref{fig:spectra}, and the results of our spectral analysis are presented in Table \ref{table:spectralPar}. Each panel in Figure \ref{fig:spectra} also includes residuals for three different models, including a simple absorbed power law meant to illustrate additional structure in the spectra. We have found two models that provide fits of similar quality and which result in physically reasonable parameters. Motivated by previous work by, e.g., \citet{Woo1995WindObservations}, \citet{Angelini1991THEL}, and \citet{2014HEAD...1411702P}, the first model we investigated was an absorbed power law with a phenomenological cutoff, named \texttt{fdcut} \citep{Tanaka1986ObservationsSources} for its resemblance to the Fermi-Dirac distribution, which has both a cutoff energy and folding energy and can be written \begin{equation} f_{\rm FD}(E) = \frac{E^{-\Gamma}}{1 + e^{(E-E_{\rm cut})/E_{\rm fold}}} \end{equation} where $\Gamma$ is the photon index, $E_{\rm cut}$ is the cutoff energy, and $E_{\rm fold}$ is the folding energy. The absorber in this model is fully covering and is modeled by \texttt{tbabs} \citep{Wilms2000}. The second model consists of a power law with an exponential cutoff, represented by the Xspec model \texttt{cutoffpl}, partially covered by an absorber modeled by \texttt{tbpcf}. In addition to these base models, we found that the fits benefited from the addition of secondary components, differing depending on the epoch. Below, we describe each of these models in more detail. In order to compare the usefulness of additional model components, we use the Bayesian Information Criterion \citep[BIC;][]{Schwarz1978EstimatingModel}. In the case of $\chi^2$ fitting in Xspec, the BIC is given by \begin{equation} \mathrm{BIC} = k\ln(n) + \chi^2 \end{equation} where $n$ is the number of PHA bins being fitted and $k$ is the number of parameters estimated by a given model. For a given data set, model selection can be achieved by minimizing the BIC, which penalizes models with many parameters. For our analysis, $n$ lies between 600 and 1100 bins, meaning that removing one parameter from a model without a change in $\chi^2$ results in a decrease in the BIC of $\Delta \mathrm{BIC} \approx -7$. In determining the impact of adding or subtracting components, this may be considered one ``unit'' of model improvement. \begin{figure}[t!] \gridline{\fig{spectral_analysis_epochI.pdf}{0.375\textwidth}{(a) Epoch I}} \gridline{\fig{spectral_analysis_epochII.pdf}{0.375\textwidth}{(b) Epoch II}} \gridline{\fig{spectral_analysis_epochIII.pdf}{0.375\textwidth}{(c) Epoch III}} \caption{Observed spectra for each epoch are shown unfolded against a model with constant (energy-independent) flux in the top panels, and fitting residuals for three different models are shown in the lower panels. The FPMA spectra are shown in black while the FPMB spectra are shown in red. The data is consistently described well by a partially absorbed power law with a high energy cutoff. While the shape of the continuum remains relatively constant, the covering fraction and absorbing column vary between successive epochs, with the covering fraction decreasing significantly between Epoch II and Epoch III.} \label{fig:spectra} \end{figure} The spectra observed during the two observations are qualitatively different, as is visible in Figure \ref{fig:spectra}. For Epochs I and II, an absorbed \texttt{fdcut} model alone results in significant excess residuals below 4\,keV, around 6.4\,keV, and above 10\,keV. The excess around 6.4\,keV is consistent with previous detections of an Fe K$\alpha$ line in {SMC~X-1}, such as those by \citet{Woo1995ROSATState} and \citet{Naik2004}. We included a Gaussian component at this energy to model the line, allowing both the position and width of the line to vary. To address the low-energy excess, we added a black body component with temperature $kT_{\rm BB}<0.5$\,keV. Such a component has previously been detected in observations of {SMC~X-1}\ by the {\em Chandra X-ray Observatory} and {\em XMM-Newton} \citep{Neilsen2004PHASEX-1, Hickox2005}. Each of these components decreases the BIC by about 100, and adding both of these components results in a combined improvement to the fit of $\Delta \mathrm{BIC} = -304$ for Epoch I and $\Delta \mathrm{BIC} = -214$ for Epoch II, indicating that the improvement is significant. Adding these components does not resolve the ``bump" above 10\,keV. This residual resembles the 10\,keV feature observed in other accreting pulsars \citep[e.g.][]{Coburn2002MagneticExplorer, Mihara1995ObservationalGinga, Santangelo1998BeppoSAXX-3}, leading us to include a Gaussian component at $E=13.5$\,keV. We froze the position of this component in order to better constrain other parameters, while the width of the Gaussian was allowed to vary. Adding this component does not result in a significant improvement to the fit, with $\Delta \mathrm{BIC} = -7$ for Epoch I and $\Delta \mathrm{BIC} = +6$ for Epoch II. However, because the residuals are clearly reduced, and this feature is consistent with previous studies of accreting pulsars, the component is included in the final fit. In the case of Epoch III, the \texttt{fdcut} model again requires a low-temperature ($kT_{\rm BB}<0.5$\,keV) black body in order to explain excess flux below 4\,keV, the addition of which improves the fit by $\Delta \mathrm{BIC} = -93$. In addition, although the excess near 6.4\,keV is not as prominent as in the previous two Epochs, adding a line at this energy improves the fit significantly ($\Delta \mathrm{BIC} = -46$). However, the width of the line is poorly constrained, leading us to freeze it at $\sigma = 0$. We also found that for Epoch III, adding a $kT\approx1.5$\,keV blackbody component, like the one included by \citet{2014HEAD...1411702P} in their analysis of this observation, improves the fit by $\Delta \mathrm{BIC} = -46$ while also eliminating an excess of flux above 20\,keV. Similar blackbody components with temperatures ranging between 1.2\,keV and 3\,keV have proved useful for modeling the spectra of several BeXRBs \citep[e.g.][]{Reig1999, LaPalombara2009XMM-NewtonRXJ0440.9+4431, Caballero2013ASuzaku}. In contrast to Epochs I and II, the 13.5\,keV bump is not observed during Epoch III, and its addition to the \texttt{fdcut} model does not improve the fit nor can this component be easily constrained. The partially covered cutoff power law provides a similarly good fit to the data as the Fermi-Dirac-like model. However, the secondary components differ somewhat. Adding a line at $E=6.4$\,keV to the base model again improves the fit significantly during Epochs I and II, with $\Delta \mathrm{BIC} = -97$ and $\Delta \mathrm{BIC} = -43$, respectively. Adding this line to Epoch III does not result in a striking improvement (only $\Delta \mathrm{BIC} = -6$), but the position of the Gaussian is constrained to the same value as in the \texttt{fdcut} model. The 13.5\,keV bump is also added to this model for Epochs I and II, again slightly improving the fits. Unlike the absorbed \texttt{fdcut} model, the partially covered cutoff power law does not require a low-temperature black body to resolve excess emission below 4\,keV during Epochs I and II. This component, along with the $kT_{\rm BB}\approx1.5$\,keV black body, remains in Epoch III. Adding each of these black body components individually yields different results. Including only the warm $kT_{\rm BB}\approx1.5$\,keV black body improves the fit by $\Delta \mathrm{BIC} = -142$, while adding only the low-temperature $kT_{\rm BB}<0.5$\,keV black body does not improve the fit, yielding $\Delta \mathrm{BIC} = +6$. However, when both components are included, the fit is improved by $\Delta \mathrm{BIC} = -170$. In other words the combination of the low-temperature and high-temperature black body components improves the fit more than each of these components individually. None of the three spectra can be fit to a simple one-component model, instead requiring several secondary components in order to properly fit the {\em NuSTAR}\ observations. In order to reduce degeneracies resulting from the number of parameters used in the final models, we froze some key parameters at values which are consistent with initial estimates. As mentioned above, the position of the bump above 10\,keV was frozen at $E=13.5$\,keV, and the width of the Fe K$\alpha$-like line in Epoch III was frozen at $\sigma=0$\,keV. In addition to these, we froze the photon indices across all three Epochs at $\Gamma=1.0$ for the \texttt{fdcut} model and $\Gamma=0.5$ for the \texttt{cutoffpl} model. The final model components, parameter estimates, and fit information are shown in Table \ref{table:spectralPar}. Here we remind the reader that the uncertainties quoted on spectral parameters represent 90\% confidence regions. We found that for both models, the absorption parameters vary between epochs, while the underlying power law models show less variability. In the case of the \texttt{fdcut} model, the absorbing column density decreases by an order of magnitude from $N_{\rm H} = (2.4^{+0.5}_{-0.4})\times10^{23}\,\mathrm{cm}^{-2}$ during Epoch II, to $N_{\rm H} = (1.9^{+1.3}_{-0.9})\times10^{22}\,\mathrm{cm}^{-2}$ during Epoch III. As we have shown, the pulse fraction simultaneously increases between these two epochs. The shape of the \texttt{fdcut} component on the other hand remains consistent between Epochs II and III, with the cutoff energy of $E_{\rm cut}\approx 10\,{\rm keV}$ folding energy of $E_{\rm fold}\approx 9\,{\rm keV}$. However, Epoch I has a slightly higher cutoff energy and lower folding energy: $E_{\rm cut} = 17.3^{+1.6}_{-2.3}\,{\rm keV}$ and $E_{\rm fold} = 6.7^{+0.8}_{-0.6}\,{\rm keV}$. On the other hand, in the case of the partially covered cutoff power law, the shape of the \texttt{cutoffpl} component stays constant. The exponential cutoff is consistent with $E_{\rm cut}\approx 9\,{\rm keV}$ during all three epochs. The absorption parameters show little variation between Epoch I and Epoch II, but the covering fraction drops by a factor of four from $f_{\rm covering} = (61^{+2}_{-4})\%$ in Epoch II to $f_{\rm covering} = (15^{+7}_{-6})\%$ in Epoch III. Between these epochs, the column density appears to increase by a factor of a few, but this parameter is poorly constrained during Epoch III due to the low covering fraction. In both models, the underlying continuum increases in flux between successive epochs while the flux of the apparent Fe K$\alpha$ line remains constant. Taken together, these observations indicate that the increase in total flux between epochs cannot be attributed solely to the absorption included in the models described above, and that the source of the Fe K$\alpha$ line is likely distinct from the source of the continuum (e.g., originating in the photoionization region surrounding the central X-ray source). In addition, the appearance of the $kT_{\rm BB}\approx1.5\,{\rm keV}$ black body in Epoch III, observed in both the \texttt{fdcut} model and the \texttt{cutoffpl} model, may indicate that the emitting region responsible for this component either did not exist or was obscured during Epochs I and II. \section{Discussion} \label{sec:discussion} Our timing analysis has shown that the source was observed in a non-pulsing state during Epoch I which subsequently evolved into a pulsing state, observed in Epoch II. During Epoch III, about a month after Epoch II, the pulsations had increased in strength, with the pulse fraction increasing by nearly a factor of two. At the same time, our spectral analysis has shown that for all three epochs, the emission of the source can be described by two different models: a fully covered power law with a phenomenological Fermi-Dirac-like cutoff, and a partially absorbed power law with an exponential cutoff. Each of these models requires additional components, but we found that both models are consistent with variable absorption parameters between the low and high pulse fraction states. In particular, the Fermi-Dirac model exhibits a decrease in absorption column density between Epoch II and Epoch III, and the cutoff power law is consistent with a decrease in the covering fraction (and a poorly constrained increase in column density), between the low and high pulse fraction states. In addition, the luminosity was observed to gradually increase between the non-pulsing Epoch I and the pulsing Epoch II. In order to synthesize these results, we propose that the pulsing region was observed emerging from behind absorbing material. Given that Epochs I and II took place near the end of the low state, as illustrated in Figure \ref{fig:BATcurve} and that the super-orbital period has been attributed to a warped precessing accretion disk \citep{Wojdowski1998, Clarkson2003, Dage2018}, the absorbing material obscuring the pulsing region is likely part of the accretion disk. In short, the warped accretion disk absorbs and scatters the pulsed emission from the neutron star, leading to the absence of detected pulsations in the low state and a gradual turn-on of pulsations as the disk moves out of the line of sight. This picture is consistent with the opaque inner disk region described by \citet{Hickox2005} to explain apparent reprocessing of pulsed emission. Their analysis describes the case when both the neutron star and the inner regions of the warped accretion disk are visible to the observer, while ours describes the opposite case when the warped disk lies between the neutron star and the observer, obscuring the pulsing emission regions. The relatively high absorption column density of the partially covered cutoff power law in Epoch III does not immediately fit within this interpretation. Although this column density is not particularly well constrained, it is still well above the values measured during the first two epochs. This increased column density is accompanied by an increase in the brightness of the power law itself; in other words, the increased flux between the two observations cannot be attributed solely to the absorption included in the models presented here. Thus one interpretation of the combination of a relatively high column density and a relatively low covering fraction is that much of the absorber is completely Compton thick during Epochs I and II. The partially covering absorber represented by \texttt{tbpcf}, then, only models the optically thinner regions of the accretion disk leading the covering fraction and column density to be underestimated for the first two epochs. During Epoch III, according to this interpretation, the source is observed through an overall less opaque region of the accretion disk so that a higher column density is measurable. Pulse fraction variability, including pulse drop-out, has been observed in several other accreting pulsars. In some cases, this variability has been attributed to changes in accretion via the propeller effect \citep{Illarionov1975}. These include HMXBs Vela~X-1 and GX~301$-$2, which have been shown to exhibit off-states during which the sources drop in luminosity and pulsations are no longer detected \citep{Kreykenbohm2008HighStates, Furst2011AstronomyXMM-Newton}. LMC~X-4, in which pulse drop-out and turn-on have been observed during the high state \citep{Brumback2018DiscoveryX-4}, presents a different case. Still others, such as the low mass X-ray binary Her~X-1, exhibit pulse fraction variability attributed to obscuration by warped accretion disks \citep{Kuster2002}. Of these examples, the case of variable obscuration in Her~X-1 is most analogous to the behavior we have observed in {SMC~X-1}. \section{Conclusions} \label{sec:conclusions} We have performed spectral and timing analyses of the accreting neutron star binary {SMC~X-1}\ for three separate epochs occurring during two {\em NuSTAR}\ observations. Our timing analysis confirmed that the source was observed in the midst of a turn-on of pulsations, which subsequently increased in strength before strong pulsations were observed a month later. Our spectral analysis, which showed variable absorber parameters and luminosity, led us to conclude that the non-pulsing state was due to obscuration of the pulsing region by a warped accretion disk, and that the gradual turn-on was due to the emergence of the pulsing emission from behind the disk. Similarly to {SMC~X-1}, ULXPs are also known to exhibit variability in their luminosities and pulse fractions. In particular, the gradual change in pulse fraction observed in the beginning of the 2014 observation of M82~X-2 \citep{Bachetti2014} may share the same physical origin as the pulse fraction variability we have observed in {SMC~X-1}. In that case, the super-orbital periods observed in ULXPs may be attributable to precessing accretion disks which periodically obscure the pulsing source, resulting in variability in the observed pulse fractions. Spectral and timing analyses at different points in the super-orbital cycles of known ULXPs, like the analysis we have carried out for {SMC~X-1}, may help to illuminate the accretion mechanism and causes of variability in this recently discovered class of X-ray binary. \acknowledgements This work was supported under NASA grant No. NNG08FD60C, and made use of data obtained with {\em NuSTAR}, a project led by Caltech, funded by NASA and managed by NASA/JPL and has utilized the NUSTARDAS software package, jointly developed by the ASDC (Italy) and Caltech (USA). We would also like to thank the anonymous referee for providing helpful comments that helped to improve the quality of this paper. \software{Stingray \citep{Huppenkothen2016Stingray:Software}, HENDRICS \citep{Bachetti}, NUSTARDAS, MaLTPyNT \citep{Bachetti}, DS9 \citep{ds9}}
\section{Introduction} From the spread of pathogens \cite{flu,manore14,stanley1} through places such as airports, schools \cite{children1} and hospitals \cite{onnela}, to the spread of online popularity \cite{boyle,havlin} and rumors through Internet chatrooms \cite{finance,neil} or bulletin boards \cite{sornette1,riley}, the issue of viral spreading through popular places is of prime importance. Many sophisticated epidemiological models have been proposed of viral dynamics \cite{Keeling,previous,May,Koopman,Murray,cvespignani,schwartz,blasius,dodds,colizza,Havlin2,us,watts,stanley2,3,murase,vesp,scholtes14,barrat,baron,vesp2,kaski,ker,perra,10,5,11more,vesp09,11,gonc,estrada} with a theoretical focus spanning from the well-mixed (i.e. mass-action) limit through to heterogeneous and even dynamically-evolving networks \cite{cvespignani,schwartz,blasius,dodds,Havlin2,11more,5,10,vesp2,baron,barrat,vesp,stanley2,watts,3,murase,us}. There is however a lack of quantitative understanding of how people revisiting a popular place impacts the detailed profiles that emerge from viral spreading (e.g. school, supermarket, airport or online bulletin board). In particular, the interplay between the mobility through such a space and the average occupancy, has not been addressed in any analytically amenable way to our knowledge. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.9\linewidth]{Figure1.pdf} \end{center} \caption{(Color online) Schematic diagram showing our model of viral spreading due to revisits to a popular offline or online space. (a) At each timestep, an agent who happens to be outside the popular space, has a probability $p_{j}$ to enter. Meanwhile, an agent who happens to be inside the popular space has a probability $p_{l}$ to leave at that timestep. (b) At each timestep, an infected agent who happens to be inside the popular space, has a probability $q_{i}$ to infect any susceptible agent who also happens to be inside the space at that timestep. Also at each timestep, each infected agent, whether inside or outside the space, recovers with probability $q_{r}$ as in standard SIR (Susceptible-Infected-Recovered) models.} \label{fig1} \end{figure} In this paper we present a simple model of co-existing human mobility and infection dynamics. Our model predicts highly non-trivial viral dynamics due to the direct interplay between the mobility through, and the average occupancy of, a generic popular space $G$ (see Fig. \ref{fig1}(a)). In Sec. II we introduce the model and its variants. Section III focuses on the SIR (Susceptible-Infected-Recovered) process. We analyze the co-evolving dynamical equations numerically and analytically. We obtain highly diverse infection profiles $I(t)$ as a function of the mobility and occupancy of the public place. Even when each individual agent spends the same average time in $G$ and has the same average number of contacts, and the average attendance of $G$ is constant, infection profiles arise which are qualitatively different from the well-mixed limit (e.g. resurgent epidemics). We then derive a specific analytic condition involving the mobility and occupancy, for which the co-existing dynamical processes reduce to an effective SIR model with renormalized parameters. More generally, we study the variation in shape of the infection profile by looking at its extensive features such as duration, severity and time-to-peak, and uncover an interesting linearity between the infection probability and the mobility. We use this analysis to compare the outcome of the model with modern-day outbreaks from the social domain, finding good agreement. We then compare these results to a broadcast-type infection mechanism, where infection occurs through an individual's presence in the popular space (e.g. viruses on surfaces, or an endemic population of infected mosquitos) as opposed to through another infected person, and hence the infection process does not depend on the other individuals in the popular space. We find that it is only in the specific limit of the infection probability being much greater than the recovery probability, that the results are similar for these two distinct mechanisms. The significant qualitative differences that we otherwise observe, suggest that distinct policies need to be implemented by planners when dealing with infected individuals (e.g. students or travelers) as opposed to infected spaces (e.g. schools or airports). In Sec. IV we present results for our model with an SIS (Susceptible-Infected-Susceptible) process. We find that the system is still tractable by means of a set of differential equations. In addition, we uncover a set of conditions under which the system resembles a standard SIS model and hence the evolution of the subpopulation of susceptibles can be obtained analytically. We also employ the same analysis in the corresponding subsection discussing the broadcast mechanism. Sec. V provides the summary and discussion. \section{Model} Figure 1(a) illustrates our model of $N$ agents (e.g. people) with access to a popular space $G$. At any given timestep, an agent not in $G$ has a probability $p_{j}$ to join $G$ while somebody inside $G$ has a probability $p_{l}$ to leave. This effectively generates a dynamical group in $G$, with an occupancy $N_g(t)$ (i.e. number of agents in $G$) which can fluctuate arbitrarily in time. Two useful combinations of $p_{j}$ and $p_{l}$ are: $\gamma_{s}=p_{j}/(p_{j}+p_{l})$ and $\gamma_{m}=2p_{j}p_{l}/(p_{j}+p_{l})$. Note that $1/\gamma_{m}$ is the average of $1/p_{j}$ and $1/p_{l}$, i.e. $1/\gamma_{m} = \frac{1}{2} (1/p_{j} + 1/p_{l})$. The mean number of agents in $G$ is $\langle N_{g}(t) \rangle = Np_{j}/(p_{j}+p_{l}) = N\gamma _{s}$. The total number of agents joining and leaving $G$ on average in a timestep is $N\gamma _{m}$, which characterizes the mobility of the agents. When $p_{j}$ and $p_{l}$ are scaled by a factor $r$, $\gamma_{s}$ and hence $\langle N_{g}(t) \rangle$ remain unchanged while the mobility changes by a factor $r$. Hence varying $\gamma_{m}$ with fixed $\gamma_{s}$ amounts to changing the mobility while keeping $\langle N_{g}(t) \rangle$ fixed. Figure 1(b) shows the effect of adding SIR infection dynamics. At any timestep, any infected person within the popular space $G$ can transmit a virus to any susceptible in $G$ with probability $q_{i}$ (i.e., SIR or SIS mechanism). Later we consider another mechanism where there is a constant probability for every person within the popular space to get infected (i.e. broadcast mechanism). In both cases, no transmission can occur from infected individuals outside $G$. By contrast, since recovery is an individual-based phenomenon, infected individuals both inside and outside $G$ have a probability $q_{r}$ to either recover and become immune (i.e. SIR mechanism) or recover and become susceptible (i.e. SIS mechanism). It is convenient to define an infected individual's contact rate $\lambda=q_{i}/q_{r}$, which is proportional to the basic reproduction rate of an SIR infection in a well-mixed population \cite{Murray}. We note that we can equivalently view the agents in $G$ as instantaneously connected -- hence our model and results can represent $N$ agents in a time-dependent network, or be applied to the common real-world scenario of a social group with time-varying membership \cite{Palla}. Our regime of focus in this paper, in which a popular space has fairly constant occupancy but variable throughput, is of direct relevance to online social groups in for example multi-player online games, where it is known that these groups (e.g. guilds) have a size that is fairly constant yet a membership that changes rapidly overtime \cite{us2}. We stress however that our model is far more general in that it allows for any rate of change of occupancy and throughput. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.95\linewidth]{Figure2.pdf} \end{center} \caption{(Color online) Trajectories of evolution of the system in the $S$-$I$ space for three different sets of parameter choices. The rougher and smoother looking curves are obtained by numerical simulation and by integrating the set of differential equations (Eq. \ref{SIReqsP2P}) respectively. The three setups have the same mean number of agents in $G$ given by $N\gamma_{s}$ where $\gamma_{s}=0.1$. Other parameters are: (Red curves) $\gamma_{m}=0.009$, $\lambda=0.1$, and $q_{i}=0.005$; (Green curves) $\gamma_{m}=0.018$, $\lambda=0.1$, and $q_{i}=0.001$; (Blue curves) $\gamma_{m}=0.0018$, $\lambda=0.022$, and $q_{i}=0.002$. Insets show the time dependence of the total system infection level $I(t)$ for each of the cases.} \label{fig2} \end{figure} \section{SIR case} \subsection{Person-to-person contagion} \subsubsection{Analysis} Figure \ref{fig2} summarizes the rich diversity of behaviors which emerge from our model, and the close agreement between individual runs of the numerical simulation and the coupled differential equations that we describe below. The main panel shows the trajectory of $S$ and $I$ values in the system in the $S$-$I$ space. The trajectory starts from the lower right-hand corner, as initially we have $S/N \sim 1$ and $I/N \sim 0$. The results are in sharp contrast with the SIR model in a well-mixed population in which once $\lambda$ and $I(0)$ are given, the trajectory is fixed \cite{Murray}. For standard SIR in a well-mixed population, if $\lambda$ and $I(0)$ are given, then there will only be one trajectory in the $S$-$I$ space. In the simulations, all agents are initially susceptible and we allow the system to run until the group size in $G$ (i.e. popular space occupancy) reaches its steady-state value $N\gamma _{s}$. We then randomly pick an agent in $G$ and make it infected. In every subsequent timestep, all the agents first carry out the SIR process followed by the joining or leaving of $G$. We choose $N=1000$. The number of recovered agents $R$ at the end of the epidemic reflects the extent of the infection. We now derive a set of equations for this system. Since the S$\rightarrow$I process only occurs inside the group (i.e. inside the popular space), we use $S(t)$, $I(t)$, $R(t)$ for the number of susceptible, infected, and recovered agents in the whole system, and $S_{g}(t)$, $I_{g}(t)$, and $R_{g}(t)$ for the corresponding numbers in the space $G$. The six equations that describe the dynamics of a SIR process for a single dynamical group (i.e. a single popular space) are as follows, with the subscript $g$ on a variable denoting that variable applies to agents within the space $G$: \begin{eqnarray} \frac{dS_{g}}{dt} &=&-q_{i}S_{g}I_{g}-p_{l}(S_{g}-q_{i}S_{g}I_{g})+p_{j}(S-S_{g}), \nonumber \\ \frac{dI_{g}}{dt} &=&q_{i}S_{g}I_{g}-q_{r}I_{g}-p_{l}(I_{g}+q_{i}S_{g}I_{g}-q_{r}I_{g})\nonumber\\ &&+(1-q_{r})p_{j}(I-I_{g}),\nonumber \\ \frac{dR_{g}}{dt} &=&q_{r}I_{g}-p_{l}(R_{g}+q_{r}I_{g})+p_{j}((R-R_{g})\nonumber\\ &&+q_{r}(I-I_{g})),\nonumber \\ \frac{dS}{dt} &=&-q_{i}S_{g}I_{g},\nonumber \\ \frac{dI}{dt} &=&q_{i}S_{g}I_{g}-q_{r}I,\nonumber \\ \frac{dR}{dt} &=&q_{r}I. \label{SIReqsP2P} \end{eqnarray} The extra terms $-p_{l}q_{i}S_{g}I_{g}$ and $-p_{l}(q_{i}S_{g}I_{g}-q_{r}I_{g})$ as well as the factor $(1-q_{r})$ in Eq. \ref{SIReqsP2P} could in principle be excluded when considering particular real-world applications, depending on the precise details of the discrete time processes involved, i.e. whether one can simultaneously enter or leave the public space while changing infection status. We have checked how the omission of these terms affects the numerical simulations of the equations -- and we find that it makes little difference to the results (less than $10\%$). This makes sense since they represent higher-order interaction terms in the mean-field equations. We choose to retain them in the remainder of the paper, noting that the best implementation of these equations for a particular real-world situation may differ slightly according to the details of how individuals join and leave the public space in question, and the details of whether the infection and recovery processes continue during those dynamical changes. This level of detail is outside the scope of our present paper given that we wish to focus on the discrete-time stochastic results.\\ The insets in Fig. \ref{fig2} show the time-variation of the number of infected agents $I(t)$ for three different sets of parameters that correspond to the same mean number of agents $\langle N_{g} \rangle$ in $G$. Depending on the agents' mobility and infection and recovery probabilities, $I(t)$ (insets) shows qualitatively different behavior including a rapid increase with a gradual drop (red curve), gradual increase and drop in number (green), and an $I(t)$ that shows resurgence (oscillatory $I(t)$) after the initial increase and drop (blue curve). We stress that these are real oscillatory behaviors, not simply fluctuations. These oscillations (or more generally, resurgent behavior) also appear in the results obtained from integrating the set of equations (Eq.(\ref{SIReqsP2P})), although the resulting curve is smoother since an average over many runs is implicitly implied by the equations. The resurgence arises from the fresh supply of susceptible and infected agents when new agents join the group. \begin{figure} \includegraphics[width=0.8\linewidth]{Figure3.pdf} \caption{Dependence of the final fraction of recovered agents $R/N$ (which is the same as the fraction of agents who have been infected) on the mobility $\gamma_m$ as obtained by simulations (symbols) and by integrating the set of equations in Eq. \ref{SIReqsP2P} (lines). Results are shown for two systems with the same values of $\gamma_s = 0.1$ and $\lambda = 0.1$ but different infection probabilities. Squares and solid line: $q_i = 0.01$. Circles and dashed line: $q_i = 0.08$. Simulation results are obtained by averaging over $10^3$ different runs corresponding to different initializations for a given set of parameters.} \label{fig3} \end{figure} Interestingly, our results show that a large mobility does not necessarily imply more agents become infected and hence that a large $R$ arises in the long time limit. Instead Fig. \ref{fig3} shows the resulting fraction $R/N$ as a function of the mobility $\gamma_m$, for two systems with the same $\gamma_s$ (hence group size) and $\lambda$ (ratio of infection and recovery probabilities) but different infection probabilities. For each case, there is some particular value of $\gamma_m$ that leads to a maximum $R$. The set of equations also gives results that are in qualitative agreement with simulation results. Note that even though the simulations (data points) and equations (lines) do not coincide exactly, the shapes are in reasonable agreement while the results from the equations are consistently higher than the simulation results. A key difference between the simulations and equations is that the number of agents of a certain type is discretized, i.e. $0, 1, 2, 3$ etc. in the simulations. When integrating the differential equations, the associated quantities are taken to be continuous, thus we could have $0<I(t)<1$ when obtaining results using the equations. \subsubsection{Analytics} To determine when $I(t)$ will grow initially and create an epidemic, we see from the equation \begin{equation} \frac{dI}{dt}=q_{i}S_{g}I_{g}-q_{r}I \end{equation} that this can occur when the initial $dI/dt > 0$. At $t=0$, the $S(0)$ initial susceptibles are randomly distributed in that there is no bias in them initially occupying the space $G$ or not. In this case, $S_g(0) = \gamma_s S(0)$ and $I_g (0) = \gamma_s I(0)$. Requiring the right-hand side of the above equation to be greater than zero implies the criterion $\lambda\gamma_{s}^{2}S(0)>1$. We will initialize the infection with one infected agent inside the group, $I_g (0) = 1$ and $S_g (0) = N \gamma_s$. In this case, the criterion for having an initial increase in $I$ is given by $\lambda N \gamma_s > 1$. Under certain restrictive conditions, it is possible to regard our dynamical model as an effective SIR process in which the effective infection probability is $\gamma_{s}^{2}q_{i}$ and the effective recovery probability is $q_{r}$. The conditions for this to hold are that $p_j + p_l = 1$ (see below) and that the infection probability $q_i$ is sufficiently small so that the number of newly infected agents $q_i S_g (t)I_g (t)$ is less than the number of susceptible $S_g (t)$ in the space $G$. Recalling that the two probabilities are in general treated as independent parameters, we stress that this condition poses an additional restriction on the parameters and hence is not in general true. We can then write the last three equations in Eq. \ref{SIReqsP2P} as: \begin{eqnarray} \frac{dS}{dt}&=&-q_{i}S_{g}I_{g}=-q_{i}\left(N\gamma_{s}\frac{S}{N}\right)\left(N\gamma_{s}\frac{I}{N}\right)\nonumber\\&=&-\gamma_{s}^{2}q_{i}SI,\nonumber\\ \frac{dI}{dt}&=&q_{i}S_{g}I_{g}-q_{r}I=q_{i}\left(N\gamma_{s}\frac{S}{N}\right)\left(N\gamma_{s}\frac{I}{N}\right)-q_{r}I\nonumber\\&=&\gamma_{s}^{2}q_{i}SI-q_{r}I,\nonumber\\ \frac{dR}{dt}&=&q_{r}I, \end{eqnarray} so that the three equations are the standard SIR equations in a well-mixed population with an effective infection probability of $\gamma_s^{2} q_i$ and an effective recovery probability of $q_r$. Physically, it means that $S_g (t) = N \gamma_s S(t)/N = \gamma_s S(t)$ and $I_g (t) = N \gamma_s I(t)/N =\gamma_{s}I(t)$ for every time step. The system therefore behaves as if all the susceptible and infected agents inside and outside the space $G$ are randomly mixed and may be re-assigned to $G$ at every time step. We now derive the condition $p_j + p_l = 1$ by starting with the discrete time version of $dS_g/dt$: \begin{eqnarray} S_{g}(t+1)&=&S_{g}(t)-q_{i}S_{g}(t)I_{g}(t)-p_{l}(S_{g}(t)\nonumber\\&&-q_{i}S_{g}(t)I_{g}(t))+p_{j}(S_{t}-S_{g}(t))\nonumber. \end{eqnarray} From $dS/dt = q_i S_g (t)I_g (t)$, we have \begin{eqnarray} S(t+1)=S(t)-q_{i}S_{g}(t)I_{g}(t).\nonumber \end{eqnarray} The effective dynamical equations are valid if we can write $S_g (t + 1) = \gamma_s S(t + 1)$. Imposing this equality in the above equations, we have \begin{eqnarray} (1-p_{j}-p_{l})S_{g}+(p_{j}-\gamma_{s})S-(1-p_{l}-\gamma_{s})q_{i}S_{g}I_{g}=0.\nonumber \end{eqnarray} This equality can {\em only} be true at all times if $p_j + p_l = 1$, for which the coefficients in the three terms then all vanish. Though this condition is restrictive, it is interesting in that it says that the system behaves as a well-mixed SIR model when the probability for the agents outside the space $G$ \textit{not to join} the space (i.e. $(1-p_j )$) is equal to the probability of those inside the space $G$ to leave. Equivalently, the probability for agents inside the space $G$ to stay (i.e., $(1-p_l )$) must be equal to the probability of those outside the space $G$ to join. Under this condition, we no longer need two parameters and a single $p_j$ is sufficient: hence $\gamma_s = p_j$ and the mean group size is $N p_j$. The dynamics of the model then become: An agent outside the space $G$ has a probability $p_j$ to join $G$ and every agent inside $G$ has a probability $(1-p_j)$ to leave. \begin{figure*} \includegraphics[width=0.85\linewidth]{ExtensiveCSC-1.pdf} \caption{(Color online) Duration, time-to-peak, severity and area associated to the infection profile $I(t)$ (from left to right: $T$, $T_{m}$, $H$ and $A$) as a function of $\gamma_{m}$ and $q_{i}$. For three values of $\lambda$ (from top to bottom: $0.022$, $0.15$, and $0.5$), $N=1000$ and $\gamma_{s}=0.1$.} \label{CSC-Ext} \end{figure*} \subsubsection{Extensive features of infection profile $I(t)$} We characterize the profile differences by looking at the \textit{extensive} features of $I(t)$. This becomes particularly useful when comparing with viral outbreaks in real complex systems since information about the microscopic parameters is typically unknown. We consider the duration of the outbreak which we call $T$; the peak of the infection, i.e. the maximum value of the number of infected $I(t)$ which we call $H$; the time to achieve this maximum, i.e. time-to-peak which we call $T_m$; and the area below the $I(t)$ curve which we call $A$. Figure \ref{CSC-Ext} shows the behavior of these extensive quantities, obtained by integrating numerically Eqs. \ref{SIReqsP2P}. Profile features are shown as a function of the mobility $\gamma_{m}$ and infection probability $q_{i}$ for three different values of the infection contact rate $\lambda$. The relationship between the duration, time to peak, and area becomes evident by showing the similar qualitative results for a given value of $\lambda$. Some key points emerge: (i) As $\lambda$ grows, the times (duration and time-to-peak) and area become independent of the mobility. (ii) As $q_{i}$ increases, the times and area become smaller. (iii) By increasing the parameter $\lambda$ the maximum height grows. (iv) The highest severity value $H$ shows linearity with $\gamma_{m}$ and $q_{i}$ (i.e. $q_i=e^3\gamma_m)$. (v) The regions in the $q_{i}$-$\gamma_{m}$ space where the maximum height is located, change from low mobility and high infection probability for small $\lambda$, to the region of low infection probability and high mobility for large $\lambda$. The transition between these two limits can be seen to occur around $\lambda=0.15$. (vi) For small values of $\lambda$, the times and area follow linearly their maximum value with $\gamma_{m}$ and $q_{i}$. Before the transition point at $\lambda=0.1$, this linearity is lost. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{seed-csc.pdf} \caption{(Color online) Distribution of duration of infection $T$, for different values of initial seed $s$. Parameters: $\lambda=0.022$, $q_{i}=0.002$, $\gamma_{m}=0.0018$ and $N=10^{3}$.} \label{Dist-Blue} \end{figure} The initial conditions that we have so far considered, feature one infected individual in the space $G$. In real systems some infections are controlled or naturally dissipated before a large-scale spreading is reached. The numerical simulation can account for these types of situations: Figure \ref{Dist-Blue} illustrates the distribution of the infection's duration for $1000$ different realizations and different initial conditions which varies with the number of initially infected objects (seed $s$). Each run leads to a slightly different dynamics whose mean values are well captured by the dynamical equations. For small values of $s$, the probability of having a short outbreak ($T\approx0$) is very high in comparison with realizations for larger values of $s$. For this illustration, the recovery probability is selected to be approximately $50$ times greater than the infection probability. Hence, the distribution for small $s$ shows a large probability of a short infection, i.e. for most of the runs, the few infected agents recover faster than they can spread the infection. On the contrary as $s$ increases, the probability of short durations decreases and the distribution gets populated with a duration that is similar for all the values of $s$. Interestingly, the point where the distribution is maximum (after the short duration peak for small $s$) is only slightly shifted to shorter times as $s$ grows. It becomes more evident in the bottom of Fig. \ref{Dist-Blue} by looking at the difference between $s=5$ and $s=10$. The duration that more closely resembles the result from the differential equations, is around $T \approx 400$ (see blue curve in Fig. \ref{fig2}) and has a very low probability for all $s$ values. For this parameter choice, the simulation is far from the mean-field (i.e. differential equation) result. \begin{figure} \centering \includegraphics[scale=0.56]{T-10pow4-1.pdf} \includegraphics[scale=0.56]{A-10pow4-1.pdf} \includegraphics[scale=0.56]{H-10pow4-1.pdf} \includegraphics[scale=0.56]{Tm-10pow4-1.pdf} \caption{(Color online) Duration (top left), severity (bottom left), area (top right) and time-to-peak (bottom right) as a function of mobility from numerical integration of differential equations (solid curve) and mean values of $10^4$ simulations (dotted curve). Parameters: $\lambda=0.1$, $q_{i}=0.005$, $\gamma_{s}=0.1$ and $N=10^{4}$.} \label{Red-Contrast-Sim-DE} \end{figure} As an illustration, Fig. \ref{Red-Contrast-Sim-DE} depicts the result for the extensive quantities of the infection profile as a function of mobility, contrasting the results from the differential equations (solid curve) with the mean value from the simulations (dotted curve). The results show that for small values of mobility, the duration predicted by the differential equations is greater than the mean from simulation, in agreement with the previous finding. This statement is also valid for the area and time-to-peak, but it is false for the peak height. The latter displays, for small $\gamma_{m}$, a good agreement between the simulations and the equations. In contrast, as the mobility is increased, the statement is no longer accurate for the duration and maximum height. For instance, the simulation result grows with a smaller rate than the differential equation for the maximum height while the agreement between the results for duration grows as the mobility is increased. \subsection{Comparison with real-world social contagion} We note that the profiles in the top two insets of Fig. \ref{fig2} are commonly observed in association with the download popularity of YouTube clips reported in Refs. \cite{sornette1,riley}. The bottom inset is more characteristic of financial systems, and looks remarkably similar to the profile obtained for the revaluation of the Chinese Yuan currency reported in Ref. \cite{us2}. This same rumor circulated twice in the space of a few months, producing an almost identical profile. The currency pairs follow a similar dynamical pattern in each case, which suggests that the same underlying group dynamics developed, in line with our model \cite{us2}. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{libya-fig.pdf} \caption{(Color online) Infection profile from our model (green) and civil-unrest event profile in Libya during the Arab spring of 2011. We take $N = 4000$ agents and the timestep to be one day, starting on February 24, 2011. The other parameters are $p_j = 0.08$, $p_l = 0.72$ (for joining and leaving the group); $q_i = 0.1$, $q_r = 0.46$ (for infection and recovering processes in SIR).} \label{libya} \end{figure} In the social domain, we also find similarities between the infection profiles produced by our dynamical model and those of civil-unrest events. The use of new technologies such as social media and mobile phones is arguably one of the main elements that helped large mobilizations such as the Arab Spring to generate continuous waves of civil unrest activity. Indeed the use of this technology during these protests doubled in some participant countries. The case of Libya is known to have relied on the use of cell phones, emails and YouTube videos to spread information about the current state of the protests and to coordinate new demonstrations \cite{stepoanova11}. Figure \ref{libya} compares the output of our model with the volume of civil unrest events in Libya during the 2011 Arab Spring, treating the events in a given day as a proxy for the number of infecteds $I(t)$. Given that our model accounts for the spreading dynamics within a population that is continuously renewed over time, individuals may join a popular place (e.g. enter a chatroom) and hence be susceptible to becoming infected (e.g. get influenced by a political idea). The dynamics of the number of infected individuals in our model is therefore likely to be illustrative of the on-street activity that then follows (e.g. demonstration events). While we are well aware of the limitations in making such a connection, it is nevertheless a reasonable first approximation. As can be seen from the results in Fig. \ref{libya}, the model mechanisms such as mobility and spreading result in revivals on the infected community that matches reasonably well the actual on-street event data. We stress that while these profiles for the model and actual on-street events shown in Fig. \ref{libya} are similar to each other, each is very different from that predicted by a well-mixed standard SIR model which is characterized by exponential decays and no revivals. \begin{figure} \includegraphics[width=0.7\linewidth]{civil-unrest2.pdf} \includegraphics[width=0.7\linewidth]{civil-unrest1p1.pdf} \caption{(Color online) (Top) Bursts of civil unrest. Columns show burst index, start time, end time, and bursts' features in days ($T$ and $T_m$) and number of events ($H$). (Bottom) Outbreak profile descriptors $H/N$ and $T_m/T$ compared to empirical data of on-street civil unrest (numbered circles) \cite{IARPA} from the numbered top list. Theoretical lines obtained by integrating the coupled differential equations for different values of mobility $\gamma_m$. Thick black line shows result for standard (i.e. well-mixed) SIR model. $N=1000$, $q_i = 0.002$ throughout. Each trajectory starts near origin for $\lambda\equiv q_i/q_r=10^{-3}$ and grows until $\lambda=1$ in steps of $\delta\lambda = 10^{-3}$. To estimate $N$ from the real data, we chose the $97\%$ quantile of $H$ (i.e. a z-score of $3$) from a larger sample and obtained a quantile of $27$.} \label{CSC-Ext2} \end{figure} Figure \ref{CSC-Ext2} goes further by comparing the extensive features of the model's infection profile to that obtained from empirical data of on-street civil protest events in Latin America (numbered circles). Again we are taking the number of infected $I(t)$ at a given timestep as a proxy for the number of people incited to protest, and the space $G$ as some physical or even online space (e.g. city center or chatroom) where individuals become sufficiently motivated to protest. While we are not suggesting it provides a unique or definitive explanation of these phenomena, the model (thin colored lines) does capture the wide variability of outbreak profiles in a way that a standard SIR model cannot (thick black line). The on-street civil unrest data (numbered circles) come from a unique multi-year, national research project involving exhaustive event analysis by subject matter experts (SMEs) across an entire continent (see Refs. \cite{unrest,IARPA}). The start and end of each burst is identified using the analysis of Ref. \cite{Karsai2} and cross-checked manually. The key to extract features from the sequence of events, is to construct the infection profile(s). First, we segment a long sequence of civil unrest events by a pre-specified threshold $d$. That is, if the interval between two consecutive events is not larger than $d$, the latter event is in the same segment as the previous event. Second, the curve of infection is built based on one segment of events, by making the reciprocal of the intervals between events as the $y$ values and the time step as the $x$ value. Then we can extract the features of that infection curve, forming one numbered circle in Fig. \ref{CSC-Ext}(bottom). \subsection{Broadcast SIR mechanism} In the broadcast-type infection model, the space $G$ has a constant infection rate $q_i$ for infecting the susceptible who happen to be in $G$ at that timestep -- which is akin to having contaminated surfaces in a hospital, school or airport. All the infecteds have a recovery rate $q_r$ to recover and become immune. These dynamics are governed by the equations: \begin{eqnarray} \frac{dS_g}{dt}&=&-q_i S_g -p_l(S_g - q_i S_g)+p_j (S-S_g),\nonumber\\ \frac{dI_g}{dt}&=&q_i S_g-q_r I_g - p_l(I_g+q_i S_g-q_r I_g)\nonumber\\ &&+(1-q_r)p_j(I-I_g),\nonumber\\ \frac{dR_g}{dt}&=&q_rI_g-p_l(R_g+q_r I_g)+p_j((R-R_g)\nonumber\\ &&+q_r(I-I_g)),\nonumber\\ \frac{dS}{dt}&=&-q_iS_g,\nonumber\\ \frac{dI}{dt}&=&q_iS_g-q_rI,\nonumber\\ \frac{dR}{dt}&=&q_rI. \label{SIReqsBC} \end{eqnarray} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{SIR_bc.pdf} \caption{(Color online) (a) Trajectories of evolution of the system in the $S-R$ space. Initially the system starts from the right-hand lower corner. (b) The evolution of $S(t)$ in time, and (c) the evolution of $I(t)$ in time. The rougher and smoother curves are obtained by simulation and by integrating the set of equations (Eq. (\ref{SIReqsBC})) respectively. The systems have the same mean number of agents in $G$ given by $N \gamma_s$ where $\gamma_s = 0.1$. Other parameters are: $\lambda = 1.0$, $q_i = 0.01$, $p_j = 0.01$ and $p_l = 0.09$.} \label{sir_bc} \end{figure} An explicit formula for $S(t)$ can be obtained by solving the first and fourth equations in Eq. (\ref{SIReqsBC}). The result is \begin{eqnarray} S(t)&=&C_1\exp\left\{\frac{1}{2}(-A-\sqrt{A^{2}-4B})t\right\}\nonumber\\&&+C_{2}\exp\left\{\frac{1}{2}(-A+\sqrt{A^{2}-4B})t\right\} \label{SolS_BC} \end{eqnarray} where $A = ((p_j + p_l ) + (1-p_l )q_i )$, $B = p_j q_i $, and $C_1$ and $C_2$ are determined by the initial conditions. Equation (\ref{SolS_BC}) shows that the decrease of susceptibles is not related to the number of infecteds, i.e. $q_r$ is irrelevant. This is in contrast with the person-to-person case, where the results depend significantly on $q_r$. Figure \ref{sir_bc} shows the results of this dynamical process using the equations and direct simulation. The two sets of results are basically consistent with each other. Figure \ref{num-si} compares results for the broadcast-type infection and person-to-person infection, by solving Eq. (\ref{SIReqsP2P}) and Eq. (\ref{SIReqsBC}). Comparing the behavior in the two cases for $q_i > q_r$ (Fig. \ref{num-si}(a)), we note that there is a difference in behavior in both $S(t)$ and $I(t)$ at short times. This difference is more apparent for the case of $q_i < q_r$ , as shown in Fig. \ref{num-si}(b). \begin{figure} \centering \includegraphics[width=0.95\linewidth]{Eq-p2p-BC.pdf} \caption{(Color online) Comparison between the broadcast (left column) and person-to-person (right column) cases. (a) $q_i > q_r$; (b) $q_i < q_r$. Results obtained by solving Eq. (\ref{SIReqsP2P}) for the person-to-person case and Eq. (\ref{SIReqsBC}) for the broadcast case. Parameters are shown in the figure.} \label{num-si} \end{figure} In order to analyze further the two infection mechanisms, we look at the quantities $(1/N)(dS/dt)$ and $(1/N)(dR/dt)$. Figure \ref{num-si} shows the results obtained by solving Eq. \ref{SIReqsP2P} (for person-to-person infection) and Eq. \ref{SIReqsBC} (for broadcast infection). Figure \ref{sim-dsdr} shows the results for the person-to-person and broadcast mechanism obtained by simulations. For the cases of Fig. \ref{sim-dsdr}(a) and Fig. \ref{sim-dsdr}(c) where the infection probability is high and the recovery probability is low ($q_i = 0.9$ and $q_r = 0.1$), the behaviors for the two infection mechanisms are similar. This is because the parameters correspond to the situation where a susceptible getting into $G$ will almost certainly become infected, regardless of the infection mechanism (i.e. very high infection probability in comparison with recovery). However, for the cases in Fig. \ref{sim-dsdr}(b) and Fig. \ref{sim-dsdr}(d) where the infection probability is low and the recovery probability is high ($q_i = 0.1$ and $q_r = 0.2$), the two mechanisms give different behaviors with the broadcast mechanism showing a less variable $dR/dt$. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{Sim_P2P_BC-dS.pdf} \caption{(Color online) (a-b) Derivative $d(S(t)/N)/dt$ and (c-d) derivative $d(R(t)/N)/dt$ as a function of time for the broadcast and person-to-person infection mechanisms obtained by numerical simulations. The parameters are: $p_j = 0.1$, $p_l = 0.9$, (a) and (c) $q_i = 0.9$, $q_r = 0.1$; and, (b) and (d) $q_i = 0.1$, $q_r = 0.2$.} \label{sim-dsdr} \end{figure} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{Sim_P2P_BC.pdf} \caption{The fraction of infected individuals $I(t)/N$ and the derivative $d(R(t)/N)/dt$ as a function of time for the broadcast and person-to-person mechanism. The parameters are: (a) and (e) $\gamma_s = 0.1$, $\gamma_m = 0.018$, $q_i = 0.9$, $q_r = 0.1$, (b) and (f) $\gamma_s = 0.1$, $\gamma_m = 0.018$, $q_i = 0.1$, $q_r = 0.2$, (c) and (g) $\gamma_s = 0.1$, $\gamma_m = 0.0018$, $q_i = 0.01$, $q_r = 0.1$, (d) and (h) $\gamma_s = 0.1$, $\gamma_m = 0.18$, $q_i = 0.01$, $q_r = 0.1$. (a-b) and (e-f) correspond to fixed average group size and mobility, but different infection and recovery probabilities. (c-d) and (g-h) correspond to fixed infection and recovery probabilities, but different mobilities.} \label{sim-Idr} \end{figure*} Figures \ref{sim-dsdr} and \ref{sim-Idr} compare simulation results of $I(t)/N$ and $d(R(t)/N)/dt$ for the two infection mechanisms, for several different sets of model parameters. Figure \ref{sim-Idr}(a) and (b) (Fig. \ref{sim-dsdr}(c) and (d)) correspond to cases with fixed average occupation of $G$ and mobility but with different infection and recovery probabilities. When the infection probability is high and the recovery probability is low, the two mechanisms give similar results (Fig. \ref{sim-Idr}(a) and Fig. \ref{sim-dsdr}(c)). When the infection probability is higher than the recovery probability ($\lambda = q_i /q_r = 2.0$) as in Fig. \ref{sim-Idr}(b) and Fig. \ref{sim-dsdr}(d), the number of infecteds increases faster in the early stage for the person-to-person mechanism. Figures \ref{sim-Idr}(c-d) and (g-h) correspond to cases with fixed parameters for the viral process (fixed $q_i$ and $q_r$ ), but different parameters in terms of joining and leaving the space $G$. In Fig. \ref{sim-Idr} (c) and (g), the mobility is low ($\gamma_m = 0.0018$). Although the infection probability is small, the person-to-person infection mechanism still shows a strong epidemic at short times and then an oscillatory behavior. For the same infection probability, the broadcast mechanism shows a weak epidemic. Thus, the low mobility enhances the epidemic even though the infection probability is small, by retaining and thus increasing the number of infecteds within $G$ who can then further infect susceptibles. This infection reinforcement mechanism is missing from the broadcast case. In Figs. \ref{sim-Idr}(d) and (h), the mobility is high ($\gamma_m = 0.18$ with the same $q_i$ and $q_r$ as in (c)) and the person-to-person mechanism does not cause an epidemic, however the broadcast mechanism does lead to an epidemic. Whether an infection is spread through personal contact or through contact with the physical space itself (e.g. contaminated surfaces) is therefore crucial in dictating the infection profile $I(t)$ to be expected. \section{SIS process} \subsection{Person-to-person SIS mechanism} We now study the same mobility model involving $G$, but now using an SIS (Susceptible-Infected-Susceptible) viral process. As before, infected individuals can only infect others when they are present in the space $G$, and each infected in $G$ has a probability $q_i$ (per unit step) to infect a susceptible in $G$. All the infected individuals in the system (inside and outside $G$) have a probability $q_r$ to recover and become susceptible again. At the beginning of the process, we randomly select an agent in $G$ to be infected. The viral dynamics of the system are governed by the following equations in the mean-field limit: \begin{eqnarray} \frac{{dS_g }}{{dt}} &=& - q_i S_g I_g + q_r I_g - p_l (S_g - q_i S_g I_g + q_r I_g )\nonumber\\ && + p_j (S - S_g + q_r (I - I_g )) \nonumber \\ \frac{{dI_g }}{{dt}} &=& q_i S_g I_g - q_r I_g - p_l (I_g + q_i S_g I_g - q_r I_g )\nonumber \\ &&+ p_j (1 - q_r )(I - I_g ) \nonumber \\ \frac{{dS}}{{dt}} &=& - q_i S_g I_g + q_r I \nonumber \\ \frac{{dI}}{{dt}} &=& q_i S_g I_g - q_r I \label{SISeqs} \end{eqnarray} For the sake of brevity, we focus on a few choice sets of parameters $(q_i,q_r)$ and $(p_j,p_l)$, for both the simulations ($N=1000$ agents) and for the numerical integration of Eq. (\ref{SISeqs}). Results are shown in Fig. \ref{fig:ptp}. The numerical integration results agree well with the simulation results showing a monotonic increase in the fraction of infected individuals for recovery probabilities equal to and larger than the infection probability. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.95\linewidth]{person-to-person.pdf} \end{center} \caption{(Color online) Fraction $I/N$ as a function of time $t$, for SIS model with person-to-person infection mechanism. Several sets of parameters are selected to illustrate the features.} \label{fig:ptp} \end{figure} To fully describe the dynamical process, we then add the master equation concerning the dynamics in $G$: \begin{equation} \frac{d N_g}{dt}= -p_l(S_g+I_g)+p_j(S-S_g+I-I_g). \label{GRPeq} \end{equation} The steady state (e.g. $S(\infty)$) may be found by setting the right-hand side of the equations (Eqs. (\ref{SISeqs})-(\ref{GRPeq})) to zero. We then obtain \begin{equation} S_g({\infty})= S(\infty)\gamma_s, \label{Sg} \end{equation} \begin{equation} I_g({\infty})= I(\infty)\gamma_s, \label{Ig} \end{equation} \begin{eqnarray} S({\infty}) &=& N + \frac{1}{{2q_r A}}N\gamma _s ( - B + \sqrt {B^2 - C} )\nonumber \\ &&+ \frac{1}{{4q_i q_r A^2 }}( - B + \sqrt {B^2 - C} )^2, \label{S} \end{eqnarray} where \begin{equation} A = p_j + q_r (1 -(p_j + p_l )), \end{equation} \begin{equation} B = N\gamma _s q_i A + q_r A + p_l q_r, \end{equation} \begin{equation} C = 4Nq_i q_r A(p_j + \gamma _s q_r (1 - (p_j + p_l ))). \end{equation} So far, this steady-state solution is general. For the special case scenario discussed earlier in which $p_j+p_l=1$, we obtain $\gamma_{s} = p_{j}$ and $\gamma_{m} = 2p_{j}(1-p_{j}) = 2 \gamma_{s}(1-\gamma_{s})$. In this case: \begin{equation} S(\infty)=\frac{q_r}{q_i \gamma_s^2}. \end{equation} Though the total number $N$ disappears from the limit $S(\infty)$, $N$ still enters as a bound. More explicitly, the above expression should be written as \begin{equation} S(\infty)=Min[\frac{q_r}{q_i \gamma_s^2},N]. \end{equation} The corresponding value of $I(\infty)/N$ in the case of $p_j+p_l=1$ is \begin{equation} \frac{I(\infty)}{N}=1-Min[\frac{q_r}{Nq_i \gamma_s^2},1]. \end{equation} As an example of accuracy, Fig.\ref{fig:ptp}(c-d) compares the steady state $I(\infty)/N$ with the outcome from the differential equation as well as simulations for two sets of parameters that fulfill the condition $p_j+p_l=1$. In fact, under this special condition, the master equation for $S(t)$ becomes \begin{equation} \frac{{dS}}{{dt}} = - q_i \gamma _s^2 SI + q_r I, \label{eff-well-mixed} \end{equation} and is easy to solve for $S(t)$. Equation (\ref{eff-well-mixed}) corresponds to an effective (well-mixed) SIS system. Substituting $I=N-S$ into the above equation, we obtain two solutions. One solution is (for decaying $I$) \begin{equation} S(t) = \frac{{q_r e^{Nq_i \gamma _s^2 t + q_r C} - Ne^{q_r t + Nq_i \gamma _s^2 C} }}{{q_i \gamma _s^2 e^{Nq_i \gamma _s^2 t + q_r C} - e^{q_r t + Nq_i \gamma _s^2 C} }} \;. \end{equation} If the initial condition is $S(0)=N_0$, the constant $C$ is given by \begin{equation} C = \frac{{\ln (q_r - N_0 q_i \gamma _s^2 )}}{{Nq_i \gamma _s^2 - q_r }}. \end{equation} The other solution is (for increasing $I$) \begin{equation} S(t) = \frac{{q_r e^{Nq_i \gamma _s^2 (t + C)} + Ne^{ - q_r (t - C)} }}{{q_i \gamma _s^2 e^{Nq_i \gamma _s^2 (t + C)} + e^{q_r (t + C)} }} \;. \end{equation} If the initial condition is $S(0)=N_0$, the constant $C$ is given by \begin{equation} C = \frac{{\ln (N_0 q_i \gamma _s^2 - q_r )}}{{q_r - Nq_i \gamma_s^2 }}. \end{equation} \subsection{Broadcast SIS mechanism} We next consider the broadcast SIS mechanism, i.e. every susceptible who enters the group will be infected at a constant rate $q_i$. All the infected individuals have a recovery rate $q_r$ at which they become susceptible again. Interestingly, we note that this process is analogous to spintronics in condensed matter physics: When an electric current consisting of unpolarized electrons enters a spintronic device, such as a spin-valve transistor, the output current will be spin-polarized. The spin-polarized conducting electrons may be thought of as infected agents. The spin-polarized electrons will naturally tend to forget their polarization over time, e.g., by scattering (decoherence) or noise effects, and hence recover to become susceptible again (i.e. unpolarized). Any outbreak in the system may be described by the following equations in the mean-field limit: \begin{eqnarray} \frac{{dS_g }}{{dt}}&=& - q_i S_g + q_r I_g - p_l (S_g - q_i S_g + q_r I_g )\nonumber \\ &&+ p_j (S - S_g + (I - I_g )q_r )\nonumber \\ \frac{{dI_g }}{{dt}}&=& q_i S_g - q_r I_g - p_l (I_g - q_r I_g + q_i S_g )\nonumber \\ &&+ p_j (I - I_g )(1 - q_r )\nonumber \\ \frac{{dS}}{{dt}}&=& - q_i S_g + q_r I \nonumber \\ \frac{{dI}}{{dt}}&=&q_i S_g - q_r I \end{eqnarray} Some results are shown in Fig. \ref{fig:bd01}. The simulation and numerical integration results generally agree with each other. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.95\linewidth]{broadcast01.pdf} \end{center} \caption{(Color online) Fraction $I/N$ as a function of time $t$, for SIS model with a broadcast-type infection mechanism. Several sets of parameters are selected to illustrate the features.}\label{fig:bd01} \end{figure} Compared to the numerical integration results, the simulation output fluctuates. This is because the number of newly infected agents only depends on the number of susceptible in $G$. It is therefore expected that the fluctuations will be smaller in a system of larger $N$, where simulation results will agree better with iteration results. Recalling that the mean number of agents in $G$ is $N\gamma_{s}$, if we fix $\gamma_{s}$ and vary $N$ then the group size in $G$ will change. Results for $I(t)/N$ with different values of $N$, are shown in Fig. \ref{fig:bd02}. The results for large $N$ show smaller fluctuations, as expected. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.95\linewidth]{broadcast02.pdf} \end{center} \caption{(Color online) Fraction $I/N$ as a function of time $t$, for SIS model with a broadcast type infection mechanism. Results for two sets of parameters and for different system sizes $N$ are shown.} \label{fig:bd02} \end{figure} The steady state $S(\infty)$ is given by \begin{equation} S({\infty}) = N + \frac{q_i k_1}{q_r k_2}, \end{equation} where $k_{1}$ and $k_2$ are defined as \begin{eqnarray} k_1&=&Np_j q_r + N\gamma _s q_r^2 - N\gamma _s p_j q_r^2 - N\gamma _s p_l q_r^2 \\ k_2&=& - p_j q_i - p_j q_r - p_l q_r - q_i q_r + p_j q_i q_r \nonumber\\&&+ p_l q_i q_r - q_r^2 + p_j q_r^2 + p_l q_r^2. \end{eqnarray} For the special condition $p_j+p_l=1$, we get \begin{equation} S({\infty}) = \frac{{Nq_r }}{{p_j q_i + q_r }} \end{equation} and hence the fraction of infecteds is given by \begin{equation} \frac{I({\infty})}{N} =1- \frac{{q_r }}{{p_j q_i + q_r }}=\frac{{p_j q_i }}{{p_j q_i + q_r }}\ . \end{equation} As for the SIR case under this same special condition, the equations take on the form of those for an effective well-mixed SIS system with effective parameters, that can be easily solved for $S(t)$. The solution is given by \begin{equation} S(t) = \frac{{Nq_r }}{{q_i \gamma _s + q_r }} + Ce^{ - (q_i \gamma _s + q_r )t}\ . \end{equation} If the initial condition is $S(0)=N_0$, the constant $C$ is given by \begin{equation} C = N_0 - \frac{{Nq_r }}{{q_i \gamma _s + q_r }}. \end{equation} Figure \ref{fig:eq} shows the results for how $I$ depends on $\gamma_s$ (which determines the group size in $G$) and $\gamma_{m}$ (which determines the mobility through $G$) for both the person-to-person case and broadcast case. \begin{figure} \begin{center} \includegraphics[width=0.95\linewidth]{theory.pdf} \end{center} \caption{Fraction $I(\infty)/N$ as a function of $\gamma_s$ (main panel). Insets show dependence of $I(\infty)/N$ on $\gamma_m$. Parameters are $N=1000$, $q_i=0.0005$, $q_r=0.015$. The special condition $p_j+p_l=1$ is satisfied, hence $\gamma_{m} = 2 \gamma_{s} (1 - \gamma_{s})$. Results shown for (a) person-to-person case and (b) broadcast case. Lines are analytic results from integrating the differential equations, while symbols are simulation results.} \label{fig:eq} \end{figure} \section{Summary} We have presented and analyzed a simple but highly non-trivial model of co-existing mobility and infection dynamics. The model considers an SIR or SIS process for people transiting and revisiting a popular space $G$, with person-to-person and broadcast infection mechanisms. Our model can be solved through simulation, or by integrating a set of dynamical equations. Varying the mobility (i.e. agents entering and leaving the space $G$) and the infection probability has a significant impact on the overall infection profile even when both the mean group size in $G$ and the ratio of the infection and recovery probabilities are kept constant. The addition of a dynamical component through $G$, as compared to traditional well-mixed models, generates a far wider range of infection profiles and allows us to capture features observed recently in social outbreak phenomena. Our results reveal a highly non-linear dependence on mobility that generates the counter-intuitive prediction that by increasing the flow of individuals through a region of contagion, the infection's severity can be decreased. A special case arises for certain values of the parameters ($p_j + p_l = 1$) under which the system can be represented by an effective well-mixed model in which the group can be thought of as re-organized randomly in every time step. We also presented results for a broadcast-type infection mechanism and compared these to the person-to-person infection mechanism. We identified a range of infection probabilities where the two are comparable. More generally, we found that the wide variety of infection profiles that emerged from the time-dependent interplay between mobility and infection dynamics, was representative of recent real-world contagion phenomena in the social setting. \section{Acknowledgments} NFJ is grateful to the National Science Foundation (NSF) for funding under grant CNS1522693 and to the Air Force (AFOSR) for grant 16RT0367. The views and conclusions contained herein are solely those of the authors and do not represent official policies or endorsements by any of the entities named in this paper.
\section{Extra plots}\label{sec:appendix} \input{figures/3dplots.tex} \input{figures/other_accuracy_results.tex} \subsection{Abstraction}\label{subsec:abstraction} What is abstraction? In fact, there are several closely related notions of abstraction. First there is the philosophical notion of abstraction; Locke defines abstraction as follows (bolding ours): \begin{displayquote}[\cite{Locke1689-LOCAEC-4}] The acts of the mind, wherein it exerts its power over simple ideas, are chiefly these three: \textellipsis The third is \textbf{separating them from all other ideas that accompany them in their real existence}: this is called abstraction \textellipsis \end{displayquote} Then there is mathematical abstraction; Russell defines abstraction as follows: \begin{displayquote}[\cite{Russell1937-RUSPOM-7}] This principle [of abstraction] asserts that, whenever a relation, of which there are instances, has the two properties of being symmetrical and transitive, then the relation in question is not primitive, but is analyzable into sameness of relation to some other term; and that this common relation is such that there is only one term at most to which a given term can be so related, though many terms may be so related to a given term. \end{displayquote} Intriguing as these notions of abstraction may be, they are distinctly different from the notion of abstraction in computer science; in particular with respect to mathematical abstraction (bolding ours): \begin{displayquote}[\cite{abstraction}] \textellipsis the primary product of mathematics is \textit{inference structures}, while the primary product of computer science is \textit{interaction patterns}. This is a crucial difference, and it shapes their use of formalism and the kind of abstraction used in the two disciplines. \vspace{4pt} \textellipsis computer science is distinguished from mathematics in the use of a kind of abstraction that computer scientists call \textit{information hiding}. The complexity of behaviour of modern computing devices makes the task of programming them impossible without abstraction tools that hide, but do not neglect, \textbf{details that are essential in a lower-level processing context but inessential in a [particular] software design and programming context}. \end{displayquote} This understanding of abstraction is widely agreed upon; notably Abelson, Sussman, and Sussman in their much revered \textit{Structure and Interpretation of Programs}: \begin{displayquote}[\cite{abelson1996structure}] We are not at that moment concerned with how the procedure computes its result, only with the fact that it computes the square. The details of how the square is computed can be suppressed, to be considered at a later time. Indeed, as far as the \code{good-enough?} procedure is concerned, \code{square} is not quite a procedure but rather an abstraction of a procedure, a so-called \textit{procedural abstraction}. At this level of abstraction, any procedure that computes the square is equally good. \end{displayquote} Thus, abstraction is the modulation of concern for details in accordance with the needs of the user and \textit{levels of abstractions} are graded by the degree of the elision (bolding ours): \begin{displayquote}[\cite{abstraction}] To specify nontrivial computational processes in machine language is a practical impossibility for humans, and so programming languages with higher levels of abstraction are necessary. \vspace{4pt} \textellipsis At a higher level of [abstraction], a \textit{subroutine}, \textit{function}, or \textit{procedure} is an abstraction of a segment of memory that hides the details of how the segment represents a piece of a program that is passed certain parameter values and returns a value as a result. \vspace{4pt} \textellipsis A \textit{garbage collector} is an abstraction of a special process that collects garbage and makes it once again available to the original program, hiding from that program the details of how this is done. \vspace{4pt} \textellipsis \textbf{This use of code libraries is an example of \textit{procedural abstraction}}, or the ability to execute code through the calling of named procedures that accept explicitly described parameters and return certain guaranteed results. It is an example of abstraction because the details of how the procedure performs its computation are hidden from the procedure's caller; since the caller only makes use of the procedure for its results, there is no need for it to know the internals. \end{displayquote} Taking \textit{information and concern encapsulation} as our operational definition of abstraction, in what sense shall we measure the costs of the abstractions employed by DL frameworks? An immediate candidate measure of cost is the asymptotic time (or space) complexity of various operations and data structures that comprise the abstraction. We claim that, with rare exception\footnote{One result does come to mind: Pippenger~\cite{10.1145/244795.244798} produces a program that runs in O$(n)$ on an impure implementation (i.e. with side-effects) LISP but which runs in $\Theta(n \log n)$ on a pure LISP\@.}, asymptotic complexity is a poorly suited measure of the complexity or cost of abstractions in the sense that we here deal with. If the abstraction is truly abstract then it bears no resemblance to the realization (recall Locke's definition of abstraction) and if the abstraction is reified then the analysis becomes completely impractical (owing to the numerous components and operations). Even if such analysis were practicable the result would most likely be uninteresting and inconsequential for actual DL frameworks and their users. It is well known that the constant factors in the complexity and particularities of hardware systems themselves more closely govern performance than the order terms. For example, Quicksort, an O$\left(n^2\right)$ sorting routine, outperforms even many $\Theta(n\log n)$ sorting routines because it is more cache efficient~\cite{10.5555/1410219}. Another way to reason about the cost of abstractions is according to the ``zero-overhead'' principle as articulated by Bjarne Stroustrup: \begin{displayquote}[\cite{10.1007/978-3-642-28869-2_1}] In general, C++ implementations obey the zero-overhead principle: What you don't use, you don't pay for. And further: What you do use, you couldn't hand code any better. \end{displayquote} Therefore we make the expedient and practical assumption that what is more interesting and valuable to the DL community than asymptotics is, in fact, an empirical study of the resource efficiency of the abstractions; namely execution time, memory usage, and GPU utilization. \subsection{GPUs}\label{subsec:gpus} We briefly review NVIDIA GPUs% \footnote{A more comprehensive introduction to GPUs themselves and CUDA programming is available in~\cite{10.5555/1891996}.} in order that the performance criteria we measure in~\cref{sec:methodology} are legible. A GPU consists of many simple processors, called streaming multiprocessors (SMs), which are comprised by many compute \textit{cores} that run at relatively low clock speeds% \footnote{For example, individual NVIDIA GTX-1080 Ti cores run at $\sim$1500MHz.}. Each compute core in an SM can execute one floating-point or integer operation per clock cycle. See ~\cref{fig:fermi_arch} for a diagram of NVIDIA's Fermi architecture, where each SM consists of 32 cores, 16 load/store (LD/ST) units, four special-function units (SFUs) which compute transcendental functions (such as $\sin$, $\cos$, $\exp$), a relatively large register file% \footnote{For example, Intel's Haswell architecture supports 168 integer and 168 floating-point registers.}% , and thread control logic (to be discussed in the proceeding). Each SM has access to local memory, several cache levels, and global memory. In the Fermi architecture (and subsequent architectures) local memory is configurable in software; a fraction of it can be apportioned as either local memory or L1 cache (for workloads that query global memory in excess of local memory). One final feature worth mentioning, though irrelevant for us here, is the L2 cache's atomic \code{read-modify-write} facilities; this enables sharing data across groups of threads more efficiently than possible in conventional CPUs% \footnote{On a CPU, atomic \code{test-and-set} instructions manage a semaphore, which itself manages access to memory (therefore incurring a cost of at least two clock cycles).}. Such an architecture, particularly suited to maximizing throughput, necessitates a programming model distinct from that of a conventional, general purpose processor architecture. A unit of computation deployed to a GPU is called a \textit{kernel}; kernels can be defined using NVIDIA's Compute Unified Device Architecture (CUDA) extensions to C, C++, and FORTRAN% \footnote{In fact, CUDA compiles down to a virtual machine assembly code (by way of \code{nvcc}) for a virtual machine called the Parallel Thread Execution (PTX) virtual machine. So, in effect, it is compilers all the way down.}. Compiled kernels are executed by many \textit{threads} in parallel, with each thread starting at the same instruction; NVIDIA describes this addition to Flynn's taxonomy~\cite{5009071} as Single Instruction Multiple Thread (SIMT)% \footnote{They key difference between SIMD and SIMT is that while in SIMD all vector elements in a vector instruction execute synchronously, threads in SIMT can diverge; branches are handled by predicated instructions~\cite{cuda_toolkit}.}. The large register file enables very fast thread context switching ($\sim$25 microseconds on the Fermi architecture~\cite{Glaskowsky2009NVIDIAS}), performed by a centralized hardware thread scheduler. Multiple threads are grouped into blocks (SMs are single tenant with respect to blocks) and blocks are grouped into \textit{grids} (grids execute a single kernel). All threads in a block, by virtue of running on the same SM, coordinate (execute in arbitrary order, concurrently, or sequentially) and share memory. Thread blocks are partitioned into \textit{warps} of 32 threads; it is these warps that are dispatched by the warp scheduler (see~\cref{fig:cuda_cores}) and starting with the Fermi architecture two warps can be executed concurrently on the same SM in order to increase utilization% \footnote{That is, one warp can occupy the compute cores while the other occupies the SFUs or Load/Store units.}. \input{figures/fermi.tex} We present an example CUDA program in~\cref{fig:cuda_hello_world} to illustrate some of the artifacts of the CUDA threading model. The premise of the program is performing an element-wise sum of two $32 \times 48$ entry matrices. Note that all of the data weighs in at $3 \times 32 \times 48 \times 4 = 18$ kilobytes (well within the bounds of shared memory on any one SM). The actual work of summing is partitioned across a grid of six thread blocks, each containing $16 \times 16$ threads. Such a partitioning means each thread can be logically responsible for exactly one sum and therefore the kernel is quite simple (see~\cref{lst:cuda_hello_world}). Within the context of a kernel, each thread is uniquely identified by its multi-index in the thread hierarchy (\code{threadIdx} and \code{blockIdx}). Hence, to carry out the sum, the kernel maps this multi-index to the physical address of the data% \footnote{In CUDA C/C++ data is laid out in row-major order but this is not fixed (in CUDA FORTRAN the data is laid out in column-major order).}. This (grid, block, thread)-to-data mapping is, in effect, the mechanism that implements the SIMT architecture. Note that, since each block is allocated to exactly one SM, this sum will take $\left( 16 \times 16 \right) \div 16 = 16$ clock cycles on the Fermi architecture; better throughput could be achieved by increasing the number of blocks (and therefore the number of SMs assigned work). \input{figures/cuda_code.tex} \subsection{Graph compilers and Tensors}\label{subsec:graph-compilers} DL frameworks primarily function as graph compilers and tensor abstractions\footnote{A tensor in this context is a data structure similar to a multidimensional array that supports some useful operations (e.g. slicing, flattening, index permutation). Most DL frameworks also abstract memory layout on hardware behind this abstraction.}. They typically also include some ``quality of life'' utilities useful for the training of DL models (e.g.\ optimizers and data loaders). PyTorch's \code{Tensor} abstraction is responsible for a great deal of the complexity and implementation overhead of the framework. Due to the framework's broad support for hardware and data types, dynamic dispatch\footnote{Kernels live in shared-object libraries (e.g. \code{libcaffe2.so}, \code{libcaffe2\_gpu.so}) and therefore call sites of virtual functions (indirection) are resolved at runtime.} is employed to resolve methods on \code{Tensor}s (see~\cref{fig:dispatch}). This dynamic dispatch produces deep call stacks for every single operation on a \code{Tensor} (see~\cref{fig:stacks}); it remains to be seen whether the context switching\footnote{Every function call corresponds to a stack frame allocation and register allocations. In addition indirection to far away call sites leads to poor instruction cache efficiency~\cite{10.5555/3314872.3314876}} between function contexts incurs any appreciable execution time penalty. \input{figures/dispatch.tex} \input{figures/stack_traces/call_graph.tex} DL graph compilers are distinct from other dataflow compilers (such as VHDL and Verilog\footnote{Verilog and Very High Speed Integrated Circuit Hardware Description Language (VHSIC-HDL or VHDL) are specification languages for specifying circuits on field programmable gate arrays.}); in addition to keeping account of how the data streams through the compute graph, they also keep account of how the gradients of the data stream through the graph (i.e.\ the \textit{gradient-flow}). This is called \textit{automatic differentiation} (often shortened to \textit{autodiff}). In principle autodiff is implemented by using the rules of Newton's calculus to calculate the derivatives of primitive functions and the chain rule to calculate derivatives of compositions of primitive functions. There are two types of autodiff: \textit{forward mode} (or \textit{forward accumulation}) and \textit{reverse mode} (or \textit{reverse accumulation})% \footnote{Briefly, for a composition of functions $y=f(g(h(x)))$, forward mode evaluates the derivative $y'(x)$, as given by the chain rule, inside-out while reverse mode evaluates the derivative outside-in. For those familiar with functional programming, these operations correspond to \code{foldl} and \code{foldr} on the sequence of functions with $\partial_x$ as the operator.}. Reverse mode autodiff enables the framework to effectively calculate the gradients of parameters of a neural network with respect to some relevant loss or objective function. Note that such gradients can be \textit{back-propagated} through the neural network in order to adjust the parameters of the neural network such that it minimizes the loss\footnote{In which case, it is, in fact, the negatives of the gradients that are back-propagated.} or maximizes the objective. Dataflow graphs (and their corresponding gradient-flow graphs) can be specified either statically, with fan-in and fan-out for all functions predetermined, or dynamically, where compositions of functions are determined ``on-the-run''. There are advantages and disadvantages to both specification strategies. Static specifications tightly constrain\footnote{For example, branches and loops are cumbersome to specify statically.} the intricacy of the dataflow graph but, obversely, can be leveraged to improve performance and scalability~\cite{le2019tflms,Pradelle2017PolyhedralOO}. TensorFlow (prior to v2.0) is an example of a DL framework that compiles statically specified graphs. Conversely, dynamic specifications can be very expressive and user friendly, including such conveniences as runtime debugging, but are much more difficult to optimize. PyTorch is an example of a DL framework that supports dynamic specification. Both PyTorch and TensorFlow also support just-in-time (JIT) compilation strategies (TorchScript and XLA respectively); such JIT compilers strike a balance between fluency and scalability. In this work we investigate TorchScript (see~\cref{sec:methodology}). It warrants mention that, in addition to vertically integrated DL frameworks (i.e.\ specification language and hardware compiler), recently there has been work on intermediate bytecode representations for dataflow graphs that arbitrary compiler ``frontends'' can target. The Multi-Level Intermediate Representation (MLIR)~\cite{lattner2020mlir} project has goals that include supporting dataflow graphs, optimization passes on those graphs and hardware specific optimizations% \footnote{Interestingly enough, the project is headed by Chris Lattner who, in developing LLVM, pioneered the same ideas in general purpose programming languages.}. Stripe~\cite{zerrell2019stripe} is a polyhedral compiler% \footnote{A polyhedral compiler models complex programs (usually deeply nested loops) as polyhedra and then performs transformations on those polyhedra in order to produce equivalent but optimized programs~\cite{Griebl98codegeneration}.} that aims to support general machine learning kernels, which are distinguished by their high parallelism with limited mutual dependence between iterations. Tensor Comprehensions~\cite{vasilache2018tensor} is an intermediate specification language (rather than intermediate bytecode representation) and corresponding polyhedral compiler; the syntax bears close resemblance to Einstein summation notation and the compiler supports operator fusion and specialization for particular data shapes. Finally, Tensor Virtual Machine (TVM)~\cite{10.5555/3291168.3291211} is an optimizing graph compiler that automates optimization using a learning-based cost modeling method that enables it to efficiently explore the space of low-level code optimizations. \section{Introduction}\label{sec:introduction} \input{introduction} \section{Background}\label{sec:background} \input{background} \section{Methods}\label{sec:methodology} \input{methods} \section{Results}\label{sec:results} \input{results} \section{Discussion}\label{sec:discussion} \input{discussion} \section{Conclusion and future work}\label{sec:futurework} \input{futurework} \section{Speculation}\label{sec:speculation} \input{speculation} \section{Acknowledgements}\label{sec:acks} We would like to thank Rick Stevens and Ian Foster for their constructive criticism and feedback on the project and paper itself. \bibliographystyle{science}
\section{Introduction} With the advent of the era of social media, graph theory has become an increasingly important tool for studying user behavior in such fields as social networks, academic citation, online retails, web analysis, etc. Various graph algorithms have been proposed to attack the problems people encounter during the study of graphs. For example, in order to find the web pages that are of uttermost importance and relevance to a set of keywords, people designed the PageRank algorithm to rank the tens of millions of web pages that are available on line\cite{page1999pagerank}. Nodes on a connected graph may form tightly bound communities, which can be detected using an algorithm that aims to maximize the modularity of each potential cluster\cite{PhysRevE.70.066111, newman2006modularity, PhysRevE.74.036104}. We can equally well study the division of a connected graph into weakly linked communities via the Laplacian matrix, which is derived from a graph's adjacency matrix\cite{fortunato2010community}. It is also of significant interest to find a metric that can measure the distance between any two nodes on a connected graph, and the node2vec method is there to our resort\cite{grover2016node2vec}. All of these algorithms depend on a direct or indirect invocation of a graph's adjacency matrix, an invocation that is understandable due to the convenience of the adjacency matrix in uniquely identifying a graph, whether be it directed or undirected. The detection of potential fraudsters in a social network is an imperative task for online retailers and requires the development of an efficient algorithm for performing label propagation in a connected graph. Users of online retailers may have connections with each other, and we can capture these connections by analyzing users' behavior. For example, different users may share the same shipping address, or they may log into their accounts via the same device. By analyzing data like this, we can create a social network of the users in which each user is represented by a node in a graph. If two users share the same login device, we can link an edge between these two users. By this way, we can create a graph that represents the social links between users, who may be normal or fraudulent. In order to protect the normal users from being exploited by fraudsters, we need to find a method to identify these fraudsters and build an aegis for the normal users against them. One way to identify these fraudsters is to employ a set of rules and see who has violated them. However, due to the possibly large number of fraudsters in a social network, it is impractical to try to weed out all fraudsters purely by hand. Here, we will perform a label propagation in a social network so that once we have found out a handful of fraudsters via rules, we can continue to find more potential fraudsters automatically. The propagation of labels in a social network requires a precise definition of the similarity between users. In this paper, we will explore an algorithm that could measure the distance (and thus similarity) between any pair of nodes in a graph. By calculating the distances between nodes in a graph, we can gain a deeper and more intuitive understanding about the similarities between apparently disconnected social network users since the similarity between a pair of users should be inversely proportional to their distance. There are already well developed algorithms for measuring the distance between nodes in a graph, such as the geodesic path distance, or the node2vec algorithms. However, these algorithms either utterly disregards the graph structure or is too time-consuming to run. Here, we propose an algorithm for measuring the distance between a pair of nodes by considering the concept of hitting time for a random walk on a graph. Hitting time of a random walk is the number of steps traversed by a random walker before it hits a pre-specified target node for the first time. This hitting time can be either obtained by Monte Carlo simulation which is again much too time-consuming to be practical for a large graph or can be exactly calculated by an analytical formula which we will derive in this paper. We will validate our algorithm by applying it to some real world problems that we encountered in our daily work. All of these test cases will be presented in detail in the main text of this paper. The organization of the paper is as follows. In section \ref{background}, we will give a brief review of the previous methods for calculating the distance between a pair of nodes. We will summarize the advantages and disadvantages of these methods, and explain why we want to propose an alternative method to calculate the distance using the notion of hitting times in random walks\cite{ross1996stochastic}. After outlining our algorithm in section \ref{method_outline} and detailing the algorithm in section \ref{analytical_method}, we continue to apply our analytical and numerical methods to small and huge graphs respectively, in section \ref{results}. We also highlight the asymmetricity of our distance function under the exchange of its two arguments in the same section. In section \ref{comparison}, we make a comparison of our method with other existing methods. Finally, we make a conclusion in section \ref{conclusion}. \section{Background} \label{background} This paper explores the influence of a node on another node, or on a specific set of nodes in a graph. PageRank algorithm is a convenient method that can measure how influential and important a single node is to the graph as a whole, whereas sometimes we also need to know how influential a node is to another specific node, a situation that could arise when we want to know how susceptible a community of nodes is to the presence of a labeled user in a social network. For example, we can create a connected graph whose nodes represent the users of an online retailer. If we already added a label to a user in the social network, then we want to know how that label will propagate among the other users that cooccur with the labeled user. Intuitively, this process of label propagation depends on the distance between each pair of users in the social network. The shorter the distance between two nodes, the more similar these two nodes are to each other, and thus the easier it is to propagate a label from one user to another. Nowadays, there are already plenty of algorithms that can measure the distance or similarity between each pair of nodes in a graph. However, there is no universal definition of this distance function, and for each specific case, people can devise their own version of distance function. Two distance functions that have gained much popularity are the geodesic distance\cite{baesens2015fraud} and the cosine distance which is a byproduct of node2vec algorithm\cite{grover2016node2vec}. The geodesic distance between two nodes in an undirected graph is defined as the length of the shortest path connecting these two nodes. The geodesic distance, which is calculated by finding the shortest distance from one node say $A$ to anther node say $B$ using Dijkstra's algorithm for sparse graph or Floyd's algorithm for dense graph, is a deterministic algorithm. In this algorithm, we are considering a deterministic walk on the graph. Due to the non-randomness of this algorithm, when we employ it to find the distance between two nodes in a graph, we have failed to capture a significant part of the known information about the graph. The disregarding of the rich structures of a graph from which we could have extracted a huge amount of precious information about the relationship between a pair of nodes constitutes one major disadvantage of this algorithm. In node2vec algorithm, we calculate the distance between two nodes by first mapping each node in the graph into a dense vector using the word2vec method\cite{mikolov2013distributed}, and then using the cosine distance between two mapped dense vectors as the distance between a pair of nodes. This algorithm, which can be considered as an extension of the word2vec algorithm to graphs, requires the pre-existence of a node corpus that can only be generated by performing tens of thousands of random walks on a graph. The generation of this corpus is pretty time-consuming and memory-intensive, thus precluding its application to extremely large graphs. Another thing that is noteworthy is that both of these two algorithms yield symmetric distance functions for any pair of nodes in an undirected graph. The distance function is symmetric in the sense that the distance from node $A$ to node $B$ is guaranteed to be identical to the distance from node $B$ to node $A$. However, even for an undirected graph such as the friendship social network of Facebook, it is unreasonable to believe the distance from an influential user to an obscure user should be the same as the distance from an obscure user to an influential user. Since not all users of a social network share identical reputation and influence, we claim that the relationships between social network users are non-equivalent, non-reflective and asymmetric. Thus, \textit{a good definition of distance function between two nodes of a graph should take account of this non-equivalence, non-reflectivity and asymmetricity of relationships even for undirected graphs}. As we have noted above, a deterministic walk on a graph tends to be blind to the rich structure of a graph. Therefore, here in this paper, we will focus our attention on random walks on graphs. There are many scenarios for performing random walks on graphs. One such scenario starts from a node, say $A$, and selects a node, say $B$, as its target, and performs a multitude of random walks starting from $A$ and counts how many times this random walker encounters node $B$ within a pre-specified number of steps. This encountering frequency for the random walk provides a measure of the distance between nodes $A$ and $B$. The larger the frequency, the shorter the distance between $A$ and $B$. This method of of measuring the distance between two nodes, although valid in some sense, has several drawbacks, the most prominent of which is its strong dependence upon such capricious parameters as the maximum number of nodes each random walk is allowed to traverse, and the number of random walks to be performed for the encountering frequency to be statistically stable and meaningful. Another weak point of this method is that it is again much too time-consuming to perform enough number of random walks to gain a statistically significant result for two nodes that are located afar in a huge graph. The application of this random walk scenario to a small sized graph is no less troublesome due to the fact that a random walker starting from one node in a connected graph is guaranteed to reach any other node in the same graph as long as the random walk lasts long enough, thus rendering all the distances between any pair of nodes almost the same. Considering the time-expensiveness of performing sufficiently large number of random walks on a large graph and the strong dependence of the final results on the hard-to-select hyper-parameters, we prefer to find an alternative method that can deliver an exact solution to the random walk problem, thus avoiding this lengthy and tedious process of Monte Carlo simulations from the beginning. For sake of concreteness, consider a graph in which we have labeled some nodes as ``black", some as ``white", and some as ``unknown", as detailed in Ref. \cite{doyle2000random}. We can estimate the color of the unknown nodes either by performing a Monte Carlo simulation or by solving a discrete Laplacian equation. The inference of the colors of these unknown nodes is equivalent to performing label propagation in a graph. It is shown in Ref. \cite{doyle2000random} that solution of discrete Laplacian equation gives us more accurate results using far less time. Unfortunately, solution of Laplacian equations requires the pre-existence of boundary conditions, which are not always available\cite{zhu2003semi}. The black and white labels in Ref. \cite{doyle2000random} are the boundary conditions for a direct solution of Laplacian equation to be feasible. However, if all the known labels are marked black, then the only thing that a solution of Laplacian equation can tell us is that all the colors of the unknown nodes should be black, which is practically useless to us. For example, in order to quantify the influence of a black node on the other nodes that cooccur in a social network, we also need the existence of at least one node that is explicitly labeled as ``white", a label that is not always available. In order to avoid these conundrums, here we propose a new algorithm that can measure the distance between any two nodes in a graph by giving an exact solution to a random walk problem on undirected graphs, just like the analytical solution of discrete Laplacian equation for color inference as described above. \textbf{The advantage of this algorithm} is that the final result is uniquely obtained by solving a sparse linear system, thus releasing us of the unnecessarily thorny duty of selecting a set of appropriate parameters, and saving us tens of thousands of CPU hours from performing Monte Carlo simulations thanks to the highly efficient numerical linear algebra packages that are readily available for performing sparse matrix multiplications. \textbf{Another advantage of this algorithm} is that it gives us a distance function that is asymmetric between a pair of nodes, reflecting the reality that users in a social network generally have non-equivalent and non-reflective relationships with each other. \section{Proposed method} \label{method_outline} In this section, we will propose an analytical method for finding expected hitting times of a random walk on an undirected and connected graph $G$. The connectedness of the graph does not constitute a major restriction to our method due to the availability of efficient algorithms for finding connected components of an undirected graph. The adjacency matrix of this graph is $A$, which is a $|\mathbb{V}| \times |\mathbb{V}|$ matrix ($\mathbb{V}$ is the set of vertices in the graph, and $|\mathbb{V}|$ is the cardinality of the set), with matrix elements $A_{ij} = 1$ if there is an edge between node $i$ and node $j$, otherwise $A_{ij} = 0$. Because we are considering a social network of users who would have relationships only with others, we demand that the graph in this paper should be simple, meaning that none of the nodes are self-looped. The matrix dimension $|\mathbb{V}|$ is the number of nodes in the graph, and the number of non-zero matrix elements of $A$ gives us the edge number. $A$ is symmetric if graph $G$ is undirected, or else it is generally non-symmetric. In this paper, we are trying to calculate the probability of reaching a target node from any other node in the graph, whereas in a directed graph, a node may not be reachable from another node, thus here we only consider undirected and connected graphs for which the adjacency matrix is always symmetric. Furthermore, our method also applies to weighted graphs, for which $A_{ij} = w > 0$ if the edge connecting node $i$ to $j$ has weight $w$, and $A_{ij} = 0$ if there is no edge between nodes $i$ and $j$. A random walk from a start node to a target node on a graph is defined as follows: \begin{algorithm}[H]\label{random_walk_algorithm} \caption{Random walk on a graph} \begin{algorithmic}[1] \Require{An undirected and connected Graph $G$} \Procedure{RandomWalk}{$s, t$}\Comment{$s$ is the start node, and $t$ is the target node.} \State $c \gets s$ \Repeat \State $r \gets $ a random neighbor of $c$ \State $c \gets r$ \Until{$c = t$} \EndProcedure \end{algorithmic} \end{algorithm} For a random walk that starts from node $s$, \textbf{the hitting time is defined as the number of steps needed for the random walker to reach a target node $t$ for the first time}. According to this definition, the hitting time is a random variable that depends on the graph structure, the starting node $s$, and the target node $t$. Therefore, we can denote the hitting time as $N_{t}^{(s)}$. Denote the probability of hitting target $t$ after exactly $n$ steps starting from node $s$ as \begin{align} x^{(s)}_n = P(N_t^{(s)} = n) \end{align} Assume that node $s$ has $m$ neighboring nodes, of which at most one is the target node $t$. We enumerate these $m$ nodes using indices $i_s = 1, 2, ..., m$. Then the probability $P(N_t^{(s)} = n)$ can be recursively represented as \begin{eqnarray} P(N_t^{(s)} = n) = \sum_{\substack{i_s = 1 \\ i_s \ne t}}^{m} \frac{w_{s, i_s}}{W_s} P(N_t^{(i_s)} = n-1) \end{eqnarray} In the above equation, $W_s = \sum_{i_s = 1}^{m} w_{s, i_s}$ is the total weight associated with node $s$, $\frac{w_{s, i_s}}{W_s}$ represents the probability for the random walker to make a transition from node $s$ to one of its neighbors $i_s$, and $P(N_t^{(i_s)} = n-1)$ is the probability of reaching target node $t$ from node $i_s$ after exactly $n - 1$ steps. Since we are calculating the probability of reaching target $t$ for the first time from node $s$ after exactly $n$ steps, the probability of reaching target starting from the target itself is zero for any non-zero number of steps, i.e., $P(N_t^{(t)} = n) = 0, \forall n > 0$. Thus, we demand that the random walker in the above recursive equation should not make a transition to the target node $t$ even if $t$ is one of the neighbors of node $s$, which justifies our notation $i_s \ne t$ in the summation subscript. If we scan all possible starting vertices $s$, we can obtain a simultaneous system of difference equations for the hitting probabilities $P(N_t^{(i)} = n), \forall i \in \mathbb{V}, i \ne t$, where $\mathbb{V}$ is the set of all vertices in graph $G$. Specifically, if a vertex $j$ has only one single neighbor, and this very neighbor is just our target $t$, then we can directly write out the probability of reaching target $t$ after exactly $n > 0$ steps starting from node $j$ as $P(N_t^{(j)} = n) = \delta_{n, 1}$, with $\delta_{n, 1}$ being the Kronecker $\delta$ function. Since we already know the probability distribution of hitting times for such queer nodes which we call \textit{adherents} to target $t$, we can ignore those nodes when establishing the simultaneous system of difference equations for $P(N_t^{(s)} = n)$. \begin{defn} A node in graph $G$ is called an \textbf{\textit{adherent}} to target node $t$ if and only if this node has the target node as its only neighbor. \end{defn} With these notations, we can write out a simultaneous system of difference equations for the probabilities of hitting target node $t$ for the first time with exactly $n > 0$ steps after starting from different nodes $i \in \mathbb{V}$ as \begin{align} \label{recursive_equation} P(N_t^{(i)}= n) = \sum_{\substack{j = 1 \\ j \ne t}}^{|\mathbb{V}|} B_{ij} P(N_t^{(j)} = n - 1), \end{align} where we have imposed the restriction that the starting node $i$ should not be equal to $t$, and should not be an \textit{adherent} to target $t$. \textbf{The $B$ matrix is called probability transition matrix}, the elements of which are \begin{align} B_{ij} = \begin{cases} \frac{w_{ij}}{\sum_{i^{\prime}}w_{ii^{\prime}}}, i^{\prime} \in \{neighbors \; of \; i\} & A_{ij} \ne 0; \\ 0 & A_{ij} = 0. \end{cases} \end{align} Most of the time, $B$ is a sparse matrix. Note that although in an undirected graph the adjacency matrix $A$ is always symmetric, the probability transition matrix is generally non-symmetric. Moreover, due to the exclusion of the target node $t$ in the definition of probability transition matrix, the sum of matrix elements for each row in $B$ is not necessarily equal to 1. In fact, for a connected undirected graph, there is at least one row of $B$ whose sum is less than 1. The rule is that $\sum_{j}B_{ij} = 1$ if the target node $t$ is not a neighbor of node $i$; otherwise $\sum_{j}B_{ij} <1$. Since we have excluded the target node $t$ and all its adherent nodes from the set of starting nodes, the matrix $B$ has a dimension that is smaller than that of matrix $A$. For an undirected graph, the matrix $B$ is guaranteed to be square due to the fact that a random walker starting from a node that is not the target cannot possibly reach an adherent node to the target. The fact that matrix $B$ has rows with sum that are less than 1 means that this matrix is not a Markov matrix, and that all of its eigenvalues have a magnitude smaller than 1. As a result, the spectral radius of matrix $B$ is also smaller than 1. We will take advantage of this fact later in this paper. We can consider the hitting time $N_{t}^{(i)}$ as the $i$th component of a column vector $\boldsymbol{N}_{t}$. Our aim in this paper is to study the probability distribution of this random vector. Once we have already created the probability transition matrix $B$, we can directly write out the expectation values of $\boldsymbol{N}_{t}$ as \begin{align} \label{expectation} \langle \boldsymbol{N}_{t} \rangle = \sum_{n = 0}^{\infty} B^{n} \mathbf{1}, \end{align} where $\mathbf{1}$ is a column vector of which each element is 1, i.e, $\mathbf{1} = \begin{pmatrix} 1 & 1 & \ldots & 1\end{pmatrix}^{\text{T}}$. Since the spectral radius of $B$ is smaller than 1, the summation of power series in Eq. [\ref{expectation}] will converge. By terminating the summation at a power that is high enough, we can obtain numerical results for the expected hitting times with arbitrary precision. We will show that the expected hitting times can be used to measure the distance between two nodes in an undirected graph. We can also obtain higher order moments of $\boldsymbol{N}_{t}$, the formulae for which are no more complicated than the one in Eq. [\ref{expectation}]. We will show how to calculate all the moments of $\boldsymbol{N}_{t}$ in the next section. \section{Theoretical proof} \label{analytical_method} In the previous section, we have given a formula for calculating the expected hitting times from an arbitrary node to a target node in an undirected graph. In this section, we will give the necessary mathematical details for obtaining that formula. Actually, we will overshoot this goal by giving a generating function whose derivatives give us moments of any order for hitting time distribution. Previously, we have obtained a recursive equation for $P(N_{t}^{(i)} = n), n \ge 2$, which is the probability for a random walker to hit target node $t$ starting from node $s$ after exactly $n$ steps. By introducing the notation $\boldsymbol{X}^{(i)}_{n} = P(N_{t}^{(i)} = n)$, we can rewrite Eq. [\ref{recursive_equation}] into matrix form as \begin{align} \label{matrix_equation} \boldsymbol{X}_{n} = B \boldsymbol{X}_{n - 1}, n \ge 2 \end{align} An iterative solution to Eq. [\ref{matrix_equation}] is \begin{align} \boldsymbol{X}_{n} = B^{n-1} \boldsymbol{X}_{1}, n \ge 1 \end{align} The initial probability vector $\boldsymbol{X}_1$ can be conveniently obtained by the observation that $X_{1}^{(i)} = 0$ if the target node $t$ is not a neighbor of node $i$, and that $X_{1}^{(i)} = w_{it}/w_{ii^{\prime}}, i^{\prime} \in \{neighbors \; of \; i \}$ if the target node $t$ is a neighbor of node $i$. Now that we have known matrix $B$ and the initial probability vector $\boldsymbol{X}_{1}$, we can calculate all the hitting probabilities for any valid starting node. Although we can calculate all the hitting probabilities, most of the time, we are more interested in the observable quantities associated with these probability distributions. We can calculate the moments of the probability distribution by invoking their definitions, which are \begin{eqnarray} \langle N_t^{(i)m} \rangle = \sum_{n = 1}^{\infty} P(N_t^{(i)} = n) n^{m} := \sum_{n = 1}^{\infty} X_{n}^{(i)} n^{m} \end{eqnarray} The expectation and variance of the first hitting time starting from any node $i$ can be easily calculated from the first and second moments of the hitting probability distribution. At first sight, it seems that we need to know all the hitting probabilities before we can calculate their moments. However, we can exploit the fact the spectral radius of matrix $B$ is less than 1 and directly calculate the moments vector $\langle \boldsymbol{N}_{t}^{m} \rangle$ from the recursive Eq. [\ref{matrix_equation}]. To accomplish this, we need to first define the characteristic function for the probability density function $f(x)$ as \begin{align} \hat{f}(\omega) = \int_{x \in \mathbb{R}} f(x) e^{\text{i} \omega x} dx \end{align} For a discrete series like $X_{n}^{(i)}$, the probability density function is \begin{align} f^{(i)}(x) = \sum_{n = 1}^{\infty} X_{n}^{(i)} \delta(x - n), \end{align} where $\delta(x - n)$ is the Dirac $\delta$ function with the property that for any continuous function $f(x)$, we always have \begin{align} \int_{x\in\mathbb{R}} f(x)\delta(x - x_0) dx = f(x_0). \end{align} The characteristic function of $f^{(i)}(x)$ is \begin{align} \hat{f}^{(i)}(\omega) & = \int_{x \in \mathbb{R}} f^{(i)}(x) e^{\text{i} \omega x } dx \\\nonumber &= \sum_{n = 1}^{\infty} X_{n}^{(i)} e^{\text{i} \omega n } \end{align} If we further define $z = e^{\text{i} \omega}$, then the characteristic function can be more compactly rewritten as \begin{align} \tilde{f}^{(i)}(z) = \sum_{n = 1}^{\infty} X_{n}^{(i)} z^{n} \end{align} We can read off the expectation value and variance of hitting probabilities $X_{n}^{(i)}$ from the first and second derivatives of $\tilde{f}(z)$ as \begin{align} & \langle N_{t}^{(i)} \rangle = \frac{d}{dz}\Big( \tilde{f}^{(i)}(z) \Big) \Big|_{z = 1} \\\nonumber & \langle N_{t}^{(i)2} \rangle = \frac{d^2}{dz^2}\Big( \tilde{f}^{(i)}(z) \Big) \Big|_{z = 1} + \frac{d}{dz}\Big( \tilde{f}^{(i)}(z) \Big) \Big|_{z = 1} \\ \nonumber & \text{Var}(N^{(i)}_t) = \langle N_{t}^{(i)2} \rangle - \langle N_{t}^{(i)} \rangle ^2 \end{align} If we consider $N_{t}^{(i)}$ as the $i$th component of $\boldsymbol{N}_{t}$, and $N_{t}^{(i)2}$ as the $i$th component of $\boldsymbol{N}^2_{t}$ which is a component-wise square of vector $\boldsymbol{N}_{t}$, then the above three relations can be simplified into the form \begin{align} & \langle \boldsymbol{N}_{t} \rangle = \boldsymbol{\tilde{f}}^{\prime}(1) \\ & \langle \boldsymbol{N}_t^2 \rangle = \boldsymbol{\tilde{f}}^{\prime\prime}(1) + \boldsymbol{\tilde{f}}^{\prime}(1) \\ & \text{Var}(\boldsymbol{N}_t) = \langle \boldsymbol{N}_t^2 \rangle - \langle \boldsymbol{N}_{t} \rangle^2 \end{align} Here, we have defined a vector function $\boldsymbol{\tilde{f}}(z) $ as \begin{align} \boldsymbol{\tilde{f}}(z) = \sum_{n = 1}^{\infty} \boldsymbol{X}_{n} z^{n} \end{align} Since the coefficients of $\tilde{f}^{(i)}(z)$ are the hitting probabilities $X_{n}^{(i)}$, we call it the \textit{generating function} of hitting probabilities. Plug Eq. [\ref{matrix_equation}] into the above definition, and we get \begin{align} \label{generating_function} \boldsymbol{\tilde{f}}(z) &= \Big( \sum_{n = 1}^{\infty} z^{n} B^{n-1} \Big) \boldsymbol{X}_{1} \\ \nonumber &= z (I - z B)^{-1} \boldsymbol{X}_{1} \end{align} The second line of the above equation stems from the fact that $|z| = 1$ and that the spectral radius of matrix $B$ is smaller than 1. We have already known how to calculate the probability transition matrix $B$ and the initial probability vector $\boldsymbol{X}_{1}$, we can in principle calculate exactly the generating function from Eq. [\ref{generating_function}]. However, calculating the inverse of matrix $I - zB$ is no easy task, especially when the graph is huge. Moreover, the fact that the sparsity of matrix $B$ which we should take full advantage of can get lost after matrix inversion compels us to shun the idea of directly inverting matrix $I - zB$ to calculate hitting probability moments. Therefore, we have devised a trick for finding probability moments without resort to matrix inversion. For this purpose, we rewrite Eq. [\ref{generating_function}] as \begin{align} \label{generating_function_reordered} (I - z B) \boldsymbol{\tilde{f}}(z) = z \boldsymbol{X}_{1} \end{align} Performing first order derivative of both sides with respect to $z$ and setting $z = 1$ yields \begin{align} - B \boldsymbol{\tilde{f}}(1) + (I - B) \boldsymbol{\tilde{f}}^{\prime}(1) = \boldsymbol{X}_{1} \end{align} By definition, $\boldsymbol{\tilde{f}}(1) = \begin{pmatrix} 1 & 1 & 1 ... & 1\end{pmatrix}^{\text{T}}$, and $\boldsymbol{\tilde{f}}^{\prime}(1)$ gives us the first order moment of $ \boldsymbol{X}_{n}$, which is equal to \begin{align}\label{first_moment} \boldsymbol{\tilde{f}}^{\prime}(1) &= (I - B)^{-1} (B \boldsymbol{\tilde{f}}(1) + \boldsymbol{X}_{1}) \\ \nonumber &= (I - B)^{-1} \boldsymbol{\tilde{f}}(1) \end{align} The second line of the above equation is due to the identity that $\boldsymbol{\tilde{f}}(1) = B \boldsymbol{\tilde{f}}(1) + \boldsymbol{X}_{1}$, which can be easily verified by plugging $z = 1$ into Eq. [\ref{generating_function}]. We can avoid inverting sparse matrices, an operation that will destroy the sparsity of a matrix, by noting that Eq. [\ref{first_moment}] can be rewritten as (remember that the spectral radius of $B$ is smaller than 1) \begin{align} \label{first_moment_power_series} \boldsymbol{\tilde{f}}^{\prime}(1) = \sum_{n = 0}^{\infty} B^{n} \boldsymbol{\tilde{f}}(1) \end{align} It is noteworthy that the first order moments of hitting probabilities starting from each valid vertex are independent of the initial probability vector $\boldsymbol{X}_{1}$, and depend only on the probability transition matrix $B$. Taking the second order derivative of Eq. [\ref{generating_function_reordered}] yields \begin{align} \boldsymbol{\tilde{f}}^{\prime\prime}(1) & = 2 B (I - B)^{-2} \boldsymbol{\tilde{f}}(1) \\ \nonumber &= 2 \sum_{n = 1}^{\infty} n B^{n} \boldsymbol{\tilde{f}}(1) \end{align} The pseudocode for calculating mean and variance of hitting times is shown below: \begin{algorithm}[H] \caption{Hitting time calculation algorithm}\label{hitting_time} \begin{algorithmic}[1] \Require{probability transition matrix $B$ must be square} \Require{max iteration number $N$ must be positive} \Require{error limit $\epsilon$ must be positive} \Procedure{HittingTimeCalculator}{$B, N, \epsilon$} \State $i \gets 0$ \State $d \gets B.dimension$\Comment{Dimension of matrix B} \State $ones \gets \textit{vector of all 1's, shape = (d, 1)}$ \State $zeros \gets \textit{vector of all 0's, shape = (d, 1)}$ \State $power \gets ones$ \State $\mu \gets ones$ \Comment{$\mu$: expectation of hitting times} \State $var \gets zeros$ \Comment{$var$: variance of hitting times} \While{$i \le N$} \State $i \gets i + 1$ \State $power \gets B * power$\Comment{Matrix multiplication} \State $\mu \gets \mu + power$ \State $var \gets var + i * power$ \State $error \gets \text{norm of } i * power$ \If{$error < \epsilon $} \State \textbf{break} \EndIf \EndWhile \State $var \gets 2 * var$ \State $var \gets var + \mu - \mu^2(\text{element wise square})$ \State \Return $\mu, var$ \EndProcedure \end{algorithmic} \end{algorithm} Higher order derivatives of Eq. [\ref{generating_function_reordered}] yield higher order moments. Now we have already developed the algorithm for calculating the moments of hitting probabilities using both analytical and numerical methods, next we will illustrate the effectiveness of this algorithm using both small and huge graphs. The significance of the first order moment lies in that it is a measure of the distance from a starting node to a target node. If we already know that the target node is a fraudulent user in the social network, then we can infer that the nodes with average distance smaller than a threshold value could be considered to be potential fraudulent users. From intuition, we make a claim that the smaller the distance between any two nodes, the more similar they are to each other. \section{Experimental evidence} \label{results} \subsection{An analytical calculation of hitting time distribution on a simple graph} \label{exact_solution} In this section, we will show how to calculate the hitting time distribution on a small graph using analytical methods. This graph contains five nodes, which are denoted as $0, 1, 2, 3, 4$, as shown in Fig. [\ref{small_graph}]. \begin{figure}[h!] \centerline{\includegraphics[scale = 0.25]{small_graph.png}} \caption{An undirected graph that is small enough to be solved using analytical formulae. We use node 3 as our target node. } \label{small_graph} \end{figure} The adjacency matrix of this graph is \begin{align} A = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix} \end{align} Our target node is 3, and we want to calculate the probability of hitting the target starting from each vertex for the first time after exactly $n$ steps. Since node 4 is an adherent to target 3, we can directly write out its hitting probability as \begin{align} P(N_3^{(4)} = n) = \delta_{n, 1} \end{align} For nodes 0, 1, 2, we can define the probabilities of starting from each node and ending at node 3 after exactly $n$ steps. We encapsulate these probabilities into a column vector as \begin{align} \boldsymbol{X}_{n} = \begin{pmatrix} P(N_3^{(0)} = n) \\ P(N_3^{(1)} = n) \\ P(N_3^{(2)} = n) \end{pmatrix} \end{align} The probability transition matrix is \begin{align} B = \begin{pmatrix} 0 & 1/2 & 1/2 \\ 1/2 & 0 & 1/2 \\ 1/3 & 1/3 & 0 \end{pmatrix} \end{align} The probability vector satisfies this equation \begin{align} & \boldsymbol{X}_{n} = B \boldsymbol{X}_{n - 1}, n \ge 2, \end{align} with initial condition \begin{align} \boldsymbol{X}_1 = \begin{pmatrix} 0 \\ 0 \\ 1/3 \end{pmatrix} \end{align} The moment generating function for $\boldsymbol{X}_{n}$ is \begin{align} \label{close_form} \boldsymbol{\tilde{f}}(z) & = z( I - zB)^{-1} \boldsymbol{X}_1 \\\nonumber & = \frac{z}{3} \frac{1}{1 - \frac{7}{12}z^2 - \frac{z^3}{6}} \begin{pmatrix} \frac{z}{2} + \frac{z^2}{4} \\ \frac{z}{2} + \frac{z^2}{4} \\ 1 - \frac{z^2}{4} \end{pmatrix} \end{align} We can easily get the first order moments by differentiating the above equation with respect to $z$ at $z = 1$, the results of which are \begin{align} \boldsymbol{\tilde{f}}^{\prime}(1) = \begin{pmatrix} \langle N_3^{(0)} \rangle \\ \langle N_3^{(1)} \rangle \\ \langle N_3^{(2)} \rangle \end{pmatrix} = \begin{pmatrix} 9 \\ 9 \\ 7 \end{pmatrix} \end{align} The above result means that the average distance from nodes 0 and 1 to node 3 are equal, both being 9, and the average distance from node 2 to node 3 is 7. We interpret these results as demonstrating that nodes 0 and 1 have equal distance to node 3, whereas node 2 is nearer to node 3 than both 1 and 2. Node 4, being an adherent to target node 3, always has an average distance 1 to the target. These results are consistent with our intuition and signify to us that node 4 is most susceptible to the influence of node 3, node 2 is second most susceptible, and nodes 0 and 1 are least susceptible to its influence. In order to make a further test of the analytical results, we also perform a Monte Carlo simulation for the random walk on this graph. In the Monte Carlo program, we use node 3 as our target node, and start each random walk from node 0, 1, and 2. Each random walk terminates at node 3 after some number of steps. By repeating this process thousands of times, we obtain the average number of steps required before the random walker finally reaches the target. For each node in the set $\{0, 1, 2\}$, we perform $10^6$ random walks, each of which yields a step number, and then we calculate the mean value of these $10^6$ numbers. The Monte Carlo simulation results we get are pretty similar to the analytical results, which are shown together in Table \ref{small_table}. We can see that the Monte Carlo simulation results are consistent with our analytical results, with relative errors being approximately $10^{-3}$. We do not expect Monte Carlo simulation to give us high precision numerical results, and the final results of Monte Carlo simulations may vary slightly for different random number generators. Using the algorithm outlined in the previous section, we can equally well compute the average step number starting from each node 0, 1, 2 using numerical methods. We can use either Eq. [\ref{first_moment}] or Eq. [\ref{first_moment_power_series}] for this purpose, because the probability transition matrix is small enough for the direct inversion of matrix to be feasible. However, Eq. [\ref{first_moment}] is no longer practical when the graph is large, and thus even for this small graph, we still prefer to use Eq. [\ref{first_moment_power_series}], where we need to calculate the sum of an infinite power series. Due to the quick convergence of this series, we artificially impose a cutoff condition such that the summation series should terminate if the norm of the summand vector is smaller than a pre-specified error limit $\epsilon$, which we choose to be $10^{-13}$ here. We run the Python program listed in the previous section on macOS Mojave, and obtain numerical results that are shown together with analytical results and Mont Carlo simulation results in Table \ref{small_table}. It is clear that the numerical results obtained using our algorithm have a much higher precision than that of the Monte Carlo simulation results. \begin{table}[] \begin{tabular}{|c|c|c|c|} \hline $\langle N_3^{(i)} \rangle$ & Analytical & Monte Carlo & Numerical \\ \hline $\langle N_3^{(0)} \rangle$ & 9 & 9.00323 & 8.99999999999999 \\ \hline $\langle N_3^{(1)} \rangle$ & 9 & 8.96693 & 8.99999999999999 \\ \hline $\langle N_3^{(2)} \rangle$ & 7 & 7.00013 & 7.000000000000002 \\ \hline \end{tabular} \caption{Random walk results for the graph shown in Fig. \ref{small_graph}. $\langle N_t^{(i)} \rangle$ is the expected hitting time for a random walk that starts from node $i$ and ends at target node $t$. Here, $t = 3$. The above table lists the expected hitting times for random walkers to first reach target node 3 starting from nodes 0, 1, and 2, respectively. We obtain these results using three methods: analytical, Monte Carlo simulation, and numerical. Both Monte Carlo simulation results and numerical computation results are consistent with analytical results, although the numerical results have much higher precision, which justifies our introduction of the numerical algorithm for attacking this problem. \label{small_table}} \end{table} \subsection{Numerical computation of hitting time distribution on large graphs} \label{numerical_solution} When dealing with large graphs, it is both tedious and impractical to get an analytical formula as Eq. [\ref{close_form}]. Instead, we will resort to Eq. [\ref{first_moment_power_series}] to find numerical values of hitting times. Another method to find the hitting times is to use Monte Carlo simulation, although we will see that for a large graph, the running time of Monte Carlo simulation is much longer than the numerical method, thus making the Monte Carlo simulation an inferior alternative compared to Eq. [\ref{first_moment_power_series}]. In this section, we apply the numerical method and Monte Carlo simulation method to a connected graph as shown in Fig. [\ref{large_graph}]. \begin{figure}[h!] \centerline{\includegraphics[scale = 0.2]{large_graph.png}} \caption{An undirected graph with 100 vertices and 740 edges. This graph contains two communities, and is generated by the rule that each pair of vertices within the same community is connected by an edge with probability $p_{\text{in}} = 0.3$, whereas each pair of vertices from different communities is connected by an edge with probability $p_{\text{out}} = 0.02$.} \label{large_graph} \end{figure} We will calculate the hitting times from each vertex in the graph to the target node which is chosen to be node 1. In order to visualize the results, we sort the vertices in the graph according to their hitting times to the target node. In Fig. [\ref{sorted_distances}], we plot the results from Monte Carlo simulation method and numerical method. We can see from the figure that these two methods give almost the same results, although we know that results from Monte Carlo simulation have a precision that is much lower that obtained from numerical method. Another weak point of Monte Carlo simulation is that it is much more time-consuming than the numerical method. In order to obtain the results shown in Fig. [\ref{sorted_distances}], we need to run $10^{5}$ random walks from each vertex in the graph except the target node, and the whole process takes about 157 seconds, whereas in the numerical method, we only need to compute the power series in Eq. [\ref{first_moment_power_series}] up to 6180 terms, and it takes only 3.06 seconds to obtain results with machine precision. Actually, the larger the graph, the more time-saving the numerical method is compared to the Monte Carlo simulation method. \begin{figure}[h!] \centerline{\includegraphics[scale = 0.3]{sorted_distances.pdf}} \caption{An undirected graph with 100 edges and 740 edges. This graph contains two communities, and is generated by the rule that each pair of vertices within the same community is connected by an edge with probability $p_{\text{in}} = 0.3$, whereas each pair of vertices from different communities is connected by an edge with probability $p_{\text{out}} = 0.02$.} \label{sorted_distances} \end{figure} Another feature that is worth mentioning in Fig. [\ref{sorted_distances}] is that we can distinguish the two communities in the original graph by looking at the distribution of hitting times with respect to vertices. It is clear that there is a transition region which connects two plateaus in the hitting time vs. sorted vertices curve. The two plateaus correspond to the two communities in the graph. We can interpret the emergence of these two plateaus by noting that a random walker that starts from within one of the two communities tend to get trapped in a community. Once the random walker gets trapped in a community, the hitting times for each vertex in the community would not change substantially from vertex to vertex, which gives rise to a plateau in the curve. However, as soon as the random walker finds a bridge leading from one community to another, it will make a rapid transition across the two communities, and consequently the hitting times experience a significant change. Thus, the calculation of hitting times provides us a tool for community detection as a by-product. However, we should make a caveat that this method of community detection is usable only when the number of communities in the graph is small enough and the communities are clearly separated from each other. Or else, this method of community detection is not as good as the ones compiled in Ref. \cite{fortunato2010community}. \subsection{Directional distances and non-reciprocal relationships} The distance between a pair of nodes can be thought of as a function that maps two nodes to a non-negative real number whose value represents the distance between these two nodes, i.e., $d: (u, v) \mapsto \mathbb{R}^{+}$, where $u, v$ are two nodes in a graph. Previously, people define the distance function between a pair of nodes with the implicit assumption that this function should remain invariant with the exchange of its two arguments, which means $d(u, v) = d(v, u)$. This property, which we call the symmetricity of the distance function, is however undesirable for our situation where we want to study the influence of a node $A$ upon another node $B$. It is a truth universally acknowledged that not all users in a social network are equally influential, and thus we do not expect the mutual influence between a pair of nodes to be the same regardless of the direction. Based on this consideration, we dictate that the distance function for our case should be directional even if we restrict our attention to undirected graphs. Take the undirected graph shown in Fig. \ref{directional_distance} as a concrete example. \begin{figure}[h!] \centerline{\includegraphics[scale = 0.255]{directional_distance.png}} \caption{A figure that illustrates the directedness or asymmetricity of the distance function between a pair of nodes. In this figure, we do not expect the distance from node 10 to 11 to be identical to the distance in the reverse direction due to the fact that node 10 is more closely associated with nodes $1 - 9$ than with node 11, whereas node 11, being less social, has a pretty strong association with node 10. Here, we assume all the edges to have equal weight. } \label{directional_distance} \end{figure} In this graph, where all edges are assumed to be have equal weight, we do not expect the distance from node 10 to node 11 to be identical to the distance from node 11 to node 10. It is obvious from the figure that node 10, which belongs to a clique that consists of nodes $1 - 10$, should have a strong association with nodes $1 - 9$ and a ridiculously tenuous association with node 11, although node 11 is a direct neighbor of node 10. On the other hand, node 11, which has only three direct neighbors, would have an appreciably strong association with node 10. Therefore, we claim based on this graph that node 10 would exert a stronger influence on node 11 than the influence that node 11 would exert on node 10. In other words, the affection from node 11 to node 10 tend not to be reciprocated. Our assertion on the non-reciprocal relationship between node 10 and node 11 is buttressed by numerical results according to which the distance from node 10 to 11 is 91 whereas the distance from node 11 to 10 is 7, indicating that node 11 is more loyal to 10 than node 10 is to loyal to 11. Due to the fact that in most real world examples the relationships and influences are generally non-reciprocal and non-equivalent, we claim that our directional distance function is more suitable for describing the influence that one node exerts upon another one than other symmetric distance functions . \section{Comparison with existing methods} \label{comparison} \section{Conclusion and future work} \label{conclusion} In this paper, we have derived an analytical formula for calculating the hitting time from a starting vertex to a target vertex in a connected undirected graph. This method relies on the probability transition matrix that can be calculated conveniently from the graph's adjacency matrix. We also propose a quick method for implementing this formula using Python code, without the need to invert a possibly huge sparse matrix. Since hitting time is a core concept in random walk which can directly be simulated using Monte Carlo method, we can obtain an approximate value of the hitting time using Monte Carlo simulation. We tested our formula by applying the analytical method and Monte Carlo simulations to undirected connected graphs, and show that these two methods can give similar results within tolerance of error. The advantage of the analytical method over Monte Carlo simulation is that the former is much quicker and more accurate than the latter. Our calculation of the hitting times for vertices in a graph can also give us a glimpse of the community structures of the graph on which we perform random walks, although this method for detecting communities is not as good as other existing algorithms when the communities in graph are numerous and not clearly separated. \begin{acknowledgements} \end{acknowledgements}
\section{Introduction}\label{s1} The one-dimensional minimax bin-packing problem with bin size constraints (MINIMAX\_BSC) is a special case of the bin-packing problem that appears in the area of psychology \cite{VanderLinden2005}. The MINIMAX\_BSC case can be formally defined as follows: There are $T$ sets ($1\leq t\leq T$), and each set is composed of $B$ items ($1\leq b\leq B$). Each item has an associated weight $w_{tb}$, and these items must be grouped into $B$ groups in such a way that each group contains exactly one item of each one of the sets. The total weight of each group is equal to the sum of the weights of the items assigned to the group, and the objective is to minimize the maximum sum of the weights of the items in any group. In the context of test design \cite{VanderLinden2005}, the items represent questions, and each question has a level of difficulty; the sets represent groupings of questions that have comparable difficulty; the final groups represent the questionnaires, each of which has one question from each set. The difficulty of each questionnaire should be as even as possible, which is an objective that is achieved by minimizing the difficulty (weight) of the most difficult questionnaire (based on the sum of the weights). Brusco, K\"{o}hn and Steinley \cite{Brusco2013} recently studied the MINIMAX\_BSC, proposed a mixed zero-one integer linear programming model, and used the CPLEX commercial software to solve the model. Because this method could not verify optimality on large instances, they also proposed a simulated annealing (SA) algorithm to obtain near-optimal solutions. While \cite{Brusco2013} did not address the complexity of the problem, the analysis of the results showed that CPLEX as well as the proposed SA algorithm optimally solved large-sized instances with $B=2$, leading the authors to conjecture that this special case might be solvable in polynomial time. In this note, we address the complexity of the MINIMAX\_BSC, and we propose a constructive heuristic that has an absolute performance guarantee. The remainder of this paper is structured as follows. In section \ref{s2}, we show that the partition problem and the 3-partition problem can be reduced to the MINIMAX\_BSC, and we also propose a pseudo-polynomial algorithm for the case in which $B=2$. In section \ref{s3}, we present the proposed heuristic. Finally, section \ref{s4} provides some conclusions. To ease the presentation of the following sections, we define the following notation: the range of the set $t$, $r_t$, is defined as $r_t=\max_{1\leq b\leq B} \{w_{tb}\} -\min_{1\leq b\leq B} \{w_{tb}\}$. The maximum range $R$ is defined as $R=\max_{1\leq t\leq T}\{r_t\}$. The total weight among all of the items is denoted as $W$ ($W=\sum_{1\leq t\leq T, 1\leq b\leq B} w_{tb}$). Note that a lower bound on the objective is $W/B$ (see \cite{Brusco2013}). \section{Complexity}\label{s2} In this section, we first reduce the partition problem to the MINIMAX\_BSC problem in which $B=2$. Furthermore, we propose a pseudo-polynomial algorithm for this case. Finally, we show that the 3-partition problem reduces to a general MINIMAX\_BSC problem. \textbf{Theorem 1}. The MINIMAX\_BSC problem with $B=2$ is NP-Hard. \textbf{Proof}.We use a reduction for the partition problem that is known to be NP-Complete \cite{Garey1979}. \emph{Problem PARTITION}. Given a finite set $A$ and a size $s(a)\in Z^+$ for each $a\in A$, is there a subset $A’ \subseteq A$ such that $\sum_{a\in A'}s(a)=\sum_{a\in A-A'}s(a)$? To ease the explanation of the reduction procedure, we assume that $A$ is an ordered set, and thus, we can refer to the $t$-th element of $A$ as $a_t$. \emph{Reduction:} Given an instance of the problem PARTITION, the corresponding instance of MINIMAX\_BSC with $B=2$ is constructed as follows: Let $T$ be the cardinality of $A$ ($T=|A|$). Each set($1\leq t\leq T$) is composed of two items with weights $w_{t1}=a_t$ and $w_{t2}=0$. Clearly, the optimum objective value for this instance is equal to $\sum_{a\in A}s(a)/2$, if and only if the answer to the original instance of the Problem PARTITION is “Yes”.$\Box$ \textbf{Observation 1}. This problem can be solved in pseudo-polynomial time using a modification of the dynamic programming (DP) algorithm for the subset sum problem (see, for example, \cite{Martello1990}), which is known to be easily solvable in many practical applications. The DP formulation determines all of the feasible weights for one of the bins. Once the feasible weights are available, the problem consists in finding the feasible weight that minimizes the absolute difference between the weight and $W/2$ (which is the lower bound on the optimal solution). The states of the DP ($0\leq s\leq W$) identify the weights of the items that are assigned to the group. The DP has $T$ stages that are associated with the assignment of the items of set $t$ ($1\leq t\leq T$) to the group. The recurrence function $f_t(s)$ calculates which states are feasible (having a value equal to 1) or not (having a value equal to 0), as follows: \begin{tabular}{ll} For $t=1$, & $f_1(w_{11})=1$, $f_1(w_{12})=1$, $f_1(s\neq w_{11} \wedge s\neq w_{12})=0$. \\ For $t=2,..,T$, & $f_2(s)= \max \{f_{t-1}(s-w_{t-1,1}); f_{t-1}(s-w_{t-1,2}) \}$. \\ \end{tabular} The optimal solution of the problem can be obtained by reconstructing the solution from a feasible state of $f_T(s)$ with the smallest absolute value of $s-W/2$. $\Box$ \textbf{Theorem 2}.The general MINIMAX\_BSC problem is strongly NP-Hard. \textbf{Proof}.We use a reduction for the 3-partition problem that is known to be strongly NP-Complete [3]. \emph{Problem 3-PARTITION}. Given a set $A$ of $3\cdot m$ elements, a bound $U\in Z^+$, and a size $s(a)\in Z^+$ for each $a\in A$ such that $U/4<s(a)<U/2$ and such that $\sum_{a\in A}s(a)=m\cdot U$, can $A$ be partitioned into $m$ disjoint sets $A_1$,$A_2$,...,$A_m$ such that, for $1\leq i\leq m$, $\sum_{a\in A_i}s(a)=U$? \emph{Reduction}: Given an instance of problem 3-PARTITION, the corresponding instance of MINIMAX\_BSC is constructed as follows: Let $B=m$ and $T=3\cdot m$. Then, let $w_{t1}=a_t$, $w_{tb}=0$ for $2\leq b\leq B$, $1\leq t\leq T$. The optimum objective value for this instance of the MINIMAX\_BSC is equal to $U$ if and only if the answer to the original instance of Problem 3-PARTITION is “Yes”. $\Box$ \section{Evaluation of the heuristic procedure }\label{s3} We propose a fast heuristic that has $O(T\cdot B\cdot \log B)$ complexity and an absolute performance guarantee equal to $R$. Algorithm 1 depicts the algorithm\\ \textbf{Algorithm 1.}\\ Step 1. Set the accumulated weights of each group $W_b$ to $0$ for $1\leq b\leq B$.\\ Step 2. For each set $1\leq t\leq T$ do \\ Step 2a. Order the items of the set into a non-decreasing order of the weights, and order the groups into a non-increasing order of the accumulated weights. Assign the $t$-th item to the $t$-th group ($1\leq t \leq T$). \\ Step 2b. Update the accumulated weights.\\ Step 3. End For\\ The computationally expensive part of the algorithm is the ordering of the sets and items on the groups, which is step 2a. The for loop is executed T times, and each ordering has $O(B\cdot logB)$ complexity. The final computational complexity is $O(T\cdot B \cdot logB)$. The rationale behind the previous algorithm is to always maintain the difference among the groups within some limits. We now proceed to prove our claim on the performance guarantee. First, we show that the maximum difference of the accumulated weights between any two groups is equal to $R$. To simplify the explanation, let d denote the difference in the accumulated weights between two groups. During step 1, $d$ is set to 0. Accordingly, after the first assignment of items to groups, $R\geq d$ holds. During subsequent assignments, $d$ is bounded by $max\{d, r_t\}$, with $r_t$ equal to the range of the assigned set (if $r_t\leq d$, then $d-r_t$ is bounded by $d$, and if $r_t\geq d$, then $r_t-d$ is bounded by $r_t$). Because both $d\leq R$ and $r_t\leq R$ hold, the difference between any two groups will always be smaller than $R$. Second, note that for any valid solution, $min_{1\leq b\leq B} \{W_b\}$ is a lower bound on the optimal solution (the maximum possible value for $min_{1\leq b\leq B} \{W_b\}$ is $W/B$, which corresponds to a solution in which all of the groups have identical accumulated weights). Because the difference between any two groups obtained using Algorithm 1 is bounded by $R$, the difference between the solution provided by Algorithm 1 and a lower bound for the problem is bounded by $R$. This construct proves our initial claim that Algorithm 1 provides a solution that has an absolute performance guarantee equal to $R$. \section{Conclusions}\label{s4} In this note, we have studied the complexity of the one-dimensional minimax bin-packing problem with bin size constraints, and we have shown that the problem is NP-Hard, even if the number of bins is limited to 2. The studied problem is relevant in the area of psychology, as well as other areas in which the objective is to evenly distribute elements of different sets among groups. In addition to the complexity issues, we have also proposed a constructive heuristic that has an absolute performance guarantee that is equal to the largest difference in the weights among the items on any set. While the quality of the solutions provided by the heuristic might not be sufficient in some circumstances, this constructive heuristic is very fast. We implemented Algorithm 1 and ran some limited tests on instances that were generated as proposed in \cite{Brusco2013} and \cite{VanderLinden2005}. Our results indicate that (1) the experimental efficiency of Algorithm 1 is improved if the sets are ordered according to non-increasing values of $r_t$; (2) the running times for the instances that have up to 6000 items and 300 groups are below 0.02 seconds on a current commodity computer (3.06 GHz Intel Core 2 Duo); and (3) the average gap between the solution provided by the heuristic and the trivial lower bound is below 0.07. While the SA procedure from \cite{Brusco2013} should be the recommended solution method for this problem if the running time is not a limiting constraint, these results indicate that the SA could be improved by initializing the search using the proposed algorithm rather than randomly generating the initial solution. We believe that this modification would reduce the running time that is required by the SA algorithm to reach good solutions.
\section{Introduction} Let $\Gamma$ be an undirected graph and let $G$ be a subgroup of the automorphism group $\mathop{\mathrm{Aut}}(\Gamma)$ of $\Gamma$. For each vertex $v$ of $\Gamma$, we let $\Gamma(v)$ denote the set of vertices adjacent to $v$ in $\Gamma$ and $G_v^{\Gamma(v)}$ the permutation group induced on $\Gamma(v)$ by the vertex-stabiliser $G_v$ of $v$. We let $G_v^{[1]}$ denote the subgroup of $G_v$ fixing point-wise $\Gamma(v)$ and we let $G_{uv}^{[1]}=G_u^{[1]}\cap G_{v}^{[1]}$ denote the subgroup of the arc-stabiliser $G_{uv}$ fixing point-wise $\Gamma(u)$ and $\Gamma(v)$. Similarly, given $r\in\mathbb{N}$, we denote by $G_v^{[r]}$ the point-wise stabiliser in $G$ of the ball of $\Gamma$ of radius $r$ centred in $v$ and $G_{uv}^{[r]}=G_u^{[r]}\cap G_v^{[r]}$. The graph $\Gamma$ is said to be $G$-\emph{vertex-transitive} if $G$ acts transitively on the vertices of $\Gamma$. We say that the $G$-vertex-transitive graph $\Gamma$ is \emph{locally primitive} if $G_v^{\Gamma(v)}$ is primitive. In $1978$ Richard Weiss~\cite{Weiss} conjectured that for a finite connected $G$-vertex-transitive, locally primitive graph $\Gamma$ and for a vertex $v$ of $\Gamma$, the size of $G_v$ is bounded above by some function depending only on the valency of $\Gamma$ (see also the introduction in~\cite{Weiss2}, where the hypothesis of $\Gamma$ being finite is replaced by the much weaker hypothesis that the order of $G_v$ is finite). The truth of the Weiss Conjecture is still open and only partial results are known: for example, the case of $G_v^{\Gamma(v)}$ being $2$-transitive has been settled affirmatively with a long series of papers~\cite{T0,T1,T2,T3,T4,Weiss,Weiss0,Weiss1} by the work of Weiss and Trofimov. A modern method for analysing a finite primitive permutation group is via the O'Nan-Scott theorem. In~\cite[Theorem]{LPS} five types of primitive groups are defined (depending on the group- and action-structure of the socle): HA (\emph{Affine}), AS (\emph{Almost Simple}), SD (\emph{Simple Diagonal}), PA (\emph{Product Action}) and TW (\emph{Twisted Wreath}), and it is shown that every primitive group belongs to one of these types. The only primitive groups $X$ where the socle $\mathop{\mathrm{soc}} (X)$ of $X$ is a regular normal subgroup are of Affine type or of Twisted Wreath type. In the former case, $\mathop{\mathrm{soc}}(X)$ is an elementary abelian $p$-group and, in the latter case, $\mathop{\mathrm{soc}}(X)$ is isomorphic to the direct product $T^\ell$ with $\ell\geq 2$ and with $T$ a non-abelian simple group. One indication supporting the Weiss Conjecture is~\cite{Spiga}, where we have shown that if $\Gamma$ is a connected $G$-vertex-transitive graph of valency $d$, $G_v^{\Gamma(v)}$ is a primitive group of TW type and $(u,v)$ is an arc of $\Gamma$, then $G_{uv}^{[1]}=1$. Another main indication supporting the Weiss Conjecture is in~\cite{Weiss}, where Weiss shows that, if $\Gamma$, $d$, $u$ and $v$ are as above and $G_v^{\Gamma(v)}$ is a primitive group of HA type, then either $G_{uv}^{[1]}=1$ or the socle of $G_v^{\Gamma(v)}$ is an elementary abelian $2$- or $3$-group (more information on the structure of $G_v^{\Gamma(v)}$ in the latter case is given in~\cite[Theorem~$(i)$]{Weiss}). In this paper, as an application of the Local $C(G,T)$ Theorem, we completely settle the case of $G_v^{\Gamma(v)}$ containg an abelian regular subgroup, that is, of HA type. \begin{theorem}\label{thm}Let $\Gamma$ be a connected $G$-vertex-transitive graph and let $(u,v)$ be an arc of $\Gamma$. Suppose that $G_v^{\Gamma(v)}$ is a primitive group containg an abelian regular subgroup and let $|\Gamma(v)|=r^\ell$ for some prime $r$ and positive integer $\ell$. Then $G_{uv}^{[1]}=1$ when $r\geq 5$, $G_{uv}^{[2]}=1$ when $r=3$ and $G_{uv}^{[3]}=1$ when $r=2$. In particular, $G_{v}^{[4]}=1$. \end{theorem} Other recent and significant indications towards a positive solution to the Weiss Conjecture are given in~\cite{Giu1,Giu2,MSV,PoSV,PSV,PPSS}. \subsection*{Acknowledgements}I am in debt (and I will always be) to Bernd Stellmacher. In fact, it was Bernd that pointed out to me the relevance of the Local $C(G,T)$ Theorem for a proof of Theorem~\ref{thm}. Actually, the ideas and the proof of Theorem~\ref{thm} are based on conversations and a manuscript of Bernd. I am also in debt to Luke Morgan for reading a preliminary version of this paper. To both Bernd and Luke, my sincere thanks. \section{Preliminaries}\label{preliminaries} All graphs considered in this paper are simple (without multiple edges and without loops), undirected and connected. Given a subgroup $G$ of the automorphism group $\mathop{\mathrm{Aut}}(\Gamma)$ of $\Gamma$ and a vertex $v$ of $\Gamma$, we denote by $G_v$ the stabiliser of $v$ in $G$, that is, $G_v:=\{g\in G\mid v^g=v\}$. Moreover, we let $G_v^{\Gamma(v)}$ be the permutation group induced by the action of $G_v$ on the neighbourhood $\Gamma(v)$ of $v$. In what follows we let $\Gamma$ be a connected graph and $G$ be a subgroup of $\mathop{\mathrm{Aut}}(\Gamma)$ with $G_v^{\Gamma(v)}$ transitive for every vertex $v$ of $\Gamma$. We start by recalling some basic facts. \begin{lemma}\label{lemma:21} The group $G$ acts transitively on the edges of $\Gamma$. \end{lemma} \begin{proof} Given two vertices $v,v'$ of $\Gamma$, denote by $d(v,v')$ the length of a shortest path from $v$ to $v'$. Let $\{u,v\}$ and $\{w,t\}$ be two edges of $\Gamma$. We argue by induction on $$d:=\min\{d(u,w),d(u,t),d(v,w),d(v,t)\}. $$ Without loss of generality we may assume that $d=d(u,w)$. Let $u=u_0,u_1,\ldots,u_d=w$ be a path of length $d$ in $\Gamma$ from $u$ to $w$. If $d=0$, then $u=w$ and, as $G_u^{\Gamma(u)}$ is transitive, there exists $x\in G_u$ with $v^x=t$. Thus, $\{u,v\}^x=\{w,t\}$. Suppose that $d>0$. Since $G_w^{\Gamma(w)}$ is transitive, there exists $x\in G_w$ with $u_{d-1}^x=t$ and hence $\{w,u_{d-1}\}^x=\{w,t\}$. As $d(u,u_{d-1})=d-1$, by induction, there exists $g\in G$ with $\{u,v\}^g=\{w,u_{d-1}\}$ and hence $\{u,v\}^{gx}=\{w,t\}$. \end{proof} \begin{hauptlemma Let $\{u,v\}$ be an edge of $\Gamma$ and let $N\leq G_{u}\cap G_{v}$ with $$ ({\bf N}_{{G_{w}}}( {N}))^{\Gamma(w)}\quad\textrm{transitive for each }w\in \{u,v\}. $$ Then $N=1$. \end{hauptlemma} \begin{proof} Write $K:={\bf N}_G (N)$. By hypothesis, $K_u^{\Gamma(u)}$ and $K_v^{\Gamma(v)}$ are both transitive. Thus, from Lemma~\ref{lemma:21}, $K$ acts transitively on the edges of $\Gamma$. As $N\unlhd K$ and $N$ fixes the arc $(u,v)$, we see that $N$ fixes the end points of every edge of $\Gamma$. \end{proof} \begin{notation}\label{notation1}{\rm In what follows we let $\Gamma$ be a connected graph, $G$ be a group of automorphisms of $\Gamma$ and $\{u,v\}$ be an edge of $\Gamma$. We assume that \begin{enumerate} \item $G$ is transitive on the vertices of $\Gamma$; \item $G_v$ is finite; \item $G_v^{\Gamma(v)}$ is primitive. \end{enumerate} We let $G_v^{[1]}$ denote the subgroup of $G_v$ fixing point-wise $\Gamma(v)$ and we let $G_{uv}^{[1]}=G_u^{[1]}\cap G_{v}^{[1]}$ denote the subgroup of the arc-stabiliser $G_{uv}$ fixing point-wise $\Gamma(u)\cup\Gamma(v)$. } \end{notation} We now recall the Thompson-Wielandt Theorem with its formulation as in~\cite{Van}, see also~\cite{Spiga1}. (Here, ${\bf F}^*(X)$ denotes the generalised Fitting subgroup.) \begin{TW}The group $G_{uv}^{[1]}$ is a $p$-group for some prime $p$. Moreover, either $G_{uv}^{[1]}=1$ or ${\bf F}^*(G_v)={\bf O}_p (G_v)$ and ${\bf F}^*(G_{uv})={\bf O}_p ({G_{uv}})$. \end{TW} \begin{notation}\label{notation2}{\rm Together with Hypothesis~\ref{notation1} we also assume that $G_{uv}^{[1]}\neq 1$ and we let $p$ be the prime number with ${\bf F}^* (G_v)={\bf O}_p ({G_v})$ and ${\bf F}^*(G_{uv})={\bf O}_p ({G_{uv}})$. Observe that $G_{uv}^{[1]}$ is a non-trivial $p$-group. For simplicity, given a vertex $w$ of $\Gamma$, we write $$Q_w:={\bf O}_p ({{G_w^{[1]}}}),\quad L_w:=\langle Q_wQ_t\mid t\in \Gamma(w)\rangle,\quad T:=Q_uQ_v.$$} \end{notation} We immediately deduce the following lemma. \begin{lemma}\label{basbas}We have ${\bf F}^* ({G_v^{[1]}})={\bf F}^* (G_v)=Q_v$. \end{lemma} \begin{proof} Suppose that ${\bf O}_p( {G_v})\nleq G_v^{[1]}$. Since $G_v^{\Gamma(v)}$ is primitive, $({\bf O}_p ({G_v}))^{\Gamma(v)}$ is a transitive $p$-group and hence $G_v^{\Gamma(v)}$ is a primitive group containing an abelian regular subgroup and $({\bf O}_p ({G_v}))^{\Gamma(v)}$ is the socle of $G_v^{\Gamma(v)}$. Set $V:=({\bf O}_p ({G_v}))^{\Gamma(v)}$ and $H:=(G_{uv})^{\Gamma(v)}$. The elementary abelian $p$-group $V$ is an irreducible $H$-module via the action of $H$ on $V$ by conjugation. If ${\bf O}_p (H)\neq 1$, then ${\bf C}_V\left( {\bf O}_p ( H) \right)$ is a non-trivial proper $H$-submodule of $V$, a contradiction. Thus ${\bf O}_p (H)=1$. Hence $({\bf O}_p ({G_{uv}}))^{\Gamma(v)}=1$. Therefore ${\bf O}_p ({G_{uv}})\leq G_v^{[1]}$ and ${\bf O}_p ({G_{uv}})={\bf O}_p ({G_v^{[1]}})$. This contradicts the Hauptlemma applied with $N:={\bf O}_p ({G_{v}^{[1]}})$. Therefore ${\bf O}_p ({G_v})\leq G_v^{[1]}$ and hence ${\bf O}_p ({G_v})={\bf O}_p ({G_v^{[1]}})=Q_v$. The rest of the lemma follows from the Thompson-Wielandt Theorem. \end{proof} \begin{lemma}\label{basicL}The permutation group $L_v^{\Gamma(v)}$ is transitive and $[G_v^{[1]},L_v]\leq Q_v$. \end{lemma} \begin{proof} As $L_v\unlhd G_v$ and $G_v^{\Gamma(v)}$ is primitive, we have that either $L_v^{\Gamma(v)}$ is transitive or $L_v\leq G_v^{[1]}$. Suppose that $L_v\leq G_v^{[1]}$. Then $Q_u\leq G_v^{[1]}$. Thus $Q_u\unlhd G_v^{[1]}$ and $Q_u\leq \mathbf O_p( {{G_v^{[1]}}})=Q_v$. Therefore $Q_u=Q_v$. Now the Hauptlemma applied with $N:=Q_v$ gives $Q_v=1$. From Lemma \ref{basbas}, we get ${\bf F}^*(G_v)=1$, a contradiction. As $Q_v\unlhd G_v^{[1]}$, we have $[G_v^{[1]},Q_v]\leq Q_v$. Now, let $w\in \Gamma(v)$. As $G_v^{[1]}$ normalises $Q_w$, we get $[G_v^{[1]},Q_w]\leq G_v^{[1]}\cap Q_w\leq \mathbf O_p ({{G_v^{[1]}}})=Q_v$. Now the second part of the lemma immediately follows from the definition of $L_v$. \end{proof} \begin{lemma}\label{lemma:43}Let $C$ be a non-identity characteristic subgroup of $T$. Then ${\bf N}_{{G_w}}(C)=G_{uv}$, for each $w\in \{u,v\}$. \end{lemma} \begin{proof} Let $w\in \{u,v\}$ and write $N:={\bf N}_{{G}}({C})$. As $C\unlhd G_{uv}$, we have $G_{uv}\leq N_w$ and, by maximality, either $N_w=G_w$ or $N_w=G_{uv}$. Assume that $N_w=G_w$. As $G$ acts transitively on the arcs of $\Gamma$, there exists $t\in {\bf N}_G({{G_{uv}}})$ with $(u,v)^t=(v,u)$. Hence $t$ normalises $Q_u Q_v=T$ and thus also $C$. This gives $N_u=G_u$ and $N_v=G_v$ and the Hauptlemma yields $C=1$. \end{proof} \begin{lemma}\label{lemma:44}Suppose that $T\in {\rm Syl}_p({{L_v}})$. Then there exist subnormal subgroups $E_1,\ldots,E_r$ of $G_v$ such that for $E:=\langle E_1,\ldots,E_r\rangle$ the following hold: \begin{description} \item[(a)]$E\leq L_v$ and $G_v$ acts transitively on $\{E_1,\ldots,E_r\}$; in particular $E\unlhd G_v$. \item[(b)]$[E_i,E_j]=1$, for $i,j\in \{1,\ldots,r\}$ with $i\neq j$. \item[(c)]$G_v=EG_{uv}$. \item[(d)]$\mathbf O_p( {{E_i}})=[\mathbf O_p({{E_i}}),E_i]=[Q_v,E_i]$ and $E_i={\bf O}^p({E_i})$, for $i\in \{1,\ldots,r\}$. \item[(e)] $[E_i, \Omega_1({\bf Z}({{T}}))] \neq 1$, for $i\in \{1,\ldots,r\}$. \item[(f)]For each $i\in \{1,\ldots,r\}$, one of the following holds: \begin{description} \item[(i)]$E_i/\mathbf O_p({{E_i}})\cong (\mathop{\mathrm{SL}}_2(q))'$ with $q=p^n$ for some $n\geq 1$, $\mathbf O_p( {{E_i}})=\Omega_1({\bf Z}({{\mathbf O_p({{E_i}})}}))$ and $\mathbf O_ p ({{E_i}})/{\bf Z}({{E_i}})$ is a natural $(\mathop{\mathrm{SL}}_2(q))'$-module for $E_i/\mathbf O_p( {{E_i}})$, \item[(ii)]$p=3$, $E_i/\mathbf O_3( {{E_i}})\cong (\mathop{\mathrm{SL}}_2(q))'$ with $q=3^n$ for some $n\geq 1$, ${\bf Z}({{E_i}})=\Phi(\mathbf O_3( {{E_i}}))=(\mathbf O_3({{E_i}}))'$, $|{\bf Z}({{E_i}})|=q$, and $\mathbf O_3( {{E_i}})/\Omega_1({\bf Z}({{\mathbf O_3({{E_i}})}}))$ and $\Omega_1({\bf Z}({{\mathbf O_3({{E_i}}})}))/{\bf Z}({{E_i}})$ are both natural $(\mathop{\mathrm{SL}}_2(q))'$-modules for $E_i/\mathbf O_3( {{E_i}})$, \item[(iii)]$p=2$, $E_i/\mathbf O_2({{E_i}})\cong \mathop{\mathrm{Alt}}(2^n+1)$ for some $n\geq 2$, $\mathbf O_2({{E_i}})=\Omega_1({\bf Z}({{\mathbf O_2({{E_i}})}}))$, and $\mathbf O_2( {{E_i}})/{\bf Z}({{E_i}})$ is a natural $\mathop{\mathrm{Alt}}(2^n+1)$-module for $E_i/\mathbf O_2( {{E_i}})$. \end{description} \end{description} \end{lemma} \begin{proof} Recall that, according to~\cite[Definition~$1.2$]{CGT}, a finite group $X$ is said to be of characteristic $p$ if ${\bf C}_X\left({{\mathbf O_p (X)}}\right)\leq \mathbf O_p (X)$. By Lemma~\ref{basbas}, we have $Q_v={\bf F}^*(G_v )={\bf F}^* ({G_v^{[1]}})$ and, in particular, $Q_v=\mathbf O_p ({{L_v}})={\bf F}^*(L_v)$ and $L_v$ is of characteristic $p$. Moreover, by Lemma~\ref{lemma:43}, we have $$\langle {\bf N}_{{L_v}}({C})\mid C\neq 1,\,C\textrm{ characteristic in }T \rangle=G_{uv}\cap L_v<L_v$$ because $L_v^{\Gamma(v)}$ is transitive by Lemma~\ref{basicL}. Now, the group $L_v$ satisfies the hypothesis of~\cite[Corollary~$1.6$]{CGT} and thus the proof follows immediately from the Local $C(G,T)$ Theorem. Parts~(b) and~(c) follow from~\cite[Theorem~$1.5$~(b) and~(c)]{CGT} and the fact that $G_{uv}$ contains the subgroup $C(L_v,T)$ defined in~\cite{CGT}. From~\cite[Theorem~$1.5$~(a)]{CGT}, we have $\{E_1,\ldots,E_r\}^{G_v}=\{E_1,\ldots,E_r\}$ and hence $E\unlhd G_v$. Using Part~{\bf (c)} choose $i\in \{1,\ldots,r\}$ with $E_i\nleq G_{uv}$ and set $X=\langle E_i^g\mid g\in G_v\rangle$. Observe that $X\unlhd G_v$ and hence the primitivity of $G_v^{\Gamma(v)}$ yields $G_v=XG_{uv}$. In particular, replacing the family $\{E_1,\ldots, E_r\}$ and the group $E$ by the family $\{E_i^g\mid g\in G_v\}$ and the group $X$, we may assume that also Part~{\bf (a)} holds. Part~(e) and the equalities $\mathbf O_p( {{E_i}})=[\mathbf O_p( {{E_i})}, E_i]$ and $E_i{=\bf O}^p({E_i})$ in Part~(d) follow from~\cite[Definition~$1.4$~(i)]{CGT}. Now, the equality $\mathbf O_p( {{E_i}})=[Q_v,E_i]$ in Part~(d) follows from $Q_v=\mathbf O_p( {L_v})$ and $E_i=\mathbf O^p(E_i)$. Finally, Part~(f) follows from~\cite[Definition~$1.4$]{CGT}. \end{proof} \begin{lemma}\label{lemma:45} If $T\in{\rm Syl}_p({{L_v}})$, then $G_{uv}^{[2]}=1$ if $p\neq 3$ and $G_{uv}^{[3]}=1$ if $p=3$. \end{lemma} \begin{proof} Assume the notation in Lemma~\ref{lemma:44}. Since $G_{uv}^{[1]}$ is a $p$-group, we see that $G_v^{[2]}$ is a $p$-group, and hence $G_v^{[2]}\leq Q_v$. If $G_{uv}^{[2]}=1$, then the lemma follows immediately and hence we assume that $G_{uv}^{[2]}\neq 1$. For each $i\in \{1,\ldots,r\}$, we set $$V_i:=\Omega_1({\bf Z}({{\mathbf O_p ({{E_i}})}})).$$ For each vertex $w$ of $\Gamma$, we set $$Z_w:=\Omega_1({\bf Z}( {Q_w})).$$ Now we subdivide the proof into six claims from which the proof will immediately follow. \begin{claim} \label{ location of zb and [zb,ei]} $Z_u \leq Q_v$. \end{claim} \noindent Suppose that $Z_u\nleq Q_v$. Observe that $G_{v}^{[2]}=G_v^{[2]}\cap Q_v\leq G_u^{[1]}\cap Q_v\leq \mathbf O_p ({{G_u^{[1]}}})=Q_u$. Hence $Z_u$ centralises $G_v^{[2]}$. Thus also $H_v:=\langle Z_u^x\mid x\in G_v\rangle$ centralises $G_v^{[2]}$. Observe that, as $Z_u\nleq Q_v$, the group $H_v$ acts transitively on $\Gamma(v)$. By arc-transitivity we have $Z_v\nleq Q_u$, and hence a symmetric argument gives that $H_u:=\langle Z_v^x\mid x\in G_u\rangle$ centralises $G_u^{[2]}$ and acts transitively on $\Gamma(u)$. We conclude that $G_{uv}^{[2]}=G_u^{[2]}\cap G_v^{[2]}$ is centralised by $H_v$ and $H_u$. The Hauptlemma yields $G_{uv}^{[2]}=1$, a contradiction. ~$_\blacksquare$ \begin{claim} \label{[vi,ei] in za} $[V_i,E_i] \leq Z_v$ for every $i\in \{1,\ldots,r\}$. \end{claim} \noindent Let $i\in \{1,\ldots,r\}$. Observe that $[Z_v,E_i] \leq [Q_v, E_i] = \mathbf O_p({E_i})$ and hence $Z_v$ and $E_i$ normalise each other. Now $Z_v \cap E_i$ is an elementary abelian normal $p$-subgroup of $E_i$, whence $Z_v \cap E_i \leq \mathbf O_p({E_i})$. Since $\mathbf O_p({E_i})=[Q_v,E_i] \leq Q_v \cap E_i$ and $Z_v$ is central in $Q_v$, we have $Z_v \cap E_i \leq \Omega_1({\bf Z}({\mathbf O_p({E_i})})) = V_i$. This shows $$Z_v \cap E_i = Z_v \cap V_i.$$ Since $V_i / \mathbf Z(E_i)$ is a simple $(E_i/\mathbf O_p( {E_i}))$-module and $Z_v \cap V_i$ is $E_i$-invariant, we have either $Z_v \cap V_i \leq {\bf Z}({E_i})$ or $V_i = (Z_v\cap V_i){\bf Z}({E_i})$. In the latter case, we have $$[V_i,E_i] = [(Z_v\cap V_i ){\bf Z}( {E_i}), E_i] = [Z_v\cap V_i,E_i] \leq Z_v$$ and the claim follows. In the former case, since ${\bf O}^p(E_i)=E_i$, we see that $$ [Z_v,E_i] = [Z_v,E_i,E_i] \leq [Z_v\cap E_i,E_i]=[Z_v\cap V_i, E_i] \leq [{\bf Z}({E_i}),E_i] = 1$$ and hence $E_i$ centralises $Z_v$. From Lemma~\ref{basbas}, we have $Q_v={\bf F}^* (G_v)$ and hence ${\bf F}^*(L_v)=Q_v$ and ${\bf C}_{{L_v}}({{Q_v}})\leq Q_v$. As $Q_v\leq T$, we get ${\bf C}_{{L_v}}({{T}})$ $\leq {\bf C}_{{L_v}}({{Q_v}})\leq Q_v$. From this it follows that $\Omega_1({\bf Z}({T})) \leq Z_v$, but this contradicts Lemma~\ref{lemma:44}(e).~$_\blacksquare$ \begin{claim} \label{ [zb, ei] notin vi} $[Z_u,E_i] \nleqslant V_i$ for every $i\in \{1,\ldots,r\}$. \end{claim} \noindent Suppose that $[Z_u,E_i] \leq V_i$ for some $i\in \{1,\ldots,r\}$. Let $U := \langle Z_u^x\mid x\in E_i \rangle$. Then $U$ is normalised by $E_i$ and is a subgroup of $Q_v$ by Claim~\ref{ location of zb and [zb,ei]}. We have $$[U,E_i] = [\langle Z_u^x\mid x\in E_i\rangle ,E_i] = \langle [Z_u,E_i]^x\mid x\in {E_i} \rangle \leq \langle V_i^x\mid x\in {E_i} \rangle = V_i.$$ As $E_i=\mathbf O^p(E_i)$, using Claim~\ref{[vi,ei] in za}, we obtain $$[U,E_i] = [U,E_i,E_i] \leq [V_i,E_i] \leq Z_v.$$ Thus $[Z_u,E_i] \leq [U,E_i] \leq Z_v$, which shows that $E_i$ normalises $Z_u Z_v$. By Lemma~\ref{lemma:44}~(a) and~(c), $Z_u Z_v$ is normalised by $\langle E_i, G_{uv}\rangle = G_v$. It follows that $Z_u Z_v$ is normalised by $\langle G_v, G_{\{u, v \}} \rangle = G,$ a contradiction.~$_\blacksquare$ \begin{claim}\label{ op(ei) non-abelian}$p=3$ and $\mathbf O_p(E_i)$ is non-abelian for every $i\in \{1,\ldots,r\}$. \end{claim} \noindent By Claim~\ref{ location of zb and [zb,ei]}, we have $[Z_u,E_i]\leq [Q_v,E_i]\leq \mathbf O_p ({E_i})$ and by Claim~\ref{ [zb, ei] notin vi}, we have $[Z_u,E_i]\nleq V_i$. Thus $V_i \neq \mathbf O_p(E_i)$ and hence only case (ii) of Lemma~\ref{lemma:44}~(f) can hold. Thus $p=3$ and $(\mathbf O_p({E_i}))' \neq 1$.~$_\blacksquare$ \begin{claim} \label{ zbg not qb} For every $i\in \{1,\ldots,r\}$, there exists $g \in E_i$ with $Z_u^g \nleq Q_u$. \end{claim} \noindent Let $i\in \{1,\ldots,r\}$. Suppose that $Z_u^g\leq Q_u$ for every $g\in E_i$ and set $U:= \langle Z_u^x\mid x\in {E_i} \rangle$. Since $Z_u$ centralises $Q_u$, we have $Z_u \leq {\bf Z}( U)$. Now $U$ is $E_i$-invariant and moreover, since $Z_u$ is contained also in $Q_v$ by Claim~\ref{ location of zb and [zb,ei]}, we get that $U$ is contained in $Q_v$. Therefore $[U,E_i] \leq [Q_v,E_i] =\mathbf O_p({E_i})$ (where in the last equality we used Lemma~\ref{lemma:44}~(d)). Note that $[Z_u,E_i] \leq [ {\bf Z}( U), E_i]$ and hence $[{\bf Z}( U),E_i] \nleq V_i$ by Claim~\ref{ [zb, ei] notin vi}. Since $\mathbf O_p(E_i)/ V_i$ is a simple $(E_i/\mathbf O_p( {E_i}))$-module by Lemma~\ref{lemma:44}~(f)~(ii), we have $[\mathbf Z(U),E_i] V_i = \mathbf O_p(E_i)$. Now $V_i \leq \mathbf Z(\mathbf O_p(E_i))$ and $[\mathbf Z(U),E_i] \leq \mathbf Z(U)$ since $E_i$ normalises $U$. In particular, $V_i$ and $[\mathbf Z(U),E_i]$ are both abelian and centralise each other. Thus $\mathbf O_p(E_i) = [\mathbf Z(U),E_i]V_i$ is abelian, a contradiction to Claim~\ref{ op(ei) non-abelian}.~$_\blacksquare$ \smallskip Let $i\in \{1,\ldots,r\}$ and let $g\in E_i$ with $Z_u^g\nleq Q_u$. Now set $$w := u^g,\quad X:=\langle Z_{w}^x\mid x\in G_u\rangle.$$ \begin{claim}\label{alst} $X$ acts transitively on $\Gamma(u)$. \end{claim} \noindent Since $X\unlhd G_u$, it suffices to show that $X\nleq G_u^{[1]}$. Observe that $Z_w\leq Q_v$ by Claim~\ref{ location of zb and [zb,ei]}. If $X\leq G_u^{[1]}$, then $Z_{w} \leq G_u^{[1]} \cap Q_v \leq Q_u$, a contradiction to our choice of $w$.~$_\blacksquare$ \smallskip Recall that $G_{vu}^{[1]}$ is a $p$-group and hence so is $G_{vw}^{[1]}$. Since $G_{vw}^{[1]}$ is normalised by $G_w^{[1]}$, we get $G_{vw}^{[1]}\leq \mathbf O_p\left( {G_w^{[1]}}\right)=Q_w$. As $u$ and $w$ are both neighbours of $v$, we have $G_u^{[3]}\leq G_{vw}^{[1]}$. Therefore $G_u^{[3]}\leq Q_w$ and hence $Z_w$ centralises $G_u^{[3]}$. As $G_u^{[3]}$ is $G_u$-invariant, $X$ centralises $G_u^{[3]}$ and hence also $G_{u}^{[3]}\cap G_{v}^{[3]}=G_{uv}^{[3]}$. The arc-transitivity and Claim~\ref{alst} give $({\bf N}_{{G_t}}({{G_{uv}^{[3]}}}))^{\Gamma(t)}$ is transitive, for $t\in \{u,v\}$. The Hauptlemma gives $G_{uv}^{[3]}=1$. \end{proof} \section{Proof of Theorem~\ref{thm}} \begin{proof}[Proof of Theorem~$\ref{thm}$] Let $\Gamma$ be a $G$-vertex-transitive graph and let $v$ be a vertex of $\Gamma$. Suppose that $G_v^{\Gamma(v)}$ is a primitive group containing an abelian regular subgroup and write $|\Gamma(v)|=r^\ell$ for some prime $r$ and some $\ell\geq 1$. Let $u$ be a neighbour of $v$. If $G_{uv}^{[1]}=1$, then there is nothing to prove. Assume then that $G_{uv}^{[1]}\ne 1$. In particular, the hypotheses in Hypotheses~\ref{notation1} and~\ref{notation2} apply to $G$, $\Gamma$ and $\{u,v\}$. Now, we adopt the terminology in Hypothesis~\ref{notation2}. Let $N$ be a normal subgroup of $G_v$ minimal (with respect to set inclusion) subject to $Q_v\leq N\leq L_v$ and $N\nleq G_v^{[1]}$. Observe that $N$ is well-defined because $Q_v\leq L_v\unlhd G_v$ and $L_v\nleq G_v^{[1]}$ by Lemma~\ref{basicL}. As $G_v^{\Gamma(v)}$ is primitive containing an abelian regular subgroup, $N^{\Gamma(v)}$ is the socle of $G_v^{\Gamma(v)}$ and is the unique normal elementary abelian regular $r$-subgroup of $G_v^{\Gamma(v)}$. Now, the definition of $L_v$ and the transitivity of $N^{\Gamma(v)}$ gives $L_v=Q_v\langle Q_w\mid w\in \Gamma(v)\rangle=Q_v\langle Q_u^n\mid n\in N\rangle\leq Q_vQ_uN=TN\leq L_v$. Thus $L_v=NT$. If $r=p$, then $L_v^{\Gamma(v)}$ is a $p$-group and the primitivity of $G_v^{\Gamma(v)}$ gives $L_v=N$. Therefore $T\leq G_v^{[1]}$ and hence $Q_v=Q_u$. Now, the Hauptlemma yields $Q_v=Q_u=1$, a contradiction. Thus $r\neq p$. Now, $N^{\Gamma(v)}\cong N/(N\cap G_v^{[1]})$ is an $r$-group. Moreover, by Lemma~\ref{basicL}, we have $[N\cap G_v^{[1]},L_v]\leq Q_v$, that is, $L_v$ centralises $(N\cap G_v^{[1]})/Q_v$. This shows that $N/Q_v$ is nilpotent and hence the minimality of $N$ gives that $N/Q_v$ is an $r$-group. Therefore $T\in {\rm Syl}_p ({L_v})$. Now, the hypothesis of Lemma~\ref{lemma:44} are satisfied. We adopt the notation in Lemma~\ref{lemma:44}. Since $N$ and $T$ are soluble, so is $L_v$. Therefore $(\mathop{\mathrm{SL}}_2(q))'$ can be a section of $L_v$ only when $q\in \{2,3\}$. We deduce $(p,r)\in \{(2,3),(3,2)\}$. In particular, $r\leq 3$ and $p\leq 3$. Now the proof follows from Lemma~\ref{lemma:45}. \end{proof} \thebibliography{13} \bibitem{CGT}D.~Bundy, N.~Hebbinghaus, B.~Stellmacher, The Local $C(G,T)$ Theorem, \textit{J. Algebra} \textbf{300} (2006), 741--789. \bibitem{Giu1}M.~Giudici, L.~Morgan, A class of semiprimitive groups are that graph-restricted, \textit{Bull. Lond. Math. Soc. }\textbf{46}~(6), 1226--1236. \bibitem{Giu2}M.~Giudici, L.~Morgan, On locally semiprimitive graphs and a theorem of Weiss, \textit{J. Algebra} \textbf{427} (2015), 104--107. \bibitem{LPS}M.~W.~Liebeck, C.~E.~Praeger, J.~Saxl, On the O'Nan-Scott theorem for finite primitive permutation groups, \textit{J. Australian Math. Soc. (A)} \textbf{44} (1988), 389--396 \bibitem{MSV}L.~Morgan, P.~Spiga, G.~Verret, On the order of Borel subgroups of group amalgams and an application to locally-transitive graphs, \textit{J. Algebra} \textbf{434} (2015), 138--152. \bibitem{PoSV}P.~Poto\v{c}nik, P.~Spiga, G.~Verret, On graph-restrictive permutation groups, \textit{J. Comb. Theory Ser. B} \textbf{102} (2012), 820--831. \bibitem{PSV}C.~E.~Praeger, P.~Spiga, G.~Verret, Bounding the size of a vertex-stabiliser in a finite vertex-transitive graph, \textit{J. Comb. Theory Ser. B} \textbf{102} (2012), 797--819. \bibitem{PPSS}C.~E.~Praeger, L.~Pyber, P.~Spiga, E.~Szab\'o, The Weiss conjecture for locally primitive graphs with automorphism groups admitting composition factors of bounded rank, \textit{Proc. Amer. Math. Soc. }\textbf{140} (2012), 2307--2318. \bibitem{Spiga1} P.~Spiga, Two local conditions on the vertex stabiliser of arc-transitive graphs and their effect on the Sylow subgroups, \textit{J. Group Theory} \textbf{15} (2012), 23--35. \bibitem{Spiga}P.~Spiga, On $G$-locally primitive graphs of locally Twisted Wreath type and a conjecture of Weiss, \textit{Journal of Combinatorial Theory A} \textbf{118} (2011), 2257--2260. \bibitem{T0}V.~I.~Trofimov, Vertex stabilizers of graphs with projective suborbits. (Russian) \textit{Dokl. Akad. Nauk SSSR} \textbf{315} (1990), 544--546; English transl., \textit{Soviet Math. Dokl.} \textbf{42} (1991), 825--828. \bibitem{T1}V.~I.~Trofimov, Graphs with projective suborbits. Exceptional cases of characteristic $2$. I, \textit{Izv. Ross. Akad. Nauk Ser. Mat.} \textbf{62} (1998), 159--222; English transl., \textit{Izv. Math.} \textbf{62} (1998), 1221--1279. \bibitem{T2}V.~I.~Trofimov, Graphs with projective suborbits. Exceptional cases of characteristic $2$. II, \textit{Izv. Ross. Akad. Nauk Ser. Mat.} \textbf{64} (2000), 175--196; English transl., \textit{Izv. Math.} \textbf{64} (2000), 173--192. \bibitem{T3}V.~I.~Trofimov, Graphs with projective suborbits. Exceptional cases of characteristic $2$. III, \textit{Izv. Ross. Akad. Nauk Ser. Mat.} \textbf{65} (2001), 151--190; English transl., \textit{Izv. Math.} \textbf{65} (2001), 787--828. \bibitem{T4}V.~I.~Trofimov, Graphs with projective suborbits. Exceptional cases of characteristic $2$. IV, \textit{Izv. Ross. Akad. Nauk Ser. Mat.} \textbf{67} (2003), 193--222; English transl., \textit{Izv. Math.} \textbf{67} (2003), 126--1294. \bibitem{Van}J.~Van Bon, Thompson-Wielandt-like theorems revisited, \textit{Bull. London Math. Soc.} \textbf{35} (2003), 30--36. \bibitem{Weisss}R.~Weiss, $s$-transitive graphs, \textit{Colloq. Math. Soc. J\'anos Bolyai} \textbf{25} (1978), 827--847. \bibitem{Weiss}R.~Weiss, An application of $p$-factorization methods to symmetric graphs, \textit{Math. Proc. Comb. Phil. Soc.} \textbf{85} (1979), 43--48. \bibitem{Weiss0}R.~Weiss, Groups with a $(B, N )$-pair and locally transitive graphs, \textit{Nagoya Math. J.} \textbf{74} (1979), 1--21. \bibitem{Weiss1}R.~Weiss, Permutation groups with projective unitary subconstituents, \textit{Proc. Amer. Math. Soc.} \textbf{78} (1980), 157--161. \bibitem{Weiss2}R.~Weiss, Graphs which are locally Grassmann, \textit{Math. Ann.} \textbf{297} (1993), 325--334. \end{document}
\section{Conclusion} We presented GhostLink, an unsupervised generative model to extract the underlying influence graph in online communities dealing with items of fine taste like movies, food and beer without requiring any explicit user-user links or ratings. Given only timestamped reviews of users, we leverage opinion conformity from overlapping facet descriptions in co-reviewed content and their temporal traces to extract this graph. Furthermore, we use this influence network to improve item rating prediction by $23\%$ over state-of-the-art methods by capturing implicit social influence. We show in large-scale experiments in four real-life communities with $13$ million reviews that GhostLink outperforms several state-of-the-art baselines for tasks like recommendation and identifying influential users. \textbf{Acknowledgements.} This research was supported by the German Research Foundation, Emmy Noether grant GU 1409/2-1. We would like to sincerely thank Christos Faloutsos for his insightful and constructive comments on the paper. \section{Experiments} \label{sec:experiments} We empirically analyze various aspects of GhostLink, using four online communities in different domains: BeerAdvocate ({\tt \small beeradvo cate.com}) and RateBeer ({\tt \small ratebeer.com}) for {\em beer} reviews. Amazon ({\tt\small amazon.com}) for {\em movie} and {\em food} reviews. Table~\ref{tab:statistics} gives an overview. All datasets are publicly available at {\tt \small http://snap.stanford.edu}. We have a total of $13$ million reviews from $1$ million users over $16$ years from all of the four communities combined. From each community, we extract the following quintuple for GhostLink $<$$userId, itemId,$ $timestamp,$ $rating, review$$>$. We set the number of latent facets $K=20$ for all datasets. The symmetric Dirichlet concentration parameters are set as: $\alpha=\frac{1}{K},\eta=\frac{1}{2}, \rho=\frac{1}{U}, \gamma=0.01$.\footnote{We did not fine-tune hyper-parameter $K$. It is possible to improve performance by considering the value of $K$ that gives the best model perplexity. Similarly, we consider symmetric Dirichlet priors for a simplistic model with less hyper-parameters to tune.} Performance improvements of GhostLink over baseline methods are statistically significant at $99\%$ level of confidence determined by {\em paired sample t-test}. \subsection{Likelihood, Smoothness, Fast Convergence} \todo{as said in my email, TWO lines per plot!!!} There are multiple sets of latent variables in GhostLink that need to be inferred during Gibbs sampling. Therefore, it is imperative to show the resultant model is not only stable, but also improves log-likelihood of the data. A higher likelihood indicates a better model. There are several measures to evaluate the quality of facet models; we use here the one from~\cite{wallach}: $LL = \sum_d \sum_{j=1}^{N_d} log\ P(w_{d,j} | \beta; \alpha)$. \begin{figure}[b!] \centering \includegraphics[width=0.5\linewidth]{BA-LL.pdf}% \includegraphics[width=0.5\linewidth]{FF-LL.pdf} \caption{Log-likelihood of GhostLink and Author-Topic Model \cite{rosenzviUAI2004} per-iteration in Beeradvocate \& Amazon Foods.} \label{fig:log-likelihood} \end{figure} {\small \begin{table} \centering \setlength{\tabcolsep}{1pt} \begin{tabular}{p{4cm}p{1cm}p{1cm}p{1cm}p{1cm}} \toprule & Beer & Rate & Amazon & Amazon \\ & advocate & beer & Foods & Movies\\\midrule GhostLink: Fast Implementation & 1.8 & 1.6 & 0.08 & 1.9\\ GhostLink: Basic & 6 & 2.2 & 0.14 & 3.1\\ \bottomrule \end{tabular} \caption{Run time comparison (in hours) till convergence between different versions of GhostLink.} \label{tab:time} \vspace{-3em} \end{table} } \begin{comment} We run our model under two configurations: (i) \textit{Maximum Topic Sampling:} In this, at each stage when sampling a {\em facet / topic} (Equations~\ref{eq6}/\ref{eq7}) --- we always select the facet with the maximum probability (note that there a $K$ possible facets). (ii) \textit{Multinomial Topic Sampling:} In order to introduce some randomness to avoid being stuck at local optima, we use cumulative multinomial sampling to select a facet value based on the density of the region where it lies. \end{comment} Figure~\ref{fig:log-likelihood} shows the log-likelihood of the data per iteration for the \todo[color=green]{for the datasets we have picked!!! all -- or mention here the names; and add a statment that similar results for other two darasets}Beeradvocate and Amazon Foods data. The plots for the other data\-sets are similar. We find that the learning is stable and has a {\em smooth} increase in the data log-likelihood {\em per iteration}. Empirically GhostLink also shows a fast convergence in around $10$ iterations. Table~\ref{tab:time} shows the run time comparison to convergence between the basic and fast implementation of GhostLink\footnote{Experiments are performed in: Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz. Note that our Gibbs sampling based inference process is sequential and not distributed.}. The fast version uses two tricks: (i) instead of computing Equations~\ref{eq4} and~\ref{eq5} separately, it estimates --- $s_j$ and ${v_{d'}}_j | s_j=1$ --- {\em jointly}, and (ii) it estimates facets for each {\em unique} token {\em once} as in Section~\ref{subsec:fast}. We also compare the log-likelihood of our model to another generative model that is closest to our work namely, the Author-Topic Model~\cite{rosenzviUAI2004}. This work models documents (reviews) to have a distribution over authors, authors to have a distribution over topics, and topics to have a distribution over words. This model is easy to mimic in our setting by ignoring the notion of influence (i.e. setting $s=0$ as constant for all authors/users). Figure~\ref{fig:log-likelihood} shows the stark difference in log-likelihood of the two models where GhostLink considering influence ($s \in \{0,1\}$) performs much better than the baseline that ignores the effect of temporal influence ($ s \in \{0\}$). \subsection{Influence-aware Item Rating Prediction} Next, we show the effectiveness of GhostLink for item rating prediction. In Section~\ref{sec:item-rating} we described the set of features and evaluation measure for this task. Table~\ref{tab:MSE} compares the mean-squared error of GhostLink with all the baselines with {\em ten-fold cross validation} --- where we use $90\%$ of the data for training and $10\%$ for test with results averaged over $10$ such splits. We divide our baselines into four main categories. For each category, we chose the state-of-the-art system as a baseline that is the most representative of that category with all the features as applicable. Unavailability of explicit user-user links in our data renders many of the related works inapplicable to our setting. \noindent {\bf (A) Rating and Time-aware Latent Factor Models}: These baselines model users, items, ratings and their temporal dynamics but ignore the text or content of the reviews. For most of these baselines, we used the code repository from {http://cseweb.ucsd.edu/~jmcauley/code/}. Note that these models use the rating bias features (F2) from Section~\ref{sec:item-rating}. Since they do not model text or influence network, the other features are not applicable. \noindent {\em (a) LFM}: This is the classical latent factor model based on collaborative filtering with temporal dynamics~\cite{KorenKDD2010} that considers ratings, latent facets, and time. \noindent {\em (b) Community at uniform rate}: This set of models~\cite{McAuley2013,xiong2010temporal,Xiang2010} consider users and products in a community to evolve using a single global clock with the different stages of community evolution appearing at uniform time intervals. So the preference for items evolves over time. \noindent {\em (c) Community at learned rate}: This extends (b) by learning the rate at which the community evolves with time~\cite{McAuley2013}. \noindent {\em (d) User at uniform rate}: This extends (b) to consider individual users and modeling users' progression based on their maturity and preferences evolving over time. The model assumes a uniform rate for evolution~\cite{McAuley2013}. \noindent {\em (e) User at learned rate}: This extends (d) by allowing each user to evolve on their individual clock, so that the time to attain maturity varies for different users \cite{McAuley2013}. \noindent {\bf (B) Text-aware Latent Factor Model}: Unlike the previous baselines, this model~\cite{mcauleyrecsys2013} considers text of the reviews along with the latent factor models using collaborative filtering for item rating prediction. The authors learn topic/facet distributions from text using a generative model based on Latent Dirichlet Allocation, and tie them to the latent facet distributions learned from the collaborative filtering model based on users and ratings. All of these are jointly learned to minimize the mean squared error for item rating prediction. This is the strongest baseline for our work but ignores the notion of network influence. Note that this baseline uses the rating bias features (F2) and language model (F1) from Section~\ref{sec:item-rating}. The network influence features are not applicable.\footnote{We used their code publicly available at {http://cseweb.ucsd.edu/~jmcauley/code/}}. Also, note that the generative process in this work is similar to the Author Topic Model~\cite{rosenzviUAI2004} with the main difference of the former being tailored for item rating prediction. \noindent {\bf (C) Network-aware Models}: We also experiment with two information diffusion based baselines~\cite{NetInfluence,NetInf}. Both models infer the latent influence network underlying a community based on only the temporal traces of activities (e.g., timestamps of users {\em posting} reviews, nodes {\em adopting} or becoming infected with information); they {\em ignore the review text}. \noindent {\em (f) NetInfluence}: This model~\cite{NetInfluence} learns the probability of one node influencing another based on logs of their past propagation ({\em action logs}). The model assumes that when a user $u$ in a network is influenced to perform an action, it may be influenced by its neighbors ($\Psi_u$ in our setting) who have performed the action before. Therefore, each of these predecessors share the ``credit" for influencing $u$ to perform that action. In order to adapt their model to our setting, we consider the event of writing a review on an item $i$ to be an {\em action} at a given timestamp. Therefore input is the set of actions $\langle u, i, t \rangle$ and $\langle u, v \rangle$. Although the authors do not perform recommendation, we use their estimated ``influence'' scores to construct $\Psi$ (refer to Equation~\ref{eq:psi}).\footnote{We used their code available at {http://www.cs.ubc.ca/~goyal/code-release.php/}}. This allows us to use all the features (F3) in Section~\ref{sec:item-rating} derived from the influence network in addition to the rating bias features (F1). Since they do not model text, the language model features are not applicable. \noindent {\bf (D) GhostLink}: We evaluate GhostLink with various combinations of the feature sets. In particular, we consider: (a) rating bias (F2, F3.i), (b) network influence (F3), (c) combining rating and network influence (F2, F3), (d) language model (F1), and the full model (F1, F2, F3). \begin{table}[!t!] {\small \begin{tabular}{p{4.2cm}cccc} \toprule {\bf } & {\bf Beer} & {\bf Rate} & {\bf \hspace*{-3mm}Amazon\hspace*{-3mm}} & {\bf \hspace*{-3mm}Amazon\hspace*{-3mm}}\\ {\bf Models} & {\bf \hspace*{-3mm}advocate\hspace*{-3mm}} & {\bf beer} & {\bf Foods} & {\bf Movies}\\\midrule {\bf (D) GhostLink} & {\bf 0.282} & {\bf 0.250} & {\bf 0.711} & {\bf 0.646} \\ {(Rating + Network + Time + Text)} & & & & \\ Rating Bias & 0.458 & 0.376 & 1.245 & 1.062 \\ Network Influence & 0.443 & 0.386 & 1.652 & 1.434\\ Rating + Network Influence & 0.433 & 0.347 & 1.236 & 1.050\\ Language Model & 1.069 & 1.148 & 3.481 & 4.427 \\\midrule {\bf (B) Rating + Text-aware} & & & &\\ Text-based Collab. Filtering~\cite{mcauleyrecsys2013, rosenzviUAI2004} & 0.373 & 0.302 & 1.347 & 1.233\\\midrule {\bf (C) Rating + Time + Network{\scriptsize-aware}} & & & &\\ {NetInfluence}~\cite{NetInfluence} & 0.465 & 0.426 & 0.93 & 0.878\\\midrule {\bf (A) Rating + Time-aware} & & & &\\ {LFM}~\cite{KorenKDD2010}& 0.559 & 0.917 & 1.465 & 1.620 \\ {Community at uniform rate}~\cite{McAuley2013,xiong2010temporal,Xiang2010} & 0.582 & 0.945 & 1.530 & 1.727\\ {User at uniform rate}~\cite{McAuley2013} & 0.586 & 0.950 & 1.523 & 1.729\\ {Community at learned rate}~\cite{McAuley2013} & 0.532 & 0.833 & 1.529 & 1.729\\ {User at learned rate}~\cite{McAuley2013}& 0.610 & 0.797 & 1.007 & 0.891\\ \bottomrule \end{tabular} \caption{Mean squared error for rating prediction (lower is better). GhostLink outperforms competing methods. } \label{tab:MSE} \vspace{-3em} } \end{table} \begin{comment} \setlength{\leftmargini}{12pt} \setlength{\leftmarginii}{13pt} \begin{itemize} \item [a)] {\bf \cite{XXX}}: Standard latent factor recommendation model that considers rating information, latent facets, and temporal dynamics but no textual information. \item [b)] {\bf \cite{mcauleyrecsys2013}}: State-of-the-art system that considers rating information, latent facets, and review texts over collaborative filtering model for item rating prediction. We used their code publicly available at {http://cseweb.ucsd.edu/~jmcauley/code/}. \item [c)] {\bf \cite{XXX}}: Diffusion-based model \item [d)] {\bf Ghostlink}: Various combinations of the feature sets coming from language model, rating bias, and influence networks. In particular, we consider: (a) only rating bias (F2), (b) rating bias with temporal information (F2, F3.i), (c) language model (F1), (d) all influence features (F3), (e) all rating and influence features (F2, F3), and the full model (F1, F2, F3). \end{itemize} \end{comment} \noindent {\bf Results:} Table~\ref{tab:MSE} shows the results. Standard latent factor collaborative filtering models and most of its temporal variations (Models: A) that leverage rating and temporal dynamics but ignore text and network influence perform the worse. We observe that the network diffusion based model that incorporates the latent influence network from temporal traces in addition to the rating information (Models: C) perform much better than the previous models not considering the network information. However, these too ignored the textual signals. Finally, we observe that contextual information harnessed from the review content in addition to rating information (Models: B) outperforms all of the previous models. From the variations of Ghostlink (using only language model), we observe that textual features alone are not helpful. GhostLink progressively improves as we incorporate more influence-specific features. Finally, the joint model leveraging all of context, rating, temporal and influence features incurs the least error. Comparison with the best performing baseline models shows the power of combined contextual and influence features over only context (Models: B) or only network influence (Models: C). \noindent {\bf Additional Network-aware models}: We also explored NetInf~\cite{NetInf} for tracing paths of diffusion and influence through networks. Given the times when nodes adopt pieces of information or become infected, NetInf identifies the optimal network that best explains the observed infection times. To adapt NetInf to our setting, we consider all the reviews on an item to form a cascade --- with the total number of cascades equal to the number of items. For each cascade (item), the input is the set of reviews on the item by users $u$ at timestamps $t$. However, NetInf yielded extremely sparse networks on our datasets --- suffering from multiple modeling assumptions like a single influence point for a node in a cascade, static propagation and fixed transmission rates for all the nodes. For example, in BeerAdvocate it extracted only $5$ pairs of influenced interactions. In contrast, both our model as well as NetInfluence, the influence probability $\Psi_{u,v}$ varies for every pair of nodes. \subsection{Facet Preference Divergence} In this study, we want to examine if there is any difference between the latent facet preference of users, as opposed to their \emph{observed} preference, and their preference when acting as an influencer (see Sec.~\ref{sec:network} for the definition of these preference distributions). {\small \begin{table} \centering \setlength{\tabcolsep}{1pt} \begin{tabular}{p{1.9cm}|p{2.1cm}|p{2.2cm}|p{1.9cm}} \toprule {\bf Dataset } & { C1:\ {\scriptsize$\theta^{obs}_u$ vs. $\theta^{latent}_u$}} & { C2:\ {\scriptsize $\theta^{latent}_u$ vs. $\theta^{infl}_u$}} & { C3:\ {\scriptsize $\theta^{obs}_u$ vs. $\theta^{infl}_u$}}\\\midrule Beeradvocate & 0.318 & 0.067 & 0.315\\ Ratebeer & 0.483 & 0.067 & 0.328\\ Amazon Foods & 0.368 & 0.202 & 0.429\\ Amazon Movies & 0.370& 0.110 & 0.321\\ \bottomrule \end{tabular} \caption{Facet preference divergence between distributions.} \label{tab:JSD} \vspace{-2em} \end{table} } We compute the Jensen-Shannon Divergence (JSD) between the different distributions $\theta^{infl}_u, \theta^{latent}_u, \theta^{obs}_u$ to observe their difference. JSD is a symmetrized form of Kullback-Leibler divergence that is normalized between $0$ - $1$ with $0$ indicating identical distributions. We compute the JSD results averaged over all the users $u$ in the community, i.e. $\frac{1}{|U|} \sum_u JSD(\theta^{x}_u \ || \ \theta^{y}_u)$ with the corresponding $x, y \in \{latent, infl, obs\}$. Table~\ref{tab:JSD} shows the results. \noindent We observe (statistically) significant difference between the latent facet preferences of users from that observed/acquired in a community (C1). This result indicates the strong occurrence of social influence on user preferences in online communities. We also find that users are more likely to use their original latent preferences to influence others in the community, rather than their acquired ones. That is, the JSD between the influencer and latent facet preference distribution (C2) is always significantly smaller than the JSD between the observed and the influencer distribution (C3). \subsection{Finding Influential Members} GhostLink generates a directed, weighted influence network $G = (U, E)$ using the user-influencer distribution $\Psi$. Given such a network, we can find influential nodes in the network. We used several algorithms to measure authority like Pagerank, HITS, degree centrality etc. out of which eigenvector centrality performed the best\footnote{Note that these baselines already subsume simpler activity based ranking (e.g., based on number of reviews written).}. The basic idea behind eigenvector centrality is that a node is considered influential, not just if it connects to many nodes (as in simple degree centrality) but if it connects to high-scoring nodes in the network. Given the eigenvector centrality score $x_v$ for each node $v$, we can compute a ranked list of users. {\bf Comparison: } An obvious question is whether this ranking based on the influence graph $\Psi$ is really helpful? Or put differently: Does this perform better compared to a simpler graph-based influence measure? A natural choice, for example, would be the temporal co-reviewing behavior of users. To construct such a graph, we can connect two users $u$ and $v$ with a directed edge if $u$ writes a review following $v$. The weight of this edge corresponds to all such reviews (following the above temporal order) aggregated across all the items. Therefore, $v$ acts as an influencer if $u$ closely follows his reviews. We choose a cut-off threshold of at least $5$ reviews. Also for this graph, we can compute eigenvector centrality scores, and obtain a ranked list of users as described above. {\bf The task: } We want to find which of the above graphs gives a better ranking of users. We perform this experiment in the Beeradvocate and Ratebeer communities. In these communities, users are awarded points based on factors like: their community engagement, how other users find their reviews helpful and rate them, as well as their expertise on beers. This is moderated by the community administrators. For instance, in Beeradvocate users are awarded Karma points\footnote{https://www.beeradvocate.com/community/threads/beer-karma-explained.184895/}. The exact algorithm for calculation of these points is not made public to users as it can be game to manipulation -- and of course, these scores are also not used in GhostLink. {\small \begin{table} \centering \begin{tabular}{lcc} \toprule {\bf Dataset} & {\bf Model} & {\bf Pearson Correlation}\\\midrule Beeradvocate & {\bf GhostLink} & {\bf 0.708}\\ & NetInfluence & 0.616\\ & Temporal co-reviewing & 0.400\\\midrule Ratebeer & {\bf GhostLink} & {\bf 0.736}\\ & NetInfluence & 0.653\\ & Temporal co-reviewing & 0.615\\ \bottomrule \end{tabular} \caption{Pearson correlation (higher is better) between different models to find influential users in the community.} \label{tab:cor} \vspace{-2em} \end{table} } We used these points as a proxy for user authority, and rank the users. This ranked list is used as a reference list (ground-truth) for comparison. That is, we use the ranked list of users based on eigenvector centrality scores from our influence graph, and \text{compute} Pearson correlation with the reference list\footnote{Other ranking measures (Kendall-Tau, Spearman Rho) yield similar improvements.}. A correlation score of $1$ indicates complete agreement, whereas $-1$ indicates complete disagreement. We can also do the same for the ranked list of users based on their co-reviewing behavior. As another strong baseline, we also consider the influence scores for the users as generated by NetInfluence~\cite{NetInfluence}. Table~\ref{tab:cor} shows the results. \begin{comment} {\small \begin{table} \centering \begin{tabular}{lcc} \toprule {\bf Dataset} & {\bf Model} & {\bf Pearson}\\ & & {\bf Correlation}\\\midrule Beeradvocate & OM: Mult. topic sampling & 0.708\\ & OM: Max. topic sampling & 0.701\\ & BM: Temporal co-reviewing & 0.400\\\midrule Ratebeer & OM: Mult. topic sampling & 0.736\\ & OM: Max. topic sampling & 0.729\\ & BM: Temporal co-reviewing & 0.615\\ \bottomrule \end{tabular} \caption{Pearson correlation between different models to find influential users in the community: OM indicates our model, whereas BM points to the baseline model.} \label{tab:cor} \vspace{-3em} \end{table} } \end{comment} {\small \begin{table}[!t!] \centering \setlength{\tabcolsep}{1.4mm} \begin{tabular}{|c|cc|cc|cc|} \toprule {\bf Dataset} & \multicolumn{2}{c|}{{\bf IG}} & \multicolumn{4}{c|}{{\bf MWSF}}\\ & {Edges} & {Weight} & {Edges} & {\% of IG} & {Weight} & {\% of IG}\\\midrule Beeradvocate & 180.5K & 31.8K & 132.5K & 73.40\% & 31.6K & 99.37\% \\ Ratebeer & 152.8K& 24.7K & 95.4K & 62.43\% & 24.5K & 99.19 \%\\ Amazon Foods & 107.5K & 59.51K & 104.1K & 96.84 \% & 59.47K & 99.93\% \\ Amazon Movies & 589K & 145.4K & 476K & 80.81\% & 145K & 99.72\% \\ \bottomrule \end{tabular} \caption{Structure of the latent influence networks: The Influence Graph (IG) is well represented by a Maximum Weighted Spanning Forest (MWSF) } \label{tab:structure} \vspace{-1.5em} \end{table} } \begin{figure*}[!t!] \centering \includegraphics[width=\linewidth, height=4.5cm]{graph-stat.png} \vspace{-2.5em} \caption{Distribution of nodes with scores (weighted degree, hubs, authorities, and eigen vector centralities in order from left to right) in log-scale for the extracted influence graph (note the change in scale of scores for each figure) in Beeradvocate data.} \vspace{-1em} \label{fig:graph-stat} \end{figure*} \begin{figure}[!b!] \vspace{-1em} \centering \includegraphics[width=\linewidth, height=4.5cm]{facet-infl-combined.png} \vspace{-1em} \caption{\small Maximum Weighted Spanning Forest corresponding to a representative facet (left); and its giant component (right).} \label{fig:facet-infl} \end{figure} We observe that the ranking computed with our influence graph performs much better (higher correlation with ground-truth) than the temporal co-reviewing baseline. Thus, the learned influence network indeed captures more information than simple co-review\-ing behavior and even the more advanced diffusion based NetInfluence model, and enables us to find influential users better. Note again, that the point-based scores used for the ground-truth ranking have {\em not been used} in GhostLink. \subsection{Structure of the Influence Network} Last, we analyze the structure of the influence network $\Psi$. Our first research question is: How is the mass (sum of influence/edge weights) distributed in the network? Is it randomly spread out, or do we observe any particular structure (e.g., resembling a tree-like structure). For this, we computed a Maximum Weighted Spanning Tree from the graph (or spanning forest as the graph is not connected) and computed the sum of its edge-weights, i.e. its mass. \begin{comment} {\small \begin{table*} \centering \begin{tabular}{c|c|c|cc|cccc|} \toprule {\bf Dataset} & {\bf Model} & {\bf Nodes} & \multicolumn{2}{c|}{{\bf Influence Graph (IG)}} & \multicolumn{4}{c|}{{\bf Maximum Weighted Spanning Forest (MWSF)}}\\ & & & {Edges} & {Weight} & {Edges} & {Weight} & {\% Weight of IG} & {\% Edges of IG}\\\midrule Beeradvocate & Our model w/ max. topic sampling & 33.1K &213.5K & 32.4K &136.2K &32.2K & 99.38\% & 63.79\% \\ & Our model w/ mult. topic sampling & 32.6K & 180.5K & 31.8K & 132.5K & 31.6K & 99.37\% & 73.40\% \\\midrule Ratebeer & Our model w/ max. topic sampling & 26K & 161.3K & 25K & 89.6K & 24.8K & 99.20\% & 55.55\% \\ & Our model w/ mult. topic sampling & 25.7K & 152.8K& 24.7K & 95.4K & 24.5K & 99.19 \%& 62.43\%\\\midrule Amazon Foods & Our model w. max. topic sampling & 68K & 107.5K & 59.51K & 104.1K & 59.47K & 99.93\% & 96.84 \%\\ & Our model w/ mult. topic sampling & & & & & & & \\\midrule Amazon Movies & Our model w/ max. topic sampling & & & & & & & \\ & Our model w/ mult. topic sampling & & & & & & & \\ \bottomrule \end{tabular} \caption{Structure of inferred influence network: graph statistics.} \label{tab:structure} \vspace{-2em} \end{table*} } \end{comment} Table~\ref{tab:structure} shows the statistics of the constructed MWSF over different datasets and compares it with the (original) influence graph (IG). We observe that {\em the majority of mass of the influence graph is concentrated in giant tree-components}. For example, in Beeradvocate, $99.37\%$ of the mass of the influence graph is concentrated in the MWSF. The forest accounts for $73.40\%$ of the edges in the influence graph. Thus, the remaining $26.6\%$ of the edges contribute only marginally, and can be pruned out. This tree-like influence matches intuition: a user often influences many other users, while she herself gets primarily influenced by a few -- surprisingly, in the majority of cases only by a single other user -- as indicated by the good approximation of the graph via a tree (preservation of mass). Figure~\ref{fig:facet-infl} shows the MWSF for a representative facet ``yuengling", and its giant component. The tree structure in Figure~\ref{fig:facet-infl} shows another characteristics: only a few users seem to influence many others (it resembles a snowflake) in the community. This brings us to our second research question: Do we observe --- similar to real-world networks --- specific power-law behaviors? For example, are the majority of nodes `influencees', and only a few nodes are `influencers'? Figure~\ref{fig:graph-stat} analyzes this aspect. Here we illustrate the distribution of nodes with weighted degree, hub \& authority, and eigen vector centrality scores for our influence graph plotted in {\em log-scale}. These statistics are for the Beeradvocate community. The statistics for other communities are similar. Indeed, we observe power-law like distributions with many influencees and a few influencers. For the HITS algorithm, a hub --- with a lot of outgoing edges --- is a user who influences a lot of other users; whereas an authority --- with a lot of incoming edges --- is the one getting influenced by other influential users. Note that each node can be a hub and an authority with different scores simultaneously. We observe that there are a lot of hubs (influencers) with very low influence scores, and only few with very high influence. From the authority report, we see that there are less number of incoming edges to nodes (note the really small range of authority scores of nodes). This indicates that users generally get influenced by only a few users in the community --- confirming the tree-like structure of the influence graph. \section{Introduction} Traditional works in recommender systems that build upon collaborative filtering \cite{koren2011advances} exploit that similar users have similar rating behavior and facet preferences. Recent works use review content~\cite{mcauleyrecsys2013, wang2011, mukherjeeSDM2014} and temporal patterns~\cite{Gunnemann2014,DBLP:conf/kdd/MukherjeeGW16,DBLP:conf/www/GunnemannGF14} to extract further cues. All of these works assume the users to behave independently of each other. In a social community, however, users are often influenced by the activities of their friends and peers. How can we detect this influence in online communities? \begin{figure}[!htbp] \centering \includegraphics[scale=0.45]{influence.png} \caption{\small Given only timestamped reviews of users in the Beeradvocate community without any explicit user-user link/interaction, GhostLink extracts this latent influence network (of top $K$ influencers) based on opinion conformity. This is compactly represented by a Maximum Weighted Spanning Forest (MWSF) preserving $99.4\%$ of the influence mass from $73.4\%$ of the edges of the inferred influence network depicting a tree-like structure of influence. } \label{fig:infl-graph} \vspace{-1em} \end{figure} One way to answer this question is to exploit the \textit{observed} social network or interaction of users --- like friend circles in Facebook, the follow graph in Twitter, and trust relations in Epinion. Recent works~\cite{Tang:2012:MDM:2124295.2124309,Tang:2013:ELG:2540128.2540519,DBLP:journals/datamine/LiuTHY12,DBLP:conf/sdm/ZhangYWSZ17, DBLP:journals/ida/MeiYSM17, DBLP:journals/jidm/FelicioPAAP16, Ye:2012:ESI:2348283.2348373, 7944514, Krishnan:2010, DBLP:conf/ecai/HuangCGSY10} \text{leverage} such \textit{explicit} user-user relations or the observed social circle to propose \text{social-network} based recommendation. Similarly, in the field of citation \text{networks}, \cite{DBLP:conf/icml/DietzBS07} attempt to extract citation influence given the {\em explicit network} of who-cited-whom. However, there is one big catch: many online review communities like Amazon or Beeradvocate do {\em not} have any explicit social network -- thus, making the above methods not applicable. Can we infer the influence network based on other signals in the data? While some recent works~\cite{Guo:2014:RTE:2554850.2554878, Lin:2014:PNR:2535053.2535249, Ma:2013:ESI:2484028.2484059, Ma:2011:RSS:1935826.1935877} model implicit relationships, they are limited to the historical rating behavior of users, ignoring the textual information. Similarly, works in information diffusion over latent networks model temporal traces ignoring the textual information~\cite{NetInf, Connie, NetRate, NetInfluence} and make some strong assumptions like a homogeneous network with static transmission rates. These techniques being agnostic of the context fail to capture complex interactions resulting in sparse networks. Some recent works on text-based diffusion~\cite{Wang2014, Du2013, HawkesTopic} model context. However, they also make some strong assumptions regarding the topics of diffusion being known apriori and the network being explicit. Most importantly, none of these works are geared for item recommendation, nor do they study the characteristics of review communities. In contrast, in this work, we leverage opinion conformity based on writing style as an indication of influence: where a user echoes/ copies facet descriptions from peers (called influencers) across multiple items. This is a common setting for communities dealing with items of fine taste like movies, beer, food and fine arts where users often co-review multiple items. Our informal goals are: \vspace{-0.5em} \begin{informal} Given only timestamped reviews of users in online communities, \textbf{extract} the underlying influence network of who-influences-whom based on opinion conformity, and \textbf{analyze} the characteristics of this influence network. \vspace{-0.5em} \end{informal} \begin{informal} \textbf{Leverage} the implicit social influence (network) to improve item rating prediction based on peer activities. \vspace{-0.5em} \end{informal} {\small \begin{table} \begin{tabular}{p{8cm}} \toprule {\bf Amazon Movies}\\ \hspace{0.4em}{\bf U1: }style intense pretentious non-linear narrative rapid editing\\ \hspace{0.4em}{\bf U2: }non-linear narrative crazy flashback scene randomly interspersed\\ \midrule {\bf Beeradvocate}\\ \hspace{0.4em}{\bf U1: } cloudy reddish amber color huge frothy head aroma spicy\\ \hspace{0.4em}{\bf U2: } hazy golden amber dissipating head earthy aroma pepper clove\\ \bottomrule \end{tabular} \caption{\small Sample (influenced) review snippets extracted by Ghost- Link from two communities: user U1's review is influenced by U2.} \label{tab:snapshot} \vspace{-3em} \end{table} } To answer these questions, we propose GhostLink, an unsupervised probabilistic graphical model, that automatically extracts the (latent) influence graph underlying a review community. {\em Key idea and approach:} Consider two users reviewing a movie in a review community. The first user expressed fascination for the movie's `non-linear narrative style', `structural complexity', and `cinematography' as outlined in the content of her review. Later, following this review, a second user also echoed similar concepts such as `seamless narrative', `style', and `matured cinematography'. That is, the second review closely resembles the first one {\em conceptually} in terms of \emph{facet descriptions} -- not simply by using the same words. While for a \emph{single} item this could be simply due to chance, {\em a repeated occurrence of this pattern across multiple items} -- where the second user reviewed an item some time {\em after} the first user echoing \emph{similar facet descriptions} -- gives an indication of influence. A user could be influenced by several users for different facets in her review. GhostLink models this notion of multiple influence common in communities dealing with items of fine taste like movies, food and beer where users often co-review multiple items. Table~\ref{tab:snapshot} shows a snapshot of (influenced) review snippets extracted by GhostLink. Based on this idea, we propose a probabilistic model that exploits the facet descriptions and preferences of users --- based on principles similar to Latent Dirichlet Allocation --- to learn an influence graph. Since the influencers for a given user and facets are unobserved, all these aspects are learned solely based on their review content and their temporal footprints (timestamps). Figure~\ref{fig:infl-graph} \todo{please remove the colors! pick only the top-k nodes; remove the facet specfic graph!!!}shows such an influence graph extracted by GhostLink from the Beeradvocate data. Analyzing these graphs gives interesting insights: There are only a few users who influence most of the others, and the distribution of influencers vs. influencees follows a power-law like distribution. Furthermore, most of the mass of this influence graph is concentrated in giant tree-like component(s). We use such influence graphs to perform influence-aware item recommendation; and we show that GhostLink outperforms state-of-the-art baselines that do not consider latent influence. Moreover, we use the influence graph to find influential users and to distinguish between users' latent facet preferences from that of induced/influenced ones. Overall, our contributions are: {\setlength{\leftmargini}{12pt} \vspace*{-1mm} \begin{itemize} \item {\bf Model:} We propose an unsupervised probabilistic generative model GhostLink based on Latent Dirichlet Allocation to learn a latent influence graph in online communities without requiring explicit user-user links or a social network. This is the first work that solely relies on timestamped review data. \item {\bf Algorithm:} We propose an efficient algorithm based on Gibbs sampling~\cite{Griffiths02gibbssampling} to estimate the hidden parameters in GhostLink that empirically demonstrates fast convergence. \item {\bf Experiments:} We perform large-scale experiments in four communities with $13$ million reviews, $0.5$ mil.\ items, and $1$ mil.\ users where we show improved recommendation for item rating prediction by around $23\%$ over state-of-the-art methods. Moreover, we analyze the properties of the influence graph and use it for use-cases like finding influential members in the community. \end{itemize}} \section{Joint Probabilistic Inference} We now describe the inference procedure for GhostLink. That is, given the set of all reviews (and their timestamps), we aim to infer the latent variables. To not clutter notation, we drop the indices of variables when it is clear from context (e.g.\ $\theta_u$ is abbreviated as $\theta$). Let $S, V, Z$ be the set of all latent variables corresponding to the influence variables $s$, influencers $v$, and facets $z$. Let $W'$ denote the set of latent variables corresponding to the observed words, and $U'$ the set of latent variables corresponding to the observed users\footnote{Note that $U'$ refers to the latent variable attached to each review that `stores' the user information. Thus, a user might appear multiple times in $U'$ since she might have written reviews on multiple items. Similar for $W'$.}. The joint probability distribution of our model is: {\small \vspace{-0.5em} \begin{multline} P(S, V, Z, W', \theta, \beta, \psi, \pi | U'; \alpha, \gamma, \rho, \eta) \propto\\ \prod_{u \in U} \big ( P(\pi; \eta)\cdot P(\psi; \rho) \cdot P(\theta; \alpha) \big ) \cdot \prod_{k \in K} P(\beta_k; \gamma)\cdot \\ \prod_{i \in I} \prod_{d \in D_i} \bigg ( \prod_{s \in S} P(s | \pi_{u_d}) \cdot \prod_{v \in V} P(v | \psi_{u_d})^{\mathbb{I}(s=1)} \cdot \\ \prod_{z \in Z} \big( P (z | \theta_{u_d})^{\mathbb{I}(s=0)} \cdot P(z | \tilde{\theta}_v^{d'})^{\mathbb{I}(s=1)} \big ) \cdot \prod_{w \in W'} P(w | \beta_z) \bigg) \end{multline} } Since exact inference is intract\-able, we have to resort to approximate inference. For this purpose, we perform Collapsed Gibbs Sampling \cite{Griffiths02gibbssampling}. In Gibbs sampling, the conditional distribution for each hidden variable is computed based on the current assignment of the other hidden variables. The values for the latent variables are sampled repeatedly from this conditional distribution until convergence. In our problem setting we have three sets of latent variables corresponding to $S, V$ and $Z$ respectively -- the remaining variables $\theta, \beta, \psi, \pi$ are marginalized out (collapsed). Given the current assignment of random variables, we use the shortcuts: $n(u,s)$ denotes the count of words written by $u$ with influence variable $s \in \{0,1\}$. $n(u, v, s=1)$ denotes the count of words written by $u$ under influence from $v$ (i.e. $s=1$) in the community across all items and facets. $n(u, z, s=0)$ denotes the number of times $u$ wrote facet $z$ for any word based on her latent preferences (i.e. $s=0$). $n(v_{d'}, z)$ denotes the count of facet $z$ in review $v_{d'}$, and $n(z,w)$ denotes the number of times word $w$ is used with facet $z$. \noindent\textbf{Collapsing.} We first marginalize out the remaining variables as mentioned above. Exploiting conjugacy of the Categorical and Dirichlet distributions, we can integrate out $\pi$, $\psi$, $\theta$, and $\beta$ from the above distribution to obtain the four posterior distributions\\[-4mm] {\small \[ P(S| U'; \eta )= \frac{\Gamma(\sum_{s} \eta) \prod_{s} \Gamma(n(u,s) + \eta)}{\prod_{s} \Gamma(\eta) \sum_{s} \Gamma(n(u,s) + 2 \cdot \eta) } \] \[ P(V | U', S; \rho)= \frac{\Gamma(\sum_{v} \rho) \prod_{v} \Gamma(n(u,v,s=1) + \rho)}{\prod_{v} \Gamma(\rho) \sum_{v} \Gamma(n(u, v, s=1) + U \cdot \rho) } \] \[ P(Z| U', S, V; \alpha)=\frac{\Gamma(\sum_{z} \alpha) \prod_{z} \Gamma(n(u,z, s=0) + \alpha)}{\prod_{z} \Gamma(\alpha) \sum_{z} \Gamma(n(u,z, s=0) + K \cdot \alpha) } \] \[ P(W' | Z; \gamma)=\frac{\Gamma(\sum_{w} \gamma) \prod_{w} \Gamma(n(z, w) + \gamma)}{\prod_{w} \Gamma(\gamma) \sum_{w} \Gamma(n(z, w) + W \cdot \gamma) } \] } where $\Gamma$ denotes the Gamma function\footnote{The derivation of the following equations --- for integrating out latent variables from the joint distribution exploiting Multinomial-Dirichlet conjugacy and Gibbs Sampling updates --- follow from the standard principles of Latent Dirichlet Allocation, and, therefore, details have been omitted for space.}. \noindent \textbf{Gibbs sampling.} Given the above, the joint probability distribution with conditional independence assumptions is: \[ P(S, V, Z | U', W') \propto P(S|U') \cdot P(V|S,U') \cdot P(Z|V,S,U') \cdot P(W' | Z) \] The factors on the right-hand side capture (in order): vulnerability of the user being influenced, potential influencers given the user, facet distribution to be used (latent or influenced), and subsequent words to be used according to the facet distribution chosen. We infer all the distributions using Gibbs sampling. Let the subscript $-j$ denote the value of a variable excluding the data at the $j^{th}$ position. The conditional distributions for Gibbs sampling for updating the latent variable $S$ --- that models whether the user is going to write on her own or under influence --- is:\\[-6mm] {\small \begin{align} P(s_j=0| u_d, z,s_{-j}) \propto \frac{n(u_d, s_j=0) + \eta}{\sum_s n(u_d, s) + 2 \cdot \eta} \cdot \frac{n(u_d, z, s_j=0) + \alpha}{\sum_z n(u_d, z, s_j=0) + K \cdot \alpha} \label{eq2}\\ \tilde{P}(s_j=1| u_d, v_{d'}, z, s_{-j}) \propto \frac{n(u_d, s_j=1) + \eta}{\sum_s n(u_d, s) + 2 \cdot \eta} \cdot \frac{n(v_{d'}, z) + \alpha}{\sum_z n(v_{d'}, z) + K \cdot \alpha} \label{eq3}\\ P(s_j=1| u_d, z, s_{-j}) \propto {max}_{v_{d'} \in D_i : t_{d'} < t_d} \tilde{P}(s_j=1| u_d, v_{d'}, z, d,s_{-j}) \label{eq4} \end{align} \vspace{-1em} } \todo{I changed the above from argmax to max. Is this correct?} The first factor in Equation \ref{eq2} and \ref{eq3} above models the probability of the user being influenced: as a fraction of how many facets the user wrote under influence ($s=1$), or otherwise ($s=0$), out of the total number of facets written. The second factor in Equation~\ref{eq2} models the user's propensity of using a particular facet based on her latent preferences (when $s=0$); whereas the second factor in Equation~\ref{eq3} models the probability of the user writing about a facet under influence (when $s=1$) from an earlier review on the given item. Note that in this case --- as the user is influenced by another user's review that appeared earlier in her timeline --- she adopted her influencer's {\em used facet distribution} to write about the given facet instead of her own latent facet preference distribution. Note that in the above question, we did not assume the influencers $v_d'$ to be given since this would lead to a very restrictive Gibbs sampling step. Instead, as shown in Equation~\ref{eq4}, we sample the best possible influencer for a given facet to determine the probability for $s=1$. Accordingly, the influencer for a given user and word, when writing under influence, is updated as\\[-9mm] {\small \begin{multline} \label{eq5} {v_{d'}}_j | u_d, s=1, z, {v_{d'}}_{-j} = {argmax}_{v_{d'} \in D_i : t_{d'} < t_d} \bigg(\\ \frac{n(u_d, v_{d'}, s=1) + \rho}{\sum_v n(u_d, v, s=1) + U \cdot \rho} \cdot \frac{n(v_{d'}, z) + \alpha}{\sum_z n(v_{d'}, z) + K \cdot \alpha} \bigg) \end{multline} } The first factor above counts how many times $u$ has been influenced by $v$ on writing about any facet --- out of the total number of times $u$ has been influenced by any other member in the community. The second factor is the facet distribution {\em used} by $v$ in the review that influenced $u$'s current facet description. Instead of computing Equations~\ref{eq4} and~\ref{eq5} separately, we perform the update of both --- $s_j$ and ${v_{d'}}_j | s_j=1$ --- {\em jointly}, thereby, reducing the computation time significantly. The conditional distribution for sampling the latent facet $z$ is:\\[-6mm] {\small \begin{align} P(z_j | u_d, s=0,z_{-j}) \propto \frac{n(u_d, z_j, s=0) + \alpha}{\sum_z n(u_d, z, s=0) + K \cdot \alpha} \cdot \frac{n(z_j, w) + \gamma}{\sum_w n(z_j, w) + W \cdot \gamma} \label{eq6}\\ P(z_j | u_d, v_{d'}, s=1, z_{-j}) \propto \frac{n(v_{d'}, z_j) + \alpha}{\sum_z n(v_{d'}, z) + K \cdot \alpha} \cdot \frac{n(z_j, w) + \gamma}{\sum_w n(z_j, w) + W \cdot \gamma} \label{eq7} \end{align} \vspace{-1em} } The first factor in the above equations models the probability of using the facet $z$ under the user's (own) latent preference distribution (Equation~\ref{eq6}), or adopting the influencer's used facet distribution (Equation~\ref{eq7}). The second factor counts the number of times facet $z$ is used with word $w$ --- out of the total number of times it is used with any other word. \subsection{Overall Processing Scheme} Exploiting the above results, the overall inference is an iterative process consisting of the following steps. We sort all reviews on an item by timestamps. For {\em each word} in each review on an item: \begin{enumerate} \item Estimate whether the word has been written under influence, i.e. compute $s$ using Equations~\ref{eq2} - \ref{eq4} keeping all facet assignments fixed from earlier iterations. \item In case of influence (i.e. $s=1$), an influencer $v$ is jointly sampled from the previous step. \item Sample a facet for the word using Equations~\ref{eq6} and ~\ref{eq7} keeping all influencers and influence variables fixed. \end{enumerate} The process is repeated until convergence of the Gibbs sampling process (i.e. the log-likelihood of the data stabilizes). \subsection{Example} Consider a set of reviews written by three users in the following time order: first Adam, then Bob, then Sam (see Table \ref{tab:example}). The table also shows the {\em current} assignment of the latent variables $z$ and $s$. The goal is to {\em re-sample} the influence variables. For ease of explanation, we ignore the concentration parameters of the Dirichlet distribution in the example and we ignore the subscript $-j$ from the variables. That is, we do not exclude the current state of the own random variable as in Gibbs sampling. Similar to before, let $n(u, s)$ be the number of tokens written by $u$ with influence variable as $s$, $n(d, z)$ be the total number of tokens with topic as $z$ in document $d$, $n(d)$ be the number of tokens in document $d$, and $n(u)$ be the total number of tokens written by $u$. For Adam we have $s = 0$ for each word. As he is the first reviewer, he has no influencers. For Bob, the influence variable $s$ w.r.t.\ the word `non-linear' is based on:\\[-8mm] {\small \begin{multline*} \hspace*{-4mm}P(s_{\text{`non-lin'}}\! =\! 0 | u\!=\!Bob, z\!=\!z_2) \propto \frac{n(u\!=\!Bob, s\!=\!0)}{n(u\!=\!Bob)} \cdot \frac{n(u\!=\!Bob, z\!=\!z_2, s\!=\!0)}{n(u\!=\!Bob, s\!=\!0)}\\ = \frac{1}{2} \cdot \frac{0}{1} = 0\\ P(s_{\text{`non-lin'}} = 1 | u=Bob, v = Adam, z=z_2, v_d=d_1) \\ \propto \frac{n(u=Bob, s=1)}{n(u=Bob)} \cdot \frac{n(z=z_2, v_d=d_1)}{n(v_d=d_1)} =\frac{1}{2} \cdot \frac{2}{3} = \frac{1}{3} \end{multline*} } Therefore, Bob is more likely to write `non-linear' being influenced by Adam's review than on his own. Similarly, for Sam: {\small \begin{align*} P(s_{\text{`non-lin'}}=0 | u=Sam, z=z_2) \propto \frac{1}{2} \cdot 1 = \frac{1}{2} \end{align*} } Note the higher probability compared to the one of Bob since Sam uses further terms (i.e. `thriller') which also belong to facet $z_2$ that he wrote uninfluenced. For the case $s=1$, we would obtain: {\small \begin{align*} P(s_{\text{`non-lin'}}=1 | u=Sam, v=Adam, z=z_2, v_d=d_1) &\propto \frac{1}{2} \cdot \frac{2}{3} = \frac{1}{3}\\ P(s_{\text{`non-lin'}}=1 | u=Sam, v=Bob, z=z_2, v_d=d_2) &\propto \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4} \end{align*} } As seen, Sam is more likely influenced by Adam's review, rather than by Bob's, when considering facet $z_2$, since $d_1$ has a higher concentration of $z_2$. It is worth noting that the probability of the influence variable $s$ depends only on the facet, and not the exact words. Our model captures semantic or facet influence rather than just capturing lexical match. Overall, however, Sam is likely to write `non-linear' on his own rather than being influenced by someone else since $P(s_{\text{`non-lin'}}=0|...)$ is larger. While the above example considers a single item, in a community setting --- especially for communities dealing with items of fine taste like movies, food and beer where users co-review multiple items --- such statistics are aggregated over several other items. This provides a stronger signal for influence when {\em a user copies/echoes similar facet descriptions from a particular user across several items}. Our algorithm, therefore, relies on three main factors to model influence and influencer in the community: {\setlength{\leftmargini}{12pt} \begin{itemize} \item [a)] The vulnerability of a user $u$ in getting influenced, modeled by $\pi$ and captured in the counts of $n(u,s)$. \item [b)] The textual focus of the influencing review $v_d$ by $v$ on the specific facet ($z$), modeled by $\theta$ and captured in the counts of $n(v_d,z)$; as well as how many times the influencer $v$ influenced $u$, modeled by $\psi$ and captured in counts of $n(u,v,s=1)$ --- aggregated over all facets and items they co-reviewed. \item [c)] The latent preference of $u$ for $z$, modeled by $\theta_u$ and captured in the counts of $n(u,z,s=0)$. \end{itemize}} {\small \begin{table}[!t] \begin{tabular}{llllll} \toprule \textbf{Reviewer} & \textbf{Document} & \textbf{time} & \textbf{Word} &\textbf {Facet} & \textbf{Influence (s=\ )} \\ Adam & $d_1$ & 0 & action & $z_1$ & 0\\ && & non-linear & $z_2$ & 0\\ && & narrative & $z_2$ & 0\\\midrule Bob & $d_2$ & 1 & action & $z_1$ & 0\\ & & & non-linear & $z_2$ & 1\\\midrule Sam & $d_3$ & 2 & non-linear & $z_2$ & 1\\ & & & thriller & $z_2$ & 0\\ \bottomrule \end{tabular} \caption{Example to illustrate our method.} \label{tab:example} \vspace{-2.5em} \end{table} } \subsection{Fast Implementation} \label{subsec:fast} In the above generative process, we sample a facet for {\em each token/word} in a given review. Thus, we may sample different facets for the same word present multiple times in a review. While this makes sense for long documents where a word can belong to multiple topics, for short reviews it is unlikely that the same word is used to represent different facets. Therefore, we reduce the time complexity by sampling a facet for {\em each unique token} present in a review. We modify our sampling equations to reflect this change. In the original sampling equations, we let {\em each token} contribute $1$ unit to the counts of the distribution for estimation. Now, {\em each unique token} contributes $c$ units corresponding to $c$ copies of the token in the review. As we sample a value for a random variable during Gibbs sampling, ignoring its current state, we also need to discount $c$ units for the token (instead of $1$) to preserve the overall counts. All the sampling equations are modified accordingly. \subsection{Constructing the Influence Network}\label{sec:network} Our inference procedure computes values for the latent variables $S$, $V$, $Z$, and corresponding distributions. Using these, our objective is to construct the influence network given by $\psi$: {\small \begin{equation*} \psi_{u, v} = \frac{n(u, v, s=1) + \rho}{\sum_v n(u, v, s=1) + U \cdot \rho} \end{equation*} } The above counts the number of facet descriptions $n(u, v, s=1)$ that are copied by $u$ from $v$ (with $s=1$ depicting influence) out of the ones copied by $u$ from anyone else. Given $\psi$, we can construct a directed, weighted influence network $G = (U, E)$, where each user $u \in U$ is a node, and the edgeset $E$ is given by: {\small \vspace*{-1mm} \begin{equation} \label{eq:psi} E = \big\{ \{ v, u\} \ | \ \psi_{u,v} > 0, u \in U, v\in U \big\} \end{equation} \vspace*{-1mm} } That is, there exists an edge from $v$ to $u$, if $v$ positively influences $u$ with the edge weight being $\psi_{u,v}$. Furthermore, GhostLink can distinguish between different facet preference distributions of each user. The \emph{observed} facet preference distribution $\theta^{obs}_u$ of $u$ is given by: {\small \begin{equation*} \theta^{obs}_{u,z} = \frac{n(u, z) + \alpha}{\sum_z n(u, z) + K \cdot \alpha} \end{equation*} } This counts the proportion of times $u$ wrote about facet $z$ out of the total number of times she wrote about any facet --- with or without influence. This distribution represents essentially the preferences as it is captured by a standard author-topic model~\cite{rosenzviUAI2004}, user-facet model~\cite{mcauleyrecsys2013}, and most of the other works using generative processes to model users/authors. With GhostLink, however, we can derive even more informative distributions: {\small \begin{equation*} \theta^{latent}_{u,z} = \frac{n(u, z,s=0) + \alpha}{\sum_z n(u, z,s=0) + K \cdot \alpha} \end{equation*} } {\small \begin{equation*} \theta^{infl}_{u,z} = \frac{n(v=u,z,s=1) + \alpha}{\sum_z n(v=u,z,s=1) + K \cdot \alpha} \end{equation*} } The distribution $\theta^{latent}_u$ intuitively represents a user's latent facet preference when \emph{not being influenced} from the community (i.e. $s=0$). In contrast, $\theta^{infl}_{u}$ captures the facet distribution of $u$ as an influencer, i.e. that she used to influence someone else. That is, the latter one counts the proportion of times $u$ was chosen as the influencer (i.e. $v=u$, $s=1$) by another user in the community; or in other words when some other user copied from $u$. \section{Item Rating Prediction using Influence Networks} \label{sec:item-rating} Our proposed method learns an influence network $\psi$ from the review data. We hypothesize that using this network helps to improve rating prediction. That is, our objective is to predict the rating $y'_{u, i,t}$ that user $u$ would assign to an item $i$ at time $t$ exploiting her latent social neighborhood given by $\psi_{u}$. Since we know the actual ground ratings $y_{u,i,t}$, the performance for this task can be measured by the mean squared error: MSE = $\frac{1}{|{U, I}|} \sum_{u,i} (y_{u,i,t} - y'_{u,i,t})^2$. \noindent Note that we use the rating data only for the task of rating prediction -- it has {\em not been used} to extract the influence graph. In the following, we describe the features we will create for each review for the prediction task. We will analyze and compare their effects in our experimental study. Recap that each review $d$ consists of a sequence of words $\{w\}$ by $u$ on item $i$ at time $t$. \noindent {\bf F1. Language model features} based on the review text: Using the learned language model $\beta$, we construct $\langle F_w = log ( max_{z}\beta_{z,w}) \rangle$ of dimension $W$ (size of the vocabulary). That is, for each word $w$ in the review, we consider the value of $\beta$ corresponding to the best facet $z$ that can be assigned to the word. We take the log-transformation of $\beta$ which empirically gives better results. \noindent {\bf F2. Rating bias features}: Similar to~\cite{mcauleyrecsys2013,koren2011advances}, we consider: (i) Global rating bias $\gamma_g$: Average rating $avg(\langle y \rangle)$ assigned by all users to all items. (ii) User rating bias $\gamma_u$: Average rating $avg(\langle y_{u,.,.} \rangle )$ assigned by $u$ to all items. (iii) Item rating bias $\gamma_i$: Average rating $avg(\langle y_{.,i,.} \rangle )$ assigned by all users to item $i$. \noindent {\bf F3. Temporal influence features}: Finally, we exploit the temporal and influence information we have learned with our model \noindent (i) Temporal rating bias $\gamma_r$: Average rating $avg(\langle y_{.,i,t} \rangle )$ assigned by all users to item $i$ {\em before} time $t$. This baseline considers the temporal trend in the rating pattern. % \noindent (ii) Temporal influence from rating $\gamma_d$: Let $\langle d_{t} \rangle $ be the set of reviews written by users $\langle v_{d_{t}} \rangle$ before time $t$ on the item $i$. Consider the influence of $v_{d'}$ on $u$, i.e.\ the variable $\psi_{u, v_{d'}}$, as learned by our model. The feature $avg(\langle \psi_{u, v_{d_t}} \cdot y_{v_{d_t},i,t} \rangle )$ aggregates the rating of each previous user and her influence on the current user for item $i$ at time $t$ to model the influence of previous users' ratings on the current user's rating. This baseline combines the temporal trend and the social influence of earlier users' rating. % % \noindent (iii) Temporal influence from context $\gamma_{dc}$: Consider the review $d$ with the sequence of words $\langle w \rangle$. Let $s_w \in \{0,1\}$ be the influence variable sampled for a word $w$, and $v_w$ be the influencer sampled for the word when $s_w=1$, as inferred from our model. Also, let $y_{v_w}$ be the rating assigned by $v_w$ to the current item at time $t' < t$. Consider $\mathbb{I}(.)$ be an indicator function that is $1$ when its argument is true, and $0$ otherwise. We use:\\[-8mm] {\small \begin{equation*} \gamma_{dc} = \frac{1}{|d|} \sum_{w \in d} \bigg( \mathbb{I}(s_w=1) \cdot \big(\psi_{u, v_{w}} \cdot y_{v_w}\big) + \mathbb{I}(s_w=0) \cdot \gamma_u \bigg) \end{equation*} } For each word $w$ in the review $d$, if the word is written under influence ($s_w = 1$), we consider the influencer's rating and her influence on the current user given by $\psi_{u, v_{w}} \cdot y_{v_w}$. Otherwise ($s_w=0$), we consider the user's self rating bias $\gamma_u$. This is aggregated over all the words in the review. This baseline combines the temporal trend and context-specific social influence of earlier users' rating. Using different combinations of these features (see Sec.~\ref{sec:experiments}), we use Support Vector Regression~\cite{drucker97} from LibLinear with default parameters to predict the item ratings $y'_{u,i,t}$, using ten-fold cross-validation ({https://www.csie.ntu. edu.tw/~cjlin/liblinear/}). \section{GhostLink: Influence-Facet Model Our goal is to learn an influence graph between users based on their review content (specifically, overlap of their facet preferences) and timestamps only. The underlying assumption is that when a user $u$ is influenced by a user $v$, $u$'s facet preferences are influenced by the ones of $v$. Since the only signal is the textual information of the reviews -- and their inferred latent facet distributions (also known as topic distributions in the context of LDA) -- we argue that influence is reflected by the used/echoed words and facets. While classical user topic models assume that each word of a document is associated with a topic/facet that follows the user's preference, we assume that the topic/facet of each word might be based on the preferences of \emph{other} users as well -- the influencers. Inspired by this idea, we first describe the generative process of GhostLink followed by the explanation of the inference procedure. \noindent\textbf{Generative Process.} Consider a corpus $D$ of reviews written by a set of users $U$ at timestamps $T$ on a set of items $I$. The subset of reviews for item $i\in I$ is denoted with $D_i\subseteq D$. Let $d \in D_i$ be a review on item $i \in I$, we denote with $u_{d}$ the user and with $t_d$ the timestamp of the review. All the reviews on an item $i$ are assumed to be ordered by timestamps. Each review $d$ consists of a sequence of $N_d$ words denoted by $d=\{w_1,\ldots ,w_{N_d}\}$, where each word is drawn from a vocabulary $W$ having unique words indexed by $\{1, \dots, W\}$. The number of latent facets/topics corresponds to~$K$. In most review communities, a user browses through other reviews on an item before making a decision (say) at time $t$. Therefore, the set of users and corresponding reviews that could potentially influence the given user's perspective on an item $i$ consists of all the reviews $d'$ written at time $t_{d'} < t$. We call the corresponding set of users -- the potential influence set $ IS_{u,i}=\{u'\in U\mid \exists d,d'\in D_i:u=u_d \wedge u'=u_{d'} \wedge t_{d'} < t_d\} $ for user $u$ and item~$i$. In our model, each user is equipped with a latent facet preference distribution $\theta_u$, whose elements $\theta_{u,k}$ denote the preference of user $u$ for facet $k \in K$. That is, $\theta_u$ is a $K$-dimensional categorical distribution; we draw it according to $\theta_u \sim Dirichlet_K(\alpha)$ with concentration parameter $\alpha$. These distributions later govern the generation of the review text (similar to LDA). Furthermore, for each user an influence distribution $\psi_u$ is considered, whose elements $\psi_{u,v}$ depict the influence of user $v$ on user $u$. That is $\psi_u$ represents a $U$-dimensional categorical distribution -- \emph{and all $\psi_*$ together build the influence graph we aim to learn} (see also Sec.~\ref{sec:network}). Similar to above we define $\psi_u \sim Dirichlet_U(\rho)$. When writing a review, a user $u$ can decide to write an original review based on her latent preferences $\theta_u$ --- or be influenced by someone's perspective from her influence set for the given item; that is, using the preferences of $\theta_v$ for some $v\in IS_{u,i}$. Since a user might not be completely influenced by other users, we allow each word of the review to be either original or based on other influencers. More precise: For each word of the review, we consider a random variable $s\sim Bernoulli(\pi_u)$ that denotes whether it is original or based on influence, where $\pi_u$ intuitively denotes the `vulnerability' of the user $u$ to get influenced by others. If $s=0$ the user uses her own latent facet preferences. That is, following the idea of standard topic models, the latent facet for this word is drawn according to $z\sim Categorical(\theta_u)$. If $s=1$, user $u$ writes under influence. In this case, the user chooses potential influencer(s) $v$ according to the strength of influence given by $\psi_u$. Since for a specific item $i$, the user $u$ can only be influenced by users in $ IS_{u,i}$ who have written a review {\em before} her, we write $v \sim Categorical(\psi_u\cap IS_{u,i})$ to denote restriction of the domain of $\psi_u$ to the currently considered influence set. \begin{figure}[htbp] \centering \includegraphics[scale=0.6]{modelnew.pdf} \vspace{-2em} \caption{Plate diagram for the generative process. Each dashed box indicates a single review.} \vspace{-1em} \label{fig:model} \end{figure} Given the sampled user $v$, the latent facet for this word should be drawn according to $v$'s preferences. Now, we are faced with a modeling choice. We can use the influencer's overall facet distribution $\theta_v$ (as is used in citation-network based topic models). However, by using $\theta_v$, one considers $v$'s \emph{generic} facet distribution -- which might be very unrelated to the item under consideration. That is, while $v$ might prefer specific facets, in his actual review $d'$ about item $i$ these facets might have not been used. And accordingly, since the user $u$ only sees the observed review $d'$ -- and not the latent facet distribution of $v$ -- the user $u$ cannot be influenced by facets which have not been considered. Thus, in our model, instead of considering the influencer's (gene\-ric) facet distribution $\theta_v$, we consider the facet distribution that the influencer has actually {\em used} for writing his review $d'$ for the given item $i$. Since the review of the user $v$ has already been generated (otherwise the user would not be in $IS_{u,i}$), the used facets $z$ for each word of his review are known. Thus, instead of considering $\theta_v$, we consider the `observed' facet distribution based on the actual review, denoted with $\tilde{\theta}_{v}^{d'}$. Given this distribution, we sample $z\sim Categorical(\tilde{\theta}_{v}^{d'})$. Since the model samples an influencer for each facet, a user can have {\em multiple influencers} corresponding to multiple facets in his review. In summary, in the above process, the user $u$ either draws the facet $z$ from $\theta_u$ (if $s=0$) or $\tilde{\theta}_{v}^{d'}$ (if $s=1$). Given facet $z$, we draw the actual word $w\sim Categorical(\beta_z)$ following the generative process of Latent Dirichlet Allocation \cite{Blei2003LDA}. As usual, $\beta_z\sim Dirichlet_W(\gamma)$ denotes corresponding per-facet word distributions. Overall, the user's review can be regarded as being generated by a \emph{mixture} of her latent preferences and preferences of her influencers. Algorithm \ref{algo:1} summarizes the generative process, and the graphical model is illustrated in Figure \ref{fig:model}, where we indicated with $u_d$ the (observed) user for each review. \begin{algorithm}[t] \SetAlgoLined \DontPrintSemicolon {\small 1. Draw $\theta_u \sim \text{Dirichlet}_K(\alpha)$ // latent facet preference of each user \; 2. Draw $\psi_u \sim \text{Dirichlet}_U(\rho)$ // influencer distribution of each user \; 3. Draw $\pi_u \sim \text{Beta}(\eta)$ // vulnerability of each user to be influenced \; 4. Draw $\beta_k \sim \text{Dirichlet}_W(\gamma)$ // word distribution of each facet \; \For {each item $i\in I$} { \For {each review $d \in D_i$ on $i$ at time $t$ by user $u$} { \For {each word $w$ in $d$} { 5. Draw $s \sim \text{Bernoulli}(\pi_u)$ \; \If {$s=0$} { 6. $\theta' = \theta_u$ // use latent facet preference of user \; } \If {$s=1$} { 7. Draw $v \sim \text{Categorical}(\psi_u\cap IS_{u,i})$ \; 8. $\theta' = \tilde{\theta}_{v}^{d'}$ // use the influencer's facet preference \; /* where $\tilde{\theta}_{v}^{d'}$ is the facet distribution {\em used} by $v$ for review $d'$ written at time $t' < t$ */ \; } 9. Draw $z \sim \text{Categorical}(\theta')$ \; 10. Draw $w \sim \text{Categorical}(\beta_z)$ \; } } } } \caption{Generative process for influence -- facet model.} \label{algo:1} \end{algorithm} \section{Related Work} \todo{this sentence is broken!!! and some references are missing.. probably you did not push the bib?}State-of-the-art recommender systems exploit user-user and item-item similarities using latent factor models \cite{korenKDD2008, koren2011advances}. Temporal patterns in ratings such as bursts, bias, and anomalies are studied in \cite{KorenKDD2010, XiangKDD2010, Gunnemann2014}. Recent works~\cite{mcauleyrecsys2013, wang2011, mukherjeeSDM2014} have further considered review texts for content-aware recommender systems. However, all of these works assume that users participate independently in the community which is rarely the case. Social-aware recommender systems~\cite{Tang:2012:MDM:2124295.2124309, Tang:2013:ELG:2540128.2540519, DBLP:journals/datamine/LiuTHY12,DBLP:conf/sdm/ZhangYWSZ17, DBLP:journals/ida/MeiYSM17, DBLP:journals/jidm/FelicioPAAP16, Ye:2012:ESI:2348283.2348373, 7944514, Krishnan:2010, DBLP:conf/ecai/HuangCGSY10} exploit peers and friends of users to extract more insights from their activities, likes, and content sharing patterns using homophily. In absence of explicit social networks in many communities, some works~\cite{Guo:2014:RTE:2554850.2554878, Lin:2014:PNR:2535053.2535249, Ma:2013:ESI:2484028.2484059, Ma:2011:RSS:1935826.1935877} exploit collaborative filtering to extract implicit social relationships based on the historical rating behavior. Some of these works also leverage signals like pre-defined trust metrics, and partial or explicit social links. \cite{Lin:2014:PNR:2535053.2535249, DBLP:conf/icdm/ZhangWZLTZ16} use time as an additional dimension along with ratings. Information diffusion based works~\cite{NetInf,Connie,NetRate,NetInfluence} that model underlying {\em latent} influence or diffusion networks do not consider text. Some of them have strong assumptions in terms of known transmission rates, static and homogeneous transmission etc. Recent works on text-based diffusion~\cite{Wang2014, Du2013, HawkesTopic} alleviate some of these assumptions. However, they also make some assumptions regarding topics of diffusion being known, network being explicit etc. Most importantly, none of these works are geared for item recommendation and do not study the characteristics of review communities. Works in modeling influence in heterogeneous networks~\cite{DBLP:journals/datamine/LiuTHY12} and citations networks~\cite{DBLP:conf/icml/DietzBS07} assume the presence of explicit user-user links. Prior works on modeling influence propagation and cascades~\cite{Myers:2012:IDE:2339530.2339540} also consider a given network to propagate influence scores. Learning a latent influence network has been possible in the field of information propagation when observing cascades of events \cite{DBLP:journals/tkdd/Gomez-RodriguezLK12,DBLP:conf/icdm/ZhangWZLTZ16}. However, these works have not considered the setting where only review text is \todo[color=green]{Subho, it would be good if you look in the paper \cite{DBLP:conf/icdm/ZhangWZLTZ16}. I guess they have a good RW. We might want to cite some of their further papers. Just that no reviewer complains!} available, and no explicit networks. {\em In contrast to prior works, GhostLink learns the latent influence network solely from timestamped user reviews, without requiring any explicit user-user link/rating information}. It uses this network to improve item rating prediction considering implicit social influence. \begin{comment} However, recent works in These assume that a user may be influenced by her social circle to adopt/share an item Recent works exploit the content of reviews along with the ratings State-of-the-art recommender systems \cite{korenKDD2008, koren2011advances} harness user-user and item-item similarities by means of latent factor models. Time-dependent phenomena such as bursts in item popularity, bias in ratings, and the temporal evolution of user community are investigated in~\cite{KorenKDD2010, xiongSDM2010, XiangKDD2010}. There is also prior work on anomaly detection~\cite{GunnemannGF14,Gunnemann2014}, capturing changes of social links~\cite{MaWSDM2011} and linguistic norms~\cite{DanescuWWW2013}. None of these prior works take into account the evolving experience and behavior of individual users. Prior work that analyzed user review texts focused on sentiment analysis~\cite{linCIKM2009}, learning latent aspects and their ratings~\cite{wang2011, mukherjeeSDM2014, mcauleyRecSys2013}, and user-user interactions~\cite{West-etal:2014}. However, all of these prior approaches operate in a static, snapshot-oriented manner, without considering time at all. The work~\cite{mcauleyWWW2013}, one of our baselines, has modeled and studied the influence of evolving user experience on rating behavior and for targeted recommendations. However, it disregards the vocabulary in the users' reviews. Our own recent work~\cite{Subho:ICDM2015} addressed this very limitation, by means of language models that are specific to the experience level of an individual user, and by modeling transitions between experience levels with a Hidden Markov Model. However, both of these works are limited to {\em discrete} experience levels leading to abrupt changes in both experience and language model. To address the above, and other related drawbacks, the current paper introduces continuous-time models for the smooth evolution of both user experience and language model. Wang et al.~\cite{Wang2006} modeled topics over time. However, the topics themselves were constant, and time was only used to better discover them. Dynamic topic models have been introduced by Blei et al. in \cite{BleiDTM,BleiCTM}. This prior work developed generic models based on Brownian Motion, and applied them to news corpora. \cite{BleiCTM} argues that the continuous model avoids making choices for discretization and is also more tractable compared to fine-grained discretization. Our language model is motivated by the latter. We substantially extend it to capture evolving user behavior and experience in review communities using Geometric Brownian Motion. \end{comment} \begin{comment} Fazeli et al. [4] survey and compare the performance of different trust strength metrics on an explicit social network combined with SocialMF. Fang et al. [3] use support vector regression with matrix factorization to learn both the ratings and strengths from an explicit trust network. However, in order to train this model, a binary social network is still required. Despite the quadratic time complexity of evaluating relations, Guo et al. [6] study user-based collaborative filtering that recommends items using a trust network generated from predefined trust metrics. With the existence of extra features, Lin et al. [15] (rating time ) and Guo et al. [7] (text review) present methods to obtain implicit social networks. There are also several studies that apply matrix factorization techniques on implicit social networks [18]. And Social Regularization [16] reads implicit social networks generated by the evaluation of cosine similarity and Pearson correlation respectively together with predefined thresholds to determine the social connections. he given social connections can harm recommendation performance. These concerns emerges another research direction named implicit social recommendation, which aims at mining implicit user social relationships from historical rating data for better recommendations. Implicit social recommendation can be further divided into two types, one is to determine the social connection strength of the existing binary social network [3] [4] to enhance the quality of recommendation based on the given rating data. The other is data to generate an implicit social network from given historical ratings without any explicit social data [6] [15] [16] [20]. \end{comment}
\section{Introduction} \label{sec:intro} It is commonly accepted that every active galaxy harbors a super-massive black hole (BH) in its center \citep{ferra96,larkin16,kormendy13}. A likely mechanism for powering their relativistic jets is the release of the rotational energy of the BHs \citep{bla77}, which is referred to as the Blandford-Znajek (BZ) process, as demonstrated by general-relativistic (GR) magnetohydrodynamic simulations \citep{Koide:2002:Sci,McKinney:2012:MNRAS}. In the polar region of a BH magnetosphere, centrifugal force prevents accretion toward the rotation axis and a high vacuum is maintained \citep{hirose04}. In this nearly vacuum BH magnetosphere, electrons and positrons ($e^{\pm}$'s) are supplied via the collisions of MeV photons emitted from the equatorial, advection-dominated accretion flow (ADAF) \citep{ichimaru77,narayan94}. Particularly, when the mass accretion rate is much less than the Eddington rate, the collisions can no longer sustain a force-free magnetosphere \citep{levi11}. In such a charge-starved magnetosphere, an electric field appears along the magnetic field lines. In this vacuum gap, a portion of the BZ flux is dissipated as particle acceleration and radiation \citep{bes92,Hirotani:1998:ApJ,nero07,hiro16a}. Such a highly vacuum BH magnetosphere has been investigated by the particle-in-cell (PIC) scheme one-dimensionally along a radial magnetic field line \citep{Levinson:2018:AA,Chen:2018:ApJL,Chen:2020:ApJ,Kisaka:2020:arXiv}, and two-dimensionally in the poloidal plane \citep{Parfrey:2019:PhRvL,Crinquand:2020:PhRvL}. In the present paper, adopting a fixed magnetic field in the poloidal plane, we perform a two-dimensional, axisymmetric PIC simulation around a stellar-mass BH. We will examine the temporal evolution of the poloidal components of the electric field, toroidal component of the magnetic field, and the distribution functions of the electrons and positrons that are created and accelerated within the BH mangetosphere. After describing the background spacetime in \S~\ref{sec:geometry}, we consider the initial conditions of the PIC simulation in \S~\ref{sec:stationary}. Then we present the results of our 2-D GR PIC simulation in \S~\ref{sec:PIC}, demonstrating that the magnetohydrodynamic (MHD) approximations totally break down in the vicinity (i.e., at the jet-launching region) of rotating BHs that accrete plasmas at much lower rate than the Eddington rate. In the present paper we focus on a set of astrophysical cases, as described in \S~\ref{sec:nonstationary}. Code testing cases, including comparison with exact solutions (e.g., development of the electromagnetic field in a plasma-free spacetime, or propagations of charged particles in a fixed electromagnetic field) have been performed, but will be deferred to subsequent papers. Finally, in \S~\ref{sec:disc} we discuss the implication of astrophysical applications to the very-long-baseline-interferometric (VLBI) observations of supermassive BHs in the center of low-luminosity active galactic neclei. \section{Stationary magnetosphere} \label{sec:stationary} Let us begin with the stationary solution of the electromagnetic fields in a magnetosphere of a rotating black hole. We will use this stationary solution as the initial condition of time-dependent particle-in-cell (PIC) simulations. Throughout the present paper, we consider two-dimentional structure of the BH magnetosphere in which electrons and positrons are created and accelerated, assuming axisymmetry with respect to the rotation axis of the BH. \subsection[]{Background geometry} \label{sec:geometry} Around a rotating, non-charged BH, the background geometry is described by the Kerr metric \citep{kerr63}. In the Boyer-Lindquist coordinates \citep{boyer67}, the line element can be expressed as \begin{equation} ds^2= g_{tt} dt^2 +2g_{t\varphi} dt d\varphi +g_{\varphi\varphi} d\varphi^2 +g_{rr} dr^2 +g_{\theta\theta} d\theta^2, \label{eq:metric} \end{equation} where \begin{equation} g_{tt} \equiv -\frac{\Delta-a^2\sin^2\theta}{\Sigma}, \qquad g_{t\varphi} \equiv -\frac{2Mar \sin^2\theta}{\Sigma}, \label{eq:metric_2} \end{equation} \begin{equation} g_{\varphi\varphi} \equiv \frac{A \sin^2\theta}{\Sigma} , \qquad g_{rr} \equiv \frac{\Sigma}{\Delta} , \qquad g_{\theta\theta} \equiv \Sigma ; \label{eq:metric_3} \end{equation} $\Delta \equiv r^2-2Mr+a^2$, $\Sigma\equiv r^2 +a^2\cos^2\theta$, $A \equiv (r^2+a^2)^2-\Delta a^2\sin^2\theta$. In equations~(\ref{eq:metric})--(\ref{eq:metric_3}), we adopt the geometrized unit, putting $c=G=1$, where $c$ and $G$ denote the speed of light and the gravitational constant, The horizon radius, $r_{\rm H} \equiv M+\sqrt{M^2-a^2}$, is obtained by $\Delta=0$, where $M$ corresponds to the gravitational radius. The spin parameter becomes $a=M$ for a maximally rotating BH, and $a=0$ for a non-rotating BH. \subsection[]{The Zero Angular Momentum Observer (ZAMO)} \label{sec:ZAMO} As a fiducial observer, let us introduce the Zero Angular Momentum Observer (ZAMO), who is static in the poloidal plane ($r$,$\theta$) but rotates around the BH at the same angular frequency as the spacetime frame dragging frequency, $\omega \equiv -g_{t\varphi}/g_{\varphi\varphi}$. The tetrad of ZAMO reads \begin{equation} \tilde{\mbox{\boldmath$e$}}_{(\hat{t})} = \alpha^{-1} (\mbox{\boldmath$e$}_{(t)} +\omega \mbox{\boldmath$e$}_{(\varphi)}), \label{eq:rotationg_e0z} \end{equation} \begin{equation} \tilde{\mbox{\boldmath$e$}}_{(\hat{r})} = \sqrt{\frac{\Delta}{\Sigma}} \mbox{\boldmath$e$}_{(r)}, \label{eq:rotationg_e1z} \end{equation} \begin{equation} \tilde{\mbox{\boldmath$e$}}_{(\hat{\theta})} = \frac{1}{\sqrt{\Sigma}} \mbox{\boldmath$e$}_{(\theta)}, \label{eq:rotationg_e2z} \end{equation} \begin{equation} \tilde{\mbox{\boldmath$e$}}_{(\hat{\varphi})} = \frac{1}{\sqrt{g_{\varphi\varphi}}} \mbox{\boldmath$e$}_{(\varphi)}, \label{eq:rotationg_e3z} \end{equation} where \begin{equation} \alpha \equiv \frac{d\tau}{dt} = \frac{\rho_{\rm w}}{\sqrt{g_{\varphi\varphi}}} = \sqrt{\frac{\Delta\Sigma}{A}} \label{eq:lapse} \end{equation} denotes the lapse; $d\tau$ refers to the ZAMO proper time. The tilde (\,$\tilde{}$\,) represents a ZAMO-measured quantity and the caret (\,$\hat{}$\,) does that the component is projected on an orthnormal basis. At the horizon we have $\alpha=0$, while away from the BH we have $\alpha=1$. The coordinate bases are defined as \begin{equation} \mbox{\boldmath$e$}_{(t)} = \partial_t, \quad \mbox{\boldmath$e$}_{(r)} = \partial_r, \quad \mbox{\boldmath$e$}_{(\theta)} = \partial_\theta, \quad \mbox{\boldmath$e$}_{(\varphi)} = \partial_\varphi. \label{eq:coord_bases} \end{equation} We use the ZAMO to solve the electromagnetic fields in a stationary magnetosphere (\S~\ref{sec:stationary_laws}), as well as to present the current densities (figs.~\ref{fig:J1J2} \& \ref{fig:chJp}) and the particle distribution functions (figs.~\ref{fig:Distr_LF} \& \ref{fig:Distr_PA}) in a non-stationary magnetosphere (\S~\ref{sec:PIC}). \subsection{Gauss's and Biot-Savart laws} \label{sec:stationary_laws} To describe the stationary electromagnetic field, we should simultaneously solve the Gauss's law and the Biot \& Savart law. The expressions of these two laws are derived in the Appendix~\ref{sec:appendix_stationary}. For convenience, we replace the independent variable $r$ with the so-called \lq\lq tortoise coordinate'', $r_\ast$. It is defined by \begin{equation} \frac{d r_\ast}{dr}=\frac{r^2+a^2}{\Delta}. \label{eq:tortoise} \end{equation} In this coordinate, the horizon corresponds to $r_\ast \rightarrow -\infty$. At large distances, $dr_\ast/dr \rightarrow 1$. What is more, to avoid the sigular behaviour due to the $\csc\theta$ factors in the Gauss's law (\ref{eq:Poisson_3}) and the Biot and Savart law (\ref{eq:BS_2}) at the poles (i.e., at $\theta=0$ and $\pi$), we introduce a new meridional variable, $y \equiv 1-\cos\theta$. Adopting this $y$ coordinate, we obtain \begin{equation} \frac{1}{\sin\theta}\frac{\partial}{\partial\theta} = \frac{\partial}{\partial y} \label{eq:xi} \end{equation} To avoid the change of the type of the second-order partial differential equation (specifically, the Biot and Savart law) at the static limit, we replace the electro-static potential $A_t$ with the ZAMO-measured value, $A_{\hat{t}}$. The tetrad of ZAMO (eqs.~[\ref{eq:rotationg_e0z}]--[\ref{eq:rotationg_e3z}]) gives \begin{equation} A_t= \alpha A_{\hat{t}} - \omega A_\varphi \label{eq:At} \end{equation} and \begin{equation} A_\varphi= \sqrt{g_{\varphi\varphi}} A_{\hat{\varphi}}. \label{eq:Aphi} \end{equation} Using $A_{\hat{t}}$, $A_\varphi$, $r_\ast$ and $y$, the Biot and Savart law (eq.~[\ref{eq:BS_2}]) can be rewritten as \begin{eqnarray} -\lefteqn{ \frac{\Delta\Sigma}{A} \frac{\partial^2 A_\varphi}{\partial r_\ast{}^2} + C_1 \frac{\partial A_\varphi}{\partial r_\ast} -\frac{\Delta^2 \Sigma \sin^2\theta} {(r^2+a^2)^2 A} \frac{\partial^2 A_\varphi}{\partial y^2} } \nonumber\\ && +\left( \frac{\Delta}{r^2+a^2} \right)^2 \frac{4 M a^2 r \sin^2\theta \cos\theta} {\Sigma A} \frac{\partial A_\varphi}{\partial y} +C_0 A_\varphi \nonumber\\ && +\frac{2 M a r \sin^2\theta}{\Sigma} \frac{\partial^2}{\partial r_\ast{}^2} (\alpha A_{\hat{t}}) + C_2 \frac{\partial}{\partial r_\ast} (\alpha A_{\hat{t}}) \nonumber\\ && +\frac{2 M a r \Delta \sin^2\theta} {(r^2+a^2)^2 \Sigma} \frac{\partial^2}{\partial y^2}(\alpha A_{\hat{t}}) \nonumber\\ && +\frac{4 M a r \Delta \sin^2\theta \cos\theta} {(r^2+a^2) \Sigma^2} \frac{\partial}{\partial y}(\alpha A_{\hat{t}}) \nonumber\\ && = 4 \pi \left( \frac{\Delta}{r^2+a^2} \right)^2 \Sigma \sin^2\theta \cdot J^\varphi, \label{eq:BS_3} \end{eqnarray} where \begin{eqnarray} \lefteqn{ C_0 \equiv -\frac{2 M a r \Delta \sin^2\theta}{(r^2+a^2)^2 \Sigma} \left\{ \right. \Delta (\partial_r{}^2 \omega) } \nonumber\\ && +\left[ \frac{(r^2-a^2\cos^2\theta)\Delta}{r\Sigma} -\frac{2M(r^2-a^2)}{r^2+a^2} \right] (\partial_r \omega) \nonumber\\ && +\sin^2\theta (\partial_y{}^2 \omega) \nonumber\\ && +\frac{2(r^2+a^2)\cos\theta}{\Sigma} (\partial_y \omega) \left. \right\}, \label{eq:def_c0} \end{eqnarray} \begin{eqnarray} \lefteqn{ C_1 \equiv \frac{2 \Delta}{(r^2+a^2)^2 \Sigma^2} } \nonumber\\ && \times \left[ \right. -(r-M)(r^2+a^2)\Sigma + \frac{(r^2+a^2)\Sigma}{A} C_3 \qquad\qquad \nonumber\\ && \qquad +(\Delta-a^2\sin^2\theta) r a^2 \sin^2\theta +\frac{4M^2 a^4 r^3 \sin^4\theta}{A} \nonumber\\ && \qquad + 2 M a r \sin^2\theta (r^2+a^2) \Sigma (\partial_r \omega) \left. \right] , \label{eq:def_c1} \end{eqnarray} \begin{eqnarray} \lefteqn{ C_2 \equiv \frac{2 M a \sin^2\theta}{(r^2+a^2)^2 \Sigma^2} } \nonumber\\ && \times \left[ \right. -r^6 +a^2(-2+\cos^2\theta) r^4 + 4Ma^2 \sin^2\theta r^3 \nonumber\\ && \qquad +a^4 (1-2\sin^2\theta) r^2 + a^6 \cos^2\theta \left. \right] , \label{eq:def_c2} \end{eqnarray} \begin{eqnarray} \lefteqn{ C_3 \equiv (r-M)A +a^2 \sin^2\theta } \nonumber\\ && \times \left[ -r^3 -3Mr^2+(2M^2-a^2\cos^2\theta)r +a^2 M \cos^2\theta \right] . \nonumber\\ \label{eq:def_c3} \end{eqnarray} It follows that all the four highest-order derivative terms of equation~(\ref{eq:BS_3}) have definite signs; thus, the Biot-Savart law now become an elliptic type in the entire simulation region, because we adopt a physical observer, ZAMO. The Gauss's law~(\ref{eq:Poisson_3}) can also be re-written with respect to $\alpha A_{\hat{t}}$ and $A_\varphi$. Multiplying $\left[ \Delta / (r^2+a^2) \right]^2$ on both sides, we obtain \begin{eqnarray} && \frac{A}{\Sigma^2} \frac{\partial^2 (\alpha A_{\hat{t}})}{\partial r_\ast^2} +D_1 \frac{\partial (\alpha A_{\hat{t}})}{\partial r_\ast} -\frac{\Delta A}{(r^2+a^2)\Sigma^2} \left( \partial_r \omega \right) \frac{\partial A_\varphi}{\partial r_\ast} \nonumber\\ && +\frac{\Delta A \sin^2\theta}{(r^2+a^2)^2 \Sigma^2} \frac{\partial^2 (\alpha A_{\hat{t}})}{\partial \xi^2} \nonumber\\ && +\frac{2\Delta \cos\theta}{(r^2+a^2)^2 \Sigma^3} \left[ (r^2+a^2)A-a^2 \Delta \Sigma \sin^2\theta \right] \frac{\partial (\alpha A_{\hat{t}})}{\partial \xi} \nonumber\\ && +\frac{2 \Delta \sin^2\theta}{(r^2+a^2)^2 \Sigma^2} \left[ -A (\partial_\xi \omega) +\Delta a^2\cos\theta \cdot \omega \right] \frac{\partial A_\varphi}{\partial \xi} \nonumber\\ && + D_0 A_\varphi = 4\pi \rho \left( \frac{\Delta}{r^2+a^2} \right)^2 , \label{eq:Poisson_4} \end{eqnarray} where \begin{eqnarray} \lefteqn{ D_0 \equiv \frac{\Delta A}{(r^2+a^2)^2 \Sigma^2} \biggl\{ \biggr. -\Delta (\partial_r{}^2 \omega) -\sin^2\theta (\partial_\xi{}^2 \omega) } \nonumber\\ && +\left[ 2(r-M)-\Delta \partial_r \left( \ln\frac{\Delta A} {\Sigma} \right) \right] (\partial_r \omega) \nonumber\\ && -\frac{2\cos\theta}{\Sigma} \left[ r^2+a^2-\frac{\Delta \Sigma}{A} a^2 \sin^2\theta \right] (\partial_\xi \omega) \biggl. \biggr\} \quad \label{eq:Def_D0} \end{eqnarray} and \begin{eqnarray} \lefteqn{ D_1 \equiv -\frac{A}{(r^2+a^2)\Sigma^2} } \nonumber\\ && \times \left[ 2(r-M)-\Delta \partial_r \left( \ln \frac{(r^2+a^2)A}{\Sigma} \right) \right] \label{eq:Def_D1} \end{eqnarray} Note that $\partial^2 A_\varphi / \partial r_\ast{}^2$ and $\partial^2 A_\varphi / \partial \xi^2$ terms vanish in equation~(\ref{eq:Poisson_4}). \subsection[]{Boundary conditions} \label{sec:BDCs} We search for the stationary solution that satisfy the Gauss's law (\ref{eq:Poisson_4}) and the Biot and Savart law (\ref{eq:BS_3}). To this end, we must impose boundary conditions $\alpha A_{\hat{t}}$ and $A_\varphi$, or equivalently on $A_t$ and $A_\varphi$. In the present paper, we solve these two equations in the first and fourth quadrants of the poloidal plane ($r$,$\theta$). The region is bordered by \\ $\bullet$ the polar boundary at $\theta=0$ (north polar axis) and at $\theta=\pi$ (south polar axis),\\ $\bullet$ the inner boundary at $r=r_{\rm H}$, and \\ $\bullet$ the outer oundary at $r=r_{\rm out}$, where $r_{\rm out} \gg r_{\rm g} \equiv GMc^{-2}=M$.\\ In this subsection, we describe the boundary conditions at these three boundaries. At the northern and southern polar boundaries, we impose $E_\theta=\partial_\theta A_t=0$, that is, a Neumann condition on $A_t$. We also impose $B^\theta \propto F_{\varphi r}=-\partial_r A_\varphi=0$. Thus, we put $A_\varphi=0$ at $\theta=0$ and $\pi$ and measure the magnetic flux function $A_\varphi$ from the rotation axis. At the inner boundary, we impose $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} =0$ and $F_{\varphi r}=0$, where $F^{r\theta}=0$ is assumed at $t=0$. For instance, in ZAMO, these conditions constrain that both the radial component of the electric field and the meridional component of the magnetic field vanish at the inner boundary. At the outer boundary ($r=r_{\rm out} \gg M$), we impose $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} \propto (\partial_r A_t) (\partial_\theta A_\varphi) -(\partial_\theta A_t) (\partial_r A_\varphi) =0$ and $\partial_r A_\varphi=0$, the latter of which comes from the assumption of a split-monopole magnetic field, $J^\varphi{}_{\rm eq} \propto r^{-4}$. Thus, in the present paper we impose the Neumann condition, $\partial_r A_t=0$. However, in general, if we impose the magnetic field direction, $\partial_r A_\varphi / \partial_\theta A_\varphi$ (e.g., if we adopt a paraboloidal magnetic field, see BZ77 for details), $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} =0$ constrains the direction of the $A_t={\rm constant}$ surface at the outer boundary. \subsection[]{Disk toroidal current} \label{sec:disk_current} We assume that the plasmas in an ADAF produce a toroidal current $J^\varphi{}_{\rm eq} = C_{\rm eq} r^{-4}$ near the equator all the way to the horizon within the colatitudes $87^\circ < \theta < 93^\circ$; outside of this equatorial region, $J^\varphi{}_{\rm eq}=0$ is assumed. For a slowly rotating BH, this disk current produces a split-monopole magnetic field (BZ77). The normalization factor $C_{\rm eq}$ is adjusted so that the meridionally averaged, ZAMO-measured radial magnetic field strength at $r=2M$, \begin{equation} \langle B^r(2M) \rangle \equiv \frac{\int_0^\pi \tilde{B}^{\hat{r}}(2M,\theta) \sqrt{A}\sin\theta d\theta} {\int_0^\pi \sqrt{A}\sin\theta d\theta}, \label{eq:Br_avr} \end{equation} may match a fraction of the equipartition field strength \citep{Yuan:2014:ARA&A}, \begin{equation} B_{\rm eq}(r) = 9.7 \times 10^7 \left( \frac{\dot{m}}{M_1} \right)^{1/2} \left( \frac{r}{2M} \right)^{-5/4} \mbox{ G}, \label{eq:B_eq} \end{equation} which is obtained if there is an equipartition between the magnetic field energy density and the plasma internal energy density; the alpha viscous parameter is assumed to be $0.3$. The dimensionless accretion rate $\dot{m}$ is defined by \begin{equation} \dot{m} \equiv \frac{\dot{M}}{\dot{M}_{\rm Edd}}, \end{equation} where $\dot{M}$ denotes the mass accretion rate. The Eddington accretion rate is defined by \begin{equation} \dot{M}_{\rm Edd} \equiv \frac{L_{\rm Edd}}{\eta_{\rm eff} c^2} = 1.39 \times 10^{19} M_1 {\rm g \ s}^{-1}, \end{equation} where $L_{\rm Edd}$ denotes the Eddington luminosity; the conversion efficiency is assumed to be $\eta_{\rm eff}=0.1$. In the present paper, we adopt a relatively small mass accretion rate, $\dot{m}=2.25 \times 10^{-4}$, and $\langle B^r(2M) \rangle = B_{\rm eq}(2M)$. \subsection[]{Stationary solution} \label{sec:stationary_EM} In the present paper, we consider a ten solar-mass BH, $M_1 \equiv M/(10M_\odot)=1$, and solve equations~(\ref{eq:BS_3}) and (\ref{eq:Poisson_4}) iteratively in the region $r_{\rm H} < r \le 20M$ and $0 \le \theta \le \pi$. The solved electromagnetic fields are presented in figure~\ref{fig:Epara_0}. In each panel, thin black curves show the equi-$A_\varphi$ contours, which indicate the poloidal magnetic field lines for a distant static observer if $a=0$. We superpose $ \mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} / [B_{\rm eq}(r=2M)]^2 $, whose values are indicated in the color code. The left and right panels correspond to the cases of $a=0$ and $a=0.9M$, respectively. In the left panel, $ \mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} = 0 $ holds everywhere; thus, the background color is entirely white. It follows that the poloidal magnetic field becomes radial for a slowly rotating BH when $J^\varphi{}_{\rm eq} \propto r^{-4}$, which is consistent with the analytical conclusion \citep{bla77,mck07}. Because of $a=0$, there exists no magnetic-field-aligned electric field, $E_\parallel$; thus, $A_\varphi$ is solved only from the Biot-Savart law. However, as the BH spins up (i.e., if $a \ne 0$), the right panel shows that there arises a non-vanishing $E_\parallel$ due to the relative rotational motion of the magnetic field lines with respect to the space time. At the same time, equi-$A_\varphi$ lines deviates from a radial shape, as the solid curves indicate. For a rapidly rotating case, $a=0.9M$, the magnetic field lines are laterally pushed toward the rotation axis in the ergosphere \citep{Tanabe:2008:PhRvD,Tchekhovskoy:2010:ApJ}. Note that the magnetosphere is assumed to be vacuum in the present analysis, while it is force-free (i.e., plasma-filled) in \citet{bla77,Tanabe:2008:PhRvD,Tchekhovskoy:2010:ApJ}. To examine how the magnetic field lines are actually deformed for a physical observer, we adopt ZAMO and plot in figure~\ref{fig:Bp} the radial ($\tilde{B}^{\hat{r}}$, left panel) and the meridional ($\tilde{B}^{\hat\theta}$, right panel) components of the magnetic field, where their explicit expressions are given by equations~(\ref{eq:Br_ZAMO}) and (\ref{eq:Bth_ZAMO}). It shows that $\vert \tilde{B}^{\hat{\theta}} \vert$ is kept below $\vert \tilde{B}^{\hat{r}} \vert$ except for the equatorial region where $\tilde{B}^{\hat{r}}$ vanishes by symmetry. To compare with slowly rotating cases, we adopt the same power-law in the disk current, $J^\varphi{}_{\rm eq} \propto r^{-4}$, for all the cases of $a$. To avoid a substantial $\vert \tilde{B}^{\hat{\theta}} \vert$ in the lower-latitude ergosphere, we could adopt another functional form of $J^\varphi{}_{\rm eq}$ for $a \ne 0$; however, such a fine-tuning is out of the scope of the present paper. We assume a positive $J^\varphi{}_{\rm eq}$; thus, $F_{\theta\varphi}$ (i.e., the radial component of the magnetic field) is positive (or negative) in the northern (or southern) hemisphere. Accordingly, a positive (or a negative) sign of $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$}$ indicates that the electric field points outwards in the northern (or southern) hemisphere. Thus, as plasmas are created (via photon-photon collisions) near the horizon, $r<2M$, electrons (or positrons) are accelerated inwards (or outwards) in both hemispheres in this stationary solution. Such accelerated electrons and positrons produce electric currents in the magnetosphere, whose poloidal components modifies the poloidal electric field through the Ampere's law. In the next section, we will focus on the temporal evolution of the electromagnetic fields and the particle distribution functions starting from the initial conditions described in this section. \begin{figure*} \hspace{4.3cm} \includegraphics[width=0.5\textwidth]{fig_A3EB.pdf} \caption{ Stationary equi-$A_\varphi$ contours (solid curves) and the distribution of $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} / B_{\rm eq}(2M){}^2$ (color) on the poloidal plane ($r$,$\theta$). The equitorial current density is assumed to depend on $r$ as $J^\varphi{}_{\rm eq} \propto r^{-4}$. The ordinate represents the distance along the rotational axis of the BH, while the abscissa does the distance $r\sin\theta$ from the rotation axis, Both axes show the lengths in the Boyer-Lindquist coordinates, normalized by the gravitational radius, $r_{\rm g}=GMc^{-2}=M$. The equatorial plane corresponds to the ordinate of $0$. The black filled circle shows the BH. Left panel shows the equi-$A_\varphi$ contours when $a=0$, in which case the solid curves denote the magnetic field lines measured by a distant static observer. For $a=0$, there arises no electric fields along the magnetic field lines; thus, the background color is entirely white. Right panel shows the case of $a=0.9M$. The amplitude of $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$}$ increases with increasing $a$, because the frame-dragging effect increases with increasing BH's spin, $a$. } \label{fig:Epara_0} \end{figure*} \begin{figure*} \vspace*{-0.0truecm} \hspace{3.8cm} \includegraphics[width=0.6\textwidth]{fig_BrBth_init.pdf} \vspace*{-0.0truecm} \caption{ Poloidal magnetic field measured in ZAMO at $t=0$ for $a=0.9M$. The abscissa and ordinate are common with figure~\ref{fig:Epara_0}, but the BH vicinity is closed up. Left and right panels show $B^{\hat r}({\rm ZAMO})$ and $B^{\hat{\theta}}({\rm ZAMO})$, respectively, in Gauss. } \label{fig:Bp} \end{figure*} \section{The particle-in-cell (PIC) scheme} \label{sec:PIC} Let us look briefly at the collisionless nature of the plasmas in \S~\ref{sec:collisionless}, before turning to a closer examination of the temporal evolution of the BH magnetosphere in the rest of this section. \subsection{Collisionless plasmas} \label{sec:collisionless} Denoting the density of a pair plasma with $n_\pm = \kappa n_{\rm GJ}$, we can express the collision frequency as \begin{equation} \nu_{\rm c} \sim \kappa n_{\rm GJ} \sigma c, \label{eq:nu_c1} \end{equation} where $\sigma$ refers to the collisional cross section, and \begin{equation} n_{\rm GJ} \equiv \omega_{\rm H} B / (4\pi ce) \label{eq:nGJ} \end{equation} denotes the Goldreich-Julian (GJ) number density, which is rotaionally induced; $e$ denotes the charge on the electron. If the plasma density is comparable to the GJ value, $\kappa$ becomes of the order of unity. Let us evaluate the cross section by $\sigma \sim \pi l_{\rm c}{}^2$, where $l_{\rm c}$ denotes the typical impact parameter. Equating the potential and kinetic energies, we find $e^2/l_{\rm c} \sim (\gamma-1) m_{\rm e} c^2$. Then combining it with equation~(\ref{eq:nu_c1}), we obtain \begin{equation} \nu_{\rm c} \sim \frac{\kappa \pi n_{\rm GJ} e^4} {\gamma^2 m_{\rm e}{}^2 c^3} \label{eq:nu_c2}, \end{equation} where $\gamma \gg 1$ is assumed. On the other hand, the gyro frequency is given by \begin{equation} \nu_{\rm B}= \frac{e B}{2\pi \gamma m_{\rm e} c}. \label{eq:nu_B} \end{equation} We thus obtain \begin{equation} \frac{\nu_{\rm c}}{\nu_{\rm B}} \sim \frac{\pi}{4} \frac{\kappa}{\gamma} \frac{a}{M} \frac{r_0}{r_{\rm H}}, \label{eq:collisionless} \end{equation} where $r_0 \equiv e^2/(m_{\rm e} c^2)$ denotes the classical electron radius. Since $r_0 / r_{\rm H} \sim 10^{-19} M_1{}^{-1}$ holds, we can conclude that the collision frequency is much less than the gyro frequency, and that the assumption of collisionless plasmas in the PIC scheme is justified. This conclusion solely comes from the fact that the GJ density corresponds to a high vacuum in BH or pulsar magnetospheres. It should be noted that the Ohm's law, which is necessary to close the system of equations in MHD, requires that many collisions should take place within a single gyration. However, equation~(\ref{eq:collisionless}) shows this assumption cannot be justified in a black hole magnetosphere unless the plasma density greatly exceeds the GJ value. In the present paper, we thus construct the electric current from the actual motion of charged particles, adopting the PIC scheme. By this method, we can, for instance, incorporate the currents carried by the drift motion of charged particles, as well as the anisotropic distribution of particles' momenta (e.g., along and perpendicular to the magnetic field lines). \subsection[]{The Maxwell equations} \label{sec:Maxwell} In the present paper, we assume $F_{\varphi t}=0$ throughout the simulations. Accordingly, together with $\partial_\varphi=0$ for all the quantities, we find $\partial_t F_{\theta\varphi}=0$ and $\partial_t F_{\varphi r}=0$ from the Faraday's law. Thus, in the present paper, we treat both $B^r$ and $B^\theta$ are constant in time (and hence unchanged from the initial condition), and solve the temporal evolution of only $F_{r t}$, $F_{\theta t}$, and $F^{r \theta}$. This assumption of $F_{\varphi t}=0$ is justifined as long as the toroidal currents curried by the simulated particles are small compared to the stationary, equatorial toroidal current that is carried by the accreting plasmas. Under this assumption, the Faraday and Ampere's laws give \begin{equation} \frac{\partial F^{r \theta}} {\partial t} = -\frac{\Delta}{\Sigma^2} \left( \frac{\partial F_{\theta t}}{\partial r} -\frac{\partial F_{r t}}{\partial \theta} \right), \label{eq:Maxwell_1} \end{equation} \begin{equation} \frac{\partial F_{r t}} {\partial t} = \frac{\Sigma^2}{A} \left[ \frac{1}{\sqrt{-g}} \frac{\partial}{\partial \theta} \left( \sqrt{-g} F^{r \theta} \right) -4\pi J^r \right], \label{eq:Maxwell_2} \end{equation} \begin{equation} \frac{\partial F_{\theta t}} {\partial t} = \frac{\Sigma^2 \Delta}{A} \left[-\frac{1}{\sqrt{-g}} \frac{\partial}{\partial r} \left( \sqrt{-g} F^{r \theta} \right) -4\pi J^\theta \right]. \label{eq:Maxwell_3} \end{equation} Replacing the $r$ derivatives with $r_\ast$ derivatives, and introducing the following dependent variables, \begin{eqnarray} B && \equiv \sqrt{-g} F^{r \theta}, \nonumber\\ D && \equiv F_{r t}, \nonumber\\ E && \equiv F_{\theta t} \sin\theta, \label{eq:def_EDB} \end{eqnarray} all of which are well-behaved at the horizon, we can rewrite the three Maxwell equations~(\ref{eq:Maxwell_1})--(\ref{eq:Maxwell_3}) into \begin{equation} \frac{\partial B}{\partial t} = -c_1 \frac{\partial E}{\partial x} +c_2 \frac{\partial D}{\partial y} \label{eq:Maxwell_1b} \end{equation} \begin{equation} \frac{\partial D}{\partial t} = c_3 \frac{\partial B}{\partial y} -4\pi \frac{\Sigma^2}{A} J^r \label{eq:Maxwell_2b} \end{equation} \begin{equation} \frac{\partial E}{\partial t} = -c_4 \frac{\partial B}{\partial x} -4\pi \frac{\Sigma^2}{A} \Delta\sin\theta \cdot J^\theta \label{eq:Maxwell_3b} \end{equation} where \begin{equation} x \equiv r_\ast, \qquad y \equiv 1-\cos\theta, \end{equation} \begin{equation} c_1 \equiv \frac{r^2+a^2}{\Sigma}, \qquad c_2 \equiv \frac{\Delta \sin^2\theta}{\Sigma}, \nonumber \end{equation} \begin{equation} c_3 \equiv \frac{\Sigma}{A}, \qquad c_4 \equiv \frac{\Sigma (r^2+a^2)}{A}. \label{eq:app_c4} \end{equation} All the coefficients $c_1$, $c_2$, $c_3$, and $c_4$ are positive definite. Both $c_1$ and $c_4$ are close to unity in the entire region. However, $c_2 \ll 1$ holds near the horizon or near the pole but $c_2 \rightarrow 1$ at $r \gg M$ in the lower latitudes. We have $c_3 \ll 1$ at $r \gg M$ and $c_3 \approx 0.25$ near the horizon. Note that the singular behaviour (i.e., the polynomial pole) at $\theta=0$ in equations~(\ref{eq:Maxwell_2}) and (\ref{eq:Maxwell_3}) is eliminated by introducing a new meridional coordinate, $y$. To solve these three Maxwell equations (eqs.~[\ref{eq:Maxwell_1b}]--[\ref{eq:Maxwell_3b}]), we must impose appropriate boundary conditions that are consistent with the initial, stationary state (\S\ref{sec:BDCs}). Along the northern and southern polar axes (i.e., at $\theta=0$ and $\theta=\pi$), we impose \begin{equation} B = 0, \quad \frac{\partial D}{\partial y} = 0, \quad E = 0. \label{eq:BDC_1} \end{equation} At the inner boundary, $x=r_\ast=-\infty$, we impose \begin{eqnarray} \frac{\partial B}{\partial x} = 0, & & \quad \tilde{E_{\hat{r}}} \propto D+\omega F_{r \varphi}=0, \nonumber \\ & &\frac{\partial E}{\partial x} = \left( \frac{\partial E}{\partial x} \right)_{t=0}, \label{eq:BDC_3} \end{eqnarray} where the quantity within $()_{t=0}$ indicates the initial value at $t=0$. At the outer boundary, $x=r_\ast= r_{\rm out}$, we impose \begin{equation} \frac{\partial B}{\partial x} = 0, \quad D = 0, \quad \frac{\partial E}{\partial x} = \left( \frac{\partial E}{\partial x} \right)_{t=0}. \label{eq:BDC_4} \end{equation} Note that we set $B=0$ in the entire region at $t=0$. \subsection[]{Particle equation of motion} \label{sec:EOM} In a highly vacuum BH magnetosphere, charged leptons are deccelerated by the radiation-reaction forces. To include these forces from the first principles, we must adopt tiny time steps and consider the force on one part of the charge by the fields of another part, taking account of retardation within the particle itself. However, in actual simulations, it is unrealistic to adopt such tiny timesteps. Thus, as a compromize, we include the radiation reaction force as a friction term in the equation of motion (EOM). With such a friction term, the EOM can be expressed as \citep{Thorne:1982:MNRAS,Hughes:1994:PhRvD,Bacchini:2018:ApJS} \begin{eqnarray} \frac{du_i}{dt} = &-&\alpha u^t \partial_i \alpha +u_k \partial_i \beta^k -\frac12 \frac{u_j u_k}{u^t} \partial_i g^{jk} \nonumber\\ &+& \frac{q}{m} \left( F_{it}+F_{ij}\frac{u^j}{u^t} \right) +\frac{(F_{\rm rad})_i}{u^t}, \label{eq:EOM} \end{eqnarray} where $\alpha$ is defined by equation~(\ref{eq:lapse}), $\beta^r=\beta^\theta=0$, $\beta^\varphi= g_{t\varphi}/g_{\varphi\varphi}$, $q/m$ refers to the charge-to-mass ratio, and \begin{equation} u^t= \frac{\sqrt{1+g^{jk} u_j u_k}}{\alpha}. \label{eq:ut} \end{equation} The Latin indices run over 1, 2, 3. We can evaluate the radiation-reaction force by the covariant form, \citep[\S~17.8 of][]{jackson62} \begin{equation} F_{\rm rad}^j = \frac{2}{3} \frac{r_0}{c} \left( \frac{d^2 u^j}{d\lambda^2} +u^j \frac{du^\nu}{d\lambda} \frac{du_\nu}{d\lambda} \right), \label{eq:def_Prad} \end{equation} where $r_0$ denotes the classical electron radius, $\lambda$ does the particle's proper time, and the Greek indices run over 0, 1, 2, 3. This radiation reaction force includes the effects of any kind of acceleration acting on the particles. For instance, photon emissions as a result of the acceleration in an electro-magnetic field (e.g., via the synchro-curvature process) and that in a gravitational field, are included (Appendix~\ref{sec:app_covaniance}). However, this radiation-reaction force does not include the effect of inverse-Compton scatterings, which should be considered separately in future works. Definition of the four velocity $u^\mu$ gives \begin{equation} \frac{dr}{dt} = \frac{u^r}{u^t}, \label{eq:drdt} \end{equation} \begin{equation} \frac{d\theta}{dt} = \frac{u^\theta}{u^t}, \label{eq:dthdt} \end{equation} and \begin{equation} \frac{d\varphi}{dt} = \frac{u^\varphi}{u^t}. \label{eq:dphdt} \end{equation} For presentation purpose, we can convert $u_i= dx_i/d\lambda$ in terms of the ZAMO-measured spatial velocity, $u_{\hat j}$, as described in Appendix~\ref{sec:app_ZAMO}. We integrate both equations~(\ref{eq:EOM}) and (\ref{eq:drdt})--(\ref{eq:dphdt}) in the phase space with the global time variable $t$, which corresponds to the proper time of a distant static observer (i.e., us). Let us briefly describe the boundary conditions on the motion of electrons and positrons. Due to the symmetry, we assume that the particles moving across the polar axis (at $\theta=0$ or $\pi$) will be reflected equator-ward with opposite meridional velocity. Both the inner and outer boundaries are treated as particle sinks. Thus, when particles move accross these two radial boundaries, they are excluded from the simulation. \subsection[]{Plasma supply} \label{sec:supply} In BH magnetospheres, pairs can be supplied via two-photon and/or one-photon (i.e., magnetic) pair production processes. In the present paper, we focus on the former process, and consider the collisions of MeV photons emitted from the equatorial ADAF via Bremsstrahlung. In subsequent papers, we will also consider the collisions between the gap-emitted (inverse-Compton) photons and the ADAF-emitted (synchrotron) photons. The pair supply rate (pairs per second per volume) is given by \begin{equation} \dot{N}_\pm = c \sigma_{\gamma\gamma} n_\gamma{}^2, \label{eq:dotN} \end{equation} where $\sigma_{\gamma\gamma}$ denotes the total cross section of photon-photon pair production, and $n_\gamma$ does the MeV photon density. Adopting the Newtonian self-similar ADAF model \citep{Mahadevan:1997:ApJ}, and assuming that the most energetic MeV photons are emitted within $r=4M$, we obtain (Appendix~\ref{sec:app_ADAF}) \begin{equation} \dot{N}_\pm \approx 1.0 \times 10^{24} \dot{m}^4 M_1{}^{-2} \max \left[ \left(\frac{r}{4M}\right)^{-4}, 1 \right] \label{eq:dotN2} \end{equation} in cgs unit (i.e., in $\mbox{pairs s}^{-1}\mbox{ cm}^{-3}$). We randomly introduce a macro particle in each cell at every time step with probability $1/k_{\rm create}=0.1$; that is, particles are injected in each cell at every $k_{\rm create}=10$ time steps on average. In this case, each created macro positron or electron has the electric charge \begin{equation} q_i= \pm e \dot{N}_\pm k_{\rm create} \Delta_t \Delta_V, \end{equation} where $\Delta_t$ denotes the interval of each time step, and $\Delta_V$ the invariant volume of each cell. Note that $\Delta_t \Delta_V= \sqrt{-g} dt dr d\theta d\varphi = 2\pi \sqrt{-g} \Delta_t \Delta_r \Delta_\theta$ holds, where $\Delta_r$ and $\Delta_\theta$ denote the intervals in Boyer-Linquist radial and meridional coordinates, both of which are non-uniformly gridded. In the initial state, there are no macro particles in any cell. As the PIC simulation proceeds, the number of macro particles increases with $t$ to saturate at a few hundred in each PIC cell on average. Here, the maximum value of the Courant number is set to be $0.5$ for uniform grid intervals in $x=r_\ast$ and $y=1-\cos\theta$ coordinates. In total, there are about $3 \times 10^8$ macro particles in the entire simulation region. \subsection[]{Nonstationary magnetosphere} \label{sec:nonstationary} It is checked {\it a posteriori} (\S~\ref{sec:disc_resolution}) that the invariant grid intervals resolve the skin depth \begin{equation} l_{\rm p}= \frac{c}{\omega_{\rm p}}, \label{eq:skin_depth} \end{equation} at every point at any elapsed time, where the plasma frequency $\omega_{\rm p}$ is computed by the plasma density $n_\pm$ and its mean Lorentz factor $\langle\gamma\rangle$ as \begin{equation} \omega_{\rm p}= \sqrt{\frac{4\pi e^2 n_\pm}{m_{\rm e} \langle \gamma \rangle}}. \label{eq:plasma_freq} \end{equation} For stellar-mass BHs, we obtain \begin{equation} \frac{l_{\rm p}}{r_{\rm g}} = 6.84 \times 10^{-2} k \gamma_6{}^{1/2} \left( \frac{a}{M} \cdot \frac{B}{B_{\rm eq}} \right)^{-1/2} M_1{}^{-1/4}, \label{eq:sdepth_2} \end{equation} where $\gamma_6 \equiv \langle \gamma \rangle / 10^6$, and \begin{equation} k \equiv \kappa^{-1/2} \dot{m}^{-1/4} \left( \frac{r}{r_{\rm H}} \right)^{5/8} \left( \frac{m_{\rm e}}{m_{\rm p}/1836} \right)^{1/2}; \end{equation} $m_{\rm p}$ denotes the rest mass of the proton. To resolve the skin depth, $l_{\rm p}$, most PIC simulations are performed in one or two dimensions with small ion-to-electron mass ratios (e.g., \citet{Bohdan:2019:ApJ}), covering limited time and length scales. In the present paper, adopting a heavy electron mass of $m_{\rm e}= m_{\rm p}/20$, we perform a two-dimensional PIC simulation within about ten gravitational radii over the time duration of several hundred dynamical time scales. We rescale every expression according to this modified electron mass. Under this assumption of heavy charged leptons, we need about $10^3$ grid points to fully resolve the skin depth. On these grounds, we adopt a radial grid of 960 uniform cells between $-15.81807 < x < 12.82443$, which corresponds to $1.46514M < r < 13.68548M$, and 1920 uniform cells between $0 < y=1-\cos\theta < 2$, which corresponds to $0^\circ < \theta < 180^\circ$. We present the explicit expression of the discretized Maxwell equations and the particle equation of motion in Appendix~\ref{sec:discretization}. To construct the electric currents from particle motion (Appendix~\ref{sec:current}), we adopt the area weighting \citep{villa92}. \subsubsection{Electromagnetic fields} \label{sec:acc} We start with the magnetic-field-aligned electric field. In figure~\ref{fig:Epara_1}, we present the distribution of $ \mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} / [B_{\rm eq}(2M)]^2$. For $M=10 M_\odot$ and $\dot{m}=2.25 \times 10^{-4}$, we obtain $B_{\rm eq}(2M)= 1.452 \times 10^{6}$~G. The left and right panels show the distribution of $ \mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} / [B_{\rm eq}(2M)]^2$ at $t=0$ and $t=430 M$, respectively. In the northern hemisphere, since $F_{\theta\varphi}>0$ holds, a positive (or a negative) sign of $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$}$ indicates that an outward (or an inward) electric field arises along the local magnetic field lines. In the same manner, in the southern hemisphere, it follows from $F_{\theta\varphi}<0$ that a positive (or negative) sign of $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$}$ means an inward (or an outward) electric field. \begin{figure*}[t] \vspace*{-0.0truecm} \hspace{2.8cm} \includegraphics[width=0.7\textwidth]{fig_EB.pdf} \vspace*{-.0truecm} \caption{ Distribution of $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} / B_{\rm eq}(2M){}^2$ on the poloidal plane ($r$,$\theta$) for $a=0.9M$, $B=B_{\rm eq}$, and $\dot{m}=2.25 \times 10^{-4}$. Left and right panels show the distribution at $t=0.00$ and $t=430.00 GM c^{-3}$, respectively. The abscissa and ordinate are common with figure~\ref{fig:Bp}. Magnetic surfaces (i.e., constant $A_\varphi$ lines) are not depicted, but are common with the right panel of figure~\ref{fig:Epara_0} } \label{fig:Epara_1} \end{figure*} \begin{figure*}[t] \hspace{0.0cm} \includegraphics[width=1.0\textwidth, angle=0]{fig_BZflux_4thetas.pdf} \vspace*{-2.0truecm} \caption{ Radial component of the Blandford-Znajek (BZ) flux as a function of the elapsed time at radius $r=r_{\rm H}+0.25M$. The ordinate is normalized by its analytical estimate (see text for details), while the abscissa by the dynamical time scale. Each panel shows the BZ flux at discrete colatitudes as labeled. } \label{fig:LBZ_Time} \end{figure*} At $t=0$, there are no poloidal currents in the magnetosphere because of the lack of plasmas. Thus, the magnetic field has no toroidal component. Accordingly, the rotational energy of the BH is not being extracted. On the other hand, because of the relative motion of the magnetic field lines with respect to the space time, there appears a strong electromagnetic field along the mangetic field line. The left panel of figure~\ref{fig:Epara_1} shows that such a magnetic-field-aligned electric field is exerted in both hemispheres. As time elapses, $\vert \mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} \vert$ evolves, exerting electric currents in the magnetosphere. These currents modify the poloidal electric field through the Ampere's law. For example, at $t=430M$ in the northern hemisphere, the right panel shows that a negative $\mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$}$ appears in the higher-middle latitudes, exerting inwards currents there. In the same manner, in the southern hemisphere, $\mbox{\boldmath$E$}$ points inwards in the higher-middle latitudes outside the ergosphere. Note that $F_{\theta\varphi}$ changes sign across the equator. Because of this nearly symmetric distribution of the electric field between the two hemispheres, electrons (or positrons) are accelerated outwards (or inwards) in the higher-middle latitudes in both hemispheres. In the lower latitudes, however, positrons (or electrons) are accelerated outwards (or inwards) by the strong magnetic-field-aligned electric field in both hemispheres. As a result, currents flow inwards in the middle latitudes and flow outwards in the lower latitudes. For the details of the electric currents, see \S~\ref{sec:ptcl}. Such a pattern of the magnetospheric currents lead to a positive toroidal magnetic field ($\propto F^{r \theta}$) in the northern hemisphere, while a negative $F^{r \theta}$ in the southern hemisphere. Thus, $F^{r \theta}$ vanishes on the equator. Now let us consider the Poynting flux, or equivalently the BZ flux, \begin{equation} T_{\rm em}{}^r{}_t = \frac{c}{4\pi} F^{r \mu} F_{\mu t} \propto F^{r \theta} F_{\theta t} \label{eq:Poynting} \end{equation} in the BH vicinity. If it becomes positive, it means that the BH's rotational energy is being extracted electromagnetically. In figure~\ref{fig:LBZ_Time}, we plot the temporal evolution of $T_{\rm em}{}^r{}_t (t,r,\theta)$ at $r= r_{\rm H}+0.25M$ at four discrete colatitudes as labeled. The ordinate is normalized by the typical BZ flux, which is analytically given by $F_{\rm analytical} \equiv L_{\rm BZ}/S_{\rm area}$, where $S_{\rm area} \equiv \int\int \sqrt{A}\sin\theta d\theta d\phi$. The BZ power (i.e., the spin-down luminosity) can be estimated to be \begin{equation} L_{\rm BZ} = \frac{1}{128} \left( \frac{a}{M} \right)^2 B_\perp{}^2 r_{\rm H}{}^2 c \label{eq:LBZ} \end{equation} in the slow-rotating limit, $\vert a \vert \ll M$, where $B_\perp$ denotes the averaged strength of the radial magnetic field. It follows from the figure that the solution exhibits rapid plasma oscillations, as reported by \citet{Levinson:2018:AA}. It also follows that the simulated BZ flux is consistent with its analytical estimate. See the supplementary material of \citet{Crinquand:2020:PhRvL} for a consistent discussion between the simulated BZ flux and their analytical estimates. It should be noted that the BZ flux increases during the elapsed time $ 320M < t < 450M$ along the middle latitudes, $45^\circ \le \theta \le 75^\circ$. During this flux-enhancement phase, the BZ flux (i.e., the Poynting flux) fluctuates relatively mildly compared to its amplitude within $45^\circ \le \theta \le 60^\circ$. In the present magnetically dominated magnetosphere, in which the magnetic energy density dominates the particles' rest-mass energy densities, particles' energy flux is typically less than $10^{-7}$ compared to the Poynting flux; thus, we neglect their contribution when we consider the energy flux. In figure~\ref{fig:LBZ_Angle}, we present the angular dependence of the BZ flux at four elapsed time as labeled. During the flux-enhancement phase, the BH's rotational energy is efficiently extracted from the middle latitudes, $40^\circ < \theta < 75^\circ$ in the northern hemisphere, and $105^\circ > \theta > 135^\circ$ in the southern hemisphere. \begin{figure*}[t!] \hspace{1.2cm} \includegraphics[width=0.8\textwidth, angle=0]{fig_BZflux_theta.pdf} \vspace*{-.truecm} \caption{ Radial component of the Blandford-Znajek flux as a function of the colatitude angle, $\theta$. The blue dotted, red solid, black dashed, and green dash-dotted curves show the BZ flux at time $t=330.00 GMc^{-3}$, $380.00 GMc^{-3}$, $430.00 GMc^{-3}$, and $480.00 GMc^{-3}$, respectively. } \label{fig:LBZ_Angle} \end{figure*} To smear out the variation, we take a moving average with a period of $5 GMc^{-3}$, and plot the BZ fluxes in figure~\ref{fig:LBZ_Time_MA}. In this particular figure, we compare the results for three accretion rates: the top, middle, and bottom panels show the BZ fluxes at $\dot{m}=0.000250$, $0.000225$, and $0.000200$, respectively. We find that the flux is enhanced for typically $140 \sim 180$ dynamical time scales, and that the flux peaks in the middle latitudes during the enhancement irrespective of the accretion rate, as the blue dashed ($\theta=50^\circ$), the blue solid ($\theta=60^\circ$), and the black dashed ($\theta=70^\circ$) curves indicate in each panel. For example, at $\theta=60^\circ$, the FWHMs of the moving-averaged flux become $157M$ and $149M$ in the northern and southern hemispheres, respectively, when $\dot{m}=2.50 \times 10^{-4}$ (as the top two panels show). They become $136M$ (northern) and $184M$ (southern) when $\dot{m}=2.25 \times 10^{-4}$ (middle two panels), and $141M$ (northern) when $\dot{m}= 2.00 \times 10^{-4}$ (bottom left panel). However, it is still not possible to measure the FWHM for the sourthern hemisphere when $\dot{m}=2.00\times 10^{-4}$ (bottom right panel). The flux distributes more symmetrically between northern and southern hemispheres, as the accretion rate increases. \begin{figure*}[t!] \hspace{0.0cm} \includegraphics[width=1.0\textwidth, angle=0]{fig_BZflux_5MA.pdf} \vspace*{-.5truecm} \caption{ Moving-averaged BZ fluxes with a period of $5 GMc^{-3}= 5M$, for three dimensionless accretion rates, $\dot{m}=2.50 \times 10^{-4}$ (top), $\dot{m}=2.25 \times 10^{-4}$ (middle), and $\dot{m}=2.00 \times 10^{-4}$ (bottom). Left (or right) panels depict the BZ flux in the northern (or southern) hemispheres. Each curve denotes the BZ flux at colatitude $\theta$ as labeled. } \label{fig:LBZ_Time_MA} \end{figure*} \subsubsection{Particle distribution and currents} \label{sec:ptcl} We next consider the distribution functions of $e^\pm$'s, and the resultant current distribution. Figure~\ref{fig:NeNp} shows the densities of electrons (left) and positrons (middle) at the burst peak, $t = 430M$, in log scale. The right panel shows the GJ value at each point. It follows that both electrons and positrons have greater densities than the GJ value particularly in the lower latitudes. In the polar regions, because of the polarization drift caused by the varying meridional electric field (and the constant radial magnetic field), electrons migrate meridionally to accumulate in $\theta < 15^\circ$ and $\theta > 165^\circ$, whereas positrons in $15^\circ < \theta < 20^\circ$ and $165^\circ > \theta > 160^\circ$ at $t \sim 430M$. In the lower latitudes, the leptonic densities attain $10^{7.7} \mbox{cm}^{-3}$. The charged leptons carry electric currents, as depicted in figure~\ref{fig:J1J2}. In the left and right panels, we present the radial and meridional components of the electric currents at each point in the ZAMO frame. For a quantity $f(r,\theta)$, we plot \begin{equation} F= \mbox{sign} \left\{ \lg \left[\max(|f|,1) \right] , f \right\}, \label{eq:log_f} \end{equation} where $\mbox{sign}(a,b)= |a|$ if $b \ge 0$ and $= -|a|$ if $b<0$; we set $f=J^{\hat{r}}$ and $J^{\hat\theta}$ for the left and right panels, respectively. In the left panel, the yellow-red regions show that currents flow inwards in the middle latitudes, while the blue-violet regions show that they flow outwards in the lower latitudes. These radial currents are closed by meridional currents flowing within the ergosphere, as the right panel shows. For example, in the right panel, the blue (or red) region in the lower-middle latitudes within $r < 2M$ in the northern (or southern) hemisphere shows equator-ward meridional currents. Because of this current closure, it is confirmed that the BZ process is, indeed, facilitated. To grasp the current distribution more easily, we plot the direction and strength of the poloidal currents as red arrows in figure~\ref{fig:chJp}. The current density is averaged over the area in which we compute the direction and length of each arrow. In the figure, the length of the arrows indicates the strength of the current density in logarithmic scale, as indicated by the right panel. We can confirm the current pattern discussed in the foregoing paragraph. It also follows that the averaged currents mostly flow in the middle and lower latitudes; thus, the low density regions in the polar funnels do not essentially affect the entire structure of the magnetosphere. In this figure, we also plot the charge density, $(n_+ -n_-)/n_{\rm GJ}$, in color, where $n_+$ and $n_-$ refer to the positronic and electronic number densities. Values are plotted using the same method as figure~\ref{fig:J1J2} (eq.~[\ref{eq:log_f}]). It follows that the real charge density becomes even greater than the GJ value (right panel of fig.~\ref{fig:NeNp}), which indicates that the electron-positron plasmas become highly non-neutral near the BH. We next consider the distribution functions of the charged leptons. The dimensionless distribution functions of $e^\pm$'s, $n_\pm$, are sliced between the colatitude $\theta_1 < \theta < \theta_2$, \begin{equation} N_\pm (r_\ast,\gamma) \equiv \frac{1}{n_{\rm GJ}} \int_{\theta_1}^{\theta_2} d\theta \frac{\partial^2 n_\pm (r_\ast,\theta,\gamma)} {\partial\theta \partial\gamma}. \label{eq:N_pm} \end{equation} In figure~\ref{fig:Distr_LF}, we present $N_-$ and $N_+$ as a function of $r_\ast$ and $\gamma$ (i.e., the Lorentz factor) in the left and right columns, respectively. The range of $\theta \in [\theta_1,\theta_2)$ increases from upper to lower rows as described in the caption. It shows that the Lorentz factors are saturated at a terminal value at each point. The terminal value is determined by the balance between the electrostatic acceleration and the synchro-curvature radiation drag force. Since $\vert \mbox{\boldmath$E$} \cdot \mbox{\boldmath$B$} \vert$ increases with decreasing radius significantly at $r_\ast < 0$ (or equivalently, at $r < 4.1M$), particles gain large kinetic energies inside $r < 4M$. Nevertheless, by virtue of the residual magnetic-field-aligned electric field, particle motion is kept relativistic in the entire region. Lorentz factors attain $\gamma > 10^{4.5}$ in $r_\ast < 1M$ (or equivalently, in $r<4.7M$). To examine the pitch-angle dependence, in figure~\ref{fig:Distr_PA}, we plot $N_\pm(r_\ast,\chi)$, where the pitch angle $\chi$ becomes $+1$ (or $-1$) when a particle is moving outwards (or inwards) with a very small trans-magnetic-field momentum, and becomes $0$ when it has no longitudinal momentum. At $r_\ast > 0$, particles have small trans-magnetic-field momenta. However, in the higher-middle latitudes, $\theta \sim 40^\circ$, the middle two panels show that electrons (or positrons) migrate outwards with positive $\cos\chi$ (or inwards with negative $\cos\chi$), indicating an inward poloidal current there. Figures~\ref{fig:Distr_LF} and \ref{fig:Distr_PA} show that particles are highly relativistic with non-thermal and anisotropic distribution functions. \begin{figure*}[t!] \vspace*{0.0truecm} \includegraphics[width=\textwidth, angle=0]{fig_NeNpGJ.pdf} \vspace*{0.0truecm} \caption{ Densities of electrons (left) and positrons (middle), and the Goldreich-Julian density (right), $n_{\rm GJ}$, in ${\rm cm}^{-3}$ unit at $t=430M$. Values are plotted in decadic logarithm. The abscissa and ordinate are common with figure~\ref{fig:Bp}. } \label{fig:NeNp} \end{figure*} \begin{figure*}[t!] \vspace*{-0.0truecm} \hspace{3.5cm} \includegraphics[width=0.6\textwidth, angle=0]{fig_J1J2.pdf} \vspace*{-0.0truecm} \caption{ Electric currents measured by ZAMO in the poloidal plane at $t=430M$. The abscissa and ordinate are common with figure~\ref{fig:Bp}. The left panel shows the radial component, $J^{\hat{r}}$, while the right one does the meridional component, $J^{\hat{\theta}}$. To plot the values, we take the decadic lorarithm of the absolute value of a quantity, then put the same sign as the quantity. For example, in the left panel, the value of $10^6$ (or $-10^6$) corresponds to an outward (or an inward) current density whose absolute value is $10^6 \mbox{ statampere cm}^{-2}$. In the same way, in the the right panel, a positive (or a negative) $J^{\hat{\theta}}$ means an equator-ward current in the northern (or southern) hemisphere. } \label{fig:J1J2} \end{figure*} \begin{figure*}[t!] \vspace*{-0.0truecm} \hspace{2.7cm} \includegraphics[width=0.6\textwidth, angle=0]{fig_chJp.pdf} \caption{ Dimensionless charge denisity $(n_+ - n_-)/n_{\rm GJ}$ (color image) and poloidal electric current (red arrows). The abscissa and ordinate are common with figure~\ref{fig:Bp}. The charge density is plotted in linear scale. The green (or yellow) regions show positive (or negative) dimensionless charge densities. The right panel shows four example arrow lengths corresponding to the indicated strengths of the poloidal current densities in $\mbox{statampere cm}^{-2}$. } \label{fig:chJp} \end{figure*} \begin{figure*}[t] \hspace{0.7cm} \includegraphics[width=0.90\textwidth, angle=0]{fig_Lf_450.pdf} \vspace*{-0.0truecm} \caption{ Lorentz factor dependence of the distribution of electrons (left) and positrons (right) as a function of the dimensionless tortoise coordinate, $r_\ast / M$, at elapse time $t=430 M$. The color images show the logarithmic value of the dimensionless distribution functions (see text) along discrete colatitudes. The top, middle, and bottom rows show the distribution function between the meridional range $16.606^\circ - 23.568^\circ$, $37.678^\circ - 41.432^\circ$, and $57.235^\circ - 60.034^\circ$, respectively. } \label{fig:Distr_LF} \end{figure*} \begin{figure*}[t] \hspace{0.7cm} \includegraphics[width=0.9\textwidth, angle=0]{fig_PA_450.pdf} \vspace*{-.0truecm} \caption{ Similar figure as figure~\ref{fig:Distr_LF}, but the pitch-angle dependence of the distribution functions of electrons (left) and positrons (right) at $t=430 M$. The color images show the logarithmic value of the dimensionless distribution functions within the same meridional range as fig.~\ref{fig:Distr_LF}. } \label{fig:Distr_PA} \end{figure*} \section{Discussion} \label{sec:disc} To sum up, we simulated the evolution of a BH magnetosphere by a PIC scheme, when a poloidal magnetic field is sustained by a disk toroidal current, and when the electron-positron pair plasmas are steadily supplied homogeneously per invariant volume basis. Provided that the mass accretion rate is much less than the Eddington rate, both the electromagnetic fields and the particle distribution functions exhibit rapid variability. The rotational energy of the BH is, indeed, extracted via the Blandford-Znajek process, whose energy flux concentrates in the middle latitudes, particularly during the flux-enhancement phase that lasts approximately $160 \pm 20$ dynamical time scales. We have demonstrated that the collision time scale is much longer than the gyration time scale (i.e., the Ohm's law cannot be justified), that the pair plasma is highly non-neutral, that the particles' energy distribution is non-Maxwellian, and that the momentum distribution is anisotropic. Thus, we must discard the MHD approximation when we consider the jet launching region around the BH whose mass accretion rate is highly sub-Eddington. In this section, we discuss the dominant radiative process in \S~\ref{sec:disc_radiation}, the validity of gridding in \S~\ref{sec:disc_resolution}, comparison with other works in \S~\ref{sec:disc_cf}, and an implication to the collimation of VLBI jets in \S~\ref{sec:disc_SMBH}. \subsection{Dominant radiative process} \label{sec:disc_radiation} Although the synchro-curvature radiation process is incorporated in the radiative reaction force, $F_{\rm rad}^j$, ICS are not considered as a radiation drag. Thus, we have to confirm that the ICS process is negligible compared with the synchro-curvature process around stellar-mass BHs accreting at $\dot{m} \ll 1$. Here we briefly compare the pure curvature, pure synchrotron, and ICS processes, and discuss the dominant process. To make a general discussion, we adopt the actual electron mass $m_{\rm e}=m_{\rm p}/1836$, instead of $m_{\rm p}/20$, in this subsection. The magnitude of the radiation drag force due to the pure curvature process is given by \begin{eqnarray} F_{\rm curv} &=& \frac23 e^2 \frac{\gamma^4}{\rho_{\rm c}^2} \nonumber\\ &=& 1.70 \times 10^{-4} \left(\frac{\rho_{\rm c}}{2M}\right)^{-2} M_1{}^{-2} \gamma_7{}^4 \mbox{ dyn}, \qquad \label{eq:F_curv} \end{eqnarray} where $\rho_{\rm c}$ refers to the curvature radius of the particle's center of gyration, $2M=2MGc^{-2}$ the Schwarzschild radius, and $\gamma_n \equiv \gamma/10^n$. If this force balances with the electrostatic force, \begin{equation} e E_\parallel = 4.80 \times 10^{-5} \vert E_{\parallel,5} \vert \mbox{ dyn}, \label{eq:electrostatic} \end{equation} we find that the particles saturate at the Lorentz factor, \begin{equation} \gamma_{\rm curv} = 7.28 \times 10^6 \left(\frac{\rho_{\rm c}}{2M}\right)^{1/2} M_1{}^{1/2} E_{\parallel,5}{}^{1/4}, \label{eq:gamma_curv} \end{equation} where $E_{\parallel,5} \equiv E_\parallel / (10^5 \mbox{ statvolt cm}^{-1}) $. The magnitude of the radiation-drag force due to the pure synchrotron process is given by \begin{eqnarray} F_{\rm sync} &=& \frac23 r_0{}^2 \gamma^2 B^2 \sin^2\chi \nonumber\\ &=& 5.29 \times 10^{-5} \gamma_5{}^2 B_6{}^2 \frac{\sin^2 \chi}{0.1} \mbox{ dyn}, \label{eq:F_sync} \end{eqnarray} where $r_0$ denotes the classical electron radius, and $\chi$ the pitch angle. Equating equations~(\ref{eq:electrostatic}) and (\ref{eq:F_sync}), we obtain the terminal Lorentz factor \begin{equation} \gamma_{\rm sync} = 9.52 \times 10^4 B_6{}^{-1} \left( \frac{\sin^2\chi}{0.1} \right)^{-1/2} E_{\parallel,5}^{1/2}. \label{eq:gamma_sync} \end{equation} If $\gamma_{\rm sync} < \gamma_{\rm curv}$, the pure synchrotron process dominates the pure curvature process. If $m_{\rm e}=m_{\rm p}/1836$, We find that the condition $\gamma_{\rm sync} < \gamma_{\rm curv}$ is satisfied, because \begin{equation} B_6 \left( \frac{\sin\chi}{0.1} \right) E_{\parallel,5}^{-1/4} \left( \frac{\rho_{\rm c}}{2M} \right)^{1/2} > 1 \label{eq:sync_dom} \end{equation} is satisfied by the vast majority of the particles. In another word, the synchro-curvature process reduces to the pure synchrotron process for most of the particles. However, since we assume heavy electrons in this paper, the increased gyro radius makes the synchrotron process less efficient; thus, the radiation drag force is given by the synchro-curvature process in general for heavy electrons. We next compare the pure synchrotron process with the ICS. At radius $r$ from the central BH, the number density of the ADAF synchrotron photons are given by \citep{Mahadevan:1997:ApJ} \begin{equation} N_{\rm ph} = \frac{L_{\rm sync}} {4\pi r^2 h \nu c}, \label{eq:Nph} \end{equation} where $h \nu$ refers to the energy of the photons emitted from the ADAF via the synchrotron process. The synchrotron luminosity is given by \begin{equation} L_{\rm sync} = 1.67 \times 10^{36} M_1{}^{1/2} \dot{m}^{3/2} \mbox{ ergs s}^{-1}. \label{eq:Lsync} \end{equation} The typical photon energy is given by equation~(22) of \citet{Mahadevan:1997:ApJ}. Accordingly, we obtain \begin{equation} N_{\rm ph} = 1.55 \times 10^{20} T_{\rm e,9}{}^5 M_1 \dot{m} \mbox{ photons cm}^{-3}, \label{eq:Nph_2} \end{equation} where $T_{\rm e,9}$ refers to the electron temperature, $T_{\rm e}$, normalized by $10^9 K$. The ICS drag force per electron (or positron) is given by \begin{equation} F_{\rm ICS} \approx N_{\rm ph} \sigma_{\rm KN} \gamma m_{\rm e} c^2, \label{eq:F_ICS} \end{equation} where \begin{equation} \sigma_{\rm KN} \approx \frac38 \sigma_{\rm T} x^{-1} (\ln 2x + \frac12 ), \label{eq:KN} \end{equation} refers to the Klein-Nishina cross section, and $x \approx \gamma$ holds on average. We thus have \begin{equation} F_{\rm ICS} \approx 3.89 \times 10^{-10} M_1 \dot{m} T_{\rm e,9}{}^5 \label{eq:F_ICS_2} \end{equation} We thus obtain \begin{equation} \gamma_5{}^2 B_6{}^2 \left( \frac{\sin\chi}{0.1} \right)^2 M_1^{-1} \dot{m}^{-1} T_{\rm e,9}{}^{-5} \gg 10^{-5} \label{eq:ICSvsS} \end{equation} On these grounds, we obtain $F_{\rm sync} \gg F_{\rm curv} \gg F_{\rm ICS}$ for the actual electron mass. Note that the Lorentz factor will increase with decreasing electron mass. However, for heavy electrons, energy transfer efficiency increases due to their greater mass ratio to protons. Thus, ICS may not be negligible even for stellar-mass BHs. However, we neglected ICS in this paper, considering future extension to smaller electron masses. In short, for the actual electron mass, it is possible that the ICS process is negligible compared to the synchro-curvature process, when we consider stellar-mass BHs in a quiescent state. However, for supermssive BHs, we generally obtain $F_{\rm ICS} > F_{\rm curv} \gg F_{\rm sync}$ because of the large curvature radius and the weak magnetic field strength \citep{Hirotani:2016:ApJ} for the actual electron mass. \subsection{Grid interval versus skin depth} \label{sec:disc_resolution} Let us compare the invariant grid interval $ \epsilon r_{\rm g} \equiv \Delta_s \equiv {\rm max} \left( \sqrt{g_{rr}} \Delta_r, \sqrt{g_{\theta\theta}} \Delta_\theta \right) $ with the skin depth, $l_{\rm p}$ (eq.~\ref{eq:skin_depth}), where $\Delta_r$ and $\Delta_\theta$ denote the interval in $r$ and $\theta$ coordinates, respectively. Representative values of the dimensionless function $\epsilon(r,\theta)$ are presented in table~1. Because of a constant $\sin\theta \Delta_\theta$ gridding, $\Delta_\theta$ increases with decreasing $\sin\theta$. Note that the dimensionless grid interval $\epsilon$ is defined independently from the BH mass. Adopting heavy electrons, $m_{\rm e}= m_{\rm p}/20$, and normalizing the skin depth with $r_{\rm g}$, we obtain \begin{equation} \frac{l_{\rm p}}{r_{\rm g}} = 3.4 \left( \gamma_5 n_5{}^{-1} \right)^{1/2} \label{skin_depth_2} \end{equation} where $\gamma_\nu \equiv \langle\gamma\rangle / 10^\nu$ and $n_\nu \equiv n_\pm / 10^\nu $; $\langle\gamma\rangle$ denotes the averaged Lorentz factor. The highest density region appears in the lower latitudes at $r<4M$, i.e., at $r_\ast < -0.25M$. In this region, $n < 10^{7.7} \mbox{ cm}^{-3}$ and $\langle \gamma \rangle > 10^{4.7}$; thus, we obtain $l_{\rm p} > 0.108 r_{\rm g} > 5 \Delta_{\rm s}$ there. Other regions have greater skin depth. Accordingly, the skin depth can be resolved at every point (and in fact at every time step) during the PIC simulation, although it is marginal when $l_{\rm p} \sim 5 \Delta_{\rm s}$ happens at the flux peak. \begin{center} \vspace*{1.0truecm} {\bf Table~1}\\ Invariant grid interval, $\epsilon(r,\theta)=\Delta_s / r_{\rm g}$ \begin{tabular}{ccccc}\hline\hline $r/r_{\rm g}$ & $\theta=2.62^\circ$ & $29.0^\circ$ & $60.0^\circ$ & $90.0^\circ$ \\ \hline 8.507 & .2762 & .0262 & .0261 & .0260 \\ 4.099 & .1355 & .0217 & .0215 & .0214 \\ 2.021 & .0714 & .0122 & .0117 & .0114 \\ 1.471 & .0557 & .0036 & .0028 & .0026 \\ \hline \label{tbl:grid_interval} \end{tabular} \end{center} \subsection{Comparison with previous works} \label{sec:disc_cf} Let us compare the present work with recent two works on 2-D GR PIC simulations \citep{Parfrey:2019:PhRvL,Crinquand:2020:PhRvL}. First, we discuss the difference with \citet{Parfrey:2019:PhRvL}. Ignoring radiative transfer, and considering instead an injection of pairs whose rate is proportional to the local magnetic-field-aligned electric field, they performed 2-D GR PIC simulations of the magnetosheres of an extremely rotating BH with $a=0.999M$. The BH mass is not explicitly specified in their paper, but presumably supermassive. They adopted an extremely small magnetic field strength to emphasize the Penrose process, an energy-extraction mechanism from rotating BHs when particles fall onto the horizon with negative energies measured at infinity. Particles were created in the reconnecting current sheet on the equatorial plane. The magnetization parameter was $2000$; that is, the magnetosphere was still magnetically dominated. Magnetic field configuration is initially the Wald's vacuum solution, which is produced by a toroidal ring current flowing on the equatorial plane at large distances. Then the field lines are bend back toward the BH to penetrate the horizon. For M87*, their dimensionless magnetic field strength, $\tilde{B}_0 = 10^3$ corresponds to the actual strength $B= \tilde{B}_0 (m_{\rm e}c^2/e r_{\rm g}) \sim 10^{-9}$~G, which is about $10^{-11}$ times smaller than what is observed \citep{EHT:2019:ApJL}. Under this condition, they found that the charged leptons created in the current sheet plunge onto the horizon with negative energies as a result of the interaction with the electromagnetic field, and that the Penrose process contribute in the extraction of the energy and angular momentum from a maximally rotating BH. In the present paper, on the other hand, we consider a stellar-mass BH. In this case, we can resolve the skin depth with a realistic magnetic field strength, $B= B_{\rm eq}$. Accordingly, the magnetization parameter becomes of the order of $10^7$; thus, we did not mention about the particle contribution on the energy extraction from a rotating BH. Nevertheless, particle energy flux does become negative in our simulation as well; thus, the Penrose process is indeed working, although their contribution is negligible. We assume a homogeneous and constant plasma supply in the present work, which forms a constast to \citet{Parfrey:2019:PhRvL}, who considered pair creation in the reconnecting current sheet. The magnetic field is assumed to be fixed on the poloidal plane during the simulation in the present paper. Second, let us discuss the difference with \citet{Crinquand:2020:PhRvL}. In their 2D GR PIC code, implementing inverse-Compton scatterings (ICS) and photon-photon pair production self-consistently, they solved the PIC equations together with the radiative transfer equation. Electron-positron pair plasmas are supplied in response to gap opening. They presumed supermassive BHs. In this case, ICS dominates the pure-curvature radiation \citep{Hirotani:2016:ApJ}, where the particle's pitch angles become small enough due to the ICS and continuous acceleration along the magnetic field lines \citep{bes92}. Thus, their treatment, which takes account of only the ICS as radiative processes, can be justified. \citet{Crinquand:2020:PhRvL} assumed a true monopole magnetic field, whose radial component point outwards in the entire magnetosphre. Accordingly, there appear no current sheets in their solution. Electric currents flow inwards (or outwards) in the northern (or southern) hemisphere, leading to a negative $F^{r \theta}$ in the entire magnetosphere. Since $E_\theta=F_{\theta t}>0$ holds in both hemispheres, the resultant Poynting flux is also positive in both hemispheres. In their Zelton code, they solved all the six components of the electromagetic fields, computing the evolution of $F_{\varphi t}$ from the toroidal component of the current density $J^\varphi$ that is constructed from the azimuthal motion of the charged particles. In the present paper, on the other hand, instead of solving the radiative transfer equation, we simply adopt a uniform and constant pair production in the particle source term. We consider a stellar-mass BH, in which case synchro-curvature process becomes non-negligible compared to ICS \citep{Hirotani:2016:ApJ}. We assume a split-monopole magnetic field, whose radial component is positive (or negative) in the northern (or southern) hemisphere. Accordingly, there appears a current sheet on the equator. Electric currents flow inwards in the higher latitudes in both hemispheres, and flow outwards in the lower latitudes, leading to a negative (or positive) $F^{r\theta}$ in the northern (or southern) hemisphere, and a positive BZ flux in both hemispheres, as discussed in \S\ref{sec:acc}. Let us briefly point out the limitation of the present analysis. Unlike the Zelton code, we solved only three of the six components of the electromagnetic fields in the present paper, putting $F_{\varphi t}=0$, and hence $\partial_t F_{\theta \varphi}= \partial_t F_{\varphi r}= 0$ throughout the simulation, as the first step. If the local physics (as supposed in recent 1-D GR PIC simulations) does not significantly affect the entire structure of the magnetosphere, such an assumption of a fixed poloidal magnetic field could be justified. However, when we apply the present 2D method to a global magnetosphere, toroidal currents carried by the charged leptons at each position can alter the entire magnetic field configuration on the poloidal plane. Thus, in the same manner as in recent 1-D models \cite{Levinson:2018:AA,Chen:2020:ApJ,Kisaka:2020:arXiv}, we must draw attention to the limitation of $F_{\varphi t}=0$. In the next paper, we will extend our analysis to the case $F_{\varphi t} \ne 0$, constructing $J^\varphi$ at each position from the actual motion of the charged leptons, and solving all the six components of the electromagnetic fields. \subsection{Implication for supermassive black holes} \label{sec:disc_SMBH} Let us finally discuss what can be expected if the present results obtained for stellar-mass BHs can be applied to supermassive BHs. We have demonstrated that the BH's rotational energy is efficiently extracted along the magnetic field lines that cross the event horizon in the middle latitudes. Let us discuss an implication of this result on the formation of limb-brightened jets. It has been revealed that the innermost region of the M87 jet exhibits a limb-brightened structure by the VLBA observations in 15--43~GHz \citep{Kovalev:2007:ApJL,Ly:2007:ApJ,Junor:1999:Natur, hada11,hada13,Walker:2018:ApJ}. At 86~GHz, this limb-brightened structure is already well developed at 0.15~mas from the VLBI core, where the corresponding apparent opening angle becomes approximately $100^\circ$ \citep{hada16}. If a relatively large viewing angle of $\theta_{\rm view} \sim 30^\circ$ is adopted \citep{Ly:2007:ApJ,hada16}, the jet has a deprojected opening angle $\chi_{\rm open} \sim 50^\circ$ at the deprojected distance $z=84 GMc^{-2}$. However, if a smaller view angle, $\theta_{\rm view} \sim 17^\circ$, is adopted \citep{Biretta:1999:ApJ,Wang:2009:MNRAS,Perlman:2011:ApJ, Nakamura:2014:ApJ,Mertens:2016:AA,Walker:2018:ApJ}, we obtain $\chi_{\rm open} \sim 30^\circ$ at $z=108 GMc^{-2}$. If a jet begins to collimate outside the outer light surface \citep{came86b}, we may assume that the jet is radial within the distance $\zeta \varpi_{\rm LC}$ from the rotation axis, and becomes paraboloidal outside of it \citep{Asada:2012:ApJL,hada13, Nakamura:2013:ApJ,Asada:2016:ApJ,Nakamura:2018:ApJ}, where $\varpi_{\rm LC} \equiv c / \Omega_{\rm F}$ denotes the typical radius of the outer light surface. Figure~\ref{fig:jet} sketches the geometry of this jet downstream region. Assuming that we observe the jet at the position ($r$,$\theta$), we can express $\zeta$ in terms of the observables $z$ and $\theta=\chi_{\rm open}/2$ as $r(1-\cos\theta)= (\zeta \varpi_{\rm LC}/\sin\theta_0)(1-\cos\theta_0)$, which gives \begin{equation} \zeta \approx \frac{z}{\varpi_{\rm LC}} \frac{1-\cos\theta}{\cos\theta} \frac{\sin\theta_0}{1-\cos\theta_0}, \label{eq:jet} \end{equation} where $z=r\cos\theta$. If $\Omega_{\rm F} \approx 0.5 \omega_{\rm H}$, $a \approx 0.9M$ gives $\varpi_{\rm LC} \approx 6.4 GM/c^2$. (Note that $\Omega_{\rm F}=F_{t\theta}/F_{\theta\varphi}$ is not assumed but solved in the PIC simulation.) If the magnetic field line with the footpoint angle $\theta_0 \approx 60^\circ$ (or $75^\circ$) is brightened at downstream $z$ with half opening angle $\theta= \chi_{\rm open}/2$, we obtain $\zeta \approx 2.4$ (or $\zeta \approx 1.8$) for $\theta_{\rm view}=30^\circ$, and $\zeta \approx 1.0$ (or $\zeta \approx 0.78$) for $\theta_{\rm view}=17^\circ$, using the 86~GHz VLBI observations. On these grounds, if the BZ flux concentrates in the middle latitudes also around supermassive BHs, it is possible that the M87 jet begins to collimate slightly outside the outer light surface, typically within the distance $2.4 \varpi_{\rm LC}$ from the rotation axis, which may become a good target of the Event Horizon Telescope and GRAVITY. \begin{figure*} \hspace{4.3cm} \includegraphics[width=1.0\columnwidth, angle=0]{fig_jet.pdf} \caption{ Schematic picture of the jet downstream region. The jet shape is assumed to be radial inside the radius $r_0$, and paraboidal outside of it. The jet is observed with VLBI at radius $r$. Both $\theta$ and $\theta_0$ are measured from the jet axis, which coincides with the BH's rotation axis. } \label{fig:jet} \end{figure*} \acknowledgments We thank the anonymous referee for careful reading and valuable comments to this manuscript. The authors acknowledge grant support to the CompAS/Antares group, and the access to high-performance computing facility from the Institute of Astronomy and Astrophysics in Academia Sinica (ASIAA). The authors acknowledge grant support from Ministry of Science and Technology (MoST) under 105-2119-M-001-044-MY3, 108-2112-M-001-009- and 109-2112-M-001-028-.
\section{Introduction} Image Super-Resolution (SR) involves increasing the resolution of images beyond their native resolution and is a long-standing problem in the field of image processing. Here, we focus on Single Image Super Resolution (SISR) which involves estimating sub-pixel scale values based only on a single coarsely resolved input image \cite{sr_review} (as opposed to SR frameworks that utilize multiple images \cite{multi_view, multi_view_misr,video_sr,video_sr2}). The simplest approach to SISR is 2D-interpolation, and many schemes exist that produce High Resolution (HR) outputs of various qualities. Sophisticated SISR schemes can perform different operations at different locations in an image depending on the local Low Resolution (LR) pixel data: using a dictionary of image-patch exemplars for instance \cite{a_plus}. Nasrollahi and Moeslund (2014) \cite{sr_review} provide an overview of SISR schemes. \subsection{Super Resolution with Neural Networks} Recently, deep Convolutional Neural Networks (CNNs) have been applied to SISR and have significantly outperformed past algorithms. In 2016, a 3-layer CNN achieved state of the art SISR results \cite{srcnn}, and the approach was quickly expanded to use significantly deeper CNNs \cite{deep_srcnn}. SISR CNN architectures have rapidly developed, and complex SISR networks are now built up of many blocks of convolutional layers that include skip connections such as dense blocks \cite{densenet}, residual blocks \cite{resnet}, and channel attention blocks \cite{can}. The CNNs' internal spatial upsampling operators have progressed from bicubic upsampling \cite{srcnn}, to learned kernels \cite{fully_conv}, to the ``pixel-shuffle'' approach \cite{pixel_shuffle}. The loss functions have also evolved from simple pixel-wise errors, to feature-loss and adversarial-loss\cite{srgan} which allow the CNNs to hallucinate plausible sub-pixel scale features. Current CNN SR schemes combine many of these concepts \cite{drln}. Wang, Chan and Hoi (2020) \cite{cnn_sr_review} review CNN-based SISR. \subsection{Invertible Super Resolution Networks} SISR CNNs are not typically invertible. Training them usually involves degrading HR images and tasking the CNN to reconstruct them, but applying the same degradation to the CNN output does not necessarily reproduce the input image. SISR is an ill-posed problem because there are usually multiple HR images that produce the same LR image when downsampled, and constraining SISR CNN outputs to this manifold of possible HR images is desirable \cite{pulse}. Several studies have approximated this type of invertibility with a modified loss function that computes the pixel-error between the downsampled output and the LR input \cite{pulse,dsloss1,dsloss2,zhang_2020}. Additionally, in cases where the HR image is known but needs to be intentionally degraded (image compression) training two CNNs simultaneously to perform upsampling and downsampling provides better performance than other SR schemes \cite{learned_downscaling, kim_2018}. These approaches only approximate invertibility however, and may not be sufficient for super-resolving scientific datasets. \subsection{Contributions and Impacts} This study introduces a method referred to as ``Downsampling Enforcement'' (DE), that strictly constrains a super resolution CNN's output to be \textit{exactly} invertible under 2D-average downsampling. This is accomplished using transfer function that is applied after the last convolutional layer of the CNN. We demonstrate this method using seven different CNN SISR architectures on five common image datasets, and find that it improves performance in every case. We also demonstrate the method on scientific datasets from a weather radar, satellite imager, and climate model; all cases where strictly enforcing conservation laws is important. \subsection{Motivation} \label{applications_sec} There are many scientific fields where CNN-based SR could be applied to image data or gridded datasets. In these applications, guaranteed physical consistency under 2D-averaging can ensure the SR scheme obeys physical conservation laws. For example, satellite imagers often have a much larger dynamic range than handheld cameras and undergo rigorous calibration and validation to ensure that measured radiances are accurate \cite{landsat_calib}; this should be considered when applying CNN-based super-resolution \cite{liebel_2016, lanaras_2018, muller_2020}. Other possible applications of this method include data from ranging instruments such as radars \cite{radarsr}, sonars \cite{sonar_sr}, and lidars \cite{lidar_sr}. Weather radars can be used to estimate precipitation rates for instance \cite{nexrad_rainfall}, a physical quantity that should be conserved under spatial averaging. Super resolution can be used to enhance output from gridded numerical simulations \cite{downscaling_sr}, and CNN-SISR has already been demonstrated on several real-world numerical simulation problems, including: precipitation modeling \cite{wang_2021}, wind and solar modeling \cite{stengel_2020}, and climate modeling \cite{vandal_2018}. In climate simulations strict enforcement of conservation laws is of particular importance because climate signals can be relatively weak. Also, if downscaling is used during model integration, even small errors can grow rapidly over many time-steps and significantly impact results. The lack of an internal representation of physics or strict adherence to physical laws in CNNs has been identified as a major hurdle that must be addressed before their impressive capabilities can be fully brought to bear on important imaging and modeling problems in the physical sciences \cite{reichstein_2019, tsagkatakis_2019}. In these SISR applications, and many others, strict conservation of large-scale statistical properties is often just as important as the visual fidelity of the HR output, and our method can ensure both. \section{The Downsampling Enforcement Operator} \label{f_interp_sec} Typically, during training, CNN-SISR schemes are provided LR input images produced by degrading HR-images and tasked with recovering the original. If $I_{HR}$ and $I_{LR}$ are the high- and low-resolution images respectively, $D$ is the image downsampling operator and $S$ is the super resolution scheme, CNN-SISR schemes try to find $S$ such that: $I_{HR} \approx S\{I_{LR}\}$, and during training: $I_{HR} \approx S\{D\{I_{HR}\}\}$. Here, we derive a ``Downsampling Enforcement" (DE) operator, that can be incorporated into most common SR CNN and ensures the CNN also satisfies: $I_{LR} = D\{S\{I_{LR}\}\}$. Here, we assume that $D$ represents 2D-average downsampling though solutions can likely be derived for other downsampling schemes. The DE operator $f(\mathbf{x},P)$ operates on each $N\times N$-pixel block in the HR image. $P$ denotes the value of a single pixel in the LR image and $x_i \in \mathbf{x}$ are the $N \times N$ corresponding HR-image pixels output by the last conv-layer in the CNN. $P$ and $x_i$ are assumed to have pixel intensities bounded by $[-1,1]$. \begin{equation} f(\mathbf{x},P)_i = \begin{cases} x_i + \left(\frac{P-\bar{x}}{1-\bar{x}}\right)(1-x_i) & \bar{x} < P \\ x_i & \bar{x} = P \\ x_i + \left(\frac{P-\bar{x}}{1+\bar{x}}\right)(1 + x_i) & \bar{x} > P\\ \end{cases} \qquad \text{where:} \qquad \bar{x} = \frac{1}{N^2}\sum_{x_j \in \mathbf{x}} x_j \label{f_piecewise} \end{equation} Which can also be written: \begin{equation} f(\mathbf{x},P)_i = x_i + (P-\bar{x})\left(\frac{\sigma + x_i}{\sigma + \bar{x}}\right) \qquad \text{where:} \qquad \sigma = \text{sign}(\bar{x}-P) \label{f_compact} \end{equation} A detailed derivation of $f(\mathbf{x},P)$ is provided in the Section 1 of the supplement. This formulation of $f$ has several useful properties: \begin{equation}\frac{1}{N^2}\sum_{i = 0}^{N^2} f(\mathbf{x},P)_i = P \label{summation_constraint} \end{equation} \begin{equation}f(\mathbf{x},P)_i \in [-1,1] \label{range_constraint} \end{equation} \begin{equation}x_i > x_j \rightarrow f(\mathbf{x},P)_i \geq f(\mathbf{x},P)_j \label{inequality_constraint}\end{equation} \begin{equation}f(\mathbf{x},P)_i \text{ is piecewise differentiable} \label{differentiation_constraint}\end{equation} (\ref{summation_constraint}) ensures invertibility under 2D-average downsampling. (\ref{range_constraint}) bounds $f(\mathbf{x},P)$ to the input image's dynamic range of $[-1,1]$. (\ref{inequality_constraint}) maintains the order of the initial output pixels' intensities. Finally, (\ref{differentiation_constraint}): $f(\mathbf{x},P)$ will be included as a part of the CNN during training and must be differentiable for backpropagation to work. Short proofs that (\ref{f_piecewise}) satisfies these conditions are given in Supplement Section 2. Equation (\ref{f_compact}) has a physical interpretation: it operates on initial SISR-CNN image outputs (the last conv-layer of the CNN prior to applying (\ref{f_compact}) has 3-channel RGB output and a $tanh$ transfer function). (\ref{f_compact}) is a correction applied independently to each channel that ensures the intensity of each $N\times N$ block of HR output pixels ($\mathbf{x}$) exactly averages to the value of the corresponding LR input pixel $P$. When $P$ exceeds $\bar{x}$, the remaining unused output pixel intensity is computed for each output pixel $(1-x_i)$ and a constant fraction of it is added to the output pixel values. A similar approach is applied when $\bar{x}>P$. Figure \ref{fx_fig} shows the magnitude of the correction ($f(\mathbf{x},P)_i-x_i$) when the DE operator is applied to a hypothetical block of output pixels (ranged between $[-1,1]$ with mean 0) for a range of LR input pixel values. It demonstrates that the correction term varies smoothly with respect to $P$ and $x_i$. \begin{figure}[t] \begin{center} \includegraphics[width=0.5\linewidth]{./fx_small.png} \end{center} \caption{Visualization of the correction $f(\mathbf{x},P)_i-x_i$ as a function of varying $P$ with a sample input of 16 $x_i$'s ranging from -1 to 1 with a mean of $0$.} \label{fx_fig} \end{figure} \section{Super Resolution of Images} To evaluate the ``downsampling enforcement'' approach to SISR, we have implemented a selection of SISR-CNNs from the recent literature and trained them under identical conditions both with and without the DE operator. This section demonstrates the method on image datasets frequently used in the SR literature. In Section \ref{sds_section} we apply the method to several scientific datasets where strict enforcement of conservation laws more important. \subsection{Neural Networks} \label{cnns_section} Here, we have reproduced seven different CNN architectures from the recent SISR literature. For unbiased comparison, we have altered each model slightly so they all have a similar number of trainable parameters: $5 \times 10^6$ (except for SR-CNN and Lap-SRN which have fewer). The CNNs were implemented in Keras with a Tensorflow backend and the code and model diagrams can be found on github\footnote{\url{https://github.com/avgeiss/invertible_sr}}. We provide a more detailed overview of the CNNs and our implementations in the Supplement Section 3. The CNNs are: SR-CNN \cite{srcnn}, Lap-SRN \cite{lapsrn}, Dense U-Net (DUN) \cite{unet,densenet,radarsr}, Deep Back Projection Network (DBPN) \cite{dbpn}, Dense SR Net (DSRN) \cite{densesr}, Enhanced Deep Residual Network (EDRN) \cite{edrn}, and Residual Dense Network (RDN) \cite{rdn}. Each are trained both with and without strictly enforced invertibility. \subsection{Image Datasets} The Div2k \cite{div2k} dataset was used for training. It contains 800 high resolution training images with a 100-image test set. The last 10 training images are held out and used to compute validation scores \cite{edrn}. Trained CNNs are evaluated on several image datasets that were used because of their prevalence in the SISR literature \cite{cnn_sr_review}: SET5 \cite{set5}, SET14 \cite{set14}, BSDS100 \cite{bsds100}, Manga109 \cite{manga109}, Urban 100 \cite{urban100} and the 100-image Div2k validation set \cite{div2k}. Manga 109 are illustrated images and Urban 100 contains photographs of urban scenes while the other datasets contain miscellaneous photographs. \subsection{Training and Testing} \label{training_section} This study uses 2D-average downsampling for image degradation. Bicubic downsampling with anti-aliasing is more common in the SISR literature; specifically the Matlab scheme, but 2D-averaging was assumed in deriving (\ref{f_compact}). The CNNs here perform 4x SISR, converting 48x48 pixel inputs to 192x192 outputs. Images are standardized to a [-1,1] scale and a $tanh$ activation is applied to the output. In the DE cases the $tanh$ activation is applied \textit{before} applying (\ref{f_compact}). Each CNN is trained for 300 epochs, with the learning rate reduced by a factor of 10 after the 200th epoch. Epochs are 1000 batches of 16 image chips selected randomly from the training set with random flips and rotations. Pixel-wise mean squared error (MSE) is used as a loss function and the Adam optimizer is used with an initial learning rate of $10^{-4}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon=10^{-7}$ \cite{edrn,rdn}. Two evaluation metrics are used: Peak Signal to Noise Ratio (PSNR) \cite{cnn_sr_review} and the Structural Similarity Index (SSIM) \cite{ssim}. PSNR is computed on the intensity (Y) channel after converting the CNN's output to the YCbCr color space \cite{srcnn}. SSIM is a metric designed to be more representative of the perceptual quality of an image than pixel-wise metrics. It scales from -1 to 1 and higher values are better. During validation and testing, each LR image is broken into 48x48 pixel chips using a 24-pixel stride and PSNR and Structural Similarity Index (SSIM) \cite{ssim} are then calculated on the 96x96 center portions of each of the HR outputs. For each CNN, both with and without DE, a five-member ensemble was trained from randomly initialized weights and the ensemble mean test scores are reported in Table \ref{test_data_table}. Figure \ref{training_loss} shows PSNR computed throughout training on the 10-image validation set for the first ensemble member for each CNN. \begin{figure*} \begin{center} \includegraphics[width=1\linewidth]{./train_perf.png} \end{center} \caption{(a-g): Validation set PSNR during training for the seven CNN architectures with (red) and without (black) Downsampling Enforcement (DE). The blue line in (e) uses an additional loss function term instead of DE. (h): Applies conventional CNN-SISR (black) and DE-SISR that incorrectly assumes 2D-average downsampling (green) to bicubic-downsampled images.} \label{training_loss} \end{figure*} \subsection{Results} Figure \ref{training_loss} (a-g) shows validation PSNR loss during training for each CNN (panels) both with (red lines) and without (black lines) the DE operator. All of the CNN architectures perform comparably or better when DE is added, with the largest advantage early during training. Examples of sample outputs from each of the CNNs for each of the training sets both with and without DE are included in the supplementary material (Section 4, Figures 1-2). While there are some differences on close inspection, the small differences in PSNR shown in Figure \ref{training_loss} do not relate to any dramatic change in perceptual image quality of the output. These figures help demonstrate that the DE approach can achieve state of the art SISR performance while strictly enforcing physical conservation laws within the CNN architecture. Table \ref{test_data_table} summarizes final performance for every CNN/test-set pair, with the better scores denoted by \textbf{bold text}. Adding DE to the CNN improved performance in all but one case (the Dense-Net had slightly worse PSNR on Set5, though note that Set5 has only 5-images and results in this column are more likely to be affected by small sample size). The improvements are often small, but are of comparable size to recent generational improvements in CNN architectures, EDRN vs. RDN for instance.\footnote{\url{https://paperswithcode.com/sota/image-super-resolution-on-bsd100-4x-upscaling} Accessed: 28-Jan-2021} Furthermore, in most cases the improvement in the mean test score due to adding DE passes a $99\%$ confidence test (one-sided t-Test for difference in means \cite{rice_2006}). These cases are shown in \textcolor{red}{\textbf{bold red text}}. Overall, the results in Table \ref{test_data_table} show that in addition achieving the primary goal of exact enforcement of conservation rules between the input and output images our approach yields robust and consistent performance improvements when applied across a large sampling of CNN-types and image datasets. \begin{table*}[t] \begin{center} \caption{Evaluation of several super resolution CNN architectures, both with and without Downsampling Enforcement (DE), applied to standard test datasets for image super resolution. Entries show Peak Signal to Noise Ratio / Structural Similarity Index (PSNR/SSIM), with higher scores in bold. Values are averaged across 5 training runs with random initializations and red-colored entries pass a 99\% confidence test for difference in means.} \begin{small} \begin{tabular}{ c || c | c | c | c | c | c } & SET5 & SET14 & BSDS100 & Manga-109 & Urban-100 & Div2k \\ \hline SR-CNN & 32.26/0.8914 & 27.06/0.7466 & 26.34/0.7151 & 27.40/0.8444 & 24.22/0.7242 & 29.04/0.8087 \\ w/ DE & \textcolor{red}{\textbf{32.44}}/\textcolor{red}{\textbf{0.8954}} & \textcolor{red}{\textbf{27.18}}/\textcolor{red}{\textbf{0.7507}} & \textcolor{red}{\textbf{26.41}}/\textcolor{red}{\textbf{0.7181}} & \textcolor{red}{\textbf{27.65}}/\textcolor{red}{\textbf{0.8518}} & \textcolor{red}{\textbf{24.37}}/\textcolor{red}{\textbf{0.7321}} & \textcolor{red}{\textbf{29.15}}/\textcolor{red}{\textbf{0.8134}} \\ \hline DUN & 33.26/0.9032 & 27.61/0.7593 & 26.67/0.7270 & 28.65/0.8673 & 25.00/0.7566 & 29.57/0.8224 \\ w/ DE & \textbf{33.30}/\textcolor{red}{\textbf{0.9047}} & \textcolor{red}{\textbf{27.68}}/\textcolor{red}{\textbf{0.7618}} & \textcolor{red}{\textbf{26.72}}/\textcolor{red}{\textbf{0.7290}} & \textcolor{red}{\textbf{28.85}}/\textcolor{red}{\textbf{0.8721}} & \textcolor{red}{\textbf{25.15}}/\textcolor{red}{\textbf{0.7632}} & \textcolor{red}{\textbf{29.66}}/\textcolor{red}{\textbf{0.8256}} \\ \hline Lap-SRN & 33.22/0.9037 & 27.63/0.7605 & 26.70/0.7289 & 28.86/0.8709 & 25.13/0.7608 & 29.66/0.8249 \\ w/ DE & \textbf{33.29}/\textcolor{red}{\textbf{0.9048}} & \textcolor{red}{\textbf{27.69}}/\textcolor{red}{\textbf{0.7615}} & \textcolor{red}{\textbf{26.76}}/\textcolor{red}{\textbf{0.7302}} & \textcolor{red}{\textbf{28.94}}/\textcolor{red}{\textbf{0.8720}} & \textcolor{red}{\textbf{25.31}}/\textcolor{red}{\textbf{0.7672}} & \textcolor{red}{\textbf{29.73}}/\textcolor{red}{\textbf{0.8268}} \\ \hline DBPN & 33.49/0.9075 & 27.85/0.7673 & 26.88/0.7354 & 29.58/0.8824 & 25.72/0.7816 & 29.99/0.8333 \\ w/ DE & \textbf{33.54}/\textcolor{red}{\textbf{0.9085}} & \textcolor{red}{\textbf{27.88}}/\textcolor{red}{\textbf{0.7688}} & \textcolor{red}{\textbf{26.91}}/\textcolor{red}{\textbf{0.7368}} & \textcolor{red}{\textbf{29.70}}/\textcolor{red}{\textbf{0.8855}} & \textcolor{red}{\textbf{25.86}}/\textcolor{red}{\textbf{0.7871}} & \textcolor{red}{\textbf{30.06}}/\textcolor{red}{\textbf{0.8353}} \\ \hline EDRN & 33.52/0.9075 & 27.90/0.7676 & 26.89/0.7362 & 29.61/0.8824 & 25.80/0.7853 & 30.04/0.8348 \\ w/ DE & \textbf{33.58}/\textcolor{red}{\textbf{0.9086}} & \textbf{27.93}/\textcolor{red}{\textbf{0.7690}} & \textcolor{red}{\textbf{26.92}}/\textcolor{red}{\textbf{0.7372}} & \textcolor{red}{\textbf{29.68}}/\textcolor{red}{\textbf{0.8852}} & \textcolor{red}{\textbf{25.88}}/\textcolor{red}{\textbf{0.7882}} & \textcolor{red}{\textbf{30.07}}/\textcolor{red}{\textbf{0.8360}} \\ \hline DNSR & \textbf{33.59}/0.9081 & 27.88/0.7674 & 26.87/0.7350 & 29.52/0.8821 & 25.72/0.7822 & 29.97/0.8331 \\ w/ DE & 33.56/\textbf{0.9087} & \textbf{27.89}/\textcolor{red}{\textbf{0.7684}} & \textcolor{red}{\textbf{26.88}}/\textcolor{red}{\textbf{0.7358}} & \textcolor{red}{\textbf{29.60}}/\textcolor{red}{\textbf{0.8846}} & \textcolor{red}{\textbf{25.79}}/\textcolor{red}{\textbf{0.7854}} & \textbf{29.99}/\textcolor{red}{\textbf{0.8342}} \\ \hline RDN & 33.50/0.9070 & 27.87/0.7669 & 26.88/0.7357 & 29.61/0.8815 & 25.85/0.7860 & 30.05/0.8344 \\ w/ DE & \textbf{33.51}/\textbf{0.9076} & \textcolor{red}{\textbf{27.92}}/\textcolor{red}{\textbf{0.7687}} & \textcolor{red}{\textbf{26.91}}/\textcolor{red}{\textbf{0.7367}} & \textcolor{red}{\textbf{29.67}}/\textcolor{red}{\textbf{0.8834}} & \textcolor{red}{\textbf{25.95}}/\textcolor{red}{\textbf{0.7898}} & \textbf{30.07}/\textbf{0.8352} \\ \hline \end{tabular} \end{small} \label{test_data_table} \end{center} \end{table*} \section{Application to Scientific Datasets} \label{sds_section} Here we apply the DE super resolution method to three scientific datasets where strict adherence to conservation principles is important. These datasets are from diverse sources: a satellite imager, a weather radar, and a numerical weather model. In each case we use the EDRN CNN as described in Supplement Section 3 with the same training procedure described in Section \ref{training_section} (except that a lower initial learning rate of $2\times 10^{-5}$ was used for the SEVIR data). Figure \ref{sds_samples} shows an example input, degraded image, and SR output for each dataset. Figure \ref{sds_scores} shows the loss on the test sets while training (no parameter tuning was done on these datasets so no validation sets were used). In each case, inclusion of DE allows for exact enforcement of conservation laws while providing a modest improvement in performance. \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{./sds_samples.png} \end{center} \caption{Example degraded inputs (left), super-resolved outputs (center), and ground truth (right) for three different scientific datasets.} \label{sds_samples} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{./sds_scores.png} \end{center} \caption{Test set RMSE evaluated for each scientific dataset throughout training. Final test set scores (without/with DE) were: (GOES: 13.72/13.71 $Wm^{-2} sr^{-1} \mu m^{-1}$), (ERA 5: 7.56/7.55 $\%$), (SEVIR: 0.507/0.502 $kgm^{-2}$)} \label{sds_scores} \end{figure} \subsection{GOES-17 L1b Radiance} The Geostationary Operational Environmental Satellite 17 (GOES-17) \cite{goes_abi_tbds} is a geostationary satellite currently orbiting above the Equatorial Pacific at 137.2W. Here we apply super resolution to level-1b radiance data from the Advanced Baseline Imager band 2 (the red 640nm band) which has a resolution of 0.5km at nadir \cite{goes_users_guide}. The images selected for this study are full-disk scans taken near 12:00 LST on various days during 2019 and 2020. The images are cropped to pixels 2452-19348 height and 8100-17700 width to avoid extreme viewing angles and low illumination near the edge of the Earth's disk. The exact file names are given in the supplement (Section 5). The L1b radiances have units of $Wm^{-2}sr^{-1}\mu m^{-1}$, and enforcing strict conservation yields a slight performance improvement for the SR scheme (Figure \ref{sds_scores}a). Example outputs are shown in Figure \ref{sds_samples} panels a-c. \subsection{ERA5 Cloud Fraction} The European Center for Medium Range Weather Forecasting Reanalysis version 5 (ERA5) is a reconstruction of the past state of the atmosphere from 1979-present. The reanalysis is performed by assimilating historical atmospheric observations with a numerical weather model \cite{era5}. Here we apply super-resolution to cloud fraction data which represent the fraction (0-100$\%$) of each model grid cell area covered by cloud. We use daily $0.25^{\circ} \times 0.25^{\circ}$ resolution data, between latitudes $\pm 45^{\circ}$, at 0Z for the period 1-Jan-1979 to 31-Dec-2018 for training and data from 2019 for testing. Note that these data are on a lat-lon grid while our DE technique assumes an equal area grid, so here we have used only data near the equator where there is less distortion. It is possible to modify the DE technique to include latitude weightings but we leave this for future work. Nonetheless, enforcing strict conservation rules in this context provides a performance improvement for the SR scheme (Figure \ref{sds_scores}b). Examples are shown in Figure \ref{sds_samples} panels d-f. \subsection{NEXRAD Vertically Integrated Liquid Water} The Storm EVent ImageRy (SEVIR) weather dataset \cite{sevir} is composed of co-located satellite and radar observations of 20,000 weather events over the continental United States between 2017 and 2020. Here we train EDRN to perform 4x super resolution on $192\times192$ pixel (1km resolution) chips of vertically integrated liquid water (VIL), a radar derived product from the NEXRAD radar network. VIL has units of $kgm^{-2}$ and 2D average downsampling enforces conservation of mass. We use the 25th time-sample from the first 18,000 events to train and remaining 2,000 to test. Figure \ref{sds_samples} Panels g-i show an example case from the test set. The downsampling enforcement method is able to conserve liquid water mass while achieving a slight performance improvement over conventional training (Figure \ref{sds_scores}c). \section{Additional Experiments and Discussion} In this section we provide discussion of the limitations of our method, comparison to recent literature, and outline potential areas of future research. \subsection{Related Methods} No existing methods strictly enforce invertibility of CNN SR but, past studies \cite{pulse, dsloss1, dsloss2,zhang_2020} have highlighted its importance and have used loss functions to approximate invertibility. PulseGAN \cite{pulse}, adds an MSE term computed between the LR input and the downsampled output to an SR GAN's adversarial loss function, ensuring more physically plausible outputs. This method does not improve CNNs that optimize pixel-wise MSE because it effectively imposes the same loss function twice at different resolutions. We confirm this by training the EDRN-CNN with the loss function: \begin{equation} \mathcal{L} = MSE + \lambda \overline{(D\{x\}-D\{\hat{x}\})^2} \end{equation} where $MSE$ is the pixel-wise mean squared error, $x$ and $\hat{x}$ are ground truth and predicted HR pixel values respectively, $D\{*\}$ is a $4 \times 4$ averaging downsampling operator, $\lambda$ is a weighting coefficient (here, $\lambda=16$), and the over-bar represents averaging. Validation PSNR throughout training is shown in Figure \ref{training_loss}e as a blue line. The PSNR/SSIM computed on the Div2k test set was: 29.95/0.8327, lower than the scores in Table \ref{test_data_table}. The key difference of the DE approach is that it directly modifies the CNN not the loss function, and it guarantees exact instead of approximate invertibility. \subsection{Other Downsampling Schemes} \label{bicub} Assuming 2D-average downsampling is often correct for enforcing conservation laws, but it is not always used to train SISR CNNs. We demonstrate the impact of an incorrectly assumed downsampling scheme by training the EDRN CNN using a bicubic downsampling scheme, both with and without DE that assumes 2D-averaging. Predictably, the conventional EDRN outperforms the DE version in this case (Figure \ref{training_loss}h). For the div2k test set the conventional scheme had a PSNR/SSIM of: 30.26/0.8393 while the DE-EDRN scored: 30.13/0.8356. While 2D-averaging is used here, it may be possible to derive similar operators for other downsampling schemes, by modifying (\ref{f_piecewise} and \ref{f_compact}) to include the weights of a bicubic downsampling kernel for instance. Finally, some of the outputs from the DE CNNs contain faint checkerboard artifacts; in the trees in the BSDS100 sample image in Supplementary Figures 1 and 2 for instance. Some checkerboarding also occurs in the no-DE cases, so we hypothesize that these patterns are partially a result of using 2D-average downsampling without anti-aliasing. In preliminary experiments we have observed that the problem is more pronounced when the DE operator is used for larger upsampling ratios (x16 upsampling) however. This is a limitation of our algorithm for large resolution increases, but an in-depth exploration of the problem and possible mitigation strategies is left as future research. \subsection{The Magnitude of Pixel Corrections} \label{alpha_sec} \begin{figure} \begin{center} \includegraphics[width=0.5\linewidth]{./reg_samples.png} \end{center} \caption{Examination of intermediate outputs from our downsampling enforcement implementation of EDRN \cite{edrn} for a sample image from BSDS100 \cite{bsds100} before and after the DE-operator is applied. (a): output prior to DE layer; (b): final output after DE layer; (c) and (d): same as (a) and (b) but with regularization on the magnitude of the correction; (e): ground truth image for reference.} \label{intermediate_outputs_1} \end{figure} In Section \ref{f_interp_sec}, $f(\mathbf{x},P)$ is interpreted as a correction applied to an intermediate image output by the CNN. Here, we investigate whether the quality of this intermediate output improves during training, or if the CNN learns to rely on the correction by examing the value of: $|\mathbf{x} - f(\mathbf{x},P)|$, the difference between the initial output and the corrected output. After training on the Div2k test set, the average difference was 108, meaning the HR image requires a correction to ensure that it will downsample to the input image. Figure \ref{intermediate_outputs_1} panels a and b, show the intermediate output and the corrected output for an example HR image chip from the Div2k test set respectively (the original image is shown in panel e). The magnitude of the correction can be reduced by re-training with a regularizer in the loss function: \begin{equation} \mathcal{L} = MSE + \lambda \overline{|\mathbf{x} - f(\mathbf{x},P)|} \end{equation} where $\lambda=100$. After training with this regularization term, $|\mathbf{x} - f(\mathbf{x},P)|$ averaged over the Div2k test set was 0.3. This is demonstrated in panels c and d of Figure \ref{intermediate_outputs_1}, where the intermediate output (c) is now a near perfect match for the final output (d). Finally, the overall performance of the SISR scheme was not substantially altered and the regularized CNN had a PSNR of 30.07 and a SSIM of 0.8359 on the Div2k test set, comparable to results without regularization. Because the final outputs are nearly identical with or without the regularizer, it is not necessary to include it for most SR use cases, but the ability to increase the accuracy of the CNN's intermediate output may be useful for future applications. \section{Conclusions} Here, we demonstrated a new method to ensure that the output from any super-resolution-CNN, when downsampled with 2D averaging, exactly reproduces the low resolution input. In addition to providing physical consistency between the input and output data, this approach improves the CNN performance for many different super resolution architectures across several common image datasets. The method involves constructing the CNN with ``Downsampling Enforcement,'' and does not require any modifications to the data, training procedure, or loss function. CNN-based super resolution is applicable to many types of imagery and gridded data where a guarantee that the statistics of the LR data are preserved is impactful. Here, we demonstrated how this approach could be used to: generate high resolution satellite imagery without introducing non-physical radiances; downscale coarse resolution output from a numerical model without breaking physical conservation laws; or super resolve radar data while preserving vertically integrated water mass. In these applications, preserving the LR image statistics in the HR image is paramount, and the technique presented here can deliver the high visual fidelity provided by CNN-based super resolution schemes without sacrificing physical consistency.
\section{Introduction} \label{sec:intro} Nuclear matter produced in heavy ion collisions at RHIC and LHC appears to be well described by relativistic fluid dynamics at the time shortly after the collision, i.e. for $t>\tau_H$, where the ``hydrodynamization'' time $\tau_H$ is of the order of $1 - 2$ fm/c \cite{Teaney:2000cw,Heinz:2013wva,Luzum:2008cw,Luzum:2009sb,Schenke:2010rr,Song:2010mg}. The hydrodynamic description fits the available experimental data well provided the shear viscosity - entropy density ratio of the resulting nuclear fluid is low, $\eta/s \sim \hbar/4 \pi k_B$. An interesting and not fully understood question is how the matter reaches the hydrodynamic stage of its evolution so quickly and which physical mechanisms are responsible for such a rapid thermalization at intermediate values of QCD coupling. The regime of intermediate coupling can in principle be approached from either the weak or the strong coupling side and accordingly, issues related to thermalization have been studied in kinetic theory at weak coupling and in gauge-string duality (holography) at strong coupling. While the kinetic theory approach and the holographic methods are very different, it is clear that in one and the same theory (e.g. in ${\cal N}=4$ supersymmetric $SU(N_c)$ Yang-Mills (SYM) theory at infinite $N_c$) one should expect an interpolation between strong and weak coupling results for observables describing thermalization, similar to the coupling constant dependence of the shear viscosity - entropy density ratio \cite{Kovtun:2004de,Buchel:2004di} or pressure \cite{Gubser:1998nz,Blaizot:2006tk}. The goal of this paper is to investigate such a dependence for a number of models where corrections to known holographic results at infinitely strong coupling can be computed by using higher derivative terms in the dual gravity action. Among relevant observables, we focus on the hierarchy of times characterizing the approach to thermal equilibrium. In simple models of kinetic theory, the appropriate time scales emerge as eigenvalues of the linearized collision operator, with the largest eigenvalue, $\tau_{\scriptscriptstyle R}$, essentially (within a specified approximation scheme) setting the time scale for transport phenomena \cite{ford-book,gross-1959,grad-1963,liboff-book} (see Section \ref{sec:relaxation} for details). In particular, for the shear viscosity in the non-relativistic kinetic theory one typically obtains \cite{chapman-book} \begin{equation} \eta = \tau_{\scriptscriptstyle R}\, n\, k_B\, T\,, \label{eq:rel-visc-nr} \end{equation} where $n$ is the particle density. The relativistic analogue of Eq.~(\ref{eq:rel-visc-nr}) is \begin{equation} \eta = \tau_{\scriptscriptstyle R}\, s\, T\,, \label{eq:rel-visc-rel} \end{equation} where $s$ is the volume entropy density\footnote{To get the factors of $k_B$ right, one may consult the equation carved on Boltzmann's tombstone.}. In kinetic theory, the relaxation time $\tau_{\scriptscriptstyle R}$ is simply proportional to the (equilibrium) mean free time for corresponding particles or quasiparticles and thus the internal time scale associated with the kinetic operator acquires a transparent physical meaning. In the regime of validity of Eq.~(\ref{eq:rel-visc-rel}), the dependence of $\eta/s$ on e.g. the coupling is the same as the dependence on the coupling of $\tau_{\scriptscriptstyle R} T$ and thus we expect the ratio $\eta/s \tau_{\scriptscriptstyle R} T$ to be (approximately) constant in that regime. Another interesting feature of kinetic theory models is the breakdown of the hydrodynamic description for sufficiently large values of the wave vector $q>q_c$ and the appearance of the strongly damped Knudsen modes \cite{boltzmann-book}. We shall see that these phenomena have their counterparts in the regime of strong coupling despite the fact that kinetic theory is not applicable in that regime. It is believed that the quark-gluon plasma created in heavy ion collisions at energies available at RHIC or LHC is a strongly interacting system, for which a direct or effective (via a suitable quasiparticle picture) application of kinetic theory is difficult to justify. Instead, insights into the time-dependent processes at strong coupling are obtained by studying qualitatively similar strongly coupled theories having a dual holographic description in terms of higher-dimensional semiclassical gravity. Holography \cite{jorge-book, Ammon:2015wua, nastase-book, natsuume-book, zaanen-book} provides a convenient framework for studying non-equilibrium phenomena in strongly interacting systems. The dynamics and evolution of non-equilibrium states in a strongly interacting quantum many-body system is mapped (in the appropriate limit) into the dynamics and evolution of gravitational and other fields of a dual theory. Holography should in principle be capable of encoding all types of non-equilibrium behavior. In particular, evolution of the system towards thermal equilibrium is expected to be described by the dynamics of gravitational collapse. Numerical and analytical studies of processes involving strong gravitational fields including black holes and neutron stars mergers resulting in black hole formation and particles falling into black holes show a characteristic scenario in which a primary signal (strongly dependent on the initial conditions) is followed by the quasinormal ringdown (dependent on the final state parameters only) and then a late-time tail (see e.g. \cite{Frolov:1998wf}, \cite{Berti:2009kk}). A holographic description of fully non-equilibrium quantum field theory states via dual gravity has been developed over the last several years and the results suggest that the quasinormal spectrum (i.e. the eigenvalues of the linearized Einstein's equations of the dual black brane background) and in particular the fundamental (the least damped non-hydrodynamic) quasinormal frequency play a significant role in the description of relaxation phenomena. Recent studies (including sophisticated numerical general relativity approaches) of equilibration processes in the dual gravity models \cite{Chesler:2008hg, Chesler:2009cy,Chesler:2015wra,Chesler:2015fpa,Casalderrey-Solana:2013aba,Casalderrey-Solana:2013aba,Heller:2011ju,Bantilan:2014sra,Buchel:2015saa,Jankowski:2014lna,Keranen:2015mqc} reveal that the hydrodynamic stage of evolution is reached by a strongly coupled system long before the pressure gradients become small and that the relevant time scales are essentially determined by the lowest quasinormal frequency, even for non-conformal backgrounds \cite{Buchel:2015saa,Janik:2015waa,Janik:2015iry,Attems:2016ugt,Janik:2016btb,Gursoy:2016ggq}. The characteristic time scale here is set by the inverse Hawking temperature of the dual equilibrium black hole. A seemingly natural question to ask is whether the relation between transport phenomena and the relaxation time(s) familiar from kinetic theory exists also at strong coupling and if yes, how it changes as a function of coupling. Is there a limiting value of the wave vector beyond which hydrodynamic description breaks down at large but finite coupling? Extrapolating kinetic theory results to the regime of intermediate coupling was the subject of recent investigation by Romatschke \cite{Romatschke:2015gic}. In holography, these questions can be studied by computing coupling constant corrections to the full quasinormal spectra using the appropriate higher derivative terms in dual gravity. Recently, such corrections have been studied in Refs.~\cite{Stricker:2013lma}, \cite{Waeber:2015oka}. In this paper, we compute the quasinormal spectra of metric perturbations of the gravitational background with $R^4$ higher derivative term (dual to $\mathcal{N}=4$ SYM at finite temperature and large but finite 't Hooft coupling), and for the background with $R^2$ terms including Gauss-Bonnet gravity in $d=5$ dimensions. Normally, higher derivative terms are treated as infinitesimally small corrections to the second order equations of motion of Einstein gravity, otherwise one is doomed to encounter the Ostrogradsky instability and related problems. Accordingly, extrapolating results from infinitesimal to finite values of the corresponding parameters requires caution. Gauss-Bonnet and more generally Lovelock gravity are good laboratories since their equations of motion are of second order and thus can handle finite values of the parameters multiplying higher derivative terms. However, such theories appear to suffer from internal inconsistencies for any finite value of the parameters \cite{Camanho:2014apa} (for an apparently dissenting view, see \cite{Reall:2014pwa}). The passage between Scylla and Charybdis of those two difficulties may be hard to find, if it exists at all. We find some solace in the fact that our results show a qualitatively similar picture regardless of the exact form of higher derivative terms used. The paper is organized as follows: Our main results are summarized in Section \ref{sec:relaxation}, where we also review some facts about the relaxation times in quantum critical, kinetic and gravitational systems, adding a number of new observations along the way. In Section \ref{sec:SYM}, we compute the (inverse) 't Hooft coupling corrections to the quasinormal spectrum of gravitational fluctuations in AdS-Schwarzschild black brane background modified by the higher derivative terms and discuss the relaxation time behavior, the density of poles and the inflow of extra poles from infinity. In Sections \ref{sec:GB} and \ref{sec:r2}, correspondingly, a similar procedure is applied to Gauss-Bonnet gravity and to the background with generic curvature squared terms. We briefly discuss the results in the concluding Section \ref{sec:discussion}. Some technical issues and comments about our numerical procedures appear in the Appendices. \section{Relaxation times at weak and strong coupling} \label{sec:relaxation} In this Section, we briefly review the appearance of the hierarchy of relaxation times in kinetic theory, holography and some models of condensed matter physics, emphasizing their similarities and adding some new observations. In this context, at the end of the Section, we list the main results of the present paper. In kinetic theory, transport coefficients and relaxation time(s) are intimately related. To be clear, by the relaxation time we mean the characteristic time interval during which a local thermal equilibrium (e.g. a local Maxwell-Boltzmann equilibrium) is formed everywhere in the system. We are not interested in the momentum-dependent equilibration time-scales of the densities of conserved charges (these densities always relax hydrodynamically) which are, strictly speaking, infinite in the limit of vanishing spatial momentum. Consider, for illustration, non-relativistic Boltzmann equation obeyed by the one-particle distribution function $F(t,{\bf r},{\bf p})$ \begin{equation} \frac{\partial F}{\partial t} + \frac{p_i}{m}\, \frac{\partial F}{\partial r^i} -\frac{\partial U (r)}{\partial r^i}\, \frac{\partial F}{\partial p_i} = C[F]\,, \end{equation} where $U(r)$ is the external potential and $C[F]$ is the Boltzmann collision operator containing details of the interactions. For small deviations from the local thermal equilibrium described by the distribution function $F_0({\bf r},{\bf p})$, the kinetic equation can be linearized by the ansatz \begin{equation} F(t,{\bf r},{\bf p}) = F_0({\bf r},{\bf p})\left[1 + \varphi (t, {\bf r},{\bf p})\right]\,, \label{eq:anz} \end{equation} where $\varphi \ll 1$. The ansatz (\ref{eq:anz}) leads to the evolution equation \begin{equation} \frac{\partial \varphi}{\partial t} = - \frac{p_i}{m}\, \frac{\partial \varphi}{\partial r^i} + \frac{\partial U (r)}{\partial r^i}\, \frac{\partial \varphi}{\partial p_i} + L_0 [\varphi ]\,, \label{eq:linear_collision_op} \end{equation} where $L_0$ is a linear integral operator resulting from linearization of $C[F]$. Formal solution to Eq.~(\ref{eq:linear_collision_op}) with the initial condition $\varphi (0, {\bf r},{\bf p}) = \varphi_0 ({\bf r},{\bf p})$ can be written in the form \cite{ferziger-kaper-book} \begin{equation} \varphi (t, {\bf r},{\bf p}) = e^{t L} \, \varphi_0 ({\bf r},{\bf p}) = \frac{1}{2\pi i} \int\limits_{\gamma - i \infty}^{\gamma+i \infty} \, e^{s t} \, R_s d s\, \varphi_0 ({\bf r},{\bf p})\,, \end{equation} where $R_s = \left( s I - L\right)^{-1}$ is the resolvent whose analytical structure in the complex $s$-plane determines the relaxation properties. In some simple cases, such as e.g. the relaxation of a low-density gas of light particles in a gas of heavy particles, the resolvent can be constructed explicitly and the time dependence fully analyzed \cite{silin-book}. Generically, however, the time evolution is not known explicitly. For spatially homogeneous equilibrium distributions and perturbations, a simple ansatz $\varphi (t, {\bf p}) = e^{-\nu t} h ({\bf p})$ reduces the linearized kinetic equation to the eigenvalue problem for the linear collision operator: \begin{equation} - \nu h = L_0 [h]\,. \end{equation} The eigenvalues of $L_0$ determine the spectrum of (inverse) relaxation times in the system. One can then write a general solution of the linearized kinetic equation in the form \begin{equation} \varphi (t, {\bf p}) = \sum\limits_n C_n e^{-\nu_n t} h_n ({\bf p})\,, \label{eq:sum-eigen} \end{equation} where the coefficients $C_n$ are determined by the initial conditions and the sum should be replaced by an integral if the spectrum turns out to be continuous. The hierarchy $\{ \nu_n \}$ in Eq.~(\ref{eq:sum-eigen}) is clearly reminiscent of the hierarchy of imaginary times of the quasinormal modes in the dual gravity treatment of near-equilibrium processes at strong coupling. The spectrum of the operator $L_0$ for (classical) particles interacting via the potential $U(r) = \alpha/r^n$ has been investigated by Wang Chang and Uhlenbeck \cite{ford-book} and by Grad \cite{grad-1963}. The spectrum consists of a five-fold degenerate null eigenvalue, corresponding to conserved quantities and the rest of the spectrum which can be discrete (for $n=4$) or continuous \cite{ford-book, grad-1963,liboff-book}, with or without a gap, depending on $n$ (see Fig.~\ref{fig:spectrum_kinetic}). The time dependence is obviously sensitive to the type of the spectrum: discrete spectrum leads to a clear exponential relaxation, whereas continuous spectrum implies a more complicated pattern including a pure power-law fall-off in the gapless case. \begin{figure}[htbp] \centering \includegraphics[width=0.65\textwidth]{figs/spectrum-thermalization.pdf} \caption{The spectrum of a linear collision operator: a) discrete spectrum, b) continuous spectrum with a gap, realized for the interaction potential $U=\alpha/r^n$, $n>4$, c) gapless continuous spectrum, realized for the interaction potential $U=\alpha/r^n$, $n<4$, d) Hod spectrum (see text): $0 \leq \nu_{min} \leq \nu_c$. In all cases, $\nu =0$ is a degenerate eigenvalue corresponding to hydrodynamic modes (at zero spatial momentum).} \label{fig:spectrum_kinetic} \end{figure} Assuming the spectrum is discrete and denoting $\tau_{\scriptscriptstyle R} = 1/\nu_{min}$, in the relaxation time approximation, when the sum in (\ref{eq:sum-eigen}) is dominated by a single term with $\nu_n = \nu_{min}$, we find \begin{equation} \frac{\partial F}{\partial t} = - \frac{F - F_0}{\tau_{\scriptscriptstyle R}}\,. \end{equation} Generalization to weakly inhomogeneous systems gives \cite{kvasnikov-book,liboff-book} \begin{equation} \frac{\partial F}{\partial t} + \frac{p_i}{m}\, \frac{\partial F}{\partial r^i} -\frac{\partial U (r)}{\partial r^i}\, \frac{\partial F}{\partial p_i} = - \frac{F - F_0}{\tau_{\scriptscriptstyle R}}\,. \label{eq:relax-term-eq} \end{equation} The equation (\ref{eq:relax-term-eq}) has been remarkably successful in describing transport phenomena in systems with a kinetic regime\footnote{The equation (\ref{eq:relax-term-eq}) with a semi-phenomenological $\tau_{\scriptscriptstyle R} = \tau_{\scriptscriptstyle R} (v)$ is sometimes called the Krook-Gross-Bhatnagar (KGB) equation \cite{KGB}.} \cite{gross-1959}, \cite{ferziger-kaper-book}. In particular, assuming $\tau_{\scriptscriptstyle R} = const$, for the shear viscosity one obtains the result (\ref{eq:rel-visc-nr}). Estimates of $\tau_{\scriptscriptstyle R}$ based on Ritz variational method relate the relaxation time to the mean free time: $\tau_{\scriptscriptstyle R} = 15/8\, \tau_{\scriptscriptstyle mft} \sim \sqrt{m}/\sqrt{T} n \sigma$, where $\sigma$ is the interaction cross-section. The account above may look too schematic but a more detailed treatment is available in the standard kinetic theory \cite{ferziger-kaper-book} (including relativistic and quantum cases \cite{liboff-book}, \cite{groot-book}), in the mathematical theory of Boltzmann equation \cite{saint-raymond-book} and in thermal gauge theory \cite{Arnold:2002zm,Arnold:2003zc}. Do the relations between transport coefficients and relaxation time(s) similar or identical to the ones in Eqs.~(\ref{eq:rel-visc-nr}) and (\ref{eq:rel-visc-rel}) hold beyond the regime of applicability of kinetic theory and in the absence of quasiparticles? One may appeal to dimensional analysis and the uncertainty principle \cite{Kovtun:2003wp} or "general wisdoms'' \cite{zaanen-book} when arguing for an affirmative answer\footnote{Indeed, the characteristic time scale in the kinetic regime is the mean free path $\tau \sim t_{mfp}$ and in the regime of strong coupling it is the inverse temperature of a dual black hole, $\tau \sim \hbar/k_B T$. Assuming $\eta/s \sim \tau k_B T$, we have $\eta/s \sim \sqrt{m T}/n \sigma$ in the first case and $\eta/s \sim \hbar /k_B$ in the second.} but in all cases the concept of weakly interacting quasiparticles seems to be lurking behind such reasoning. At the same time, the concepts of relaxation time and transport are meaningful irrespective of whether or not the kinetic theory arguments are applicable. \begin{figure}[tbp] \begin{center} \includegraphics[width=.7\textwidth]{figs/quantum_critical_diagram_black.pdf} \caption{\label{fig:quantum_critical_diag} Phase diagram of the $d=1+1$ quantum Ising model \cite{sachdev-book-2}. The relaxation time in the quantum critical region is determined by the lowest quasinormal frequency of the BTZ black hole.} \end{center} \end{figure} In particular, in condensed matter physics, considerable attention has been drawn to the studies of quantum critical regions \cite{sachdev-book-2}, where the characteristic time scales of strongly interacting theories at finite temperature are of the order of $\tau \sim \hbar/k_B T$ (see Fig.~\ref{fig:quantum_critical_diag}). Moreover, estimates of thermal equilibration time $\tau_{\scriptscriptstyle R}$ in relevant models suggest that \cite{sachdev-book-2} \begin{equation} \tau_{\scriptscriptstyle R} \geq {\cal C} \, \frac{\hbar}{k_B T}\,, \label{eq:sachdev-const} \end{equation} where ${\cal C}$ is a constant of order one, with the inequality saturated in the quantum critical region. In some models, the constant ${\cal C}$ can be computed analytically. For the quantum Ising model in $d=1+1$ dimension serving as one of the main examples illustrating quantum critical behavior in \cite{sachdev-book-2}, the relaxation time of the order parameter $\hat{\sigma}_z$ having the anomalous dimension $\Delta = 1/8$ in the quantum critical region is determined by the correlation function of a $1+1$-dimensional CFT at finite temperature. The (equilibrium) retarded two-point correlation function of an operator of (non-integer) conformal dimension $\Delta$ in momentum space is given by \cite{Son:2002sd} \begin{align} G^{R}(\omega , q) & = {{\cal C}_\Delta \over \pi\, \Gamma^2 ( \Delta -1)\sin{\pi\Delta}}\left| \Gamma \left( \frac{\Delta}{2}+ {i (\omega - q)\over 4 \pi T }\right)\Gamma \left(\frac{\Delta}{2}+ {i (\omega + q)\over 4 \pi T }\right)\right|^2 \nonumber \\ &\quad \times \Biggl[ \cosh{{q\over 2 T}} -\cos{\pi\Delta}\cosh{ {\omega\over 2 T} } + i \sin{\pi\Delta}\sinh{{\omega\over 2 T} }\Biggr]\,, \label{eq:full_green_ni} \end{align} where ${\cal C}_\Delta$ is the normalization constant and we put $T_L=T_R=T$. The correlator has a sequence of poles at \begin{equation} \omega = \pm q - i 4 \pi T \left( n +\frac{\Delta}{2}\right)\,, \end{equation} where $n=0,1,2,...$. Note that these are precisely the quasinormal frequencies of the dual BTZ black hole \cite{Birmingham:2001pj}, \cite{Son:2002sd}. At zero spatial momentum, the lowest quasinormal frequency determines the relaxation time \begin{equation} \tau_{\scriptscriptstyle R} = \frac{1}{2\pi \Delta}\, \frac{\hbar}{k_B T}\,, \label{eq:rel-time-scaling-dim} \end{equation} and thus the constant ${\cal C}$ in Eq.~(\ref{eq:sachdev-const}) is ${\cal C} = 4/\pi \approx 1.273$ for the Ising model considered\footnote{In \cite{sachdev-book-2}, the relaxation time was determined by expanding the denominator of the correlation function in Taylor series around $\omega =0$. This approximates the singularity of the correlator rather crudely giving $\tau_{\scriptscriptstyle R} = \frac{\hbar}{2 k_B T} \cot{[\frac{\pi}{16}]}$ and ${\cal C} = \frac{1}{2} \cot{[\frac{\pi}{16}]} \approx 2.514$.} in \cite{sachdev-book-2}. Curiously, inserting $\Delta=2$ (the scaling dimension of the energy-momentum tensor) into Eq.~(\ref{eq:rel-time-scaling-dim}) and using Eq.~(\ref{eq:rel-visc-rel}), one formally\footnote{The shear viscosity is not defined in $d=1+1$.} finds $\eta/s = 1/4\pi$. In holography, the importance of the quasinormal spectrum as the fundamental characteristic feature of near-equilibrium phenomena in a dual field theory has been recognized early on \cite{KalyanaRama:1999zj}, \cite{Horowitz:1999jd}, \cite{Danielsson:1999fa} and later it was observed \cite{Birmingham:2001pj} and shown \cite{Son:2002sd}, \cite{Kovtun:2005ev} that the quasinormal frequencies correspond to poles of the dual retarded correlators. A typical distribution of poles in the complex frequency $\omega$ plane at fixed spatial momentum $q$ of an equilibrium retarded correlator computed via holography in the supergravity approximation (e.g. at infinite 't Hooft coupling and infinite $N_c$ in $\mathcal{N}=4$ SYM) is shown in Fig.~\ref{fig:cuts_poles} (right panel), where the spectrum of a scalar fluctuation is shown \cite{Starinets:2002br}. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{figs/cuts.pdf} \includegraphics[width=0.45\textwidth]{figs/qnm_diagram_poles.pdf} \caption{Singularities of a thermal two-point correlation function in the complex frequency plane at (vanishingly) small \cite{Hartnoll:2005ju} (left panel) and infinitely large \cite{Starinets:2002br} (right panel) values of the coupling.} \label{fig:cuts_poles} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{figs/qnm_diagram_normal.pdf} \includegraphics[width=0.45\textwidth]{figs/qnm_diagram_anomalous.pdf} \caption{Singularities of thermal two-point correlation function of the energy-momentum tensor in the shear channel in the complex frequency plane at large coupling at $\eta/s>\hbar/4\pi k_B$ (left panel) and at $\eta/s<\hbar/4\pi k_B$ (right panel). Poles at infinitely large coupling are indicated by squares. At large but finite coupling, their new positions are shown by crosses.} \label{fig:poles_finite_coupling} \end{figure} For correlators of conserved quantities such as the energy-momentum tensor, the spectrum, in addition to an infinite tower of gapped strongly damped modes $\omega_n = \omega_n (q)$, contains also a sector of gapless hydrodynamic modes $\omega = \omega (q)$ with the property $\omega (q) \rightarrow 0$ for $q\rightarrow 0$ \cite{Starinets:2002br,Nunez:2003eq,Kovtun:2005ev}. Asymptotics of these spectra were computed in Refs.~\cite{Cardoso:2004up,Natario:2004jd} (for large $n$) and in Ref.~\cite{Festuccia:2008zx} (for large $q$). Curiously, at weak coupling the correlators at finite spatial momentum $q$ seem to have branch cuts stretching from $-q$ to $q$ rather than poles \cite{Hartnoll:2005ju} (see left panel of Fig.~\ref{fig:cuts_poles}). At zero spatial momentum, the branch cuts reduce to a sequence of poles on the imaginary axis \cite{Hartnoll:2005ju}. These issues are further discussed in \cite{Romatschke:2015gic} and in the present paper. Finite coupling corrections to the quasinormal spectrum can be computed by using higher derivative terms in the appropriate supergravity action. Such corrections for gravitational backgrounds involving $R^4$ higher derivative term were recently computed in Refs.~\cite{Stricker:2013lma,Waeber:2015oka}. In this paper, we consider $R^4$ and $R^2$ terms, including Gauss-Bonnet gravity. We find a number of novel features in addition to those reported in Refs.~\cite{Stricker:2013lma,Waeber:2015oka}. Our observations can be summarized as follows (see Sections \ref{sec:SYM}, \ref{sec:GB}, \ref{sec:r2} for full details): \begin{itemize} \item The positions of all poles change with the coupling. In the shear channel in particular, two qualitatively different trends are seen depending on whether $\eta/s >\hbar/4\pi k_B$ or $\eta/s <\hbar/4\pi k_B$ (see Fig.~\ref{fig:poles_finite_coupling}). In the first case (realized, for example, in $\mathcal{N}=4$ SYM), the symmetric branches of non-hydrodynamic poles lift up towards the real axis\footnote{In their motion toward the real axis, the branches remain essentially straight, in agreement with earlier observations in Ref.~\cite{Waeber:2015oka}. We do not observe the phenomenon of poles with large imaginary parts bending toward the real axis reported in Ref.~\cite{Stricker:2013lma}.} and the diffusion pole moves deeper down the imaginary axis. In the second case (corresponding to known examples of the dual gravity actions with curvature squared corrections, in particular, Gauss-Bonnet gravity with positive coupling), the branches move up only very slightly and the diffusion pole comes closer to the origin. \item For $\eta/s >\hbar/4\pi k_B$, the density of poles in the symmetric branches increases monotonically with the coupling changing from strong to weak values as shown schematically in Fig.~\ref{fig:poles_finite_coupling}. Qualitatively, this seems to be compatible with the poles merging and eventually forming branch cuts $(-\infty,q]$ and $[q,\infty)$, where $q$ is the spatial momentum, in the complex frequency plane at vanishing coupling. For $\eta/s <\hbar/4\pi k_B$, however, the density of poles decreases and they seem to disappear from the finite complex plane completely in the limit of vanishing viscosity. \item In the holographic models we considered, the function $\eta / s\, \tau_{\scriptscriptstyle R} T$ is a slowly varying function of the coupling, with an appreciable change in the vicinity of infinite coupling only, suggesting that approximations of the type $\eta /s \sim const\, \tau_{\scriptscriptstyle R} T$ are not unreasonable in the strongly coupled regime even though they cannot possibly follow from kinetic theory arguments. \item In view of the relation between $\eta/s$ and relaxation time, a bound on quasinormal frequencies of black branes similar to the one proposed by Hod for black holes \cite{Hod:2006jw} may imply a bound on $\eta/s$. This is further discussed in Section \ref{sec:discussion}. \item As $\eta/s$ increases well beyond $\hbar/4\pi k_B$ and the poles approach the real axis, we expect them to be visible as clear quasiparticle-like excitations (i.e. well-defined, high in amplitude and very narrow peaks) in the appropriate spectral function of the dual field theory, well known from weakly coupled theories. This is indeed the case (see Section \ref{sec:SpectFunGBShear} for a calculation of the shear channel spectral function in the Gauss-Bonnet theory where these feature can be seen explicitly). \item An inflow of new poles from complex infinity is observed at finite coupling. The new poles ascend from the negative infinity towards the origin along the imaginary axis as the coupling changes. The behavior of these new poles as a function of coupling also depends on whether $\eta/s > \hbar/4\pi k_B$ or $\eta /s < \hbar/4\pi k_B$. In the Gauss-Bonnet model with $\eta /s < \hbar/4\pi k_B$ (i.e. with positive values of the Gauss-Bonnet coupling), the poles reach the asymptotic values known analytically \cite{GBNesojen}, without interfering with the hydrodynamic pole. However, in models with $\eta/s > \hbar/4\pi k_B$ ($\mathcal{N}=4$ SYM or Gauss-Bonnet holographic liquid with negative coupling), a qualitatively different picture is observed. In this case, in the shear channel, the least damped new pole reaches the hydrodynamic pole at a certain value of the coupling (for each fixed $q$), the two poles merge and then move off the imaginary axis. Furthermore, as the coupling constant varies at fixed $q$, the poles previously describing the hydrodynamic excitations (diffusion and sound) become the leading (i.e. having the smallest $\mbox{Im} |\omega|$) poles of the two symmetric branches. We interpret these phenomena as the breakdown of the hydrodynamic gradient expansion at some value of the coupling (for each $q$). Phrased differently, at each value of the coupling $\lambda$, there exists a critical value of the wave vector $q_c (\lambda)$ such that for $q > q_c (\lambda)$ the hydrodynamic description becomes inadequate. In the holographic models we considered, the function $q_c (\lambda)$ is a monotonically increasing function of the coupling suggesting that the range of validity of the hydrodynamic description is larger at strong coupling. Details are reported in Sections \ref{sec:SYM} and \ref{sec:GB}. This is reminiscent of the weak coupling kinetic theory behavior mentioned earlier \cite{boltzmann-book} and also the one described in \cite{Romatschke:2015gic}, although our interpretation is somewhat different from the one in Ref.~\cite{Romatschke:2015gic}. \end{itemize} The reported observations (admittedly, made only for a few holographic models and suffering from various limitations mentioned above) seem to suggest the following picture: First, the relations such as (\ref{eq:rel-visc-rel}) may still hold in the regime of the coupling where the kinetic theory approach used to derive them can no longer be justified. This may explain why using the kinetic theory formally outside its regime of applicability can still give results compatible with experimental data. Second, it seems that for a fixed value of the coupling, there exist critical length- and time-scales beyond which the hydrodynamic approximation fails. The dependence of these critical scales on coupling extracted from the holographic models suggests that hydrodynamics has a wider range of applicability at strong coupling in comparison to weaker coupling. This appears to be compatible with the widely reported ``unreasonable effectiveness of hydrodynamics'' in models of strongly coupled plasma. \section{Coupling constant corrections to equilibrium energy-momentum tensor correlators in strongly interacting $\mathcal{N}=4$ SYM theory} \label{sec:SYM} For $\mathcal{N}=4$ supersymmetric $SU(N_c)$ Yang-Mills (SYM) theory in $d=3+1$ (flat) dimensions, corrections in inverse powers of the 't Hooft coupling $\lambda=g^2_{YM} N_c$ at infinite $N_c$ to thermodynamics \cite{Gubser:1998nz,Pawelczyk:1998pb} and transport \cite{Buchel:2004di,Buchel:2008sh,Benincasa:2005qc,Buchel:2008ac,Buchel:2008bz,Buchel:2008kd,Saremi:2011nh,Grozdanov:2014kva} have been computed using the higher derivative $R^4$ term \cite{Grisaru:1986px,Gross:1986iv} in the effective low-energy type IIB string theory action\footnote{The full set of $\alpha'^3$ terms in the ten-dimensional effective action is currently unknown. Corrections involving the self-dual Ramond-Ramond five-form were considered in Refs.~\cite{Green:2003an,deHaro:2003zd,Green:2005qr,Paulos:2008tn}. Following the arguments in \cite{Myers:2008yi}, in this paper we assume that the (unknown) corrections to fields whose background values vanish to leading order in $\alpha'^3$ for a given supergravity solution will not modify the quasinormal spectrum at order $\alpha'^3$ and thus can be neglected. We thank A.~Buchel and K.~Skenderis for discussing these issues with us.}. In particular, for the shear viscosity to entropy density ratio the coupling constant correction to the universal infinite coupling result is positive \cite{Buchel:2004di,Buchel:2008sh}: \begin{equation} \frac{\eta}{s} = \frac{1}{4\pi} \left( 1 + 15\zeta (3) \lambda^{-3/2}+ \ldots \right)\,. \label{eq:eta-s-correction} \end{equation} The result (\ref{eq:eta-s-correction}) can be found, in particular, by computing the $\lambda^{-3/2}$ correction to the hydrodynamic (gapless) quasinormal frequency in the shear channel of gravitational perturbations of the appropriate background. Coupling constant corrections to the full quasinormal spectrum of gravitational perturbations of the AdS-Schwarzschild black brane background, dual to finite-temperature $\mathcal{N}=4$ SYM were previously computed by Stricker \cite{Stricker:2013lma} (see also \cite{Waeber:2015oka}). In this Section, we reproduce those results and find some new features focusing on the relaxation time and the behavior of the old and new poles. \subsection{Equations of motion} The source of finite 't Hooft coupling corrections is the ten-dimensional low-energy effective action of type IIB string theory \begin{align} S_{IIB} = \frac{1}{2\kappa_{10}^2} \int d^{10} x \sqrt{-g} \left( R - \frac{1}{2} \left(\partial \phi\right)^2 - \frac{1}{4\cdot 5!} F_5^2 + \gamma e^{-\frac{3}{2} \phi} \mathcal{W} + \ldots \right)\,, \label{eq:10DAct} \end{align} where $\gamma = \alpha'^3 \zeta(3) / 8$ and the term $\mathcal{W}$ is proportional to the contractions of the four copies of the Weyl tensor \begin{align} \label{eq:Wterm} \mathcal{W} = C^{\alpha\beta\gamma\delta}C_{\mu\beta\gamma\nu} C_{\alpha}^{~\rho\sigma\mu} C^{\nu}_{~\rho\sigma\delta} + \frac{1}{2} C^{\alpha\delta\beta\gamma} C_{\mu\nu\beta\gamma} C_{\alpha}^{~\rho\sigma\mu} C^\nu_{~\rho\sigma\delta}\,. \end{align} Considering corrections to the AdS-Schwarzschild black brane background and its fluctuations, potential $\alpha'$ corrections to supergravity fields other than the metric and the five-form field have been argued to be irrelevant \cite{Myers:2008yi}. Moreover, as discussed in \cite{Buchel:2008ae}, for the purposes of computing the corrected quasinormal spectrum one can use the Kaluza-Klein reduced five-dimensional action \begin{align} S = \frac{1}{2\kappa_5^2} \int d^5 x \sqrt{-g} \left(R + \frac{12}{L^2} + \gamma \mathcal{W} \right)\,, \label{eq:hd-action} \end{align} where $\mathcal{W}$ is given by Eq.~(\ref{eq:Wterm}) in $5d$. The parameter $\gamma$ is related to the value of the 't Hooft coupling $\lambda$ in $\mathcal{N}=4$ SYM via $\gamma = \lambda^{-3/2}\zeta (3) L^6/8$ (we set $L=1$ in the rest of this Section). Higher derivative terms in the equations of motion are treated as perturbations and thus any reliable results are restricted to small values of the parameter $\gamma$. The effective five-dimensional gravitational constant is connected to the rank of the gauge group by the expression $\kappa_5 = 2\pi /N_c$. To leading order in $\gamma$, the black brane solution to the equations of motion following from (\ref{eq:hd-action}) is given by \cite{Gubser:1998nz,Pawelczyk:1998pb} \begin{align} ds^2 = \frac{r_0^2}{u} \left( - f(u) Z_t dt^2 + dx^2 +dy^2 +dz^2 \right) + Z_u \frac{du^2}{4u^2 f}\,, \label{eq:corrected_metric} \end{align} where $f(u) = 1 - u^2$, $r_0$ is the parameter of non-extremality of the black brane geometry and the functions $Z_t$ and $Z_u$ are given by \begin{align} Z_t = 1 - 15\gamma\left(5u^2+5u^4-3 u^6 \right) , && Z_u = 1 + 15\gamma \left(5u^2 + 5 u^4 - 19 u^6 \right) . \end{align} The $\gamma$-corrected Hawking temperature corresponding to the solution (\ref{eq:corrected_metric}) is $T = r_0 (1+15\gamma)/\pi$. For the isotropic $\mathcal{N}=4$ SYM medium, we now consider fluctuations of the metric of the form $g_{\mu\nu} = g_{\mu\nu}^{(0)} + h_{\mu\nu}(u,t,z)$, where $g_{\mu\nu}^{(0)}$ is the background (\ref{eq:corrected_metric}). We Fourier transform the fluctuations with respect to $t$ and $z$ to introduce $h_{\mu\nu}(u,\omega,q)$, choose the radial gauge with $h_{ u \nu} = 0$ and follow the recipes in \cite{Kovtun:2005ev} to write down the equations of motion for the three gauge-invariant modes $Z_i = Z^{(0)}_i + \gamma Z^{(1)}_i$, $i=1,2,3$, in the scalar, shear and sound channels, respectively. Explicitly, the three modes and the corresponding equations of motion are given by the following expressions\footnote{We note that there seems to be a typo in Eq.~(23) of Ref.~\cite{Stricker:2013lma} describing metric fluctuations in the shear mode.}: \paragraph{\bf Scalar channel} \begin{align} &Z_1 = \frac{ u}{\pi^2 T_0^2} h_{xy}, \label{eq:Ginv4g1} \\ &\partial^2_u Z_1 - \frac{1+u^2}{u\left(1-u^2\right)} \partial_u Z_1 + \frac{\mathfrak{w}^2 - \mathfrak{q}^2 \left(1-u^2 \right)}{u \left(1-u^2\right)^2} Z_1 = \gamma \, \mathcal{G}_1 \left[ Z_1 \right] \,. \label{ScalarEqN4} \end{align} \paragraph{\bf Shear channel} \begin{align} &Z_2 =\frac{ u }{\pi^2 T_0^2} \left( q h_{tx} + \omega h_{xz} \right), \label{eq:Ginv4g2} \\ &\partial^2_u Z_2 - \frac{\left(1+u^2\right) \mathfrak{w} ^2-\mathfrak{q}^2 \left(1-u^2\right)^2}{u \left(1-u^2\right) \left(\mathfrak{w}^2-\mathfrak{q}^2 \left(1-u^2\right)\right)} \partial_u Z_2 +\frac{\mathfrak{w} ^2 - \mathfrak{q}^2 \left(1-u^2\right)}{u \left(1-u^2\right)^2} Z_2 = \gamma \, \mathcal{G}_2\left[Z_2\right]\,. \label{ShearEqN4} \end{align} \paragraph{\bf Sound channel} \begin{align} &Z_3 = - \frac{u}{2 \pi^2 T_0^2} \left[1 - \frac{q^2}{\omega^2} \left(1+u^2 + 15 \gamma u^2 \left(21 u^6-40 u^4+5\right)\right) \right] \left(h_{xx} + h_{yy} \right) \nonumber \\ & \qquad \;\; +\frac{u}{\pi^2 T_0^2} \,\left[ \frac{q^2}{\omega^2} h_{tt}+ h_{zz} + \frac{2q}{\omega} h_{tz} \right]\,, \label{Ginv4g3} \\ &\partial^2_u Z_3 - \frac{3 \left(1+u^2\right) \mathfrak{w} ^2 - \mathfrak{q}^2 \left(3-2 u^2 +3 u^4\right)}{u \left(1-u^2\right) \left(3 \mathfrak{w}^2 - \mathfrak{q}^2 \left(3-u^2\right)\right)} \partial_u Z_3 \nonumber\\ &+\frac{3 \mathfrak{w} ^4 - 2 \left(3-2 u^2\right) \mathfrak{w} ^2 \mathfrak{q}^2 - \mathfrak{q}^2 \left(1-u^2\right) \left(4 u^3+\mathfrak{q}^2 \left(u^2-3\right)\right)}{u \left(1-u^2\right)^2 \left(3 \mathfrak{w} ^2 - \mathfrak{q}^2 \left(3-u^2\right)\right)} Z_3 = \gamma \, \mathcal{G}_3\left[Z_3\right]\,.\label{eq:SoundEqN4} \end{align} The functions $\mathcal{G}_1$, $\mathcal{G}_2$ and $\mathcal{G}_3$ appearing on the right hand side of the equations can be found in Appendix \ref{sec:appendix-N4}. Here and in the rest of the paper we use the dimensionless variables \begin{align} \mathfrak{w} = \frac{\omega}{2\pi T}, \qquad \mathfrak{q} = \frac{q}{2\pi T}\,. \label{eq:gothic} \end{align} The equations of motion are solved numerically and the quasinormal spectrum is extracted using the standard recipes \cite{Starinets:2002br,Nunez:2003eq,Kovtun:2005ev,Buchel:2004di,Buchel:2008sh,Benincasa:2005qc}. Our numerical approach is described in Appendix \ref{sec:Numerics}. \subsection{The spectrum of the metric fluctuations} \label{sec:N4Results} Given the smooth dependence of the equations of motion on the parameter $\gamma$, we may expect the eigenvalues to shift somewhat in the complex frequency plane with respect to their $\gamma=0$ positions. This is indeed the case, as noted previously in Refs.~\cite{Stricker:2013lma,Waeber:2015oka} and the details of this shift are interesting. In addition to this, we observe an inflow of new poles from complex infinity along the imaginary axis. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Scalar-1} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Scalar-2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Scalar-3} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Scalar-4} \end{subfigure} \caption{Poles (shown by squares) of the energy-momentum retarded two-point function of $\mathcal{N}=4$ SYM in the scalar channel, for various values of the coupling constant and $\mathfrak{q}=0.1$. From top left: $\gamma = \{10^{-5},\, 10^{-4}, \,10^{-3},\, 10^{-2}\} $ corresponding to values of the 't Hooft coupling $\lambda \approx \{609,\, 131, \, 28,\, 6\} $. Poles at $\gamma = 0$ ($\lambda\rightarrow \infty$) are shown by circles.} \label{fig:N=4+gamma-Scalar-channel} \end{figure} These poles are non-perturbative in $\gamma$ (the relevant quasinormal frequencies scale as $1/\gamma$) but under certain conditions they are visible in the finite complex frequency plane and can even be approximated by analytic expressions. The new poles appear in all three channels of perturbations. In the shear and sound channels, they interfere with the hydrodynamic poles and effectively destroy them at still sufficiently small, $q$-dependent values of $\gamma$. A qualitatively similar phenomenon is observed in Gauss-Bonnet gravity where the equations of motion are second order and fully non-perturbative (see Section \ref{sec:GB}). \subsubsection{Scalar channel} The scalar equation of motion \eqref{ScalarEqN4} is solved numerically with the incoming wave boundary condition at $u=1$ and Dirichlet condition at $u=0$ for fixed small values of $\gamma > 0$. A typical distribution of the quasinormal frequencies (poles of the scalar components of the energy-momentum retarded two-point function of $\mathcal{N}=4$ SYM) in the complex frequency plane is shown in Fig.~\ref{fig:N=4+gamma-Scalar-channel}. The two symmetric branches of the modes move up towards the real axis relative to their $\gamma = 0$ position. Here and in all subsequent calculations, we do not observe the bending of the quasinormal modes with large real and imaginary parts towards the real axis reported earlier in Ref.~\cite{Stricker:2013lma}. Rather, our findings agree with the results of Ref.~\cite{Waeber:2015oka}, where the two branches lift up without bending. The two branches become more and more horizontal with the 't Hooft coupling decreasing and move closer to the real axis. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Scalar-zoom-1} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Scalar-zoom-2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Scalar-zoom-3} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Scalar-zoom-4} \end{subfigure} \caption{The closest to the origin poles (shown by black dots) of the energy-momentum retarded two-point function of $\mathcal{N}=4$ SYM in the scalar channel, for various values of the coupling constant and $\mathfrak{q}=0.1$. From top left: $\gamma = \{0.005,\, 0.010, \,0.020,\, 0.060\} $ corresponding to values of the 't Hooft coupling $\lambda \approx \{10,\, 6, \, 4,\, 2\}$. The crude analytical approximation \eqref{eq:ScalarN4newpole} to the new pole on the imaginary axis becomes more accurate for larger $\gamma$.} \label{fig:N=4+gamma-Scalar-zoom} \end{figure} At the same time, the distance between the poles in the branches decreases: in a sense, there is an inflow of new poles from complex infinity along the branches. This last effect is too small to be noticeable e.g. in Fig.~\ref{fig:N=4+gamma-Scalar-channel} because in $\mathcal{N}=4$ SYM we are restricted to the $\gamma\ll 1$ regime. Extrapolating to larger values of $\gamma$ (smaller values of 't Hooft coupling) would not be legitimate with the $R^4$ corrections treated perturbatively but it is conceivable that in the limit of vanishing 't Hooft coupling the poles in the two branches merge forming two symmetric branch cuts $(-\infty,-q]$ and $[q,\infty)$. We shall see more evidence for this behavior in Gauss-Bonnet gravity, where the equations of motion are second-order and the coupling dependence is fully non-perturbative (see Section \ref{sec:GB}). The closeness of the two branches of poles to the real axis at intermediate and small values of the 't Hooft coupling raises the question of the behavior of the spectral function and the appearance of quasiparticles. Again, this is investigated in detail in Gauss-Bonnet gravity in Section \ref{sec:GB}, where we are not constrained by the smallness of the perturbation theory parameter. We also observe a novel phenomenon: a sequence of new poles ascends along the imaginary axis towards the origin as $\gamma$ increases from zero to small finite values. The first of these poles reaches the vicinity of the origin at $\gamma \sim 0.01$. One can find a crude analytic approximation for this top pole by solving the equation in the regime $|\mathfrak{w}| \ll 1$ (for simplicity, we also take $|\mathfrak{q}| \ll 1$). We assume the scaling $\mathfrak{w} \to \epsilon \mathfrak{w}$ and $\mathfrak{q} \to \epsilon \mathfrak{q}$ for $\epsilon \ll 1$, so that to first order in $\epsilon$, the function $Z_1 (u) = (1 - u)^{-i \mathfrak{w} / 2 } \left( z_1^{(0)} + \epsilon z_1^{(1)} \right)$. The functions $z_1^{(0,1)}$ are found perturbatively to first order in $\gamma$. To find the quasinormal frequency, we solve the polynomial equation $Z_1(u=0,\mathfrak{w},\mathfrak{q}) = 0$, looking for a solution of the form $\mathfrak{w}(\mathfrak{q})$. To leading order in $\mathfrak{q}$, we find a gapped pole on the imaginary axis with the dispersion relation \begin{align} \mathfrak{w} = \mathfrak{w}_{\mathfrak{g}} = -\frac{2 i}{373 \gamma -\ln 2} \approx -\frac{2 i}{373 \gamma}. \label{eq:ScalarN4newpole} \end{align} As shown in Fig.~\ref{fig:N=4+gamma-Scalar-zoom}, the analytic approximation \eqref{eq:ScalarN4newpole} works better for larger values of $\gamma$. For $\gamma \rightarrow 0$, the pole recedes deep into the complex plane along the negative imaginary axis (the approximate formula \eqref{eq:ScalarN4newpole} is compatible with this observation but breaks down when $|\mathfrak{w}|$ becomes large). \subsubsection{Shear channel} In the shear channel, the distribution of poles at finite coupling is similar to the one in the scalar channel. The exception is the gapless hydrodynamic pole on the imaginary axis responsible for the momentum diffusion. The poles are shown in Fig.~\ref{fig:N=4+gamma-Shear-channel} for several values of $\gamma$. General properties of non-hydrodynamic poles described in detail for the scalar channel are observed here as well. The new feature is the interaction between the diffusion pole and the first of the new poles rising up from complex infinity along the imaginary axis with increasing $\gamma$. The dispersion relation for the diffusion pole is given by the formula \cite{Policastro:2002se,Baier:2007ix,Grozdanov:2015kqa} \begin{align} \omega = - i \frac{\eta}{\varepsilon + P}\, q^2 - i \left[ \frac{\eta^2\tau_\Pi}{(\varepsilon + P)^2}-\frac{\theta_1}{2(\varepsilon + P)}\right] q^4 + \cdots \,, \label{eq:shear_disp} \end{align} where in the absence of the chemical potential $\varepsilon + P = s T$. In $\mathcal{N}=4$ SYM theory, one has \cite{Policastro:2001yc,Buchel:2004di,Buchel:2008sh,Baier:2007ix,Benincasa:2005qc,Bhattacharyya:2008jc,Buchel:2008bz,Buchel:2008kd,Buchel:2008bz,Grozdanov:2015kqa} \begin{align} \frac{\eta}{s} &= \frac{1}{4\pi} \left( 1 + 120 \gamma + \cdots \right)\,, \\ \tau_\Pi &= \frac{2 - \ln {2}}{2 \pi T} + \frac{375 \gamma}{4\pi T} + \cdots\,, \\ \theta_1 &= \frac{N_c^2 T}{32 \pi} + O(\gamma)\,. \label{eq:coeffi-n=4} \end{align} \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Shear-1} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Shear-2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Shear-3} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Shear-4} \end{subfigure} \caption{Poles (shown by squares) of the energy-momentum retarded two-point function of $\mathcal{N}=4$ SYM in the shear channel, for various values of the coupling constant and $\mathfrak{q}=0.1$. From top left: $\gamma = \{10^{-5},\, 10^{-4}, \,10^{-3},\, 10^{-2}\} $ corresponding to values of the 't Hooft coupling $\lambda \approx \{609,\, 131, \, 28,\, 6\} $. Poles at $\gamma = 0$ ($\lambda\rightarrow \infty$) are shown by circles.} \label{fig:N=4+gamma-Shear-channel} \end{figure} The coupling constant correction to the coefficient $\theta_1$ of the third-order hydrodynamics is currently unknown. However, for $\mathfrak{q} \ll 1$, the $\mathfrak{q}^2$ term in Eq.~(\ref{eq:shear_disp}) dominates and the pole moves down the imaginary axis with $\gamma$ increasing, in agreement with our numerical findings. For certain values of $\gamma \ll 1$, the leading new pole ascending the imaginary axis approaches the hydrodynamic pole. The two poles collide on the imaginary axis at some critical value of $\gamma$ at fixed $\mathfrak{q}$ (or equivalently, at some $\mathfrak{q} = \mathfrak{q}_c (\gamma)$ at fixed $\gamma$) and then for larger $\gamma$ they symmetrically move off the imaginary axis, both having acquired non-zero real parts (see Fig.~\ref{fig:N=4+gamma-Shear-zoom}). At this point, the hydrodynamic pole (\ref{eq:shear_disp}) ceases to exists and for $\mathfrak{q} > \mathfrak{q}_c$ the hydrodynamic description appears to be invalid. We interpret this as the breakdown of hydrodynamics at sufficiently large, coupling-dependent value of the wave-vector. The function $\mathfrak{q}_c(\gamma)$ is shown in Fig.~\ref{fig:N=4+gamma-Shear-critical}. It is monotonically decreasing with $\gamma$ suggesting that hydrodynamics has a wider applicability range at larger 't Hooft coupling as far as the spatial momentum dependence is concerned. The phenomenon just described can be approximated analytically in the region of small $\mathfrak{w}$ and $\mathfrak{q}$ (although this approximation is not very precise quantitatively, it captures the behavior of the poles correctly). Indeed, solving the equation \eqref{ShearEqN4} perturbatively in $\mathfrak{w} \ll 1$ and $\mathfrak{q} \ll 1$ (still with $\gamma \ll 1$) and imposing the Dirichlet condition $Z_2(u=0,\mathfrak{w},\mathfrak{q})=0$, we find a quadratic equation \begin{align} 2 \mathfrak{w} + i \mathfrak{w}^2 \log{2} + i \mathfrak{q}^2 + i 120 \gamma \mathfrak{q}^2 - i 373 \gamma \mathfrak{w}^2 = 0\,. \label{eq:shear-quad-eq} \end{align} This equation has two roots parametrized by $\gamma$ and $\mathfrak{q}$, \begin{align} &\mathfrak{w}_1 = \frac{- i +i \sqrt{-44760 \gamma ^2 \mathfrak{q}^2-373 \gamma \mathfrak{q}^2+120 \gamma \mathfrak{q}^2 \ln 2+\mathfrak{q}^2 \ln 2+1}}{373 \gamma -\ln 2}, \label{eq:ShearNeww1}\\ &\mathfrak{w}_2 = \frac{-i -i \sqrt{-44760 \gamma ^2 \mathfrak{q}^2-373 \gamma \mathfrak{q}^2+120 \gamma \mathfrak{q}^2 \ln 2+\mathfrak{q}^2 \ln 2+1}}{373 \gamma -\ln 2}.\label{eq:ShearNeww2} \end{align} At fixed $\mathfrak{q}$ and sufficiently small $\gamma$, the roots are purely imaginary, moving closer to each other with increasing $\gamma$. Finally, the two roots merge and then acquire non-zero real parts for larger $\gamma$. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Shear-zoom-1} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Shear-zoom-2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Shear-zoom-3} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Shear-zoom-4} \end{subfigure} \caption{The closest to the origin poles (shown by black dots) of the energy-momentum retarded two-point function of $\mathcal{N}=4$ SYM in the shear channel, for various values of the coupling constant and $\mathfrak{q}=0.1$. From top left: $\gamma = \{0.011,\, 0.012, \,0.013,\, 0.020\} $ corresponding to values of the 't Hooft coupling $\lambda \approx \{5.7,\, 5.4, \, 5.1,\, 3.8\}$. The hydrodynamic pole moving down the imaginary axis and the new gapped pole moving up the axis merge and move off the imaginary axis. All other poles are outside the range of this plot.} \label{fig:N=4+gamma-Shear-zoom} \end{figure} The physical meaning of the solutions \eqref{eq:ShearNeww1} and \eqref{eq:ShearNeww2} becomes transparent from their small $\mathfrak{q}$ expansions: \begin{align} &\mathfrak{w}_1 = - \frac{1}{2} i \left( 1 + 120\gamma\right)\mathfrak{q}^2 + \ldots \, , \label{eq:ShearN4dispQ2}\\ &\mathfrak{w}_2 = \mathfrak{w}_{\mathfrak{g}} + \frac{1}{2} i \left( 1 + 120\gamma\right)\mathfrak{q}^2 + \ldots \, , \label{eq:ShearN4disp2Q2} \end{align} where $\mathfrak{w}_{\mathfrak{g}}$ is given by Eq.~\eqref{eq:ScalarN4newpole}. Here, the mode \eqref{eq:ShearN4dispQ2} is the standard hydrodynamic momentum diffusion pole predicted by Eq.~(\ref{eq:shear_disp}), whereas the mode \eqref{eq:ShearN4disp2Q2} approximates the new gapped pole moving up the imaginary axis. Note that the gap $\mathfrak{w}_{\mathfrak{g}}$ in the mode \eqref{eq:ShearN4disp2Q2} is the same as in the scalar channel. Using Eqs.~(\ref{eq:ShearN4dispQ2}) and (\ref{eq:ShearN4disp2Q2}), we can find an approximate analytic expression for the function $\mathfrak{q}_c(\gamma)$ plotted in Fig.~\ref{fig:N=4+gamma-Shear-critical}: \begin{align} \mathfrak{q}_c = \sqrt{\frac{2}{373 \gamma \left(1+120 \gamma \right)} } \sim 0.04 \, \lambda^{3/2}\,. \label{eq:q-crit} \end{align} As is evident from Fig.~\ref{fig:N=4+gamma-Shear-critical}, the analytic approximation becomes more precise with larger $\gamma$. \subsubsection{Sound channel} The quasinormal spectrum in the sound channel is found by solving Eq.~(\ref{eq:SoundEqN4}) and imposing the Dirichlet condition $Z_3 (u=0,\mathfrak{w},\mathfrak{q}) = 0$. \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figs/N=4+gamma-Shear-critical} \caption{Critical value of the spatial momentum $\mathfrak{q}_c$, limiting the hydrodynamic regime, as a function of higher derivative coupling $\gamma$ in the shear channel of $\mathcal{N}=4$ SYM. Hydrodynamics has a wider range of applicability in $\mathfrak{q}$ at smaller $\gamma$ (larger 't Hooft coupling).} \label{fig:N=4+gamma-Shear-critical} \end{figure} The distribution of poles in the complex frequency plane at various values of the coupling is shown in Fig.~\ref{fig:N=4+gamma-Sound-channel}. The movement of the poles with varying coupling is qualitatively similar to the one observed in the scalar and shear channels. The two gapless sound poles symmetric with respect to the imaginary axis have the dispersion relation predicted by hydrodynamics \cite{Policastro:2002se,Baier:2007ix,Grozdanov:2015kqa} \begin{align} \omega = \pm c_s \, q - i \Gamma\, q^2 \mp \frac{\Gamma}{2 c_s} \left( \Gamma - 2 c_s^2 \tau_\Pi \right)\, q^3 - i \left[ \frac{8}{9}\frac{\eta^2 \tau_\Pi}{(\varepsilon+P)^2} - \frac{1}{3} \frac{\theta_1+\theta_2}{\varepsilon +P}\right]\, q^4 + \cdots \,, \label{eq:sound_disp} \end{align} where $c_s = 1/\sqrt{3}$ for conformal fluids in $d=3+1$ dimensions, $\Gamma = 2 \eta/3(\varepsilon +P)$ and $\varepsilon + P = s T$ at zero chemical potential. For $\mathcal{N}=4$ SYM theory, the coefficients $\eta/s$, $\tau_\Pi$ and $\theta_1$ are given in Eq.~(\ref{eq:coeffi-n=4}) and \begin{align} \theta_2 = \frac{N_c^2 T}{384 \pi} \left( 22 - \frac{\pi^2}{12} - 18 \ln{2} +\ln^2 2 \right)+ O(\gamma)\,. \end{align} The full $\gamma$-dependence of the quartic term in Eq.~(\ref{eq:sound_disp}) is currently unknown. With $\gamma$ increasing, the leading new gapless pole rising along the imaginary axis approaches the region of the sound poles (see Fig.~\ref{fig:N=4+gamma-Sound-zoom}). For $\mathfrak{w} \ll 1$ and $\mathfrak{q} \ll 1$, the equation (\ref{eq:SoundEqN4}) can be solved perturbatively and from the Dirichlet condition $Z_3 (u=0,\mathfrak{w},\mathfrak{q}) = 0$ one finds a quintic equation \begin{align} &420 \gamma \mathfrak{q}^4-2546 i \gamma \mathfrak{q}^4 \mathfrak{w}-8 i \mathfrak{q}^4 \mathfrak{w}+4 \mathfrak{q}^4+4797 i \gamma \mathfrak{q}^2 \mathfrak{w}^3+12 i \mathfrak{q}^2 \mathfrak{w}^3 \nonumber\\ &-1260 \gamma \mathfrak{q}^2 \mathfrak{w}^2-18 \mathfrak{q}^2 \mathfrak{w}^2-3357 i \gamma \mathfrak{w}^5+18 \mathfrak{w}^4 =0. \end{align} Expanding further in $\gamma \ll 1$ and $\mathfrak{q} \ll 1$, we obtain the following analytic expressions for the three closely located modes of interest: \begin{align} &\mathfrak{w}_{1,2} = \pm \frac{1}{\sqrt{3}} \mathfrak{q} - \frac{1}{3} i (1 + 120\gamma) \mathfrak{q}^2 + \ldots \,, \label{eq:sound-gam}\\ &\mathfrak{w}_3 = \mathfrak{w}_{\mathfrak{g}} + \frac{2}{3} i (1 + 120\gamma) \mathfrak{q}^2 + \ldots \label{eq:sound-gam-gap}\,. \end{align} \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Sound-1} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Sound-2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Sound-3} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Sound-4} \end{subfigure} \caption{Poles (shown by squares) of the energy-momentum retarded two-point function of $\mathcal{N}=4$ SYM in the sound channel, for various values of the coupling constant and $\mathfrak{q}=0.1$. From top left: $\gamma = \{10^{-5},\, 10^{-4}, \,10^{-3},\, 10^{-2}\} $ corresponding to values of the 't Hooft coupling $\lambda \approx \{609,\, 131, \, 28,\, 6\} $. Poles at $\gamma = 0$ ($\lambda\rightarrow \infty$) are shown by circles.} \label{fig:N=4+gamma-Sound-channel} \end{figure} \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Sound-zoom-1} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Sound-zoom-2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Sound-zoom-3} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/N=4+gamma-Sound-zoom-4} \end{subfigure} \caption{Closest to the origin poles (shown by black dots) of the energy-momentum retarded two-point function of $\mathcal{N}=4$ SYM in the sound channel, for various values of the coupling constant and $\mathfrak{q}=0.1$. From top left: $\gamma = \{0.005,\, 0.01, \,0.02,\, 0.03\} $ corresponding to values of the 't Hooft coupling $\lambda \approx \{10,\, 6, \, 4,\, 3\}$. All other poles are outside the range of this plot.} \label{fig:N=4+gamma-Sound-zoom} \end{figure} Here, the Eq.~(\ref{eq:sound-gam}) is the standard dispersion relation for the two sound modes as in Eq.~(\ref{eq:sound_disp}), and Eq.~\eqref{eq:sound-gam-gap} is the new gapped pole with $\mathfrak{w}_{\mathfrak{g}}$ given by Eq. \eqref{eq:ScalarN4newpole}. Assuming, perhaps somewhat arbitrarily, that the hydrodynamic description fails when the imaginary part of the new gapped pole becomes equal to the one of the sound mode, from Eqs.~(\ref{eq:sound-gam}) and (\ref{eq:sound-gam-gap}) we find the critical value of the spatial momentum $\mathfrak{q}_c$ which turns out to be exactly the same as in Eq. \eqref{eq:q-crit}. \subsubsection{Coupling constant dependence of the shear viscosity - relaxation time ratio} \label{sec:RatioN4} The dependence of real and imaginary parts of the smallest in magnitude quasinormal frequencies in the symmetric branches on $\gamma$ (at fixed $\mathfrak{q}$) in the scalar, shear and sound channels, respectively, is shown in Figs.~\ref{fig:N=4+gamma-Scalar-Poles-vs-gamma}, \ref{fig:N=4+gamma-Shear-Poles-vs-gamma} and \ref{fig:N=4+gamma-Sound-Poles-vs-gamma}. In all three channels, a relatively strong dependence of the spectrum on $\gamma$ in the vicinity of $\gamma =0$ changes to a nearly flat behavior at larger values of $\gamma$. As discussed in the Introduction, these data can be used to test whether the relations between transport coefficients and the relaxation time typical for a kinetic regime of the theory may still hold at strong coupling. In kinetic theory, the hierarchy of relaxation times arises as the non-hydrodynamic part of the spectrum of a linearized Boltzmann operator (see Section \ref{sec:relaxation}). At strong coupling, it seems natural to associate this hierarchy with the (inverse) imaginary parts of the quasinormal spectrum frequencies. In particular, the relaxation time $\tau_{\scriptscriptstyle R}$ can be defined as \begin{equation} \tau_{\scriptscriptstyle R} (q, \lambda ) = \frac{2 \pi T}{\mbox{Im}\, \omega_F (q,\lambda)} = \frac{1}{\mbox{Im}\, \mathfrak{w}_F (q,\lambda)}\,, \end{equation} where $\omega_F$ is the fundamental (lowest in magnitude) quasinormal frequency. The prediction of kinetic theory is that Eq.~(\ref{eq:rel-visc-rel}) holds at least at weak coupling, i.e. that the ratio $\eta / s\, \tau_{\scriptscriptstyle R} T$ is approximately constant. In Fig.~\ref{fig:N=4+gamma-Shear-etaOverStauT-vs-gamma}, we plot the ratios $\eta / s\, \tau_k T$, $k=1,2,3,4$, as functions of $\gamma$ using the data for $\tau_k=1/\mbox{Im} \, \mathfrak{w}_k$ of the leading four non-hydrodynamic quasinormal frequencies (including the fundamental one) in the shear channel at $\mathfrak{q}=0$. Curiously, although rapid decrease of all four functions is seen in the vicinity of $\gamma =0$, the dependence changes to a nearly flat one very quickly, already at $\gamma \approx 2 \times 10^{-3}$ (corresponding to the 't Hooft coupling $\lambda \sim 18$), which is well within the regime of small $\gamma$. Note that the 't Hooft coupling correction to $\eta / s$ for $\gamma \approx 2 \times 10^{-3}$ is approximately $25\%$. Thus, the naive use of kinetic theory expressions such as Eq.~(\ref{eq:rel-visc-rel}) may not be so disastrous at moderate or even strong coupling. We shall see in the next Sections that the features discussed here for the specific gravity dual with the higher derivative term of the type $R^4$ are also observed for gravity backgrounds with $R^2$ terms, in particular Gauss-Bonnet gravity. \begin{figure}[ht] \centering \includegraphics[width=0.47\linewidth]{figs/N=4+gamma-Scalar-Rew-vs-gamma} \includegraphics[width=0.47\linewidth]{figs/N=4+gamma-Scalar-Imw-vs-gamma} \caption{$\mathcal{N}=4$ SYM: Real (left panel) and imaginary (right panel) parts of the lowest four quasinormal frequencies in the scalar channel at $\mathfrak{q} = 0.1$.} \label{fig:N=4+gamma-Scalar-Poles-vs-gamma} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.47\linewidth]{figs/N=4+gamma-Shear-Rew-vs-gamma} \includegraphics[width=0.47\linewidth]{figs/N=4+gamma-Shear-Imw-vs-gamma} \caption{$\mathcal{N}=4$ SYM: Real (left panel) and imaginary (right panel) parts of the lowest four quasinormal frequencies in the shear channel at $\mathfrak{q} = 0.1$.} \label{fig:N=4+gamma-Shear-Poles-vs-gamma} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.47\linewidth]{figs/N=4+gamma-Sound-Rew-vs-gamma} \includegraphics[width=0.47\linewidth]{figs/N=4+gamma-Sound-Imw-vs-gamma} \caption{$\mathcal{N}=4$ SYM: Real (left panel) and imaginary (right panel) parts of the lowest four quasinormal frequencies in the sound channel at $\mathfrak{q} = 0.1$.} \label{fig:N=4+gamma-Sound-Poles-vs-gamma} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{figs/N=4+gamma-Shear-etaOverStauT-vs-gamma} \caption{The ratios $\eta/s \tau_k T $, $k=\{1,\,2,\,3,\,4\}$, as functions of $\gamma$ in $\mathcal{N}=4$ SYM.} \label{fig:N=4+gamma-Shear-etaOverStauT-vs-gamma} \end{figure} \section{Relaxation time and poles of energy-momentum tensor correlators in a theory dual to Gauss-Bonnet gravity} \label{sec:GB} The action of Einstein-Gauss-Bonnet gravity in five space-time dimensions is given by \begin{align} \label{eq:GBaction} S_{GB} = \frac{1}{2\kappa_5^2} \int d^5 x \sqrt{-g} \left[ R + \frac{12}{L^2} + \frac{l^2_{GB}}{2} \left( R^2 - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} \right) \right], \end{align} where the scale $l^2_{GB}$ of the higher derivative term can be chosen to be set by a cosmological constant, $l^2_{GB} = \lambda_{\scriptscriptstyle GB} L^2$, where $\lambda_{\scriptscriptstyle GB}$ is the dimensionless parameter. The coefficients of the curvature-squared terms ensure that the equations of motion following from the action \eqref{eq:GBaction} are second order in derivatives. Thus, in the absence of Ostrogradsky instability and other difficulties usually induced by the dynamics with higher derivatives, Gauss-Bonnet and more generally Lovelock theories, are popular theoretical laboratories for studying non-perturbative effects of higher-derivative couplings. For example, the shear viscosity-entropy ratio in a (hypothetical) conformal fluid dual to five-dimensional Gauss-Bonnet gravity turns out to be \cite{Brigante:2007nu} \begin{equation} \frac{\eta}{s} = \frac{1 - 4 \lambda_{\scriptscriptstyle GB}}{4\pi}\,, \label{eq:gbviscosity} \end{equation} and this result is obtained without the assumption $|\lambda_{\scriptscriptstyle GB}|\ll1$, i.e. non-perturbatively in the coupling. However, as pointed out and investigated in detail in Refs.~\cite{Brigante:2007nu,Brigante:2008gz,Buchel:2009tt,deBoer:2009pn,Camanho:2009vw,Buchel:2009sk}, for $\lambda_{\scriptscriptstyle GB}$ outside of a certain interval, the dual theory suffers from pathologies associated with superluminal propagation of high momentum modes. More recently, Camanho {\it et al.} \cite{Camanho:2014apa} argued that Gauss-Bonnet theory suffers from causality problems in the bulk that can only be cured by adding higher spin fields. This would effectively imply that Gauss-Bonnet and, most likely, general Lovelock theories\footnote{See Refs.~\cite{deBoer:2009gx,Camanho:2009hu,Camanho:2010ru} for relevant work in Lovelock theories.} should loose their privileged non-perturbative status and be treated as any other theory with higher derivative terms, i.e. the coupling $\lambda_{\scriptscriptstyle GB}$ in, for example, Eq.~(\ref{eq:gbviscosity}) should be seen as an infinitesimally small parameter (see, however, Refs.~\cite{Reall:2014pwa, Papallo:2015rna}). We note those difficulties but will not constrain $\lambda_{\scriptscriptstyle GB}$ beyond its natural (here, limited by the existence of the black brane solution) domain $\lambda_{\scriptscriptstyle GB} \in ( -\infty,1/4]$ in the following. Our goal is to compute the quasinormal spectrum of gravitational fluctuations of the Gauss-Bonnet black brane metric\footnote{Exact solutions and thermodynamics of black branes and black holes in Gauss-Bonnet gravity were considered in \cite{Cai:2001dz} (see also \cite{Nojiri:2001aj,Cho:2002hq,Neupane:2002bf,Neupane:2003vz}).} \begin{align} ds^2 = - f(r) N^2_{GB} dt^2 + \frac{1}{f(r)} dr^2 + \frac{r^2}{L^2} \left(dx^2 + dy^2 +dz^2 \right), \label{eq:BB} \end{align} dual to a thermal state of a boundary CFT. Here \begin{align} f(r) = \frac{r^2}{L^2} \frac{1}{2\lambda_{\scriptscriptstyle GB}} \left[1 - \sqrt{1-4\lambda_{\scriptscriptstyle GB} \left(1 - \frac{r^4_0}{r^4} \right) } \right] \label{eq:BBf} \end{align} and the constant $N_{GB}$ can be chosen to normalize the speed of light at the boundary to $c=1$: \begin{align} N_{GB}^2 = \frac{1}{2} \left(1+\sqrt{1-4\lambda_{\scriptscriptstyle GB}} \right). \label{eq:NGBDef} \end{align} The Hawking temperature corresponding to the solution \eqref{eq:BB} is given by \begin{align} T = \frac{N_{GB} r_0}{\pi L^2} = \frac{r_0\sqrt{ 1+\gamma_{\scriptscriptstyle GB}}}{\sqrt{2} \pi L^2 }\,, \label{eq:GBTemperature} \end{align} where we introduced the notation $\gamma_{\scriptscriptstyle GB} \equiv \sqrt{1-4\lambda_{\scriptscriptstyle GB}}$. We shall use $\lambda_{\scriptscriptstyle GB}$ and $\gamma_{\scriptscriptstyle GB}$ interchangeably in the following. The range $\lambda_{\scriptscriptstyle GB} <0 $ corresponds to $\gamma_{\scriptscriptstyle GB} \in [1,\infty)$ and the interval $\lambda_{\scriptscriptstyle GB} \in [0,1/4]$ maps into $\gamma_{\scriptscriptstyle GB} \in [0,1]$, with $\lambda_{\scriptscriptstyle GB} =0$ corresponding to $\gamma_{\scriptscriptstyle GB}=1$. \subsection{Equations of motion} Fluctuations $h_{\mu\nu}(r,t,z)$ of the Gauss-Bonnet black brane metric \eqref{eq:BB} can be decomposed into the scalar, shear and sound channels in the standard way \cite{Son:2002sd,Kovtun:2005ev}. The corresponding gauge-invariant combinations $Z_1$, $Z_2$, $Z_3$ of the metric fluctuations $h_{\mu\nu}(r,\omega,q)$ (Fourier transformed in the variables along the brane directions) in the three channels are given by \begin{align} &\text{\bf Scalar:}& &Z_1 = h^x_{~y}\,, \label{eq:GinvZ1} \\ &\text{\bf Shear:}& &Z_2 = \frac{q }{r^2} h_{tx} + \frac{\omega}{ r^2} h_{xz}\,, \label{eq:Ginv4g2} \\ &\text{\bf Sound:}& &Z_3 = \frac{2 q^2}{r^2 \omega^2} h_{tt} +\frac{4 q}{r^2 \omega} h_{tz} - \left( 1 - \frac{q^2 N_{GB}^2 \left(4 r^3 - 2 r f(r)\right)}{2 r \omega^2 \left(r^2 - 2 \lambda_{\scriptscriptstyle GB} f(r)\right)} \right) \left( \frac{h_{xx}}{r^2} + \frac{h_{yy}}{r^2} \right) + \frac{2}{r^2} h_{zz}\,. \label{eq:GinvZ3} \end{align} Introducing the new variable $u = r_0^2/r^2$, the equation of motion in each of the three channels can be written in the form of a linear second-order differential equation \begin{align} \partial_u^2 Z_i + A_i \partial_u Z_i + B_i Z_i = 0\,, \label{eq:eom_GB_ginv} \end{align} where $i=1,2,3$ and the coefficients $A_i$ and $B_i$ are given in Appendix \ref{sec:appendix-GB}. To find the quasinormal spectrum in the three channels, we impose the "incoming wave" boundary conditions at the horizon at $u=1$ \cite{Son:2002sd}, \begin{align} Z_i(u) = (1-u)^{-i\mathfrak{w}/2} \mathcal{Z}_i(u,\mathfrak{w},\mathfrak{q})\,, \end{align} where the functions $\mathcal{Z}_i$ are regular at $u=1$. The quasinormal spectra $\mathfrak{w} = \mathfrak{w} (\mathfrak{q})$ are then solutions to the equations $Z_i (u=0,\mathfrak{w},\mathfrak{q}) = 0$. They can be found numerically. In addition, in the regime $\mathfrak{w} \ll 1$ and $\mathfrak{q}\ll 1$, some frequencies are determined analytically. In all three channels, it will be convenient to use a new variable \begin{align} v = 1 - \sqrt{ 1 - \left(1-u^2\right)\left(1 - \gamma_{\scriptscriptstyle GB}^2 \right) }, \end{align} so that the horizon is at $v = 0$ and the boundary at $ v = 1 - \gamma_{\scriptscriptstyle GB}$. The new coordinate is singular at zero Gauss-Bonnet coupling, $\lambda_{\scriptscriptstyle GB} = 0$ ($\gamma_{\scriptscriptstyle GB} = 1$) and the results for that point, which are identical to those of $\mathcal{N} = 4$ SYM theory at infinite 't Hooft coupling, have to be obtained independently. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Scalar-4} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Scalar-3} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Scalar-2} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Scalar-1} \end{subfigure} \caption{Quasinormal spectrum (shown by circles) of the scalar channel metric perturbations in Gauss-Bonnet gravity for various values of the coupling $\lambda_{\scriptscriptstyle GB}$ and $\mathfrak{q}=0.1$. From top left: $\lambda_{\scriptscriptstyle GB} = \{-6,\, -1.3125, \, 0.09,\, 0.21\} $. For comparison, the spectrum at $\lambda_{\scriptscriptstyle GB} = 0$ is shown by squares.} \label{fig:GB-Scalar-channel} \end{figure} \subsection{The spectrum of the metric fluctuations} \label{sec:GBResults} The quasinormal spectra in Einstein-Gauss-Bonnet theory obtained non-perturbatively in $\lambda_{\scriptscriptstyle GB}$ show the properties qualitatively similar to the ones discussed in Section \ref{sec:N4Results} for the AdS-Schwarzschild background corrected by the $R^4$ term. In this Section we show the numerical results and analytic approximations for the spectra in the three channels, including the details of the breakdown of the hydrodynamic regime. There are some novelties in the Gauss-Bonnet case. First, not being restricted by the perturbative nature of the higher-derivative coupling, we are able to explore the coupling dependence to a fuller extent than in $\mathcal{N}=4$ SYM. In particular, we are able to say more about the spectral function and the density of poles in the complex plane than we could in $\mathcal{N}=4$ SYM owing to the restriction $\gamma \ll 1$. Second, Gauss-Bonnet gravity (and gravity with generic $R^2$ terms) provides an example of a holographic model, where the shear viscosity - entropy density ratio can be greater or less than $1/4\pi$, depending on the sign of $\lambda_{\scriptscriptstyle GB}$. We find qualitatively different patterns in the behavior of relaxation time and other quantities in those two regimes. \subsubsection{Scalar channel} The spectrum of gravitational perturbations in the scalar channel is shown in Fig.~\ref{fig:GB-Scalar-channel}. Two different regimes are observed depending on the value of $\eta/s$. \begin{figure}[ht] \centering \includegraphics[width=0.45\linewidth]{figs/GB-Scalar-Rew-vs-gamma} \includegraphics[width=0.45\linewidth]{figs/GB-Scalar-Imw-vs-gamma} \caption{Real (left panel) and imaginary (right panel) parts of the top three quasinormal frequencies of the symmetric branches in the scalar channel of Gauss-Bonnet gravity at $\mathfrak{q} = 0.5$.} \label{fig:GB-Scalar-Poles-vs-gamma} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{figs/scalar3poles_vs_lam_aos.pdf} \caption{Top three quasinormal frequencies (connected by lines for better visibility) of the symmetric branches in the scalar channel of gravitational perturbations in Gauss-Bonnet theory as functions of the coupling $\lambda_{\scriptscriptstyle GB} > 0$ (i.e. in the regime $\eta/s < 1/4\pi$). The rest of the quasinormal spectrum is not shown in this figure.} \label{fig:N=GB-Scalar-3-poles} \end{figure} For $\eta/s > 1/4\pi$ (corresponding to $\lambda_{\scriptscriptstyle GB} < 0$), the behavior of the poles is qualitatively the same as in $\mathcal{N}=4$ SYM: the two symmetric branches of gapped poles lift up towards the real axis monotonically with $|\lambda_{\scriptscriptstyle GB}|$ increasing, the distance between the poles decreases suggesting a formation of branch cuts $(-\infty,-q]$ and $[q,\infty)$ in the limit $|\lambda_{\scriptscriptstyle GB}|\rightarrow \infty$. Observing the motion of individual poles in the symmetric branches, one can say that there is an inflow of new poles from complex infinity along the branches with $|\lambda_{\scriptscriptstyle GB}|$ increasing. The dependence of real and imaginary parts of the top three poles in the symmetric branches on $\lambda_{\scriptscriptstyle GB}$ at $\mathfrak{q} = 0.5$ is shown in Fig.~\ref{fig:GB-Scalar-Poles-vs-gamma}. Within the limits of numerical accuracy, this dependence is monotonic. One may notice that the functions become flat for large negative $\lambda_{\scriptscriptstyle GB}$. When the poles in the two branches are sufficiently close to the real axis, we expect the spectral function to show the distinct quasiparticle peaks. We shall discuss this in detail for the shear channel, see subsection \ref{sec:SpectFunGBShear}. There is a new pole rising up the imaginary axis from complex infinity\footnote{In contrast to the corresponding $\mathcal{N}=4$ SYM case, we observe only one new pole for $\lambda_{\scriptscriptstyle GB} <0$, although it is difficult to make this conclusion with certainty using a numerical approach.}. The position of the new pole in the regime $\mathfrak{w} \ll 1$, $\mathfrak{q}\ll 1$, $\gamma_{\scriptscriptstyle GB} \gg 1$ can be determined analytically by solving the equation for $\mathcal{Z}_1$ perturbatively and imposing the condition $Z_1(u=0,\mathfrak{w},\mathfrak{q}) = 0$: \begin{align} \mathfrak{w}_1 = \mathfrak{w}^{GB}_{\mathfrak{g}} + \ldots = - \frac{4i}{\gamma_{\scriptscriptstyle GB} (\gamma_{\scriptscriptstyle GB} +2) - 3 + 2 \ln \left( \frac{ 2}{\gamma_{\scriptscriptstyle GB} + 1 }\right)} + \ldots \approx - \frac{i}{|\lambda_{\scriptscriptstyle GB}|} \, . \label{eq:GB-scalar-gap} \end{align} The mode remains purely on the negative imaginary axis and approaches the origin as $\gamma_{\scriptscriptstyle GB} \to \infty$ ($\lambda_{\scriptscriptstyle GB} \rightarrow -\infty$). This result is confirmed numerically. The residue vanishes in the limit $\gamma_{\scriptscriptstyle GB} \to \infty$, and so the pole disappears in that limit \cite{GBNesojen}. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Scalar-zoom-1} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Scalar-zoom-2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Scalar-zoom-3} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Scalar-zoom-4} \end{subfigure} \caption{Quasinormal modes (shown by black dots), close to the origin, in the scalar channel of Gauss-Bonnet black brane metric perturbations, for increasing coupling constant and $\mathfrak{q}=0.1$. From top left to bottom right: $\lambda_{\scriptscriptstyle GB} =\{-2.8125, \,-4.8125,\, -7.3125,\, -13.8125\}$. The analytic approximation (\ref{eq:GB-scalar-gap}) to the gapped pole on the imaginary axis is shown by a white square.} \label{fig:GB-Scalar-zoom} \end{figure} For $\eta/s <1/4\pi$ (corresponding to $\lambda_{\scriptscriptstyle GB} > 0$), the poles in the two branches become more sparse relative to their $\lambda_{\scriptscriptstyle GB} = 0$ distribution (see Figs.~\ref{fig:GB-Scalar-channel} and \ref{fig:N=GB-Scalar-3-poles}). In sharp contrast with the $\eta/s >1/4\pi$ case, here the branches lift up very slightly, almost infinitesimally, relative to their $\lambda_{\scriptscriptstyle GB}=0$ positions. As shown in Figs.~\ref{fig:GB-Scalar-channel} and \ref{fig:N=GB-Scalar-3-poles}, an outflow of poles along the branches to complex infinity is observed and it is conceivable that the poles of the two branches are eventually completely pushed out of the finite complex plane. At the same time, there are still new poles rising up the imaginary axis. In the limit $\lambda_{\scriptscriptstyle GB} \rightarrow 1/4$ ($\gamma_{\scriptscriptstyle GB} \rightarrow 0$) they are seen numerically to approach the positions (known exactly \cite{GBNesojen}) \begin{align} \mathfrak{w} = -i \left(4+2n_1 - \sqrt{4-3\mathfrak{q}^2}\right) , \qquad \mathfrak{w} = -i \left(4+2n_2 + \sqrt{4-3\mathfrak{q}^2}\right)\,, \label{eq:QNMScalar} \end{align} where $n_1$ and $n_2$ are non-negative integers. The limit of vanishing shear viscosity $\lambda_{\scriptscriptstyle GB} \rightarrow 1/4$ is difficult to explore numerically. However, the observed behavior is consistent with analytic results available at $\lambda_{\scriptscriptstyle GB} = 1/4$. Indeed, exactly at $\lambda_{\scriptscriptstyle GB} = 1/4$ the equations of motion can be solved in terms of hypergeometric functions and the quasinormal spectrum is determined exactly \cite{GBNesojen}. The only quasinormal frequencies at $\lambda_{\scriptscriptstyle GB} = 1/4$ are the ones given by Eq.~\eqref{eq:QNMScalar}. This is consistent with the picture we observe numerically for $0 < \lambda_{\scriptscriptstyle GB} < 1/4$. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Shear-4} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Shear-3} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Shear-2} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Shear-1} \end{subfigure} \caption{Quasinormal spectrum (shown by circles) of the shear channel metric perturbations in Gauss-Bonnet gravity for various values of the coupling $\lambda_{\scriptscriptstyle GB}$ and $\mathfrak{q}=0.1$. From top left: $\lambda_{\scriptscriptstyle GB} = \{-6,\, -1.3125, \, 0.09,\, 0.21\} $. For comparison, the spectrum at $\lambda_{\scriptscriptstyle GB} = 0$ is shown by squares.} \label{fig:GB-Shear-channel} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.45\linewidth]{figs/GB-Shear-Rew-vs-gamma} \includegraphics[width=0.45\linewidth]{figs/GB-Shear-Imw-vs-gamma} \caption{Real (left panel) and imaginary (right panel) parts of the top three quasinormal frequencies in the symmetric branches in the shear channel of Gauss-Bonnet at $\mathfrak{q} = 0.5$.} \label{fig:GB-Shear-Poles-vs-gamma} \end{figure} \subsubsection{Shear channel} The distribution of the poles in the shear channel is shown in Fig.~\ref{fig:GB-Shear-channel} and the coupling dependence of the real and imaginary parts of the top three poles in the symmetric branches can be seen in Fig.~\ref{fig:GB-Shear-Poles-vs-gamma}. The behavior of the poles in the symmetric branches is qualitatively similar to the one observed in the scalar channel. In the limit $\lambda_{\scriptscriptstyle GB} \rightarrow 1/4$, the new poles moving up the imaginary axis approach the $\mathfrak{q}$-independent positions known analytically \cite{GBNesojen}, \begin{align} \mathfrak{w} = -2 i \left(1+n_1\right), & &\mathfrak{w} = -2 i \left(3+n_2\right), \label{eq:QNMShear} \end{align} where $n_1$ and $n_2$ are non-negative integers. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Shear-zoom-1} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Shear-zoom-2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Shear-zoom-3} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Shear-zoom-4} \end{subfigure} \caption{Quasinormal modes, close to the origin, in the shear channel of Gauss-Bonnet, for increasing coupling constant and $\mathfrak{q}=0.1$. From left to right and top to bottom: $\lambda_{\scriptscriptstyle GB} =\{-2.0000,\, -2.8125,\, -3.7500,\, -22.3125\}$.} \label{fig:GB-Shear-zoom} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figs/GB-Shear-critical} \caption{Critical values of coupling $\lambda_{\scriptscriptstyle GB}$, limiting the hydrodynamic regime, for the shear channel of Gauss-Bonnet.} \label{fig:N=GB-Shear-critical} \end{figure} A characteristic feature of the shear channel is the presence of the hydrodynamic momentum diffusion pole on the imaginary axis. The dispersion relation for this mode is currently known analytically to quartic order in $q$ \cite{Grozdanov:2015kqa} and is given by \begin{align} \omega = - i \frac{\eta}{\varepsilon + P}\, q^2 - i \left[ \frac{\eta^2\tau_\Pi}{(\varepsilon + P)^2}-\frac{\theta_1}{2(\varepsilon + P)}\right] q^4 + \cdots , \label{eq:disp-shear} \end{align} where the transport coefficients were defined in Section \ref{sec:SYM}. For Gauss-Bonnet gravity, solving the equation for the shear mode analytically, perturbatively in $\mathfrak{w}\ll 1$, $\mathfrak{q}\ll 1$ and non-perturbatively in $\gamma_{\scriptscriptstyle GB}$ and imposing the Dirichlet condition $Z_2(0) = 0$, we find \begin{align} \mathfrak{w} = &- i \frac{\gamma_{\scriptscriptstyle GB}^2}{2}\, \mathfrak{q}^2 - i \frac{\gamma_{\scriptscriptstyle GB}^3}{16}\, \biggr[ \left(1+\gamma_{\scriptscriptstyle GB}\right) \left(\gamma_{\scriptscriptstyle GB}^2+ 5\gamma_{\scriptscriptstyle GB} - 2 \right) \nonumber\\ & - 2 \gamma_{\scriptscriptstyle GB}\, \ln \left[\frac{2 \left(1+\gamma_{\scriptscriptstyle GB}\right)}{\gamma_{\scriptscriptstyle GB} }\right] - 2 \left(2 \gamma_{\scriptscriptstyle GB}^2 + \gamma_{\scriptscriptstyle GB} - 1 \right)\biggr]\, \mathfrak{q}^4+ \cdots . \label{eq:wHydroThirdOrderGB} \end{align} The full set of non-perturbative first- and second-order hydrodynamic transport coefficients in Gauss-Bonnet theory was computed in \cite{Grozdanov:2015asa}. The coefficients relevant for the dispersion relation \eqref{eq:disp-shear} are given by \begin{align} &\eta = s \gamma_{\scriptscriptstyle GB}^2/4 \pi\,, \label{eq:gb-visc} \\ &\tau_{\Pi} = \frac{1}{2\pi T} \left[ \frac{1}{4} \left(1+\gamma_{\scriptscriptstyle GB}\right) \left( 5+\gamma_{\scriptscriptstyle GB} - \frac{2}{\gamma_{\scriptscriptstyle GB}}\right) - \frac{1}{2} \ln \frac{2 \left(1+\gamma_{\scriptscriptstyle GB}\right)}{\gamma_{\scriptscriptstyle GB} } \right] . \label{eq:l0} \end{align} Thus, the value of the third-order coefficient $\theta_1$ in the Gauss-Bonnet theory can now be read off Eq.~\eqref{eq:wHydroThirdOrderGB}: \begin{align} \theta_1 = \frac{\eta}{8\pi^2 T^2} \gamma_{\scriptscriptstyle GB} \left(2 \gamma_{\scriptscriptstyle GB}^2 + \gamma_{\scriptscriptstyle GB} - 1 \right). \end{align} In the limit of $\lambda_{\scriptscriptstyle GB} \to 0$ ($\gamma_{\scriptscriptstyle GB}\to 1$), this reproduces the corresponding result for $\mathcal{N}=4$ SYM theory found in Ref.~\cite{Grozdanov:2015kqa}, \begin{align} \theta_1 = \frac{\eta}{4 \pi^2 T^2} . \end{align} The behavior of the momentum diffusion pole depends on whether $\eta/s$ is greater or less than $1/4\pi$. For $\eta/s < 1/4\pi$ ($0<\lambda_{\scriptscriptstyle GB} <1/4$), the pole moves up the imaginary axis relative to its $\lambda_{\scriptscriptstyle GB} = 0 $ position and approaches the origin. It completely disappears from the spectrum at $\lambda_{\scriptscriptstyle GB} = 1/4$ \cite{GBNesojen}. For $\eta/s > 1/4\pi$ ($-\infty<\lambda_{\scriptscriptstyle GB} <0$), its behavior is qualitatively similar to the one observed in $\mathcal{N}=4$ SYM: it moves down the imaginary axis and collides with the top new pole moving up the axis from complex infinity at which point the hydrodynamic description seemingly fails. Then the two poles move off the imaginary axis into the complex plane. For sufficiently large values of $|\lambda_{\scriptscriptstyle GB}|$, this phenomenon happens in the range of small $\mathfrak{w}$, $\mathfrak{q}$ and thus can be approximated analytically (e.g. for $\lambda_{\scriptscriptstyle GB} \sim -3$, the merger of the poles occurs at $|\mathfrak{w}|\sim 0.1$, $\mathfrak{q} \sim 0.1$). Solving the shear mode equation of motion perturbatively in $\mathfrak{w}\ll 1$, $\mathfrak{q} \ll 1$, we find a pair of quasinormal frequencies \begin{align} &\mathfrak{w}_1 = \frac{-2 i + \sqrt{2 (\gamma_{\scriptscriptstyle GB} -1) (\gamma_{\scriptscriptstyle GB} +3) \gamma_{\scriptscriptstyle GB} ^2 \mathfrak{q}^2+4\gamma_{\scriptscriptstyle GB} ^2 \mathfrak{q}^2 \ln \left(\frac{2}{\gamma_{\scriptscriptstyle GB}+1}\right)-4}}{\gamma_{\scriptscriptstyle GB} (\gamma_{\scriptscriptstyle GB} +2) -3+2 \ln \left(\frac{2}{\gamma_{\scriptscriptstyle GB}+1}\right)}, \label{eq:FullGBShearAnaly1} \\ &\mathfrak{w}_2 = \frac{- 2 i -\sqrt{2 (\gamma_{\scriptscriptstyle GB} -1) (\gamma_{\scriptscriptstyle GB} +3) \gamma_{\scriptscriptstyle GB} ^2 \mathfrak{q}^2+4\gamma_{\scriptscriptstyle GB} ^2 \mathfrak{q}^2 \ln \left(\frac{2}{\gamma_{\scriptscriptstyle GB}+1}\right)-4}}{\gamma_{\scriptscriptstyle GB} (\gamma_{\scriptscriptstyle GB} +2) -3+ 2 \ln \left(\frac{2}{\gamma_{\scriptscriptstyle GB}+1}\right)} \label{eq:FullGBShearAnaly2} \end{align} whose motion in the complex plane approximates the numerical observations quite well (see Fig.~\ref{fig:GB-Shear-zoom}). Expanding the above expressions for $\mathfrak{w}_1$ and $\mathfrak{w}_2$ to second order in $\mathfrak{q}$, we find the standard hydrodynamic pole of Eq.~\eqref{eq:wHydroThirdOrderGB} \begin{align} \mathfrak{w}_1 = - \frac{1}{2} i \gamma_{\scriptscriptstyle GB}^2 \mathfrak{q}^2 + \ldots\, \label{eq:w1ShearExpq} \end{align} and the new gapped pole \begin{align} \mathfrak{w}_2 = \mathfrak{w}_{\mathfrak{g}}^{GB} + \frac{1}{2} i \gamma_{\scriptscriptstyle GB}^2 \mathfrak{q}^2 + \ldots\,, \label{eq:w2ShearExpq} \end{align} where the gap $\mathfrak{w}_{\mathfrak{g}}^{GB}$ is identical to the one in Eq.~\eqref{eq:GB-scalar-gap}. The behavior of the poles is qualitatively the same as in the $\mathcal{N}=4$ SYM theory. The diffusion pole moves down the imaginary axis while the new gapped pole moves up as $\lambda_{\scriptscriptstyle GB}$ decreases from $0$ towards negative values. Then the two poles collide at some $\mathfrak{q}$-dependent value of $\lambda_{\scriptscriptstyle GB}^c = \lambda_{\scriptscriptstyle GB}^c (\mathfrak{q})$ and move off the axis. An analytical approximation for this dependence (or more conveniently, for $\mathfrak{q}_c = \mathfrak{q}_c(\gamma_{\scriptscriptstyle GB})$) can be found from the condition $\mathfrak{w}_1 (\mathfrak{q}_c) = \mathfrak{w}_2 (\mathfrak{q}_c)$. We interpret this condition as the condition indicating inadequacy of hydrodynamic description for $\mathfrak{q} > \mathfrak{q}_c (\lambda_{\scriptscriptstyle GB})$. Equating the expressions (\ref{eq:w1ShearExpq}) and (\ref{eq:w1ShearExpq}), we obtain \begin{align}\label{eq:qcShearGBqExp} \mathfrak{q}_{c} = \frac{2}{\gamma_{\scriptscriptstyle GB} \sqrt{\gamma_{\scriptscriptstyle GB} (\gamma_{\scriptscriptstyle GB} +2) - 3 + 2 \ln \left( \frac{2}{\gamma_{\scriptscriptstyle GB} + 1 } \right)}} \sim \frac{1}{2 |\lambda_{\scriptscriptstyle GB}|}. \end{align} Note that if we used instead the un-expanded Eqs.~(\ref{eq:FullGBShearAnaly1}), (\ref{eq:FullGBShearAnaly2}), we would find $\mathfrak{q}_{c}^{\text{(un-exp)}} = \mathfrak{q}_{c}/\sqrt{2}$. The discrepancy is due to the additional $\mathfrak{q}$ corrections not captured by Eqs.~(\ref{eq:w1ShearExpq}), (\ref{eq:w1ShearExpq}). The dependence $\mathfrak{q}_c = \mathfrak{q}_c (\lambda_{\scriptscriptstyle GB})$ obtained numerically as well as the analytic approximation \eqref{eq:qcShearGBqExp} are shown in Fig.~\ref{fig:N=GB-Shear-critical}. \subsubsection{Sound channel} The poles in the sound channel are shown in Fig.~\ref{fig:GB-Sound-channel} and the behavior of the real and imaginary parts of the three leading non-hydrodynamic poles in the symmetric branches is demonstrated in Fig.~\ref{fig:GB-Sound-Poles-vs-gamma}. We observe the same features of the coupling dependence of the spectrum as in the other channels. The two symmetric branches lift up from their $\lambda_{\scriptscriptstyle GB}=0$ positions, moving swiftly towards the real axis and becoming more dense in the case of $\eta/s>1/4\pi$ ($\lambda_{\scriptscriptstyle GB} <0$) and moving only slightly, becoming more sparse and apparently disappearing from the finite complex plane for $\eta/s<1/4\pi$ ($0<\lambda_{\scriptscriptstyle GB} <1/4$). \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Sound-4} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Sound-3} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Sound-2} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Sound-1} \end{subfigure} \caption{Quasinormal spectrum (shown by circles) of the sound channel metric perturbations in Gauss-Bonnet gravity for various values of the coupling $\lambda_{\scriptscriptstyle GB}$ and $\mathfrak{q}=0.1$. From top left: $\lambda_{\scriptscriptstyle GB} = \{-6,\, -1.3125, \, 0.09,\, 0.21\} $. For comparison, the spectrum at $\lambda_{\scriptscriptstyle GB} = 0$ is shown by squares.} \label{fig:GB-Sound-channel} \end{figure} There are new gapped poles rising up the imaginary axis regardless of the sign of $\lambda_{\scriptscriptstyle GB}$. For $\eta/s<1/4\pi$ ($0<\lambda_{\scriptscriptstyle GB} <1/4$), they reach the asymptotic values \begin{align} \mathfrak{w} = -i \left(4+2n_1 - \sqrt{4+\mathfrak{q}^2}\right) , & &\mathfrak{w} = -i \left(4+2n_2 + \sqrt{4+\mathfrak{q}^2}\right) , \label{eq:QNMSound} \end{align} where $n_1$ and $n_2$ are non-negative integers, in the zero viscosity limit $\lambda_{\scriptscriptstyle GB} \rightarrow 1/4$. Here the modes \eqref{eq:QNMSound} are the exact quasinormal frequencies at $\lambda_{\scriptscriptstyle GB} = 1/4$ \cite{GBNesojen}. \begin{figure}[ht] \centering \includegraphics[width=0.45\linewidth]{figs/GB-Sound-Rew-vs-gamma} \includegraphics[width=0.45\linewidth]{figs/GB-Sound-Imw-vs-gamma} \caption{Real (left panel) and imaginary (right panel) parts of the top three quasinormal frequencies in the symmetric branches in the sound channel of Gauss-Bonnet at $\mathfrak{q} = 0.5$.} \label{fig:GB-Sound-Poles-vs-gamma} \end{figure} \begin{figure}[ht] \centering \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Sound-zoom-1} \end{subfigure} \qquad \begin{subfigure}[t]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Sound-zoom-2} \end{subfigure} \\ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Sound-zoom-3} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Sound-zoom-4} \end{subfigure} \caption{Quasinormal modes, close to the origin, in the sound channel of the Gauss-Bonnet theory, for increasing coupling constant and $\mathfrak{q}=0.1$. From left to right and top to bottom: $\lambda_{\scriptscriptstyle GB} =\{-2.0000,\, -2.8125,\, -3.7500,\, -24.7500\}$.} \label{fig:GB-Sound-zoom} \end{figure} In the regime $\eta/s>1/4\pi$ ($\lambda_{\scriptscriptstyle GB} <0$), the top new gapped pole moving up the imaginary axis with $|\lambda_{\scriptscriptstyle GB}|$ increasing gradually approaches the level of the two symmetric sound mode poles and becomes aligned with them (see Fig.~\ref{fig:GB-Sound-zoom}). For larger values of $|\lambda_{\scriptscriptstyle GB}|$, all three poles move closer to the real axis, with the sound poles now becoming parts of the symmetric branches. When the three poles are close to the origin, one can try to build an analytic approximation by solving the equation for $\mathcal{Z}_3$ perturbatively in $\mathfrak{w} \ll 1$, $\mathfrak{q} \ll 1$. The Dirichlet condition ($Z_3 (0) = 0$) then gives the equation \begin{align} &9 \gamma_{\scriptscriptstyle GB} ^2 \mathfrak{q}^2 \mathfrak{w} -3 \gamma_{\scriptscriptstyle GB} ^2 \mathfrak{w} ^3+2 \gamma_{\scriptscriptstyle GB} \mathfrak{q}^2 \mathfrak{w} -2 \mathfrak{q}^2 \mathfrak{w} \ln (\gamma_{\scriptscriptstyle GB}+1)-6 \gamma_{\scriptscriptstyle GB} \mathfrak{w} ^3+6 \mathfrak{w} ^3 \ln (\gamma_{\scriptscriptstyle GB}+1 ) \nonumber\\ &-3 \mathfrak{q}^2 \mathfrak{w}+\mathfrak{q}^2 \mathfrak{w} \ln 4+4 i \mathfrak{q}^2+9 \mathfrak{w}^3-3 \mathfrak{w} ^3 \ln 4 -12 i \mathfrak{w} ^2 = 0. \label{eq:PolySoundGB} \end{align} The three roots, $\mathfrak{w}_{1,2,3}$, can be found analytically, but the expressions are too cumbersome to present here. Their expansions in $\mathfrak{q}$ to quadratic order are given by \begin{align} &\mathfrak{w}_{1,2} = \pm \frac{1}{\sqrt{3}} \mathfrak{q} - \frac{1}{3} i \gamma_{\scriptscriptstyle GB}^2\mathfrak{q}^2 + \ldots \, , \label{eq:FullGBSoundAnaly1} \\ &\mathfrak{w}_3 = \mathfrak{w}_{\mathfrak{g}}^{GB} + \frac{2}{3} i \gamma_{\scriptscriptstyle GB}^2 q^2 + \ldots\, , \label{eq:FullGBSoundAnaly2} \end{align} where the gap $ \mathfrak{w}_{\mathfrak{g}}^{GB}$ is the same as in the scalar and shear channels (Eqs.~\eqref{eq:GB-scalar-gap} and \eqref{eq:w2ShearExpq}, respectively). The poles \eqref{eq:FullGBSoundAnaly1} correspond to the sound wave modes. Defining the critical momentum $\mathfrak{q}=\mathfrak{q}_c(\gamma_{\scriptscriptstyle GB})$ as the one at which the hydrodynamic expansion no longer serves as an adequate description of the low-energy limit of the theory, we may choose the equation $\mbox{Im} [\mathfrak{w}_1 (\mathfrak{q}_c) ] = \mbox{Im} [\mathfrak{w}_2 (\mathfrak{q}_c) ] = \mbox{Im} [\mathfrak{w}_3 (\mathfrak{q}_c) ] $ to represent such a condition. Solving this for $\mathfrak{q}_c(\gamma_{\scriptscriptstyle GB})$, we find exactly the same function \eqref{eq:qcShearGBqExp} as in the shear channel. Note, however, that the agreement between our numerical results and the analytic approximation is less satisfactory than in the shear channel (see Fig.~\ref{fig:GB-Sound-zoom}), apparently due to a stronger $\mathfrak{q}$ dependence in the sound channel. \begin{figure}[ht] \centering \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Scalar-pole-density} \caption{Scalar} \end{subfigure} ~ \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Shear-pole-density} \caption{Shear} \end{subfigure} \linebreak\linebreak \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1\linewidth]{figs/GB-Sound-pole-density} \caption{Sound} \end{subfigure} \caption{Density of poles in the complex $\mathfrak{w}$ plane plotted as a function of $-\lambda_{GB} \in (-1/4,\, 2 )$ at $\mathfrak{q} = 0.5$.} \label{figs:GBPoleDensity} \end{figure} \subsubsection{The density of poles and the appearance of branch cuts} \label{sec:DensityBranch} In Section \ref{sec:SYM} we observed that for $\mathcal{N}=4$ SYM theory correlators, the density of poles in the two symmetric branches increases with 't Hooft coupling decreasing. In Gauss-Bonnet theory, the same phenomenon can be investigated in more detail since we are not constrained by infinitesimally small values of the higher derivative coupling. In all channels, the density of non-hydrodynamic poles in the two branches monotonically increases for $\eta / s > \hbar/4\pi k_B$ and decreases for $\eta / s < \hbar/4\pi k_B$. Although the trend is apparent already from Figs.~\ref{fig:GB-Scalar-channel}, \ref{fig:GB-Shear-channel}, \ref{fig:GB-Sound-channel}, here we show the density of poles as a function of the coupling constant in Fig.~\ref{figs:GBPoleDensity}. The density is determined by selecting a region of the complex $\mathfrak{w}$ plane, counting the number of poles in the symmetric branches inside that region and computing the resulting number density. The dependence in Fig.~\ref{figs:GBPoleDensity} is monotonic within the bounds of our numerical accuracy. The situation for $\lambda_{\scriptscriptstyle GB} >0$ ($\eta / s < \hbar/4\pi k_B$) is clear: as $\lambda_{\scriptscriptstyle GB} \rightarrow 1/4$ ($\eta / s \rightarrow 0$), the poles in the symmetric branches become less and less dense and in the limit they disappear from the finite complex plane altogether, as confirmed by analytic calculation at $\lambda_{\scriptscriptstyle GB} = 1/4$. For $\lambda_{\scriptscriptstyle GB} <0$ ($\eta / s > \hbar/4\pi k_B$), the poles become more and more dense, the symmetric branches lift up toward the real axis with $|\lambda_{\scriptscriptstyle GB}|$ increasing, and one may conjecture that in the limit $\lambda_{\scriptscriptstyle GB} \to -\infty$ they merge to form branch cuts in the complex plane of frequency along $(-\infty , - \mathfrak{q}]$ and $[\mathfrak{q}, \infty)$. Numerically, we observe that $\mbox{Re}[ \mathfrak{w}]$ of the leading quasinormal mode in the (right) branch of poles monotonically approaches the line $\mathfrak{w} = \mathfrak{q}$ for large $|\lambda_{\scriptscriptstyle GB}|$ (see Fig.~\ref{fig:GB-Shear-BranchCut}) which supports the conjecture that $\pm \mathfrak{q}$ are the branch points of the correlator in the limit $\lambda_{\scriptscriptstyle GB} \to -\infty$. \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figs/GBShearBranchCutEdge} \caption{Position of the first pole in the shear spectrum of the Gauss-Bonnet theory, for $\lambda_{\scriptscriptstyle GB}=\{-100,-500,-1000\}$, as a function of momentum $\mathfrak{q}$. The point indicates where the hypothetical branch cut would begin in the limit of large $|\lambda_{\scriptscriptstyle GB}|$. The solid line corresponds to the expectation of where the position of $\mbox{Re}\, \mathfrak{w}$ of the first pole should be in the limit of $\lambda_{\scriptscriptstyle GB} \to -\infty$, i.e. $\mbox{Re} \,\mathfrak{w}=\mathfrak{q}$.} \label{fig:GB-Shear-BranchCut} \end{figure} Note that all other poles (the ones not belonging to the symmetric branches at finite $\lambda_{\scriptscriptstyle GB}$) in all channels either join the branches (in the sound and shear channels) or disappear due to vanishing residues (scalar and sound channels) in the limit $\lambda_{\scriptscriptstyle GB} \to -\infty$. Thus, in that limit, the analytic structure of the correlator is represented by the branch cuts $(-\infty , - \mathfrak{q}] \cup [\mathfrak{q}, \infty)$ (see Fig.~\ref{fig:GB-BranchCut}). This resembles the zero temperature limit of the thermal correlator in a CFT dual to Einstein gravity with no higher derivative corrections. Such correlators are known analytically only for BTZ background. For example, the $\Delta = 2$ thermal correlator has the form \cite{Son:2002sd} \begin{align} G^R (\mathfrak{w},\mathfrak{q}) \sim \left(\mathfrak{q}^2-\mathfrak{w}^2\right) \left\{ \psi \left[ 1 -\frac{i}{2}\left( \mathfrak{w} - \mathfrak{q}\right)\right] + \psi \left[ 1 -\frac{i}{2}\left( \mathfrak{w} + \mathfrak{q}\right)\right] \right\}\,, \label{eq:btz-corr} \end{align} where $\psi (z)$ is the logarithmic derivative of the Gamma-function with poles at $z=-n$, $n=0,1,2,...$. In the zero-temperature limit $\mathfrak{w}\gg 1$, $\mathfrak{q} \gg 1$, the poles merge forming two branch cuts running from the branch points $\omega = \pm q$ to infinity parallel to the imaginary axis. For large $z$, Binet's formula implies $\psi (z) \sim \ln z$ and thus in the limit of zero temperature the correlator (\ref{eq:btz-corr}) becomes $G^R \sim k^2 \ln{k^2}$, where $k^\mu = (-\omega,q)$. Similarly, the zero-temperature limit of the energy-momentum correlator in a $4d$ CFT dual to Einstein gravity is $G^R \sim (-\omega^2 + q^2)^2 \ln{(-\omega^2 + q^2)}$. This function has branch points at $\omega = \pm q, \infty$ joined by the branch cuts $(-\infty , - q] \cup [q, \infty)$. \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figs/branchCut.pdf} \caption{The conjectured analytic structure of thermal correlators in holographic Gauss-Bonnet theory in the limit $\lambda_{\scriptscriptstyle GB} \to -\infty$.} \label{fig:GB-BranchCut} \end{figure} \subsubsection{Coupling constant dependence of the shear viscosity - relaxation time ratio in Gauss-Bonnet theory} The coupling constant dependence of the ratio $\eta / s \,\tau_{\scriptscriptstyle R} T$ in Gauss-Bonnet theory shows the same qualitative features as in $\mathcal{N} = 4$ SYM discussed in Section \ref{sec:RatioN4}. In Fig.~\ref{fig:GB-Shear-etaOverStauT-vs-gamma}, we plot the ratios $\eta / (s \,\tau_k T )$, where $\tau_k$, $k=1,2$, are defined as $\tau_k = 1/|\mbox{Im}\, \omega_k|$ for the two smallest in magnitude non-hydrodynamical quasinormal frequencies $\omega_k$ at $\mathfrak{q}=0$. We identify $\tau_{\scriptscriptstyle R}$ with $\tau_1$, $\omega_1$ being the fundamental frequency. The functions are monotonic, changing rapidly in the vicinity of $\lambda_{\scriptscriptstyle GB} =0$ and flattening out in the region $|\lambda_{\scriptscriptstyle GB}| \approx 3 - 6$. As in $\mathcal{N} = 4$ SYM theory, the kinetic theory result (\ref{eq:rel-visc-rel}) seems to hold at intermediate coupling. \begin{figure}[ht] \centering \includegraphics[width=0.7\linewidth]{figs/GB-Shear-etaOverStauT-vs-gamma} \caption{The ratios $\eta/s T \tau_k$, for $k=\{1,\,2\}$, as functions of $\lambda_{\scriptscriptstyle GB}$ in the shear channel of the Gauss-Bonnet theory. Here, $\tau_k$ are defined as $\tau_k = 1/|\mbox{Im}\, \omega_k|$ for the two smallest-in-magnitude non-hydrodynamical quasinormal frequencies $\omega_k$ at $\mathfrak{q}=0$. The error bars correspond to resolution errors in the $\mathfrak{w}$-plane.} \label{fig:GB-Shear-etaOverStauT-vs-gamma} \end{figure} \subsubsection{Shear channel spectral function and quasiparticles at ``weak coupling''} \label{sec:SpectFunGBShear} Since the non-hydrodynamic poles in the symmetric branches approach the real axis with $|\lambda_{\scriptscriptstyle GB}|$ increasing (i.e. at weaker coupling), one may expect the corresponding spectral function to develop a structure resembling quasiparticle peaks. We check this by computing the spectral function in the shear channel. Choosing the spatial momentum along the $z$ axis, the shear channel retarded energy-momentum tensor correlator $G^{\text{R}}_{xz,xz}(\mathfrak{w},\mathfrak{q},\lambda_{\scriptscriptstyle GB})$ in Gauss-Bonnet theory can be computed as follows \cite{GBNesojen}: \begin{equation} \label{eq:corr-norm} G^{\text{R}}_{xz,xz}(\mathfrak{w},\mathfrak{q},\lambda_{\scriptscriptstyle GB}) =8 \pi^2 T^2 \mathfrak{w}^2 \lim_{\varepsilon \rightarrow 0} \, {\cal C} (\varepsilon, \mathfrak{w}, \mathfrak{q})\frac{\partial_u Z_2 (\varepsilon,\mathfrak{w},\mathfrak{q})}{Z_2(\varepsilon,\mathfrak{w},\mathfrak{q})}\,, \end{equation} where the function ${\cal C}$ is given by \begin{equation} {\cal C} (u,\mathfrak{w},\mathfrak{q}) = \frac{\pi^2 T^2}{8\kappa_5^2}\, \frac{\bar{N} \bar{f} (1-\bar{f})}{ N_{GB}^5 u \left[ \bar{N} \bar{f} \mathfrak{q}^2 - (1-\bar{f})^2 \mathfrak{w}^2 \right]}\,, \end{equation} with $$ \bar{f} = 1 - \sqrt{1- 4 \lambda_{\scriptscriptstyle GB} (1-u^2)}\,, \;\; \qquad \;\; \bar{N} = N_{GB}^2\, \frac{1-4\lambda_{\scriptscriptstyle GB}}{2\lambda_{\scriptscriptstyle GB}}\,, $$ and $Z_2(u)$ is the solution of the shear channel equation of motion obeying the incoming wave boundary condition at the horizon and normalized to one at the same $u = \varepsilon \to 0$. The solution $Z_2(u)$ can be written as $Z_2(u) = \mathcal{A}_2 Z^I_2 (u) + \mathcal{B}_2 Z^{II}_2 (u)$, where $Z^I_2 (u)$ and $Z^{II}_2 (u)$ are the two local Frobenius expansions at the boundary (see e.g. \cite{Kovtun:2006pf}). In terms of $\mathcal{A}_2$ and $\mathcal{B}_2$, the retarded Green's function \eqref{eq:corr-norm} is given by\footnote{Since the Frobenius expansion of $Z^{I}_2$ contains $Z^{II}_2$ multiplying $\ln u$, it is numerically more convenient to compute $\mathcal{B}_2$ by subtracting off the logarithmic term, as was done in \cite{Kovtun:2006pf}. We find that $\mathcal{B}_2 = \frac{1}{2} \lim_{u\to 0} \left( \partial^2_u Z_2 - 2 \mathcal{A}_2 h \ln u \right) - \frac{3}{2} \mathcal{A}_2 h$, where in the Gauss-Bonnet theory, $h = - 8 \lambda_{\scriptscriptstyle GB}^4 \left( \mathfrak{q}^2 - \mathfrak{w}^2\right)^2 / \left( 1 - \sqrt{1-4\lambda_{\scriptscriptstyle GB}} \right)^4$ \cite{GBNesojen}.} \begin{align} G^{\text{R}}_{xz,xz}(\mathfrak{w},\mathfrak{q},\lambda_{\scriptscriptstyle GB}) =\frac{\pi^4T^4}{2 \kappa_5^2}\, \frac{\bar{N} \gamma_{\scriptscriptstyle GB} \left(1-\gamma_{\scriptscriptstyle GB}\right) \mathfrak{w}^2 }{ N_{GB}^5 \left[ \bar{N} \left(1-\gamma_{\scriptscriptstyle GB}\right) \mathfrak{q}^2 - \gamma_{\scriptscriptstyle GB}^2 \mathfrak{w}^2 \right]} \, \frac{\mathcal{B}_2}{\mathcal{A}_2}. \end{align} The spectral function is then computed as \begin{align} \rho_{xz,xz} \left(\mathfrak{w},\mathfrak{q},\lambda_{\scriptscriptstyle GB}\right) = - \mbox{Im} \, G^{\text{R}}_{xz,xz} \left(\mathfrak{w},\mathfrak{q},\lambda_{\scriptscriptstyle GB}\right). \end{align} In Fig.~\ref{fig:GB-Shear-SpectralFunctions}, we plot the dimensionless spectral function \begin{align} \bar \rho_{xz,xz} \left(\mathfrak{w},\mathfrak{q},\lambda_{\scriptscriptstyle GB}\right) \equiv \frac{ \kappa^2_5 }{ 4 \pi^2 T^4} \rho_{xz,xz} \left(\mathfrak{w},\mathfrak{q},\lambda_{\scriptscriptstyle GB}\right) , \end{align} where $\kappa^2_5$ is the Newton's constant from the Gauss-Bonnet action \eqref{eq:GBaction} and $T$ the Hawking temperature \eqref{eq:GBTemperature}. As $|\lambda_{\scriptscriptstyle GB}|$ increases and the symmetric branches of poles approach the real $\mathfrak{w}$ axis, the appearance of quasiparticle-like peaks in the spectral function is clearly seen. As a result of the quasinormal modes now having $\left|\mbox{Im} [\mathfrak{w}] \right| \ll \left|\mbox{Re} [\mathfrak{w}] \right|$ at large $|\lambda_{\scriptscriptstyle GB}|$, the peaks become sharp and very narrow. Since the density of poles increases with $|\lambda_{\scriptscriptstyle GB}|$, the density of peaks increases as well. In the limit $|\lambda_{\scriptscriptstyle GB}|\rightarrow \infty$, they presumably form a continuum. \begin{figure}[ht] \centering \begin{subfigure}[t]{0.47\linewidth} \includegraphics[width=1\linewidth]{figs/GBShearSpectralFunction-lambda=-100} \end{subfigure} \qquad \begin{subfigure}[t]{0.47\linewidth} \includegraphics[width=1\linewidth]{figs/GBShearSpectralFunction-lambda=-500} \end{subfigure} \caption{The dimensionless spectral function $\bar \rho_{xz,xz} \left(\mathfrak{w},\mathfrak{q},\lambda_{\scriptscriptstyle GB}\right)$ in the shear channel of the Gauss-Bonnet theory for $\lambda_{\scriptscriptstyle GB} = -100$ (left panel) and $\lambda_{\scriptscriptstyle GB} = -500$ (right panel) at $\mathfrak{q} = 0.1$.} \label{fig:GB-Shear-SpectralFunctions} \end{figure} \section{Generic curvature squared corrections to quasinormal spectra of metric perturbations} \label{sec:r2} In this Section, we comment on the quasinormal spectrum in the theory with general curvature squared terms in the action, \begin{align} S_{R^2} = \frac{1}{2 \kappa_5^2 } \int d^5 x \sqrt{-g} \left[ R - 2 \Lambda + L^2 \left( \alpha_1 R^2 + \alpha_2 R_{\mu\nu} R^{\mu\nu} + \alpha_3 R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} \right) \right], \label{eq:R2Th} \end{align} where the cosmological constant is $\Lambda = - 6 / L^2$. For the special choice of the parameters $\alpha_1$, $\alpha_2$, $\alpha_3$ given by \begin{align} \label{eq:gb-set} \alpha_1 = \lambda_{\scriptscriptstyle GB}/2\,, \qquad \alpha_2 = - 2 \lambda_{\scriptscriptstyle GB}\,, \qquad \alpha_3 = \lambda_{\scriptscriptstyle GB}/2\,, \end{align} the action \eqref{eq:R2Th} coincides with the Gauss-Bonnet action (\ref{eq:GBaction}). Generically, however, the action \eqref{eq:R2Th} leads to the equations of motion involving derivatives up to the fourth order. In this case the higher derivative terms in \eqref{eq:R2Th} are treated perturbatively and the parameters $\alpha_i$ are assumed to be infinitesimally small. We can find the corresponding quasinormal spectra by using a field redefinition and the known results for Gauss-Bonnet and $\mathcal{N} = 4$ SYM theories. One may notice \cite{Brigante:2007nu} that the action \eqref{eq:R2Th} with $\alpha_3 = 0$ is equivalent via a field redefinition \begin{align} g_{\mu\nu} = \bar g_{\mu\nu} + \alpha_2 \bar R_{\mu\nu} - \frac{1}{3} \left(\alpha_2 + 2 \alpha_1 \right) \bar g_{\mu\nu} \bar R, \label{eq:FieldRedef} \end{align} and an additional rescaling to the Einstein-Hilbert action with the same cosmological constant and modified Newton's constant (which does not enter the vacuum equations of motion).\footnote{See Ref. \cite{Grozdanov:2014kva} for a detailed description of this procedure where it was applied to the calculation of the second-order transport coefficients.} Consider now a gauge-invariant (with respect to infinitesimal metric perturbations) mode $Z\left(h_{\mu\nu}\right)$ that is linear in metric perturbations, $\delta g_{\mu\nu} = h_{\mu\nu}$. To linear order, the Ricci and Einstein tensors are invariant under diffeomorphisms, hence so is $g_{\mu\nu} R$. It therefore follows that $g_{\mu\nu}$ and $\bar g_{\mu\nu}$ transform identically under the diffeomorphisms and so \begin{align} Z\left(h_{\mu\nu}\right) = Z\left(\bar h_{\mu\nu}\right). \end{align} Hence, when $\alpha_3 = 0$, the quasinormal modes of $Z\left(\bar h_{\mu\nu}\right)$ are also those of $Z\left(h_{\mu\nu}\right)$, which means that the quasinormal mode spectrum of the AdS-Schwarzschild black brane (dual to thermal $\mathcal{N} = 4$ SYM theory at infinite 't Hooft coupling) is exactly the spectrum of the theory defined by \eqref{eq:R2Th} with $\alpha_3 = 0$. To include the $\alpha_3$ contributions, we can use the fact that the perturbative (in $\alpha_i$) quasinormal spectrum generically has the form \begin{align} \omega^* = \omega_0^* + \alpha_1\ \tilde\omega_{1}^* + \alpha_2 \ \tilde\omega_{2}^* + \alpha_3 \ \tilde\omega_{3}^*, \end{align} where $\omega^*_0$ are the quasinormal frequencies of the AdS-Schwarzschild black brane. Moreover, the above discussion shows that $\tilde\omega_{1}^* = 0$, $\tilde\omega_{2}^* = 0$. Keeping in mind the identification (\ref{eq:gb-set}) and considering the linearized quasinormal spectrum in Gauss-Bonnet theory, \begin{align} \omega^*_{GB} = \omega^*_0 + \lambda_{GB}\, \tilde\omega^*_{GB}, \end{align} we conclude that $\lambda_{GB} \tilde \omega_3^*/2 = \lambda_{GB} \tilde\omega_{GB}^*$. Hence, the quasinormal spectrum of a background defined by the action \eqref{eq:R2Th} has the form \begin{align} \omega^* = \omega^*_0 + 2\,\alpha_3\,\tilde\omega^*_{GB}\,, \end{align} where $\omega^*_0$ is the corresponding frequency in the spectrum of AdS-Schwarzschild black brane with no higher derivative corrections included and $\tilde\omega^*_{GB}$ is the coefficient of the term linear in $\lambda_{\scriptscriptstyle GB}$ in the corresponding spectrum of the Gauss-Bonnet theory. Thus, the coupling dependence of the relaxation time and other properties of the spectrum described in previous sections are qualitatively the same as the ones in the Gauss-Bonnet theory (and $\mathcal{N} = 4$ SYM theory with large but finite 't Hooft coupling). In particular, one observes a qualitative difference between the regimes with $\eta / s > \hbar/4\pi k_B$ and $\eta / s < \hbar/4\pi k_B$ similar to the one described in the previous Section. \section{Discussion} \label{sec:discussion} In this paper, we have studied the influence of higher derivative $R^2$ and $R^4$ terms on the quasinormal spectra of gravitational perturbations of black branes. In a dual QFT, this corresponds to changing the 't Hooft coupling or its analogue from infinite to large but finite value. Understanding, even qualitatively, how the physical quantities responsible for thermalization change from strong to weak coupling would be important both from a conceptual and a phenomenological point of view. We were looking for robust, model-independent qualitative features the higher derivative terms may bring about. Vulnerabilities of this approach are quite obvious. While $\mathcal{N} = 4$ $SU(N_c)$ SYM is a well defined unitary theory, higher derivative corrections in its dual gravity description are only partially known even to leading order in $\gamma \sim \lambda^{-3/2}$ at infinite $N_c$ and those terms must be treated perturbatively. Moreover, as emphasized recently in \cite{Waeber:2015oka}, different physical quantities may have very different sensitivity to coupling corrections, and the smallness of the perturbative parameter $\gamma$ may not necessarily be a good indicator of the size of corrections. In contrast, the second order equations of motion of Gauss-Bonnet gravity can be treated fully non-perturbatively. However, the (hypothetical) dual field theory suffers from causality violation and even the bulk theory may need higher spin fields to mend the problems (the latter would imply that higher derivative corrections can only be treated perturbatively, i.e. the theory loses its special status with respect to Ostrogradsky instability). Unphazed by these uncertainties, we proceed to investigate coupling corrections in both theories and are encouraged to observe qualitatively similar results in both cases. Our findings are summarized at the end of Section \ref{sec:intro}. One curious feature we find is the behavior of quasinormal spectrum leading to a breakdown of the hydrodynamic description at a coupling-dependent critical value $q_c$ of the spatial momentum. In both $\mathcal{N} = 4$ SYM and Gauss-Bonnet theories, the dependence on coupling implies that hydrodynamics has a wider applicability range at strong coupling. It may be interesting to investigate the convergence properties of the hydrodynamic derivative expansion at finite coupling, possibly along the lines of Refs.~\cite{Heller:2013fn,Heller:2015dha}. Another qualitatively similar feature for both theories is the coupling dependence of the ratio of the shear viscosity to the product of relaxation time, entropy density and temperature. This quantity is (approximately) constant in kinetic theory at weak coupling. From the dual gravity with higher derivative corrections we find that this ratio changes rapidly in the vicinity of infinite coupling and then shows a very weak (essentially flat) dependence on coupling when the coupling is further decreased to large but finite values. Similar behavior is expected for other transport coefficients. Admittedly, corrections from the unknown higher derivative terms may influence the dependence at intermediate coupling. Yet, if correct, the observed tendency may help to explain certain phenomenological success and "unreasonable effectiveness" of kinetic theory methods far beyond their justified domain. We also found that the behavior of coupling corrections to quasinormal spectrum and related quantities depends strongly on whether $\eta / s > \hbar/4\pi k_B$ or $\eta / s < \hbar/4\pi k_B$. In the regime of $\eta / s < \hbar/4\pi k_B$, the symmetric branches of quasinormal modes exhibit monotonically increasing $\left| \mbox{Im}\, \omega \right|$. Since this could lead to the relaxation time of the system $\tau_{\scriptscriptstyle R}$ decreasing below any possible lower bound (see Eq.~\eqref{eq:sachdev-const}), it is conceivable that this regime is pathological. Earlier work was focused on looking for possible pathologies (e.g. causality violation) in the ultraviolet sector of the theories having the regime $\eta / s < \hbar/4\pi k_B$, and constraining higher derivative couplings accordingly. However, inconsistencies in this regime may exist in the infrared sector as well. As the qualitative behavior of the spectra critically depends on the sign of the correction to $\eta / s = \hbar/4\pi k_B$, we note that the relation between $\eta/s$ and the relaxation time $\tau_{\scriptscriptstyle R}$ raises the possibility that the bound on $\eta/s$ speculated upon\footnote{A number of strongly interacting many-body systems - from quark-gluon plasma and cold atoms \cite{Luzum:2008cw,Adams:2012th,Cremonini:2011iq} to dusty plasmas \cite{fortov-1,fortov-2} and rare gases and molecules in the vicinity of the critical point \cite{hohm} have $\eta/s \gtrsim \hbar/4\pi k_B$.} in Ref.~\cite{Kovtun:2003wp} is related to a bound on relaxation time. In the holographic models considered in this paper, both $\eta/s$ and $\tau_{\scriptscriptstyle R} T$ are monotonic functions of the coupling. For $\eta/s$ decreasing below $\hbar/4\pi k_B$, the relaxation time also decreases below its value at infinite coupling. Is there a minimal relaxation time possibly correlated with the viscosity bound? Are there any universal constraints on the constant ${\cal C}$ in Eq.~(\ref{eq:sachdev-const})? Curiously, in 2006 Hod \cite{Hod:2006jw} suggested a universal bound on relaxation time in any system: \begin{equation} \tau_{\scriptscriptstyle R} \geq \tau_{min} = \frac{\hbar}{\pi k_B T}\,. \label{eq:Hod-bound} \end{equation} For the black hole quasinormal spectrum, the inequality (\ref{eq:Hod-bound}) means that there exists at least one quasinormal frequency whose imaginary part lies in the strip $0 > \mbox{Im}\, \omega \geq - \pi k_B T/\hbar$ in the complex frequency plane or, in terms of $\mathfrak{w} = \omega/2\pi k_B T$, in the strip % \begin{equation} 0 > \mbox{Im}\, \mathfrak{w} \geq - \frac{1}{2}\,. \label{eq:Hod-bound-qnm} \end{equation} In the language of the kinetic theory linear collision operator spectrum, the bound implies \begin{equation} 0 \leq \nu_{\min} \leq \nu_c = \pi k_B T/\hbar\,, \label{eq:Hod-bound-nu} \end{equation} see Fig.~\ref{fig:spectrum_kinetic}d. Apparently, the inequality (\ref{eq:Hod-bound-qnm}) holds for black holes (for black holes, the bound suggests that a black hole has (at least) one channel of slowly decaying perturbation modes which respect (\ref{eq:Hod-bound-qnm})). At first glance, however, the relaxation time bound is void of meaning since one expects the hydrodynamic modes to be always present in any system in the thermodynamic limit and they may relax arbitrarily slowly for sufficiently long wavelengths (in other words, gapless quasinormal frequencies corresponding to hydrodynamic modes are always present in the strip (\ref{eq:Hod-bound-qnm}) for sufficiently small spatial momentum $q$). Moreover, even if we regard the bound (\ref{eq:Hod-bound}) as the bound obeyed by the (non-hydrodynamic) relaxation time $\tau_{\scriptscriptstyle R}$ (and correspondingly, the inequality (\ref{eq:Hod-bound-qnm}) as the one for the fundamental non-hydrodynamic quasinormal frequency), it appears to be violated in all black brane channels (see e.g. tables of quasinormal frequencies in Refs.~\cite{Starinets:2002br,Nunez:2003eq,Kovtun:2005ev}). Nevertheless, we believe the question of whether holography or black hole physics implies an inequality of the type (\ref{eq:Hod-bound-nu}) or (\ref{eq:Hod-bound}) is an interesting one in view of its apparent validity for black holes and its possible connection to viscosity bound. In this paper, we considered coupling constant corrections to equilibrium correlators in $4d$ CFTs. It would be interesting to consider non-conformal and time-dependent backgrounds. We also hope questions raised in this paper may stimulate additional work on weakly coupled thermal QFTs, via perturbation theory or kinetic theory, regarding the analytic structure of correlation functions and bounds of applicability of hydrodynamic description, with the goal to form a consistent qualitative picture interpolating between weak and strong coupling. \acknowledgments We would like to thank J.~Casalderrey Solana, F.~Essler, V.~Ker{\"a}nen, P.~Kleinert, R.~Konoplya, D.~Kovrizhin, H.~Reall, A.~Schekochihin, L.G.~Yaffe, J.~Zaanen and A.~Zhiboedov for illuminating discussions and S.~Hod for correspondence. A.O.S. is grateful to the Institute for Nuclear Theory at the University of Washington, Seattle, for its warm hospitality and to participants of the program INT-15-2c "Equilibration Mechanisms in Weakly and Strongly Coupled Quantum Field Theory" for useful discussions. S.G. is supported in part by a VICI grant of the Netherlands Organization for Scientific Research (NWO) and by the Netherlands Organization for Scientific Research/Ministry of Science and Education (NWO/OCW). N.K. is supported by a grant from the John Templeton foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton foundation. This work was carried out on the Dutch national e-infrastructure with the support of SURF Foundation. The work of A.O.S. was supported by the European Research Council under the European Union's Seventh Framework Programme (ERC Grant agreement 307955).
\section{Introduction} \label{sec1} The Naturalistic Teenage Driving Study (NTDS), sponsored by the National Institute of Child Health and Human Development (NICHD), was conducted to evaluate the effects of experience on teen driving performance under various driving conditions. It is the first naturalistic study of teenage driving and has given numerous insights to the risky driving behavior of newly licensed teens that includes evidence that risky driving does not decline with experience as discussed by \citet{Bruce}. During the study 42 newly licensed drivers were followed over the first 18 months after obtaining a license. The participants were paid for their participation, and there were no dropouts. Driving took place primarily in southern Virginia among small cities and rural areas. For each trip, various kinematic measures were captured. A lateral accelerometer recorded driver steering control by measuring g-forces the automobile experiences. These recordings provide two kinematic measures: lateral acceleration and lateral deceleration. A longitudinal accelerometer captured driving behavior along a straight path and records accelerations or decelerations. Another measure for steering control is the vehicle's yaw rate, which is the angular deviation of the vehicle's longitudinal axis from the direction of the automobile's path. Each of these kinematic measures was recorded as count data as they crossed specified thresholds that represent normal driving behavior. Crash and near crash outcomes were recorded in two ways. First, the driver of each vehicle had the ability to self report these events. Second, video cameras provided front and rear views from the car during each trip. Trained technicians analyzed each trip the driver took using the video and determination of crash/near crash events made. Table~\ref{summary} shows the aggregate data for the driving study. More information on the study can be found at \url{http://www.vtti.vt.edu}. Our interests are the prediction of crash and near crash events from longitudinal risky driving behavior. Crash or near crash outcomes are our binary outcome of interest, while excessive g-force events are our proxy for risky driving. It is likely that crash/near crash outcomes are best described by some unobserved or latent quality like inherent driving ability. Previously, \citet{Jackson} analyzed the driving data using a latent construct where the previously observed kinematic measures describe the hidden state and the hidden state describes the CNC outcome. Our approach here characterizes the joint distribution of crash/near crash and kinematic outcomes using a mixed hidden Markov model where both outcomes contribute to the calculation of the hidden state probabilities. \begin{table}[b] \tabcolsep=0pt \caption{Kinematic measures and correlation with CNCs, naturalistic teenage driving study, $^\ast$correlation computed between the CNC and elevated g-force event rates}\label{summary} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcd{5.0}d{3.1}c@{}} \hline \multicolumn{1}{@{}l}{\textbf{Category}} & \multicolumn{1}{c}{\textbf{Gravitation force}} & \multicolumn{1}{c}{\textbf{Frequency}}& \multicolumn{1}{c}{$\bolds{\%}$ \textbf{Total events}} & \multicolumn{1}{c@{}}{\textbf{Correlation with CNCs}$\bolds{^\ast}$}\\ \hline Rapid starts & $> 0.35$ & 8747 & 39.6 &0.28\\ Hard stops & $\leqslant-0.45$ & 4228 & 19.1 &0.76\\ Hard left turns & $\leqslant-0.05$ & 4563 & 20.6 &0.53\\ Hard right turns& $\geqslant0.05$ & 3185 & 14.4 &0.62\\ Yaw & $6^\circ$ in 3 seconds & 1367 & 6.2 &0.46 \\[3pt] Total & &\multicolumn{1}{c}{22,090} &\multicolumn{1}{c}{100} &\multicolumn{1}{c}{0.60}\\ \hline \end{tabular*} \end{table} There is a previous literature on mixed hidden Markov models. Discrete-time mixed Markov latent class models are introduced by \citet {Landv}. A general framework for implementing random effects in the hidden Markov model is discussed by \citet{Altman}. In this work, the author presented a general framework for a mixed hidden Markov model with a single outcome. The mixed hidden Markov model presented by Altman unifies existing hidden Markov models for multiple processes, which provides several advantages. The modeling of multiple processes simultaneously permits the estimation of population-level effects as well as allowing great flexibility in modeling the correlation structure because they relax the assumption that observations are independent given the hidden states. There are a variety of methods available for estimation of parameters in mixed hidden Markov models. \citet{Altman} performed estimation by evaluating the likelihood as a product of matrices and performing numerical integration via Gaussian quadrature. A quasi-Newton method is used for maximum likelihood estimation. \citet{Bart2} extend the latent class model for the analysis of capture-recapture data, which takes into account the effect of past capture outcomes on future capture events. Their model allows for heterogeneity among subjects using multiple classes in the latent state. \citet{Scott} introduces Bayesian methods for hidden Markov models which was used as a framework to analyze alcoholism treatment trial data [\citet{Shirley}]. \citet{Bart1} use a fixed effects model to evaluate the performance of nursing homes using a hidden Markov model with time-varying covariates in the hidden process. \citet{Maruotti} discusses mixed hidden Markov models and their estimation using the expectation--maximization algorithm and leveraging the \textit{forward} and \textit{backward} probabilities given by the \textit{forward--backward} algorithm and partitioning the complete-data-log-likelihood into sub-problems. \citet{MaruottiRocci} proposed a mixed non-homogeneous hidden {M}arkov model for a categorical data. We present a model that extends the work of \citet{Altman} in several ways. First, we allow the hidden state to jointly model longitudinal binary and count data, which in the context of the driving study represent crash/near crash events and kinematic events, respectively. We also introduce an alternative method to evaluate the likelihood by first using the \textit{forward--backward} algorithm [\citet{Baum}] followed by performing integration using adaptive Gaussian quadrature. Implementation of the \textit{forward--backward} algorithm allows for easy recovery of the posterior hidden state probabilities, while the use of adaptive Gaussian quadrature alleviates bias of parameter estimates in the hidden process [\citet{Altman}]. Understanding the nature of the hidden state at different time points was of particular interest to us in our application to teenage driving, and our estimation procedure yields an efficient way to evaluate the posterior probability of state occupancy. In this paper we first introduce a joint model for crash/near crash outcomes and kinematic events which allows the mean of each these to change according to a two-state Markov chain. We introduce heterogeneity in the hidden process as well as the conditional model for the kinematic events via a shared random effect. We then discuss an estimation procedure whereby the likelihood is maximized directly and estimation of the hidden states is readily available by incorporating the \textit{forward--backward} algorithm [\citet{Baum}] in evaluating individual likelihoods. We apply our model to the NTDS data and show that these driving kinematic data and CNC events are closely tied via the latent state. We use our model results to form a predictor for future CNC events based on previously observed kinematic data. \section{Methods} \label{sec2} Here we present the joint model for longitudinal binary and count outcomes using two hidden states as well as the estimation procedure for model parameters. \subsection{The model} Let $\mathbf{b}_i=(b_{i1},b_{i2},\ldots,b_{in_i})$ be an unobserved binary random vector whose elements follow a two-state Markov chain (state ``0'' represents a \textit{good} driving state and state ``1'' represents a \textit{poor} driving state) with unknown transition probabilities $\mathrm{p}_{01}, \mathrm{p}_{10}$ and initial probability distribution $r_0$. We model the crash/near crash outcome $Y_{ij}$, where \textit{i} represents an individual and \textit{j} the month since licensure, using the logistic regression model shown in (\ref{Ymodel}): \begin{eqnarray} \label{Ymodel} Y_{ij} & \sim&\operatorname{Bernoulli}(\pi_{ij}), \nonumber \\[-8pt] \\[-8pt] \nonumber \operatorname{logit}(\pi_{ij}) &=& \log(\mathrm{m}_{ij})+ \alpha_{0} +\alpha _{1}b_{ij}+ \alpha_2 u_i, \end{eqnarray} where the $\log(\mathrm{m}_{ij})$ is an offset to account for the miles driven during a particular month. Treatment of the CNC outcome as binary is not problematic since more than 98\% of the monthly observations had one or fewer CNCs observed. Although the $\log$ link is ideal for count data, it is a reasonable correction for the miles driven for the $\operatorname{logit}$ link when the risk of a crash is low. An alternative parameterization would be to include the miles driven as a covariate in the model. The parameter $\alpha_1$ assesses the odds-ratio of a crash or near crash event when in the \textit{poor} versus \textit{good} driving state; this odds ratio is simply $e^{\alpha_1}$, while $\alpha_2$ reflects unaccounted for covariates beyond the hidden state. The $X_{ij}$ are the sum of the observed elevated g-force count data and this model incorporates heterogeneity among subjects introducing the random effect in the mean structure shown in (\ref {Xmodelhidden}): \begin{eqnarray} \label{Xmodelhidden} X_{ij} & \sim& \operatorname{Poisson}(\mu_{ij}), \nonumber \\[-8pt] \\[-8pt] \nonumber \log(\mu_{ij})&=& \log(\mathrm{m}_{ij}) + \beta_{0} + \beta _{1}b_{ij} + \beta_{2}t_{j} + \beta_3 u_{i}, \end{eqnarray} where $u_{i}$ is the random effect with Gaussian distribution and $t_j$ reflects the month of observation since licensure for a particular individual (note that these observations are equally spaced and $t_j$ was not statistically significant when included in the CNC process, hence its omission in this part of the model). Here the parameter $\beta _3$ in (\ref{Xmodelhidden}), along with the variance of the random effect distribution, accounts for any variation not explained by the other terms included in the model and induces a correlation between outcomes. Next, we assume that $\{b_{ij}|u_i\}_{j=1}^{n_i}$ is a Markov chain and that $b_{ij}|u_i$ is independent of $b_{it}|u_i$ for $j \neq t$. The transition probabilities for the Markov chain must lie between 0 and 1, and the sum of transitioning from one state to either state must be 1. Hence, the transition probabilities are modeled as \begin{eqnarray} \label{p10r} &&\mathrm{p}_{01}(u_{i})\dvtx\quad \operatorname{logit} \bigl( \operatorname {Pr}(b_{ij}=1|b_{i,j-1}=0|u_i) \bigr)= \gamma_{01}+\delta _1y_{i,j-1}+u_{i}, \nonumber \\[-8pt] \\[-8pt] \nonumber &&\mathrm{p}_{10}(u_{i})\dvtx\quad \operatorname{logit} \bigl( \operatorname {Pr}(b_{ij}=0|b_{i,j-1}=1|u_i) \bigr)= \gamma_{10} + \delta_2 y_{i,j-1}+ \delta^{\ast} u_{i}, \end{eqnarray} where the parameter $\delta^{\ast}$ in (\ref{p10r}) characterizes the degree of correlation between transition probabilities among individuals. Two different types of correlations are described by $\delta$. If $\delta^{\ast} < 0$, then this implies individuals have a tendency to remain in either state. If $\delta^{\ast} > 0$, then this implies some individuals exhibit a tendency to transition more between states than others. This approach to describe the transitions in similar to that presented by \citet{AlbertFollman}. Introducing random effects in this manner is of great benefit computationally, however, there is a downside risk in that this implementation assumes the processes are highly correlated. This is especially important for the hidden process where if the correlation between transition probabilities is not very strong, biased estimates will result. We present these findings in our simulation section. \subsection{Estimation} Letting ${\Psi}$ represent all parameters included in the model discussed above, the likelihood for the joint model is \begin{eqnarray} \label{mhmmlike} &&L(\Psi; \mathbf{y},\mathbf{x}) \nonumber\\ &&\qquad=\int_{\mathbf{u}}\sum_{\mathbf{b}}f \bigl(\mathbf{y}|(\mathbf{b},\mathbf{u}),\Psi \bigr)g\bigl(\mathbf{x}|(\mathbf{b}| \mathbf{u}),\Psi\bigr)h(\mathbf{u};\Psi )\,d\mathbf {u} \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad=\int_{\mathbf{u}}\sum_{\mathbf{b}} \Biggl\{ \prod_{i=1}^{N}\prod _{j=2}^{n_{i}}f\bigl(y_{ij}|(b_{ij}| \mathbf{u}),\Psi \bigr)g\bigl(x_{ij}|(b_{ij}|\mathbf {u}),\Psi \bigr) \Biggr\}\\ &&\hspace*{59pt}{}\times \Biggl\{ \prod_{i=1}^{N}r_{i0} \prod_{j=3}^{n_{i}}\mathrm {p}_{b_{i,j-1},b_{ij}|\mathbf{u}} \Biggr\}h(\mathbf{u};\Psi)\,d \mathbf{u},\nonumber \end{eqnarray} where the summation associated with $\mathbf{b}$ represents all possible state sequences for an individual and the initial state probabilities are given by $\{r_{i0}\}$ and may include a subject-specific random effect. In (\ref{mhmmlike}) we assume the crash/near crash and kinematic event data are conditionally independent. We also assume the $\{u_{i}\}$ are independent and identically distributed and the observations for any driver are independent given the random effect $u_{i}$ and the sequence of hidden states. Given these assumptions, the likelihood given in (\ref {mhmmlike}) simplifies to a product of one-dimensional integrals shown in (\ref{finlike}): \begin{eqnarray} \label{finlike} &&L(\Psi; \mathbf{y},\mathbf{x})\nonumber \\ &&\qquad=\prod_{i=1}^{N}\int _{u_i} \Biggl\{\sum_{\mathbf {b}_{i}}r_{i0}f \bigl(y_{i2}|(b_{i2}|u_i),\Psi\bigr)g \bigl(x_{i2}|(b_{i2}|u_i),\Psi\bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\hspace*{33pt}\qquad\quad{}\times\prod_{j=3}^{n_{i}}\mathrm {p}_{b_{i,j-1},b_{ij}|u_i}f\bigl(y_{ij}|(b_{ij}|u_i), \Psi \bigr)\\ &&\hspace*{163pt}{}\times g\bigl(x_{ij}|(b_{ij}|u_i),\Psi\bigr) \Biggr\}h(u_i;\Psi) \,du_{i}.\nonumber \end{eqnarray} The different roles the random effects perform in this joint model are of particular interest. The purpose of the inclusion of the random effect in the conditional model for the kinematic data $\{ ({x}_i|(b_i|u_i))\}$ provides a relaxation of the assumption that the observations for an individual are conditionally independent given the hidden states $\{\mathbf{b}_i\}$ as well as accounting for overdispersion for the kinematic event data. The inclusion of the random effect in the transition probabilities allows the transition probabilities to vary across individuals, inducing a correlation between transition probabilities that induces a relationship between the kinematic events and CNC processes. Further, the random effect provides a departure from the assumption that the transition process follows a Markov chain. One could formulate a reduced model that includes a random effect only in the hidden process, in the hidden process and one or both observed processes, or only in observed processes. Several possibilities exist for maximizing the likelihood given by (\ref {finlike}). Two common approaches are the Monte Carlo expectation maximization algorithm introduced by \citet{Wei} and the simulated maximum likelihood methods discussed by \citet{Mccul}. In Section~\ref{sec2.3}, we propose a different method for parameter estimation that does not rely on Monte Carlo methods which are difficult to implement and monitor for convergence. Our method utilizes the \textit{forward--backward} algorithm to evaluate the individual likelihoods conditional on the random effect. As we will show, incorporation of this algorithm provides a simpler means of computing the posterior probability of the hidden state at any time point than MCEM. Further, this method provides a straightforward approach for likelihood and variance evaluation. This approach has the added benefit of producing the estimated variance--covariance matrix for parameter estimates as determined by the inverse of the observed information matrix. \subsection{Maximizing the likelihood using an implementation of the \textit{forward--backward} algorithm}\label{sec2.3} In maximizing the likelihood given by (\ref{finlike}), our approach first evaluates the portion of the likelihood described by the observed data given the random effect and hidden states shown here: \begin{eqnarray} &&\Biggl\{\sum_{\mathbf{b}_{i}}r_{i0}f \bigl(y_{i2}|(b_{i2}|u_i),\Psi \bigr)g \bigl(x_{i2}|(b_{i2}|u_i),\Psi\bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad{}\times\prod _{j=3}^{n_{i}}\mathrm {p}_{b_{i,j-1},b_{ij}|u_i}f \bigl(y_{ij}|(b_{ij}|u_i),\Psi \bigr)g \bigl(x_{ij}|(b_{ij}|u_i),\Psi\bigr) \Biggr\}, \end{eqnarray} using the \textit{forward--backward} algorithm and subsequent numerical integration over the random effect via adaptive Gaussian quadrature. We then use a quasi-Newton method to maximize this result. The alteration of the \textit{forward--backward} algorithm to accommodate joint outcomes is described as follows. Here, let the vectors $\mathbf {Y}_{ik}^{j}=(Y_{ik},\ldots,Y_{ij})'$ with realized values $(y_{ik},\ldots ,y_{ij})'$ and $\mathbf{X}_{ik}^{j}=(X_{ik},\ldots,X_{ij})'$ with realized values $(x_{ik},\ldots,x_{ij})'$. Decompose the joint probability for an individual as follows: \begin{eqnarray} \label{third} &&\operatorname{Pr}(b_{ij}=m,\mathbf{Y}_{i}= \mathbf{y}_{i},\mathbf {X}_{i}=\mathbf{x}_{i}|u_{i})\nonumber\\ &&\qquad=\operatorname{Pr} \bigl(Y_{i2}^{j}=y_{i2}^{j},X_{i2}^{j}=x_{i2}^{j},b_{ij}=m|u_{i} \bigr)\\ &&\qquad\quad{}\times\operatorname {Pr}\bigl( Y_{i,j+1}^{n}=y_{i,j+1}^{n}|Y_{i2}^{j}=y_{i2}^{j},(b_{ij}=m| u_{i})\bigr) \nonumber \\ &&\qquad\quad{} \times\operatorname{Pr}\bigl( X_{i,j+1}^{n}=x_{i,j+1}^{n}|X_{i2}^{j}=x_{i2}^{j},(b_{ij}=m|u_{i}) \bigr)\nonumber \\ &&\qquad= \operatorname{Pr} \bigl(Y_{i2}^{j}=y_{i2}^{j},X_{i2}^{j}=x_{i2}^{j},b_{ij}=m|u_{i} \bigr) \operatorname{Pr} \bigl(Y_{i,j+1}^{n}=y_{i,j+1}^{n}|(b_{ij}=m| u_{i})\bigr) \nonumber \\ &&\qquad\quad{}\times\operatorname {Pr}\bigl(X_{i,j+1}^{n}=x_{i,j+1}^{n}|(b_{ij}=m|u_{i}) \bigr) \nonumber \\ &&\qquad= a_{im}(j)z_{im}(j)\qquad \mbox{for }m=0,1,\nonumber \end{eqnarray} where $a_{im}(j)$ and $z_{im}(j)$ are referred to as the \textit{forward} and \textit{backward} quantities, respectively, and are \begin{eqnarray} a_{im}(j) &=&\operatorname {Pr}\bigl(Y_{i2}^{j}=y_{i2}^{j},X_{i2}^{j}=x_{i2}^{j},b_{ij}=m|u_{i} \bigr)\qquad\mbox{for } j=2, \ldots, n_{i},\nonumber \\ a_{im}(1) &=&\operatorname{Pr}\bigl((b_{i1}=m|u_i) \bigr)\operatorname {Pr}\bigl(Y_{i2}=y_{i2}|(b_{i2}=m|u_{i}) \bigr)\nonumber\\ &&{}\times\operatorname{Pr}\bigl(X_{i2}=x_{i2}| (b_{i2}=m|u_{i}) \bigr),\nonumber \\ z_{im}(j) &=&\operatorname {Pr}\bigl(Y_{i,j+1}^{n_{i}}=y_{i,j+1}^{n_{i}},X_{i,j+1}^{n_{i}}=x_{i,j+1}^{n_{i}},b_{ij}=m|u_{i} \bigr)\nonumber \\ \eqntext{\mbox{for } j=1,\ldots,(n_{i}-1),} \\ z_{im}(n_{i})&=& 1\qquad\mbox{for all } i.\nonumber \end{eqnarray} The $a_{im}(j)$ and $z_{im}(j)$ are computed recursively in $j$ by using the following: \begin{eqnarray*} a_{im}(j) &=&\sum_{l=0}^{1} \operatorname {Pr}\bigl(Y_{i2}^{j}=y_{i2}^{j},X_{i2}^{j}=x_{i2}^{j},b_{i,j-1}=l,b_{ij}=m,u_{i} \bigr) \\[-2pt] \nonumber &=&\sum_{l=0}^{1}\operatorname {Pr} \bigl(Y_{i2}^{j-1}=y_{i2}^{j-1},X_{i2}^{j-1}=x_{i2}^{j-1},b_{i,j-1}=l|u_{i} \bigr) \mathrm{p}_{lm|u_{i}} \\[-2pt] \nonumber &&\hspace*{14pt}{}\times\operatorname {Pr}\bigl(Y_{ij}=y_{ij},X_{ij}=x_{ij}|(b_{ij}=m|u_{i}) \bigr) \\[-2pt] \nonumber &=&\sum_{l=0}^{1}a_{il}(j-1) \mathrm{p}_{lm|u_{i}}\operatorname{Pr}\bigl(Y_{ij}=y_{ij},X_{ij}=x_{ij}|(b_{ij}=m|u_{i}) \bigr) \\[-2pt] \nonumber &=&\sum_{l=0}^{1}a_{il}(j-1) \mathrm{p}_{lm|u_{i}}\operatorname{Pr}\bigl(Y_{ij}=y_{ij}|(b_{ij}=m|u_{i}) \bigr) \\[-2pt] &&\hspace*{14pt}{}\times\operatorname{Pr}\bigl(X_{ij}=x_{ij}|(b_{ij}=m|u_{i}) \bigr) \nonumber \end{eqnarray*} and \begin{eqnarray*} z_{im}(j) &=&\sum_{l=0}^{1} \operatorname {Pr}\bigl(Y_{i,j+1}^{n_{i}}=y_{i,j+1}^{n_{i}},X_{i,j+1}^{n_{i}}=x_{i,j+1}^{n_{i}},b_{ij +1}=l,(b_{ij}=m|u_{i}) \bigr) \\[-2pt] \nonumber &=&\sum_{l=0}^{1}\operatorname {Pr} \bigl(Y_{i,j+1}^{n_{i}}=y_{i1}^{n_{i}},X_{i,j+1}^{n_{i}}=x_{i,j+1}^{n_{i}}|(b_{ij +1}=l|u_{i}) \bigr)\mathrm{p}_{ml|u_{i}} \\[-2pt] \nonumber &&\hspace*{14pt}{}\times \operatorname{Pr} (Y_{i,j+1}=y_{i,j+1},X_{i,j+1}=x_{i,j+1}|b_{i,j+1}=l|u_{i}) \\[-2pt] \nonumber &=&\sum_{l=0}^{1}z_{il}(j+1) \mathrm{p}_{ml|u_{i}}\operatorname {Pr}\bigl(Y_{i,j+1}=y_{i,j+1},X_{i,j+1}=x_{i,j+1}|(b_{i,j+1}=l|u_{i}) \bigr) \\[-2pt] \nonumber &=&\sum_{l=0}^{1}z_{il}(j+1) \mathrm{p}_{ml|u_{i}}\operatorname {Pr}\bigl(Y_{i,j+1}=y_{i,j+1}|(b_{i,j+1}=l|u_i) \bigr)\\[-2pt] &&\hspace*{14pt}{}\times\operatorname{Pr}\bigl(X_{i,j +1}=x_{i,j+1}|(b_{i,j+1}=l|u_i) \bigr). \nonumber \end{eqnarray*} For any individual, the likelihood conditional on the random effect may be expressed as a function of the forward probabilities, so for a two-state Markov chain the conditional likelihood for an individual is \begin{eqnarray} L_{i|u_i} &=& \operatorname{Pr}(\mathbf{Y}_i=\mathbf{y}_i, \mathbf {X}_i=\mathbf {x}_i|u_i)\nonumber \\[-2pt] &=& \sum_{l=0}^{1} \operatorname{Pr}( \mathbf{Y}_i=\mathbf{y}_i,\mathbf{X}_i= \mathbf {x}_i,b_{n_i}=l|u_i) \\[-2pt] \nonumber &=&\sum_{l=0}^1a_{il}(n_i| \Psi,u_i), \end{eqnarray} where $a_{i0}(n_i|\Psi,u_i) \mbox{ and } a_{i1}(n_i|\Psi,u_i)$ are the \textit{forward} probabilities for subject $i$ associated with states 0 and 1, respectively, evaluated at the last observation of the subject's observation sequence $n_i$. The marginal likelihood for an individual can now be found by integrating with respect to the random effect\looseness=-1 \begin{equation} \label{iLikemhmm} L_i = \int_{u_i} \bigl \{a_{i0}(n_i|\Psi,u_i) + a_{i1}(n_i| \Psi ,u_i) \bigr\}h(u_i)\,du_i \end{equation} and the complete likelihood can be expressed as a product of individual likelihoods. \subsection{Numerical integration} Adaptive Gaussian quadrature can be used to integrate (\ref {iLikemhmm}). This technique is essential to obtaining accurate parameter estimates, as the integrand is sharply ``peaked'' at different values depending on the observed measurements of the individual. Applying the results described in \citet{Gaussquad} to the joint hidden Markov model, numerical integration of (\ref{iLikemhmm}) is achieved by considering the distribution of the random effects to be $N(0,\theta ^2)$. The procedure for obtaining maximum likelihood estimates for model parameters are shown in Table~\ref{MaxLikeProc}. \begin{table}[b] \caption{Procedure for obtaining maximum likelihood estimates for the joint mixed hidden Markov model} \label{MaxLikeProc} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lp{330pt}@{}} \hline (1)& Select initial parameter estimates $p^0$. \\ (2)& Compute the set of adaptive quadrature points for each individual $q_i$ given the current parameter estimates $p^{m}$.\\ (3)& Maximize the likelihood obtained using the \textit{forward--backward} algorithm and adaptive quadrature via $q_i \in Q$ using the quasi-Newton method.\\ (4)& Update parameter estimates $p^{(m+1)}$.\\ (5)& Repeats steps (2)--(4) until parameters converge.\\ \hline \end{tabular*} \end{table} \subsection{Estimation of posterior hidden state probabilities} The \textit{forward--\break backward} implementation in evaluating the likelihood is not only efficient (the number of operations to compute the likelihood conditional on the random effect is of linear order as the observation sequence increases), but it also provides a mechanism for recovering information about the hidden states. By leveraging the \textit{forward} and \textit{backward} probabilities, we can compute the hidden posterior state probabilities $\{\hat{b}_{ij}\}$: \begin{eqnarray} \label{bgivY} \mathrm{E}(b_{ij}|{y_i,x_i})&=& \mathrm{E}_{u_i|{y}_i,x_i}\bigl\{\mathrm {E}(b_{ij}|{y_i,x_i},u_i) \bigr\}\nonumber \\ &=&\int_{u_i} \bigl\{\operatorname{Pr} \bigl(b_{ij}=1|({y_i,x_i},u_i) \bigr) \bigr\} h_{u_i|({y_i,x_i})}\,du_i \\ \nonumber &=&\int_{u_i} \biggl\{\frac{\operatorname{Pr}(b_{ij}=1,(\mathbf {y}_i,\mathbf {x}_i),u_i)}{\operatorname{Pr}(\mathbf{y}_i,\mathbf{x}_i)} \biggr\}\,du_i, \end{eqnarray} since \begin{eqnarray*} \operatorname{Pr}\bigl(b_{ij}=1,(\mathbf{y}_i,\mathbf{x}_i),u_i \bigr)&=&\operatorname {Pr}\bigl(b_{ij}=1,(\mathbf{y}_i,\mathbf{x}_i)|u_i \bigr)h(u_i) \\ &=&a_{i1}(j)z_{i1}(j)h(u_i), \end{eqnarray*} equation (\ref{bgivY}) can be expressed as \begin{eqnarray} \label{bgivYfin}&& \int_{u_i} \biggl\{\frac{\operatorname{Pr} (b_{ij}=1,(\mathbf{y}_i,\mathbf {x}_i),u_i)}{\operatorname{Pr}(\mathbf{y}_i,\mathbf{x}_i)} \biggr \}\,du_i \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad=\int_{u_i} \biggl\{\frac{a_{i1}(j)z_{i1}(j)h(u_i)}{ \{\int_{u_i}(a_{i0}(j)+a_{i1}(j))h(u_i)\,du_i \}} \biggr\}\,du_i. \end{eqnarray} Evaluation of (\ref{bgivYfin}) is accomplished via adaptive Gaussian quadrature as outlined in the earlier section and the quantities of interest in (\ref{bgivYfin}) are readily available after running the \textit{forward--backward} algorithm. Using a shared random effect is also attractive in that it is computationally more efficient than incorporating separate random effects for the count outcome and transition probabilities. Generally speaking, once the number of random effects exceeds three or four (depending on the type of quadrature being used and the number of nodes included for each integration), direct maximization is no longer a computationally efficient method and Monte Carlo expectation maximization (MCEM) is an appealing alternative approach. In accounting for heterogeneity with a single random effect, we eliminate the need for MCEM. \section{Simulation of the mixed model} \label{sec3} We performed a simulation to investigate the performance of parameter estimation using the proposed approach. Under the shared random effect parameterization, we use the model in (\ref{simhmmre}) for the simulation \begin{eqnarray} \label{simhmmre} \operatorname{logit}(\pi_{ij}) &=& \alpha_{0} + \alpha_{1}b_{ij} +\alpha_{2} u_i, \nonumber \\ \log(\mu_{ij}) &=& \beta_{0}+\beta_{1}b_{ij} + \beta_2 u_i, \nonumber \\ \operatorname{logit}\bigl\{ \operatorname{Pr}(b_{ij}=1|b_{ij-1}=0) \bigr\} &=&\gamma_{01} + \delta_1 y_{i,j-1} + u_i, \\ \operatorname{logit}\bigl\{ \operatorname{Pr}(b_{ij}=1|b_{ij-1}=0) \bigr\} &=&\gamma_{10}+ \delta_2 y_{i,j-1} + \delta^{\ast} u_{i}, \nonumber \\ \operatorname{logit}(b_{i1}=1) &=& \pi_{1}, \nonumber \end{eqnarray} where $u_{i} \backsim N(0,e^{\lambda})$. \begin{table} \caption{Parameter estimates for mixed hidden Markov model 1000 simulations (60 individuals, 20 observations) using $Q=5$ and $Q=11$ quadrature points} \label{simretable} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ld{2.2}d{2.3}ccd{2.3}cc@{}} \hline & &\multicolumn{3}{c} {$\bolds{Q=5}$}& \multicolumn{3}{c@{}}{$\bolds{Q=11}$}\\[-4pt] & &\multicolumn{3}{c} {\hrulefill}& \multicolumn{3}{c@{}}{\hrulefill}\\ \textbf{Parameter} & \multicolumn{1}{c}{$\bolds{\theta}$} & \multicolumn{1}{c}{$\bolds{\overline{\hat{\theta}}}$}& \multicolumn{1}{c}{$\bolds{\hat{\theta }_{\mathrm{sd}}}$}&\multicolumn{1}{c}{$\bolds{\sigma_{\hat{\theta}}}$} & \multicolumn{1}{c}{$\bolds{\overline{\hat{\theta}}}$} & \multicolumn{1}{c}{$\bolds{\hat {\theta}_{\mathrm{sd}}}$} &\multicolumn{1}{c@{}}{$\bolds{\sigma_{\hat{\theta}}}$}\\ \hline $\alpha_{0}$ & -1.0 & -1.01 & 0.12& 0.12 & -1.01 & 0.11& 0.11 \\ $\alpha_{1}$ & 2.0 & 2.03 & 0.18& 0.19 & 2.03 & 0.18& 0.17 \\ $\alpha_{2}$ & 1.5 & 1.52 & 0.41& 0.42 & 1.51 & 0.42& 0.44 \\ $\beta_{0}$ & -1.0 &-1.01 & 0.10& 0.10 & -1.00 & 0.09& 0.06 \\ $\beta_{1}$ & 2.0 & 2.01 & 0.12& 0.09 & 2.02 & 0.11& 0.09 \\ $\beta_{2}$ & 0.25 & 0.255 & 0.06& 0.07 & 0.251 & 0.06& 0.04 \\ $\gamma_{01}$ & -0.62 & -0.61 & 0.19& 0.18 & -0.62 & 0.19& 0.17 \\ $\gamma_{10}$ & 0.4 & 0.42 & 0.34& 0.32 & 0.42 & 0.30& 0.30 \\ $ \lambda$ & 0.0 & -0.03 & 0.16& 0.15 & -0.03 & 0.16& 0.16 \\ $\delta^{\ast}$ & 2.00 & 2.15 & 0.45& 0.41 & 2.02 & 0.43 & 0.42 \\ $\delta_1$ & 1.00 & 1.08 & 0.22& 0.21 & 1.02 & 0.20 & 0.21 \\ $\delta_2$ & 3.00 & 2.97 & 0.41& 0.43 & 2.97 & 0.44 & 0.42 \\ $\pi_{i1}$ & -0.8 & -0.82 & 0.06& 0.05 & -0.81 & 0.05 & 0.05 \\ \hline \end{tabular*} \end{table} The simulations were conducted with 20 observations on 60 subjects. Using a 1.86-GHz Intel Core 2 Duo processor, the fitting of the shared model took less than 3 minutes on average. The simulation results (1000 simulations) are shown in Table~\ref{simretable}. Parameter estimates obtained using adaptive Gaussian quadrature with five and eleven points, respectively, are presented along with true parameter value $(\theta)$ and mean $(\overline{\hat{\theta}})$, the sample standard deviation for the parameter estimates $\hat{\theta}_{\mathrm{sd}}$, and the average asymptotic standard errors $\sigma_{\hat{\theta}}$. In performing the estimation using five quadrature points, parameter estimation was quite accurate with the exception of the coefficient of the random effect ${\delta^{\ast}}$ in the 0--1 transition with an average estimated value of 2.15 compared to the actual value of 2.0. Other parameters display very little bias. The bias for ${\delta^\ast}$ virtually disappears when performing the integration via adaptive Gaussian quadrature using ten points where the average estimated value was ${\delta^\ast}=2.02$. These results were unchanged when evaluating for possible effects due to the total number of subjects or observations (varying $n$ and~$I$). Additionally, the average asymptotic standard errors agree quite closely to the sample standard deviations for all model parameters. Similar results hold for the case where random effects are only included in the hidden process. We used different starting values to examine the sensitivity of estimation to initial values, and our proposed algorithm was insensitive to the selection of these values. Simulation results provide support that the complexity of the model does not inhibit parameter estimation, rendering discovery of heterogeneity in the observed and hidden processes as a nice byproduct for the model. \begin{table}[t] \caption{Parameter estimates for the hidden process when the true underlying random effects distribution is correlated. 1000 simulations (60 individuals, 20 observations)} \label{SIMcorr} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}ld{2.3}d{2.3}d{2.3}d{2.3}d{2.2}d{2.2}d{2.2}@{}} \hline \multicolumn{1}{@{}l}{\textbf{Parameter}} & \multicolumn{1}{c}{\textbf{True value}}& \multicolumn{1}{c}{$\bolds{\rho=1}$} & \multicolumn{1}{c}{$\bolds{\rho=0.95}$}& \multicolumn{1}{c}{$\bolds{\rho=0.9}$}& \multicolumn{1}{c}{$\bolds{\rho =0.8}$}& \multicolumn{1}{c}{$\bolds{\rho=0.7}$}& \multicolumn{1}{c@{}}{$\bolds{\rho=0.6}$}\\ \hline $\gamma_{10}$ & -0.62 & -0.620 & -0.628 & -0.640 & -0.67 & -0.69 & -0.72 \\ $\gamma_{01}$ & 0.4 & 0.401 & -0.399 & 0.398 & 0.32 & 0.28 & 0.26\\ $\delta$ & 2.0 & 2.00 & 2.11 & 2.28 & 2.44 & 2.61 & 2.68\\ \hline \end{tabular*} \end{table} While performance of the estimation procedure for the mixed hidden Markov model in (\ref{simhmmre}) was quite good, this model formulation assumes near perfect correlation between the random effects. To examine the robustness of the estimation procedure to this assumption, we evaluated model performance in the case where there are correlated random effects in the hidden process. For this case, data was simulated for the transition probabilities using (\ref{corrtrans}): \begin{eqnarray} \label{corrtrans} \operatorname{logit}\bigl\{ \operatorname{Pr}(b_{ij}=1|b_{ij-1}=0) \bigr\}&=&\gamma_{01} + u_1, \nonumber \\[-8pt] \\[-8pt] \nonumber \operatorname{logit}\bigl\{ \operatorname{Pr}(b_{ij}=1|b_{ij-1}=0) \bigr \}&=&\gamma_{10}+ u_{2}, \end{eqnarray} where $(u_1,u_2)$ follow a bivariate normal distribution $\mathit{BVN}(\mathbf {0},\bolds{\Sigma})$. Our model was then fit to the simulated data using the parameterization in (\ref{mfit}): \begin{eqnarray} \label{mfit} \operatorname{logit}\bigl\{ \operatorname{Pr}(b_{ij}=1|b_{ij-1}=0) \bigr\}&=&\gamma_{01} + u_i, \nonumber \\[-8pt] \\[-8pt] \nonumber \operatorname{logit}\bigl\{ \operatorname{Pr}(b_{ij}=1|b_{ij-1}=0) \bigr \}&=&\gamma_{10} + \delta u_{i}. \end{eqnarray} The results for this simulation are shown in Table~\ref{SIMcorr}. For moderate departures from perfect correlation, biased estimates result in the hidden process. Thus, we recommend first considering correlated random effects in the hidden process before use of the shared random effects model. In the case of our application, the transition process exhibited very high correlation, so we proceed in the analysis with our estimation procedure. Details for estimation using bivariate adaptive Gaussian quadrature are shown in the supplementary material [\citet{jaz}]. \section{Results} \label{sec4} The two-state mixed hidden Markov model presented in the previous sections was applied to the NTDS data. Table~\ref{mHmmresults} displays the parameter estimates and associated standard errors. The initial probability distribution for the hidden states was common to all individuals in the study and modeled using $\operatorname{logit}(r_1)=\pi_1$ and the random effects distribution was $\mathrm{N}(0,e^\lambda)$. Several models were compared based on the relative goodness-of-fit measure, AIC. Model excursions included evaluating the suitability of a three-state hidden Markov model without random effects $(\mathit{AIC}=3563.06)$ (model shown in the supplementary material [\citet{jaz}]), the two-state model without random effects $(\mathit{AIC}=3548.97)$, and the two-state model with random effects $(\mathit{AIC}=3492.73)$. \begin{table} \tablewidth=250pt \caption{Parameter estimates for the mixed hidden Markov model as applied to the NTDS data} \label{mHmmresults} \begin{tabular*}{250pt}{@{\extracolsep{\fill}}ld{2.3}d{1.3}@{}} \hline \multicolumn{1}{@{}l}{\textbf{Parameters}} & \multicolumn{1}{c}{\textbf{Estimate}} & \multicolumn{1}{c@{}}{\textbf{Std Err}} \\ \hline $\alpha_{0} $ & -7.48 & 0.14 \\ $\alpha_{1} $ & 1.49 & 0.25 \\ $\alpha_2 $ & 0.03 & 0.02 \\ $\beta_{0} $ & -5.97 & 0.13 \\ $\beta_{1} $ & 1.31 & 0.06 \\ $\beta_{2} $ &0.007 &0.004 \\ $\beta_{3}$ & 1.10 & 0.04 \\ $\lambda$ &-0.18 & 0.33 \\ $\delta^\ast$ & 1.25 & 0.32 \\ $\gamma_{10} $ & -2.13 & 0.35 \\ $\gamma_{01} $ & -3.47 & 0.28 \\ $\delta_1$ & 1.75 & 0.24 \\ $\delta_2$ & -2.17 & 0.53 \\ $\pi_1 $ &-0.83 &0.28 \\ \hline \end{tabular*} \end{table} The fixed effects model provided initial parameter estimates for most model parameters while multiple starting values for $\lambda$ were used in conjunction with a grid search over parameters $\beta_3$ and $\delta ^\ast$ to determine these initial values. The number of quadrature points implemented at each iteration was increased until the likelihood showed no substantial change. As illustrated in the simulation study, there were eleven points used in the adaptive quadrature routine. Standard error estimates were obtained using a numerical approximation to the Hessian using the \textsc{\{nlm\}} package in R. The coefficients of the hidden states $\alpha_1$ and $\beta_1$ are both significantly greater than zero, indicating that drivers operating in a \textit{poor} driving state $(b_{ij}=1)$ are more likely to have a crash/near crash event and, correspondingly, a highly number of kinematic events. While the variance component of the random effect is somewhat small, the dispersion parameter $\beta_3$ is highly significant, indicating the data are overdispersed. Interestingly, heterogeneity is not exhibited in the CNC outcome as the coefficient for the random effect $\alpha_2$ is insignificant, providing support to the notion that the hidden state is capturing unobserved quantities in a meaningful way. There is evidence of heterogeneity across individuals in their propensity to change between states as indicated by $\lambda$ and $\delta^\ast$. In the case of the NTDS data, $\delta^\ast> 0$ indicates a positive correlation between the transition probabilities, meaning that some individuals are prone to changing more often between states than others. Coefficients in the hidden process, $\delta_1$ and $\delta_2$, illustrate that transition between states depends on previous CNC outcomes. A prior crash was associated with an increased probability of transitioning from the \textit{good} driving state to the \textit{poor} one ($\delta_1=1.75$) and a decreased probability of transitioning from the \textit{poor} to the \textit{good} driving state ($\delta_2=2.17$). Since the shared random effect, which assumes a perfect correlation between the random components, may not be robust to a more flexible random effects structure, we also fit the model using correlated random effects in the hidden process (see simulation results). For a variety of starting values, the correlation coefficient estimates were near 1 (0.998 or greater), giving us confidence in using the shared random effect approach. \begin{figure}[b] \includegraphics{765f01.eps} \caption{ROC curve for the mixed hidden Markov model based one ``one-step ahead'' predictions (area under the curve ${}= 0.74$).} \label{ROConestepmhmm} \end{figure} An interpretation of parameter estimates given in Table~\ref{mHmmresults} is subject-specific and depends on a driver's exposure for a given month. If we consider a subject driving the average mileage for all subjects (358.1 miles), parameter estimates indicate that the risk of a crash/near crash outcome increases from 0.16 to 0.47 when in the \textit{poor} driving state, $b_{ij}=1$. Correspondingly, this ``average'' subject would also expect to experience 2.43 more kinematic events on average when in the \textit{poor} driving state. For the typical teenager, the likelihood of moving out of the \textit{poor} driving state decreases from $10.6\%$ to $1.3\%$ when experiencing a CNC event in the previous month. Similarly, the likelihood of moving out of the \textit{good} driving state increases from $3.01\%$ to $15.2\%$ when experiencing a CNC event in the previous month. \begin{figure}[t] \includegraphics{765f02.eps} \caption{Predicted value of the hidden state given the observed data for three drivers. The ($\circ$) indicates the probability of being in state 1 (poor driving), ($+$) indicates a crash/near crash event and the dotted line indicates the composite kinematic measure.}\label{predbij} \end{figure} \begin{figure}[b] \includegraphics{765f03.eps} \caption{Comparison of local and global decoding of the hidden states and CNC outcomes. The ($\circ$) indicates the probability of being in state 1 (poor driving), ($+$) indicates a CNC event and the ($\bigtriangleup$) indicates the hidden state occupation using the Viterbi algorithm.} \label{global} \end{figure} A receiver operating characteristic (ROC) curve was constructed to determine the predictive capability of our model by plotting the true positive rate versus the false positive rate for different cutoff values. The ROC curve based on the ``one-step ahead'' predictions [observed outcome given all previous kinematic observations $\operatorname {Pr}(Y_{ij}=1|y_{i1},\ldots,y_{i,j-1},x_{i1},\ldots,x_{i,j-1})$] is shown in Figure~\ref{ROConestepmhmm}. An attractive feature of our model is that it allows for the development of a predictor based on prior kinematic events. We constructed this ROC curve using a cross-validation approach whereby one driver was removed from the data set, model parameters were then determined using the remaining data and these results were then used to predict the removed driver's crash/near crash outcomes. The predictive accuracy of this model was moderately high with an area under the curve of 0.74. Although the goodness of fit was best for the two-state mixed hidden Markov model, area under the curve for the other models was nearly identical. A sample of three drivers and their corresponding hidden state probability $\mathrm{E}_{u_i|\mathbf{y}_i,\mathbf{x}_i}\{\mathrm {E}(b_{ij}|\mathbf{y}_i,\mathbf{x}_i,u_i)\}$ along with their crash/near crash outcomes and total number of kinematic events is shown in Figure~\ref{predbij}. It is evident how the total kinematic measures influence the predicted value of the hidden state and work particularly well for cases where driving is ``consistent'' over relatively short time periods (i.e., low variation in kinematic measures for a given time period). In cases where the driving kinematics exhibit a great deal of variability, the model does not perform as well in predicting crash/near crash outcomes as indicated by the rightmost panel in Figure~\ref{predbij}. As a comparison, we show the results of global decoding of the most likely hidden state sequence using the Viterbi algorithm in Figure~\ref{global} for the same three drivers. Hidden state classification is similar whether using global or local decoding for the left two panels in Figure~\ref{global}, which is indicative of most drivers in the study, while there are differences in the case of the rightmost panel likely due to the greater variability in these data. \section{Discussion} \label{sec5} In this paper we presented a mixed hidden Markov model for joint longitudinal binary and count outcomes introducing a shared random effect in the conditional model for the count outcomes and the model for the hidden process. An estimation procedure incorporating the \textit{forward--backward} algorithm with adaptive Gaussian quadrature for numerical integration is used for parameter estimation. A welcome by-product of the \textit{forward--backward} algorithm is the hidden state probabilities for an individual during any time period. The shared random effect eliminates the need for more costly numerical methods in approximating the likelihood, such as higher dimensional Gaussian quadrature or through Monte Carlo EM. The model was applied to the NTDS data and proved to be a good predictor of crash and near crash outcomes. Our estimation procedure also provides a means of quantifying teenage driving risk. Using the hidden state probabilities which represent the probability of being in a \textit{poor} driving state given the observed crash/near crash and kinematic outcomes, we can analyze the data in a richer way than standard summary statistics. Additionally, our approach allows for a broader class of predictors whereby the investigator may make predictions based on observations that go as far into the past as warranted. There are limitations to our approach. The shared random effect proposes a rather strong modeling assumption in order to take advantage of an appealing reduction in computational complexity. Using more general correlated random effects approaches is an alternative, but others have found that identification of the correlation parameter is difficult [\citet{SmithMoffatt} and \citet{Alfo}]. Formal testing for heterogeneity in these models is also a challenging problem [\citet{Altman2}]. There is also a potential issue of having treated the miles driven during a particular month ($\mathrm{m}_{ij}$) as exogenous. For some crashes, it is possible that previous CNC outcomes ($y_{i,j-1}$) may affect the miles driven in the following month and our model does not capture this dynamic. As with any study, greater clarity in the information obtained for each trip might yield more valuable insights. Metrics such as the type of road, road conditions and trip purpose, while potentially useful, were not available for this analysis. There are extensions to the model that may be useful. The model can address more than two outcomes. We summarize the kinematic events at a given time as the sum across multiple types. This approach could be extended to incorporate multiple correlated processes corresponding to each kinematic type. Depending on the situation, the additional flexibility and potential benefits of such an extension may be worth the increased computational cost. \section*{Acknowledgments} We thank the Center for Information Technology, National Institutes of Health, for providing access to the high-performance computational capabilities of the Biowulf cluster computing system. We also thank Bruce Simons-Morton for discussions related to this work. Inquiries about the study data may be sent to P.~S. Albert at \printead*{e2}. \begin{supplement}[id=suppA] \stitle{Adaptive quadrature for the three-state mixed hidden Markov model} \slink[doi]{10.1214/14-AOAS765SUPP} \sdatatype{.pdf} \sfilename{aoas765\_supp.pdf} \sdescription{We provide details on the adaptive quadrature routine for the MHMM with bivariate normal random effects in the hidden process, as well as expressions for the three-state hidden Markov model.} \end{supplement}
\section{Introduction} \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{main.png} \caption{Results and Intermediate Visualizations. (a) From left to right: scribble-annotation, ground-truth, and our prediction. (b) From left to right: Neural representation before and after the random walk. (c) Leading eigenvectors of the transition matrix.} \label{fig:main} \end{figure*} In recent years, the use of neural networks, especially convolutional neural networks, has dramatically improved semantic classification, detection, and segmentation~\cite{he2016deep,long2015fully,wang2018non}. As one of the most fine-grained ways to understand the scene, typically, semantic segmentation demands large-scale data with high-quality annotations to feed the network. However, the pixel-level annotating process for semantic segmentation is costly and tedious, limiting its flexibility and usability on some tasks that require rapid deployment~\cite{lin2016scribblesup}. As a consequence, the scribble annotations, which are more easily available, become popular. Correspondingly, some datasets and approaches are developed. \citealp{lin2016scribblesup} proposed a scribble-annotated dataset based on the pascal VOC~\cite{everingham2015pascal} and adopted the classic graphical model as post-processing to obtain the final dense predictions. \citealp{vernaza2017learning} and \citealp{wang2019boundary} optimize the performance of semantic segmentation by introducing an auxiliary task, edge detection. However, this edge detection task needs to be trained on another well-labeled dataset, so these methods have not relieved the heavy labor of annotation yet. To avoid the post-processing and dependence on another well-labeled dataset, \citealp{tang2018normalized} and \citealp{tang2018regularized} design graphical model based regularized loss to make the predictions between similar pixels consistent. But these two works only measured similarities in color and texture, did not consider semantic similarity for regularization. Moreover, most of these methods require every existing object in the image labeled, which is too strict for dataset preparation. In this work, we intend to propose a more flexible approach to get rid of the above issues holistically. The proposed approach tackles scribble-supervised semantic segmentation by leveraging the uniform and consistency of neural representation (features). A representative result is shown in Fig.~\ref{fig:main}(a). The assumption is that, given the sparse scribble labels, if we guide the neural representation to be uniform within each image's objects and consistent between related images, then the neural network's best solution to the cross-entropy loss is the dense semantic predictions. To realize this kind of learning trend, we impose the diffusion on neural representation by random walk and consistency on neural eigenspace by self-supervision during training. The random walk on neural representation has been studied in some works and could produce uniform and dense semantic predictions within each object~\cite{jiang2018difnet}. The key to this kind of random walk is forming a probabilistic transition matrix that measures the similarity between neural representations. Then, with the transition matrix, the neural representation is diffused to be uniform. Fig.~\ref{fig:main}(b) shows several neural representations before and after the random walk. In addition to the uniform neural representation within each object, it is also essential to have consistent neural representation over related images to produce dense and confident semantic predictions. Namely, when the image is transformed, the neural representation should be the same as the original one with the corresponding transform. This kind of consistency is usually referred to as self-supervision~\cite{laine2016temporal,tarvainen2017mean,mittal2019semi} and measured on neural representation. In this way, though with the various and sparse scribble labels, the network will still tend to generate consistent object perception. However, for semantic segmentation, the consistency over the whole image is not necessary. When some parts of the image are distorted and changed heavily after transform, it is hard for the network to generate consistent neural representation anymore and may confuse the network in some scenarios. In this work, we propose to set the self-supervision on the main parts of images by imposing the consistent loss on the eigenspace of the transition matrix. The idea is inspired by spectral methods~\cite{von2007tutorial}, which observed that the eigenvectors of the Laplacian matrix have the capability to distinguish the main parts in the images, and some methods use this property for clustering~\cite{ng2002spectral,nadler2007fundamental} and saliency detection~\cite{jiang2019super,yang2013saliency}. Since the eigenspace of the transition matrix has a close relation to the one of the Laplacian matrix, our self-supervision on the transition matrix's eigenspace will also focus on the main image parts. Several leading eigenvectors are presented in Fig.~\ref{fig:main}(c). The computation of eigenspace is time-consuming and unstable, especially during the dynamic optimizing of the neural network. Though some people have developed approximation methods~\cite{dang2018eigendecomposition,wang2019backpropagation,sun2019neural}, it is better to avoid the explicit eigenspace decomposition. Thus, in our implementation, we only apply a soft consistent loss on the eigenspace. For eigenvalue consistency, according to the fact that the matrix's trace is equal to the sum of its eigenvalues, we measure the matrix trace consistency instead. Given the consistency on eigenvalue, we compute the Kullback-Leibler Divergence between probabilistic transition matrices to further prompt eigenvectors' consistency. We also developed convenient ways to compute consistent loss regarding the complicated relationship between probabilistic transition matrices after image transform and modification. The proposed method demonstrates consistent superiority to others on the common scribble-annotated dataset and is even comparable to some fully supervised ones. Moreover, we further conducted experiments when the scribble-annotations gradually shrank and dropped. The proposed method could still work reasonably, even the scribble shrank to the point or dropped significantly. Besides, careful ablation study and mechanism study is made to verify the effectiveness of every module. Finally, the code and dataset are open-sourced. \begin{comment} \begin{itemize} \item Using the diffusion mechanism, the probability transfer module is proposed. Using this module, only part of the cross-entropy loss function can be used to complete the supervision of the full mask labeling after the scribble labeling is diffused. \item Self-supervised loss function of eigenvalues and eigenvectors on transition matrix. By rotating, intercepting and occluding three operations, the corresponding self-supervised loss function is obtained in transition matrix to standardize the learning process of semantic similarity. \item Extensive ablation experiments show that the network performance of the self-supervised loss function used on the transition matrix is better than that used on the feature map. And it is better than the current state-of-the-art work. \end{itemize} \end{comment} \section{Related Work} \subsection{Scribble-Supervised Semantic Segmentation} The scribble-supervised semantic segmentation aims to produce dense predictions given only sparse scribble-annotations. Existing deep learning based works usually could be divided into two groups: 1) Two-stage approaches~\cite{lin2016scribblesup,vernaza2017learning}, which firstly obtain the full mask pseudo-labels by manipulating the scribble annotations, then train the semantic segmentation network as usual with pseudo-labels. 2) Single-stage approaches~\cite{tang2018normalized,tang2018regularized}, which directly train the network using scribble-annotations by the specific design of loss function and network structure. While two-stage approaches can be formulated as regular semantic segmentation, single-stage approaches are usually defined to minimize $L$: \begin{equation} L=\sum_{p\in\Omega_{\mathcal{L}}}c(s_p,y_p)+\lambda\sum_{p,q\in\Omega}u(s_p,s_q), \label{eq:eq1} \end{equation} where $\Omega$ is the pixel set, $\Omega_{\mathcal{L}}$ is the pixel set with scribble-annotations, $s_i$ represents the prediction of pixel $i$, and $y_i$ is the corresponding ground truth. The first term measures the error with respect to the scribble annotations and usually is in the form of cross-entropy. The second term is a pair-wise regularisation to help generate uniform prediction. The two terms are harmonized by a weight parameter $\lambda$. For scribble-supervised semantic segmentation, the graphical model has been prevalently adopted in either two-stage approaches for generating pseudo-label or one-stage approaches for loss design. \citealt{lin2016scribblesup} iteratively conduct label refinement and network optimization through a graphical model. \citealt{vernaza2017learning} generate high-quality pseudo-labels for full-label supervised semantic segmentation by optimizing graphical model with edge detector learned from another well-labeled dataset. These two works require iterative optimization or auxiliary dataset. Instead, \citealt{tang2018normalized, tang2018regularized} add the soft graphical model regularization into loss function and avoid explicitly graphical model optimization. Besides, some works only well on the dataset with every existing object labeled by at least one scribble. In general, most methods have not provided a flexible and efficient solution to scribble supervised semantic segmentation yet. \subsection{Random Walk} Uniform neural representation is crucial for semantic segmentation to produce dense predictions no matter scribble supervised or full-label supervised. Typically, embedding a random walk operation in the network would provide help. In this way, \citeauthor{bertasius2017convolutional} use a random walk to addresses the issues of poor boundary localization and spatially fragmented predictions. Then \citeauthor{jiang2018difnet} further conduct a sequence of random walks to approximate a stationary of the diffusion process. A random walk operation in the network can be defined as: \begin{equation} f(x)^{L}=\alpha{Pf(x)^{L-1}}+f(x)^{L-1}, \label{eq:eq2} \end{equation} where $f(x)^{L-1}$ is the neural representation of image $x$ in layer $L$-$1$ and $f(x)^{L}$ is the neural representation after random walk in layer $L$. $\alpha$ is the weight parameter learned during training. The most key component of random walk is the probabilistic transition matrix $P$, whose unit $p_{ij}$ measures the similarity between $i$-th and $j$-th elements of neural representations. Besides, all the units are positive, with every row of the matrix is summed to $1$. Inner product, embedded gaussian~\cite{wang2018non}, and diffusion distance~\cite{sun2019neural} have been widely used to compute the similarity between neural representations. \subsection{Self-Supervision} Consistency on neural representation is the property that should be concerned in almost all the learning-based tasks. Consistency would be helpful for detection, tracking, and definitely for segmentation. For this property, the consistent loss has been widely adopted. Since no ground truth required, this loss is more popular for unsupervised and semi-supervised tasks, and usually defined as the difference between neural representations of image and its transform: \begin{equation} ss(x,\phi)=l(T_\phi(f(x)),f(t_\phi(x))), \label{eq:eq3} \end{equation} where $t_\phi$ denotes the transform operation on $x$ with $\phi$ parameter, while $T_\phi$ corresponds to the transform operation on $f(x)$ ($t_\phi$ and $T_\phi$ are pair of corresponding transforms for self-supervision). $l$ is the metric to define how the difference is measured. This kind of consistent loss is also referred to as self-supervision, which we denote as $ss(x,\phi)$. Self-supervision usually has to conduct two feed-forward processes, \citealp{laine2016temporal} propose temporal ensembling, which saves previous neural representations to avoid multiple feed-forward and facilitate the computation. Then, the mean teacher~\cite{tarvainen2017mean} proposed to prepare a teacher network rather than save the auxiliary information. The self-supervision has been used for semi-supervised semantic segmentation~\cite{mittal2019semi}. \section{Method} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{network.png} \caption{Network Architecture and Pipeline. We use blue flow to represent the scribble-supervised training and orange flow to represent self-supervised training. Given an image and its transform, we pass them to ResNet backbone to extract neural representations, from which the similarity measurement module (SMM) computes transition matrices. Then, a random walk is carried out on the neural representation of the original image. The results are used for classifying semantic. Simultaneously, the self-supervised loss is set between the transition matrices to realize the self-supervision on neural eigenspace. During inference, only blue flow is activated.} \label{fig:pipeline} \end{figure*} The network of the proposed method is illustrated in Fig.~\ref{fig:pipeline}, including two modules (ResNet backbone to extract features, and Similarity Measurement Module to compute the probabilistic transition matrix), one specific process (the random walk process) and three loss functions (the common cross-entropy loss, the maximum-entropy loss and self-supervised loss). In the following subsections, we will introduce the main components and discuss the insight. \subsection{Similarity Measurement Module} Similarity Measurement Module computes the distance between any pairs of neural representation elements and forms the probabilistic transition matrix. The module is illustrated in Fig.~\ref{fig:pipeline}. There are several choices for the distance definition, such as Euclidean distance, inner product~\cite{wang2018non}, and diffusion distance~\cite{sun2019neural}. To preserve the property of the transition matrix (positive unit and each row sums to 1) and compute efficiency, in this work, we use the inner product with softmax operation to form the transition matrix. The probabilistic transition matrix $P$ is defined as: \begin{equation} P=softmax({f(x)^{L-1}}^Tf(x)^{L-1}), \label{eq:eq4} \end{equation} where $f(x)^{L-1}$ is the neural representation of input $x$ in layer $L$-1 and has flatted to $MN$$\times$$C$ dimension ($M$: height, $N$: width, $C$: feature channels). Thus, ${f(x)^{L-1}}^Tf(x)^{L-1}$ will produce a matrix of dimension $MN$$\times$$MN$. With $softmax$ in the horizontal direction, we generate the adequate probabilistic transition matrix $P$. \subsection{Embedded Random Walk} We embed the random walk process in the computation flow of the neural network. The process is defined as Eq.~\ref{eq:eq2} (with $P$ as Eq.~\ref{eq:eq4}) and learned $\alpha$ to control the degree of random walk, and conducted on the final layer right before the classifier. Through the random walk process, the $i$-th element in the neural representation of layer $L$, $f(x)_i^L$, equals to the weighted sum of all the elements in layer $L$-$1$ with the weight defined by the $i$-th row of $P$: \begin{equation} f(x)_i^L=\alpha\sum_{j=1}^{MN} P_{ij}f(x)_{j}^{L-1}+f(x)_i^L. \label{eq:eq5} \end{equation} More similar $f(x)_j^{L-1}$ to $f(x)_i^{L-1}$, more large the $P_{ij}$ is, and more important $f(x)_j^{L-1}$ to the $f(x)_i^L$. With this process, each element in neural representation have relationships with all the other elements. Thus, the scribble annotations will affect not only the labeled elements but also unlabeled elements. When the cross-entropy loss is applied, the best solution (achieve lower loss) will be the uniform predictions within each object, considering the random walk process's constraint. In Fig.~\ref{fig:main}(b), we visualize the neural representations (after the sum of absolute values in the channel dimension) before and after the random walk process. As can be seen, the neural representations become uniform within each semantic region after the random walk. It verifies our assumption in the introduction. It is worth noting that we have not imposed supervision on $P$, but $P$ has gained semantic similarity knowledge from the results. It is the embedded random walk process with the scribble-annotations guide the formulation of $P$ to produce uniform predictions. \subsection{Self-Supervision Loss} The self-supervision loss computes the difference between the neural representations of the image and its transform. There are several issues that need to be considered when applying self-supervision loss in this work: (1) where the self-supervision is involved; (2) how the self-supervision loss is calculated; (3) what kinds of the transform will be used. We address these issues in the following. \subsubsection{Self-Supervision on Eigenspace} In our work, the typical choice of neural representation for self-supervision is $f(x)^L$, and thus the self-supervision loss will be \begin{equation} ss(x,\phi)=l(T_\phi(f(x)^L),f(t_\phi(x))^L). \label{eq:eq6} \end{equation} However, as for semantic segmentation tasks with self-supervision, we argue that directly calculating loss on the whole neural representation is not necessary and may not optimal. When the image is distorted heavily after the transform, some parts of its neural representation will change greatly, so minimizing Eq.~\ref{eq:eq6} will be hard and even ambiguous. The transition matrix $P$ could also be defined as $P=D^{-1}W$, where $W$ is the affinity matrix, and $D$ is the degree matrix. The eigenspace of $P$ and the one of normalized Laplacian matrix $L$ have close relationships, considering $L=D^{-1}(D-W)$. It can be proved that $\Lambda_P$=$1-\Lambda_L$ and $U_P$=$U_L$ ($\Lambda$ denotes diagonal matrix with eigenvalues as entries, $U$ denotes matrix with eigenvectors as columns). According to \citealp{von2007tutorial,Jiang_2015_ICCV,jiang2019super}, columns of $U_L$ have the capability to distinguish the main parts of the images. So, $U_P$ will also inherit this property. We visualize several eigenvectors of $P$ in Fig.~\ref{fig:main}(c). As can be seen, compared with original neural representation, the eigenvectors of $P$ are more powerful to distinguish the main parts from others and neglect some details, though $P$ is also computed from neural representation. Based on the above analysis, in this work, we propose to set the self-supervision on the eigenspace of $P$: \begin{equation} \begin{aligned} ss(x,\phi)&=l(T_\phi(U_P(x)),U_P(t_\phi(x)))\\ &+l(T_\phi(\Lambda_P(x)),\Lambda_P(t_\phi(x))). \end{aligned} \label{eq:eq7} \end{equation} \subsubsection{Soft Eigenspace Self-Supervision} Eq.~\ref{eq:eq7} requires explicit eigendecomposition, which is time-consuming, especially within the deep neural network context. Though there are some approximation methods~\cite{dang2018eigendecomposition,wang2019backpropagation,sun2019neural} proposed, their efficiency and stability are still far from satisfactory. To this end, we develop soft eigenspace self-supervision, which avoids explicit eigendecomposition. Firstly, in view of the fact that the matrix's trace is equal to the sum of its eigenvalues, we measure the consistency on the $\Lambda$ by computing the difference on the trace of $P$, $tr(P)$. Secondly, given the consistency on the $\Lambda$, we propose to measure the consistency on the $P$ to obtain consistent $U$ indirectly. In other words, the soft eigenspace self-supervision loss is defined as: \begin{equation} \begin{aligned} ss_P(x,\phi)&=l_1(T_\phi(P(x)),P(t_\phi(x)))\\ &+\gamma\ast l_2(T_\phi(tr(P(x))),tr(P(t_\phi(x)))), \end{aligned} \label{eq:eq8} \end{equation} where $P(x)$ denotes $P$ for image $x$, $tr(P(x))$ is the trace of $P(x)$. Since $P(x)$ is the probabilistic transition matrix, we use Kullback-Leibler Divergence as $l_1$ to measure the difference. $l_2$ is defined as $L_2$ norm. $\gamma$ is the weight to control the two terms. \subsubsection{Transform Operation and Computing Matrix} In this work, we have linear transforms, including horizontal flip and translation, $\phi\in$(horizontal flip, translation). Compared with the transform effect on the neural representation, any transform will lead to a complex change on $P$ and complicate the computation. However, since all the transform is linear, the probabilistic transition matrix after transform can be expressed as the multiplication of the original $P$ with the predefined computing matrices, to facilitate the Eq.~\ref{eq:eq8} computation. $T_{\phi}(P(x))$ can be defined as: \begin{equation} \begin{aligned} T_{\phi}(P(x))=T_{\phi}r\cdot{P(x)}\cdot{T_{\phi}c}, \end{aligned} \end{equation} where $T_{\phi}r$ and $T_{\phi}c$ are predefined computing matrices for transform $\phi$. In Fig.~\ref{fig:ss_op}, we visualize computing matrices for horizontal flip and vertical translation when using soft eigenspace self-supervision. Please check supplementary for the detail definitions. \subsection{Maximum-entropy Loss} To force the network producing a more convinced prediction, we further minimize the maximum-entropy loss on the final prediction, which is defined as \begin{equation} \begin{aligned} E(s)=-\frac{1}{HW} \sum_{i,j}\sum_c( s(i,j,c)\cdot log(s(i,j,c))) , \end{aligned} \end{equation} where $s$ represents the final prediction, and is of the size $H\times W\times C$ ($C$ is the number of categories). $s(i,j,c)$ represents the probability that the pixel at position $(i,j)$ of the image belongs to the $c$-th category. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{ss_op.png} \caption{Visualization of predefined computing matrices for self-supervision on eigenspace.} \label{fig:ss_op} \end{figure} \begin{comment} \textbf{Horizontal Flip} $t_\phi(x)=xT$, assuming that the size of x is M\*N, then T is a matrix of N\*N size, and its subdiagonal line The value is 1, and the rest are 0. The transformation function $T_\phi(y)=yT$ on the feature map, assuming the size of x is m\*n, then T is a matrix of size n\*n, and the value on the subdiagonal line is 1 , The remaining values are 0. The transformation function $T_\phi(y)=T_ryT_c$ on the probability transition matrix, assuming that the size of x is m\*n, then the size of y is mn*mn, assuming that x is the y obtained after expansion by rows, Then $T_r$ and $T_c$ are a square matrix with the same size as y, which is defined as follows $$ T_{i,j}=\begin{cases}1\ if\ i+j=kn+1\ and\ |i-j|<n\ \ k=1,2..,m \\ 0 \ else\end{cases} $$ \textbf{Crop} For image cropping operation, take cropping below as an example, the transformation function of the input image $t_\phi(x)=Tx$, assuming the size of x is M\*N, then T is a matrix of M\*H size, H <M, which is defined as follows $$ T_{i,j}=\begin{cases}1\ if\ i=j\\0 \ else\end{cases} $$ The transformation function $T_\phi(y)=Ty$ on the feature map, assuming the size of x is m\*n, then T is a matrix of m\*h size, h<m, which is defined as follows $$ T_{i,j}=\begin{cases}1\ if\ i=j\\0 \ else\end{cases} $$ The transformation function $T_\phi(y)=T_ryT_c$ on the probability transition matrix, assuming the size of x is m\*n, the size of y is mn*mn, and the size of $t_\phi(x)$ is h \*n, assuming that x is y obtained after expanding by rows, the size of $T_r$ is hn\*mn, and the size of $T_c$ is mn\*hn, which is defined as follows $$ T_{i,j}=\begin{cases}1\ if\ i=j\\0 \ else\end{cases} $$ \textbf{Patch Occlusion} For patch occlusion operation, the transformation function of the input image is $t_\phi(x)=M*x$, assuming that the size of x is M*N, the starting position and size of the patch are a, b, $h_p$, $ w_p$, then $M$ is defined as follows The transformation function in the feature map is similar to that of the input image. The transformation function on the probability transition matrix $T_\phi(y)=M*y$, assuming that the size of x is m\*n, the starting position and size of the patch are a, b, $h_p$, $w_p, respectively $, then the size of M is mn*mn, which is defined as \end{comment} \begin{table} \centering \setlength{\belowcaptionskip}{3pt} \caption{Ablation study on random walk, and operation and location of self-supervision.} \begin{tabular}{c|c|c|c} \toprule[1pt] \multirow{2}{*}{Random Walk}&\multicolumn{2}{|c|}{Self-Supervision}&\multirow{2}{*}{mIoU}\\ \cline{2-3} ~&operation&location&~\\ \hline \hline &-&-&64.4\\ $\checkmark$&-&-&67.6\\ \hline $\checkmark$&flip&$f(x)^{L-1}$&69.8\\ $\checkmark$&flip&$f(x)^L$&70.1\\ $\checkmark$&flip&$Eigenspace$&\textbf{70.5}\\ \hline $\checkmark$&translation&$f(x)^{L-1}$&70.5\\ $\checkmark$&translation&$f(x)^L$&70.5\\ $\checkmark$&translation&$Eigenspace$&\textbf{70.8}\\ \hline $\checkmark$&random&$f(x)^{L-1}$&70.2\\ $\checkmark$&random&$f(x)^L$&70.3\\ $\checkmark$&random&$Eigenspace$&\textbf{71.2}\\ \bottomrule[1pt] \end{tabular} \label{tab:ablation} \end{table} \section{Experiment} \subsection{Implementation} The whole pipeline is shown in Fig.~\ref{fig:pipeline}. We use pre-trained ResNet~\cite{he2016deep} with dilation~\cite{chen2017deeplab} as the backbone to extract initial neural representations. The total loss in our work is defined as: \begin{equation} \begin{aligned} L=\sum_{p\in\Omega_{\mathcal{L}}}c(s_p,y_p)+\omega_1\ast E(s)+\omega_2\ast ss_P(x,\phi), \end{aligned} \label{eq:eq9} \end{equation} where $\omega_1$ and $\omega_2$ are the predefined weights. All the training images are randomly scaled (0.5 to 2), rotated (-10 to 10), blurred, and flipped for data augmentation, then cropped to $465 \times 465$ before feeding to the network. And the immediate output of $ResNet$ ($f(x)^{L-1}$) is of spatial dimension $29 \times 29$. The training process has two steps. Firstly, only the cross-entropy loss is used to train the network because the network may not perform well initially, and in this time, self-supervision will not bring benefits but prevent the optimization. After the network got reasonable performance, the whole Eq.~\ref{eq:eq9} (scribble supervise and self supervise) is activated. The process has been visualized in Fig.~\ref{fig:pipeline}. The two steps training is also used in~\cite{tang2018normalized, tang2018regularized}. \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{twodatasets.png} \caption{(a) A representative sample of \emph{scribble-drop} with the scribble drop rate from 0.1 to 0.5, and the mIoU scores on different settings. (b) A representative sample of \emph{scribble-shrink} with the scribble shrink rate from 0 to 1 (point), and the mIoU scores on different settings. (Zoom in for better visualization)} \label{fig:datasets} \end{figure*} \begin{table} \centering \setlength{\belowcaptionskip}{3pt} \caption{Performance on the validation set of pascal VOC. The supervision types (Sup.) indicate: $\mathcal{P}$–point, $\mathcal{S}$–scribble, $\mathcal{B}$–bounding box, $\mathcal{I}$–image-level label, and $\mathcal{F}$–full label.} \scalebox{0.9}{ \begin{tabular}{l|c|c|c|c} \toprule[1pt] Method&Sup.&Backbone&wo/ CRF&w/ CRF\\ \hline \hline What'sPoint&$\mathcal{P}$&$VGG16$&46.0&-\\ SDI&$\mathcal{B}$&$ResNet101$&-&69.4\\ BCM&$\mathcal{B}$&$ResNet101$&-&70.2\\ CIAN&$\mathcal{I}$&$ResNet101$&64.1&67.3\\ FickleNet&$\mathcal{I}$&$ResNet101$&64.9&-\\ SCE&$\mathcal{I}$&$ResNet101$&64.8&66.1\\ DeepLabV2&$\mathcal{F}$&$ResNet101$&76.4&77.7\\ \hline scribblesup&$\mathcal{S}$&$VGG16$&-&63.1\\ RAWKS&$\mathcal{S}$&$ResNet101$&59.5&61.4\\ NCL&$\mathcal{S}$&$ResNet101$&72.8&74.5\\ KCL&$\mathcal{S}$&$ResNet101$&73.0&75.0\\ BPG-PRN&$\mathcal{S}$&$ResNet101$&71.4&-\\ \hline ours-ResNet50&$\mathcal{S}$&$ResNet50$&71.9&73.6\\ ours-ResNet101&$\mathcal{S}$&$ResNet101$&\textbf{73.4}&\textbf{75.8}\\ \bottomrule[1pt] \end{tabular} \label{tab:all}} \end{table} \subsection{Experiment setting} \subsubsection{Datasets}\label{sec:dataset} We mainly compare with others on the common scribble-annotated dataset, \emph{scribblesup} ~\cite{lin2016scribblesup}. This dataset is revised from pascal VOC~\cite{everingham2015pascal} and has every existing object in each image labeled by at least one scribble. However, our method does not need this hypothesis. To better demonstrate the advantage of the proposed method, we further proposed two variants of \emph{scribblesup}. The first one is \emph{scribble-drop}, where every object in images may drop (\emph{i.e.} delete) all the scribble annotations with a defined possibility. The second one is \emph{scribble-shrink}, where every scribble in the image is shrunk randomly (even to a point). All the images in two datasets are identical to the \emph{scribblesup}'s, as well as the training and validation partition. In our experiments, many settings of drop and shrink rate are tested. Fig.~\ref{fig:datasets} shows several representative samples of \emph{scribble-drop} and \emph{scribble-shrink}. \subsubsection{Compared methods} We compare with recently proposed scribble-supervised methods including scribblesup~\cite{lin2016scribblesup}, RAWKS~\cite{vernaza2017learning}, NCL~\cite{tang2018normalized}, KCL~\cite{tang2018regularized} and BPG-PRN~\cite{wang2019boundary}, and also other weakly-supervised methods such as point supervised (What’sPoint~\cite{bearman2016s}), bounding-box supervised (SDI~\cite{khoreva2017simple}, BCM~\cite{Song_2019_CVPR}) and image-level-label supervised (CIAN~\cite{fan2020cian}, FickleNet~\cite{lee2019ficklenet}, SCE~\cite{chang2020weakly}). Besides, full-label supervised method (DeepLabV2~\cite{chen2017deeplab}) is also compared. We use $mIoU$ as the main metric to evaluate these methods and ours. When comparing with others, we mainly refer their reported scores if available. \subsubsection{Hyper-parameters} The proposed method has 200 epochs of training, with the first 100 epochs have no self-supervised loss. For every step, eight images (batch size) are randomly selected to train the network with Adam~\cite{kingma2014adam} optimizer, Sync-BatchNorm~\cite{ioffe2015batch} and learning rate as $1e$-$3$ for the first 100 epochs and $1e$-$4$ for the rest. The weights $\gamma$, $\omega_1$ and $\omega_2$ are set to be 0.01, 0.2 and 1, respectively. Besides, as the common way for semantic segmentation~\cite{zhao2018psanet}, data augmentation is adopted during training. All the computations are carried out on NVIDIA TITAN RTX GPUs. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{supp_2.png} \caption{Visualization of variation by different self-supervision operations. (a) The input image, (b) The variation on $f(x)^{L-1}$, (c) The variation on $P(x)$, (d) The variation on $f(x)^L$. The first row shows the variation for the flip operation, while the second row is for the translation operation.} \label{fig:variations1} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=1\linewidth]{VisualizationResults.png} \caption{Visual comparison between proposed method and others on \emph{scribblesup} dataset.} \label{fig:VisualizationResults} \end{figure*} \begin{table} \centering \setlength{\belowcaptionskip}{3pt} \caption{Variation comparison under the same transform.} \begin{tabular}{c|c|c|c} \toprule[1pt] &$f(x)^{L-1}$&$f(x)^L$&$P(x)$\\ \hline \hline flip&25.3\%&27.7\%&11.1\%\\ translation&7.7\%&15.2\%&6.9\%\\ \bottomrule[1pt] \end{tabular} \label{tab:ablation2} \end{table} \begin{table*} \centering \setlength{\belowcaptionskip}{3pt} \caption{The performance drop ratios compared to no drop and no shrink when only using baseline and gradually adding random walk and self-supervision.} \begin{tabular}{l|c|c|c|c|c||l|c|c|c|c} \toprule[1pt] drop rate&0.1&0.2&0.3&0.4&0.5&shrink rate&0.2&0.5&0.7&1\\ \hline \hline baseline&0.7\%&4.2\%&4.5\%&6.2\%&9.6\%&baseline&0\%&4.9\%&8.6\%&22.1\%\\ +Random Walk&1.4\%&3.8\%&4.0\%&5.3\%&8.4\%&+Random Walk&2.4\%&4.4\%&8.1\%&21.8\%\\ +Self-Supervision&0.7\%&2.1\%&3.5\%&5.0\%&5.9\%&+Self-Supervision&0.7\%&4.8\%&6.3\%&18.2\%\\ \bottomrule[1pt] \end{tabular} \label{tab:drop_ratios} \end{table*} \begin{comment} \begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{absense+shrink.png} \caption{mIoU of proposed method on \emph{scribble-drop} and \emph{scribble-shrink} datasets with different drop and shrink rate.} \label{fig:newdata_curve} \end{figure} \end{comment} \subsection{Ablation Study} In Tab.~\ref{tab:ablation}, we do ablation study w/wo random walk, w/wo self-supervision, and self-supervision on $f(x)^{L-1}$, $f(x)^{L}$ and $P(x)$ on \emph{Scribblesup} dataset. We use \emph{ResNet50} as the backbone without maximum-entropy loss and CRF to study the random walk and self-supervision more thoroughly. The first one is a fully convolutional neural network (Baseline). With random walk, the mIoU receives more than 3\% promotion. After incorporating self-supervision (the last one), the performance further boosted by 3.6\% increasing. As for the self-supervision operations, we observe that both are helpful and self-supervision on the eigenspace outperforms on the other locations consistently no matter what kind of operation applied. Besides, randomly combined on the eigenspace leads to the best performance, while other locations do not show such promotion. It is worth noting that eigenspace is also a kind of neural representation that lies in the middle between $f(x)^{L-1}$ and $f(x)^L$. The middle one always outperforms sides, indicating a general advantage of self-supervision on neural eigenspace. In Tab.~\ref{tab:ablation2}, we show the mean variations of $f(x)^{L-1}$, $f(x)^L$ and $P(x)$ under the same transform (no self-supervision applied yet). The variation is measured by the relative error defined as $|T_\phi(f)-f'|/(|T_\phi(f)|+|f'|)$ ($f$: feature, $f'$: feature after transform). As can be seen, the same transform will always lead to less change on $P(x)$. Considering the property of eigenvectors, we believe $P(x)$ is not sensitive to the trivial structures (whose semantic may be heavily changed after transform). In Fig.~\ref{fig:variations1}, we visualize the variation of two images by different self-supervision operations, respectively. It can be seen the variation on $P(x)$ are most distributed on the main objects or object boundaries, while $f(x)^{L-1}$ and $f(x)^L$ also highlight backgrounds with uneven distribution. This phenomenon indicates that self-supervision on $P(x)$ will be more effortless and less confusing. \subsection{Quantitative Results} After introducing maximum-entropy loss, our method gets $71.9\%$ mIoU with $ResNet50$ and $73.4\%$ with $ResNet101$ on $Scribblesup$ dataset. When comparing with others, we also report the performance with CRF as others. Tab.~\ref{tab:all} lists all the scores of compared methods under different settings. In addition to the scribble-supervised methods, we also show methods with other type labels, including point, bounding-box, image-level-label and full-label. The proposed method reaches state-of-the-art performance compared with scribble and other weakly supervised methods and is even comparable to the full-label one. The reported full-label method (DeepLabV2) had been pre-trained on COCO dataset. \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{result_drop1.png} \caption{Results of proposed method on \emph{scribble-drop} dataset with different drop rate.} \label{fig:dropout_result} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{result_shrink1.png} \caption{Results of proposed method on \emph{scribble-shrink} dataset with different shrink rate.} \label{fig:shrink_result} \end{figure} It should be noted that some methods, such as~\citealp{lin2016scribblesup,vernaza2017learning}, require every existing object in the image labeled. However, ours does not have this limit. To evaluate how the performance affected when the scribble randomly dropped and shrank, we further prepare two datasets, \emph{scribble-drop} and \emph{scribble-shrink} modified from \emph{scribblesup} as described before. We conduct experiments with different drop and shrink rate and show the mIoU scores in Fig.~\ref{fig:datasets}. Besides, in Tab.~\ref{tab:drop_ratios}, we also show the performance drop ratios under different drop and shrink rates. The methods used are identical to the ones in Tab.~\ref{tab:ablation}. It can be seen that the proposed method demonstrates certain robustness when the drop rate and shrink rate increase, even all the scribbles were shrunk to points (point-supervised). \subsection{Qualitative Results} We show visual comparison in Fig.~\ref{fig:VisualizationResults}, Fig.~\ref{fig:dropout_result} and Fig.~\ref{fig:shrink_result}. Fig.~\ref{fig:VisualizationResults} presents results of $NCL$, $KCL$ and ours on \emph{scribblesup} dataset. With the proposed random walk (RW) and self-supervision on eigenspace (SS), the results are gradually refined, and the complete method outperforms others clearly and shows significant promotion over the baseline. Fig.~\ref{fig:dropout_result} and Fig.~\ref{fig:shrink_result} further demonstrate our results on \emph{scribble-drop} and \emph{scribble-shrink}. Here no CRF using for better evaluation. It can be seen some details are missing when the annotations for training are gradually shrunk, but the main parts are preserved well. As for the random drop, our method shows promising robustness. When each scribble was dropped with 50\% probability during training, the prediction does not degrade much. (The results and scores in this section are all from the validation set. Please check supplementary for more results.) \subsection{Efficiency} We analyze the efficiency after adding random walk and self-supervision in Tab.~\ref{tab:efficiency}. Since the self-supervision branch shares the same trainable-parameters with the main branch, no extra parameters are needed. Besides, the self-supervision output will act as the target, so no intermediate features need to store. Finally, self-supervision is only conducted during training. Consequently, in our implementation, the time and memory cost by random walk and self-supervision is acceptable. \begin{table} \centering \setlength{\belowcaptionskip}{3pt} \caption{Trainable-parameters (P), memory (M), and inference speed (S) statistics when only using baseline and changes after gradually adding random walk and self-supervision, respectively.} \begin{tabular}{l|c|c|c} \toprule[1pt] &P&M&S\\ \hline \hline Baseline&23.61 M&1090.82 MB&5.87 it/s\\ +Random Walk&+0.31 M&+9.3 MB&-0.07 it/s\\ +Self-supervision&+0 M&+2.7 MB&-0 it/s\\ \bottomrule[1pt] \end{tabular} \label{tab:efficiency} \end{table} \section{Conclusion} In this work, we present a scribble-supervised semantic segmentation method. The insight is to guide the network to produce uniform and consistent predictions by embedding a random walk process on the neural representation and imposing a self-supervision on the neural eigenspace. Thoroughly ablation studies and intermediate visualization have verified the effectiveness of proposed components. Finally, the complete method reaches state-of-the-art performance compared with others, even is comparable to the full-label supervised ones. Moreover, the proposed method shows robustness to the data with randomly dropped or shrank labels. \input{AAAI.bbl} \bibliographystyle{IEEEtran} \end{document}
\section{Introduction} \label{sec:intro} Machine comprehension and question answering on text have significant progress in the recent years. One of the most representative corpora is the Stanford Question Answering Dataset (SQuAD)~\cite{rajpurkar2016squad}, on which deep neural network- (DNN-) based models are comparable with human. The achievements of the state-of-the-art question answering models demonstrate that machine has already acquired complex reasoning ability. On the other hand, accessing large collections of multimedia or spoken content is much more difficult and time-consuming than plain text content for humans. It is therefore highly attractive to develop Spoken Question Answering (SQA)~\cite{ispoken,comas2012factoid,turmo2008overview,comas2012sibyl}, which requires machine to find the answer from spoken content given a question in either text or spoken form. In SQA, after transcribing spoken content into text by automatic speech recognition (ASR), typical approaches use information retrieval (IR) techniques~\cite{shiang2014spoken} or knowledge bases~\cite{hixon2015learning} to find the proper answer from the transcriptions. Another attempt towards machine comprehension of spoken content is TOEFL listening comprehension by machine~\cite{tseng2016towards}. TOEFL is an English examination that tests the knowledge and skills of academic English for English learners whose native languages are not English. Deep-based models including attention-based RNN\cite{tseng2016towards} and tree-structured RNN\cite{fang2016hierarchical} were used to answer TOEFL listening comprehension test. Transfer learning for Question Answering (QA) is also studied on this task\cite{chung2017supervised}. However, TOEFL listening comprehension test is a multi-select question answering corpus, and its scale is not large enough to support the training of powerful listening comprehension models. Another spoken question answering corpus is Spoken-SQuAD\cite{li2018spoken}, which is generated from SQuAD dataset through Google Text-to-Speech (TTS) system. The spoken content is then transcribed by CMU sphinx\cite{walker2004sphinx}. Several state-of-the-art question answering models are evaluated on this dataset, and ASR errors seriously degrade the performance of these models. On Spoken-SQuAD, it has been verified that using sub-word units in SQA can mitigate the impact of ASR errors. Although Spoken-SQuAD is large enough to train state-of-the-art QA models, it is artificially generated, so it is still one step away from real SQA. To further push the boundary of SQA, in this paper, we release a large scale SQA dataset -- Open-Domain Spoken Question Answering Dataset (ODSQA). The contribution of our work are four-fold: \begin{itemize} \item First of all, we release an SQA dataset, ODSQA, with more than three thousand questions. ODSQA is a Chinese dataset, and to the best of our knowledge, the largest real\footnote{not generated by TTS as Spoken-SQA} SQA dataset for extraction-based QA task. \item Secondly, we found ASR errors have catastrophic impact on real SQA. We tested numbers of state-of-the-art SQuAD models on ODSQA, and reported their degrading performance on ASR transcriptions. \item Thirdly, we apply sub-word units in SQA to mitigate the impact of speech recognition errors, and this approach brings consistent improvements experimentally. \item Last but not the least, we found that back-translation, which has been applied on text QA~\cite{yu2018qanet} to improve the performance of models, also improve the SQA models. \end{itemize} \section{Related Work} Most QA work focuses on understanding text documents\cite{richardson2013mctest,lai2017race,rajpurkar2016squad,trischler2016newsqa}. The QA task has been extended from text to images \cite{zitnick2016adopting,young2014image,lin2014microsoft,kong2014you} or video descriptions \cite{chen2011collecting,das2013thousand,rohrbach2015dataset}. In the MovieQA task\cite{tapaswi2016movieqa}, the machine answers questions about movies using video clips, plots, subtitles, scripts, and DVS. Usually only text information (e.g., the movie's plot) is considered in the MovieQA task; learning to answer questions using video is still difficult. Machine comprehension of spoken content is still a less investigated problem. To mitigate the impact of speech recognition errors, we use sub-word units to represent the transcriptions of spoken content in this paper. Using sub-word unit is a popular approach for speech-related down-stream task and has been applied to spoken document retrieval\cite{ng1997subword}, spoken term detection \cite{van2017constructing}\cite{huijbregts2011unsupervised}, spoken document categorization\cite{qu2000using}, and speech recognition\cite{parada2011learning}. It has been verified that sub-word units can improve the performance of SQA~\cite{li2018spoken} . However, the previous experiments only conducted on an artificial dataset. In addition, the previous work focuses on English SQA, whereas we focus on Chinese SQA in this paper. There is a big difference between the subword units of English and Chinese. To improve the robustness to speech recognition errors, we used back-translation as a data augmentation approach in this paper. Back-translation allows the model to learn from more diversified data through paraphrasing. Back-translation was also studied in spoken language understanding and text-based QA as a data augmentation approach. In cross lingual spoken language understanding, training with the back-translation data via target language will make the model adaptive to translation errors \cite{he2013multi}\cite{upadhyay2018almost}. In text-based QA, back-translation was used to paraphrase questions\cite{dong2017learning} and paraphrase documents\cite{yu2018qanet}. \section{Task Description} \subsection{Data Format} \label{sec:format} In this paper, we introduce a new listening comprehension corpus, Open-Domain Spoken Questions Answering Dataset (ODSQA). Each example in this dataset is a triple, $(q, d, a)$. $q$ is a question, which has both text and spoken forms. $d$ is a multi-sentence spoken-form document. The answer $a$ is in text from, which is a word span from the reference transcription of $d$. An overview architecture of this task is shown in figure~\ref{fig:overview}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{overview_2.png} \caption{Flow diagram of the SQA system and the standard evaluation method. Given a spoken document and a spoken or text question, an SQA system, which is a concatenation of ASR module and reading comprehension module, can return a predicted text answer. This predicted answer is a span in the ASR transcription of the spoken document and will be evaluated by EM/F1 scores.} \label{fig:overview} \end{figure} \subsection{Data Collection} To build a spoken version QA dataset, we conducted the following procedures to generate spoken documents and questions. Our reference texts are from Delta Reading Comprehension Dataset (DRCD), which is an open domain traditional Chinese machine reading comprehension (MRC) dataset~\cite{shao2018drcd}. Each data example in DRCD is a triple of $(q, d, a)$ in which $q$ is a text-form question, $d$ is a multi-sentence text-form document that contains the answer $a$ as an extraction segment. In DRCD, training set contains 26,936 questions with 8,014 paragraphs, the development set contains 3,524 questions with 1,000 paragraphs and the testing set contains 3,485 questions with 1,000 paragraphs. The training set and development set are publicly available, while the testing set is not. Therefore, the DRCD training set was used as the reference text of the training set of ODSQA, and the DRCD development set was used as the reference text of the testing set of ODSQA. 20 speakers were recruited to read the questions and paragraphs in the development set of DRCD. All the recruited speakers were native Chinese speakers and used Chinese as their primary language. For document, each sentence was shown to speaker respectively. The speaker was required to speak one sentence at a time. All the sentences of the same document were guaranteed to be spoken by the same speaker. Because in the real-life user scenario, it is more possible that an user enters a spoken question, and machine answers the question based on an already recorded spoken document collection. The document and the question from the same data example do not have to be recorded by the same speakers. We collected 3,654 question answer pairs as the testing set. The corpus is released\footnote{ODSQA: OPEN-DOMAIN SPOKEN QUESTION ANSWERING DATASET \\ \url{https://github.com/chiahsuan156/ODSQA}}. The speech was all sampled at 16 kHz due to its common usage among the speech community, but also because the ASR model we adopted was trained on 16 kHz audio waveforms. An example of a corresponding pair between DRCD and ODSQA is shown in column(1) and (2) of Table~\ref{tab:EXAMPLE}. The detailed information of ODSQA about the speakers, audio total length and Word Error Rate are listed in row(1) of Table~\ref{tab:statistics}. \subsection{Evaluation Metrics} In this task, when the model is given a spoken document, it needs to find the answer of a question from the transcriptions of the spoken document. SQA can be solved by the concatenation of ASR module and reading comprehension module. Given a query and the ASR transcriptions of a spoken document, the reading comprehension module can output a text answer. The most intuitive way to evaluate the text answer is to directly compute the \textbf{Exact Match (EM)} and \textbf{Macro-averaged F1 scores(F1)} between the predicted text answer and the ground-truth text answer. If the predicted text answer and the ground-truth text answer are exactly the same, then the EM score is 1, otherwise 0. The F1 score is based on the precision and recall. Precision is the percentage of Chinese characters in the predicted answer existing in the ground-truth answer, while recall is the percentage of Chinese characters in the ground-truth answer also appearing in the predicted answer. The EM and F1 scores of each testing example are averaged to be the final EM and F1 score. We used the standard evaluation script from SQuAD\cite{rajpurkar2016squad} to evaluate the performance. \begin{comment} \begin{table*}[] \caption{An example in ODSQA and the corresponding reference texts in DRCD.The English translations were added here for easy reading. Document is denoted as D here.} \begin{CJK}{UTF8}{bkai}{} \label{tab:EXAMPLE} \begin{tabular}{|c p{4cm}|} \hline \multirow{7}{*}{\textbf{Text Document}} & 黃河,在中國古代稱作河水,簡稱河,是中國的第二長河,僅次於長江... “Yellow River, which was called as river water in ancient China, briefly called as river, is the second longest river in China, second only to Yangtze...” \\ \hline \multirow{1}{*}{\textbf{Spoken Document}} & xxxxxxx \\ \hline \multirow{3}{*}{\textbf{Question}} & 中國的第一長河為? “What is the longest river in China?” \\ \hline \multirow{1}{*}{\textbf{Spoken Question}} & xxxxxxx \\ \hline \textbf{Ground truth} & 長江 “Yangtze” \\ \hline \end{tabular} \end{CJK} \end{table*} \end{comment} \begin{center} \begin{table*}[] \centering \caption{An example in ODSQA and the corresponding reference texts in DRCD.The English translations were added here for easy reading. } \begin{CJK}{UTF8}{bkai}{} \label{tab:EXAMPLE} \begin{tabular}{|c || p{3.5cm} |p{3.5cm}| p{3.5cm}|p{3.5cm}|} \hline \multirow{1}{*}{\textbf{Data}} & \textbf{(1) DRCD} & \textbf{(2) ODSQA} & \textbf{(3) DRCD-TTS } & \textbf{(4) DRCD-backtrans } \\ \hline \multirow{7}{*}{\textbf{D}} & ...廣州屬亞熱帶季風海洋性氣候,氣候濕熱易上火的環境使飲涼茶成為廣州人常年的一個生活習慣。 “Guangzhou has a subtropical monsoon maritime climate. Drinking cool tea has become a daily habit of Guangzhou people for a long time due to the humid and hot environment.” & ...廣州屬亞熱帶季風海洋性氣候,氣候濕熱,易上火的環境\textbf{時,應嚴查}成為廣州人常年的一個生活習慣。 “Guangzhou has a subtropical monsoon maritime climate. \textbf{When} the environment is hot and humid, \textbf{necessarily strictly examination} has become a daily habit for Guangzhou people for a lone time.” & ...廣州屬亞熱帶季風海洋性氣候,氣候,\textbf{是誠意上火的環境適應量},茶成爲廣州人常年的一個生活習慣。 “Guangzhou has a subtropical monsoon maritime climate. Climate, \textbf{being a sincere and hot humid adaptation to the environment}, tea has become a daily habit for Guangzhou people for a lone time.” & ...廣州屬亞熱帶季風海洋性氣候,氣候炎熱潮濕,是廣州人喝茶的共同習慣。 “Guangzhou has a subtropical monsoon maritime climate. The climate is hot and humid, which is a common habit of Guangzhou people when drinking tea.” \\ \hline \end{tabular} \end{CJK} \vspace{-3mm} \end{table*} \end{center} \section{Proposed Approach} ASR errors are inevitable, and they can hinder the reasoning of QA models. However, when a transcribed word is wrong, some phonetic sub-word units in the word may still be correctly transcribed. Therefore, building word representation from sub-word units may mitigate the impact of ASR errors. Pingyin-token sequence of words are used in our experiments. Pinyin, literally meaning “spell out the sound”, is the Romanized phonetic transcription of the Chinese language. Each Chinese character consists of one pingyin syllable, and one syllable is composed of a number of pingyin-tokens. We adopt one-dimensional Convolution Neural Network (1-D CNN) to generate the word representation from the pingyin-token sequence of a word, and this network is called Pingyin-CNN. Our proposed approach is the reminiscent of Char-CNN~\cite{zhang2015character,kim2016character}, which apply 1-D CNN on character sequence to generate distributed representation of word for text classification task. Pingyin-CNN is illustrated in Figure~\ref{fig:phonemeCNN}. We explain how we obtain feature for one word with one filter. Suppose that a word $W$ consists of a sequence of pingyin-tokens $P = [p_1,...,p_l]$, where $l$ is the number of pingyin-tokens of this word. Let $H \in \mathbb{R}^{C \times d}$ be the lookup table pingyin-token embedding matrix, where $C$ is the number of pingyin-tokens, and $d$ is the dimension of the token embedding. In other words, each token corresponds to a $d$-dimensional vector. Given $P$, after looking up table, the intermediate token embedding $E \in\mathbb{R}^{l \times d}$ is obtained. The convolution between $E$ and a filter $F \in\mathbb{R}^{k \times d}$ is performed with stride 1 to obtain one-dimension vector $Z \in \mathbb{R}^{l-k+1}$. After max pooling over $Z$, we obtain a scalar value. With a set of filters, with the above process, we obtain a pingyin-token sequence embedding. The size of the pingyin-token sequence embedding is the number of filters. The filter is essentially scanning pingyin-token n-gram, where the size of n-gram is the height of the filter (the number of $k$ above). The pingyin-token sequence embedding is concatenated with typical word embedding to obtain new word representation as the input of reading comprehension model. All the parameters of filters and pingyin-token embedding matrix $H$ are end-to-end learned with reading comprehension model. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{pingyinCNN.png} \begin{CJK}{UTF8}{bkai}{} \caption{Illustration of enhanced word embedding. For a given input word $W$ at the bottom, a sequence of pingyin-tokens $P = [p_1,...,p_l]$ are obtained by looking up in the Chinese lexicon. Each pingyin-token is mapped to a vector $ \mathbb{R}^{d}$ and concatenated to form intermediate matrix $E$. $E$ is fed into the 1-D convolutional module. The output $Z$ is further fed into max-pooling layer, and a scalar value is generated. All the scalars from various filters will form the phoneme sequence embedding. Then the pingyin sequence embedding is further concatenated with word embedding as the input of reading comprehension model. In this illustration, the Chinese word 上 (means "up" in English) consists of five pingyin-tokens.} \label{fig:phonemeCNN} \end{CJK} \end{figure} \begin{comment} \subsection{Bilingual Embeddings} \label{sec:pagestyle} In this subsection, we introduce how we learn a model which can take in both English data and Chinese data and improve the performance over Chinese data. Since the QA models usually take word embeddings as input, we need to ensure that the word embeddings in these two languages are in the same space. This can be accomplished by aligning word embeddings~\cite{conneau2017word}\cite{copy those from zero resource} from different languages into a shared vector space, but this approach yielded poor performance in our experiments. We conjecture that this is due to the fundamental linguistic difference between English and Chinese and thus the embedding space couldn't be properly aligned. We therefore used a naive approach to make sure the words from English and Chinese reside in the shared vector space. For each English word $w$, we look up a bilingual dictionary using Google Translation API to fetch the corresponding word $w'$ in Chinese. Then we look up the pre-trained Chinese word embedding matrix, and use the word embedding of Chinese word $w'$ as the word embedding of English word $w$. Each English word has embedding $e_{src}$. For each Chinese word, once again, we look up the pre-trained Chinese word embedding matrix to obtain a embedding $e_{tar}$ In this way, the two languages actually use the same set of word embeddings. Thus the model is capable of learning from English data even when the testing data is in Chinese. \end{comment} \begin{comment} \subsection{Private and Shared Encoder } \label{subsec:Private and Shared Encoder} Due to the fact that various languages have different syntactic structures, using the same set of word embeddings is not enough to properly align English and Chinese for QA models. So we used private encoders for individual languages to model the local interactions between words in a single language. We adopt bi-directional LSTM (biLSTM)\cite{BILSTM} as the encoders in all our experiments. Here we use $ h_{pri,tar} = biLSTM_\text{tar}(e_{tar})$ ( $ h_{pri,src} = biLSTM_\text{src}(e_{src})$ ) to represent the output vector sequence of private encoder given input Chinese (English) word embedding sequence $e_\text{tar}$ ($e_\text{src}$). The following layers of QA models take $ h_{pri,tar}$ or $ h_{pri,src}$ as input, and the parameters of the following layers of QA model are shared across different languages. Furthermore, to model global language-generic information, we utilize a shared encoder $biLSTM_{s}$ to allow knowledge transfer between languages. The shared encoder is shared for both Chinese and English while the private encoders have different parameters for different languages. We use $ h_{share,tar} = biLSTM_\text{share}(e_{tar})$ ( $ h_{share,src} = biLSTM_\text{share}(e_{src})$ ) to represent the output vector sequence of private encoder given input Chinese (English) word embedding sequence Given English sentence as input, the $ h_{pri,src}$ and $ h_{share,src}$ are added as the input of the following layers of the QA model. For Chinese sentence, the summation of $ h_{pri,tar}$ and $ h_{share,tar}$ are used as input of the following layers of the QA model. \end{comment} \begin{comment} \subsection{Contextual Encoder} In subsection~\ref{subsec:Private and Shared Encoder}, we use summation operation to combine the output of private encoder and the output of shared encoder. Here we explore a more advanced method. We adopt a contextual encoder layer on top of the private and shared encoder. To be more specific, one on the source side and another on the target side. The contextual layer $biLSTM_{context}$ takes two inputs: the output from the private encoder, the output from the shared encoder : \begin{equation} \begin{split} \quad biLSTM_{context} = (f( h_{pri,tar} ,h_{share, tar} ) ) \end{split} \label{eqcontextual} \end{equation} The output from private encoder and the ouput from shared encoder are projected through : \begin{equation} \begin{split} f ( h_{pri,tar}, h_{share, tar}) = W3 tanh(W_{1}h_{pri,tar} + W_{2}h_{share, tar} ) \end{split} \label{projection} \end{equation} where $W_{1}$, $W_{2}$ and $W_{3}$ are learned parameters shared by the contextual encoder for both source data and target data. Although we only show how contextual layer works for target data, the equations can be adapted to source data by replacing all $tar$ notations to $src$. Equation~\ref{projection} combines the different features provided by private encoder and shared encoder. The output of the contextual layer is fed into following layers of QA model. \end{comment} \begin{comment} \subsection{Adversarial Training} We introduce a domain classifier whose job is to identify the domain of the input utterance by taking in the representations from the bi-LSTM state outputs. We train the domain classifier to minimize the domain classification loss. On the other hand, we update the parameters of the network underlying the domain classifier to maximize the domain classification loss, which works adversarially towards the domain classifier. We thus expect the model to learn features and structures that are general across domains. In our experiments, without properly regularize the output of the shared encoder, we couldn't obtain additional performance gains compared to baseline model which trained only on DRCD training set. Therefore, we apply adversarial learning to enforce the shared encoder to learn language-invariant feature representations. We introduce a discriminator $D$ whose job is to identify the language of the input sentence from the state outputs of shared encoder $\Psi_\text{tar}$ and $\Psi_\text{src}$. $D$ is learned to minimize $L_\text{adv}$ below. \begin{equation} \begin{split} \quad L_\text{adv} & = E_{x\sim x_\text{tar}} \,[\log D(\Psi_\text{s}(x))] \\ & \quad\; + E_{x \sim x_\text{src}} \, [\log (1 - D(\Psi_\text{s}(x))], \end{split} \label{eq2} \end{equation} where $D$ is the discriminator, $x_\text{src}$ is the word sequence of source language, and $x_\text{tar}$ is the word sequence of target language. $D$ is learned to assign small values to the outputs of $\Psi_\text{s}(x)$ where $x\sim x_\text{tar}$ , while large values to $\Psi_\text{s}(x)$ where $x\sim x_\text{src}$. On the other hand, the QA model learns to maximize the objective by updating the parameters of private encoder $\Psi_\text{s}$ which works adversarially towards the discriminator. The intuition behind this design is that if $\Psi_\text{s}(x)$ where $x\sim x_\text{src}$ is indistinguishable from $\Psi_\text{s}(x)$ where $x\sim x_\text{tar}$, the shared encoder will learn to extract features and structures across different language domains, and thus make the following layers of QA models easier to learn language-independent general knowledge. The discriminator is only applied on $\Psi_{s}(x)$. In this way, $\Psi_{s}(x)$ extracts language invariant features. The whole training process is summarized in algorithm \ref{alg:framework}. And the model architecture is shown in figure\ref{fig:architecture} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{architecture.png} \caption{The model architecture of the model.} \label{fig:architecture} \end{figure} \begin{algorithm}[htb] \caption{Shared Encoder With Adversarial Training} \label{alg:framework} \begin{algorithmic}[1] \Require Training data $(x, y)_{txt}$, $(x, y)_{ASR}$, QA model, Number of training epochs. \For{\texttt{epochs}} \State Sample $\{X,Y\}_{txt}=\{x^{i},y^{i}\}^m_{i=1}$ from training data Sample $\{X,Y\}_{ASR}=\{x^{i},y^{i}\}^m_{i=1}$ from training data \State Minimize cross entropy loss for QA prediction and update QA model \State Minimize generator loss and update shared encoder \For{\texttt{i=1$\sim$5}} \State Minimize discriminator loss and update the discriminator \EndFor \EndFor \end{algorithmic} \end{algorithm} \end{comment} \section{Experiments} \label{sec:typestyle} \subsection{Experimental Setup} \begin{itemize} \item \textbf{Speech Recognition}: We used the iFLYTEK ASR system \footnote{iFLYTEK ASR system\\\url{ https://www.xfyun.cn/doccenter/asr}} to transcribe both the spoken document and spoken question. \item \textbf{Pre-processing}: We used jieba-TW\footnote{jieba-zh-TW:\\ \url{https://github.com/ldkrsi/jieba-zh-TW}}, a python library specialized for traditional Chinese, to segment sentence into words. The resulting word vocabulary size for DRCD is around 160,000 and the character vocabulary size is around 6,200. We experimented with both words and characters. \item \textbf{Implementation Details} \textbf{Chinese word embeddings}: We pre-train a Fasttext\cite{bojanowski2016enriching} model on words of traditional Chinese Wikipedia articles\footnote{Wikipedia articles:\\\url{https://dumps.wikimedia.org/zhwiki/}} segmented by jieba-zh-TW. This model can handle Out-of-Vocabulary words with character n-grams. The word embeddings in all our experiments were initialized from this 300 dimensional pre-trained fasttext model and fixed during training for both translated English word and Chinese word. This model is crucial to the performance of the qeustion answering models according to our experimental results. \textbf{Chinese character embeddings}: We pre-train a skip-gram model on characters of traditional Chinese Wikipedia articles using Gensim\footnote{Genism:\\ \url{ https://radimrehurek.com/gensim/models/word2vec.html} }. \end{itemize} \vspace{-5mm} \begin{center} \begin{table*}[] \centering \caption{Data statistics of ODSQA, DRCD-TTS and DRCD-backtrans. The average document length and the average question length are denoted as Avg D Len and Avg Q Len respectively and they stand for the total numbers of Chinese characters. Since training with noisy data and different speakers will make QA model more robust during testing, the number of speakers is large. ODSQA testing set is denoted as ODSQA-test. } \label{tab:statistics} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{Subsets} & \textbf{QApair}& \textbf{Hours} &\textbf{M-spkrs}&\textbf{F-Spkrs}&\textbf{WER(\%)}&\textbf{WER-Q(\%)}&\textbf{Avg D Len}&\textbf{Avg Q Len}\\ \hline \hline (1) ODSQA-test & 1,465 & 25.28 & 7 & 13 & 19.11 & 18.57 & 428.32 & 22.08\\ \hline (2) DRCD-TTS & 16,746 & -- & -- & -- & 33.63 & -- & 332.80 & 20.53\\ \hline (3) DRCD-backtrans & 15,238 & -- & -- & -- & 45.64 & -- & 439.55 & 20.75\\ \hline \end{tabular} \vspace{-3mm} \end{table*} \end{center} \vspace{-8mm} \subsection{Baselines} \label{subsec:baseline} We chose several competitive reading comprehension models here. The models are listed as follow: \begin{itemize} \item \textbf{BiDirectional Attention Flow (BiDAF)}~\cite{seo2016bidirectional}: In BIDAF, both character-level and word-level embeddings are incorporated. A Bi-directional attention flow mechanism, which computes attentions in two directions: from context to query as well as from query to context is introduced to obtain a query-aware context representation. \item \textbf{R-NET}~\cite{wang2017gated}: In R-NET, the dependency in long context is captured more than plain recurrent neural network. A self-matching mechanism is introduced to dynamically refine context representation with information from the whole context. \item \textbf{QANet}~\cite{yu2018qanet}: There are completely no recurrent networks in QANet. Its encoder is composed of exlusively of convolution and self-attention. The intuition is that convolution components model local interactions and self-attention components model global interactions. Due to the removal of recurrent networks, it's training speed is 5x faster than BiDAF when reaching the same performance on SQuAD dataset. \item \textbf{FusionNet}~\cite{huang2017fusionnet}: There are mainly two contributions in FusionNet. First is the \textbf{History of Word}, in which all representations of a word from lowest level word embedding to the highest semantic level are concatenated to be the final representation of this word. Second is the \textbf{Fully-aware Multi-level Attention Mechanism}, which captures the complete information in one text (such as a question) and exploits it in its counterpart (such as a context or passage) layer by layer. \item \textbf{Dr.QA}~\cite{chen2017reading}: Dr.QA is a rather simple neural network architecture compared to the previous introduced models. It basically is composed of multi-layer bidirectional long short-term memory networks. It utilizes some linguistic features such as part-of-speech tagging and name entity recognition. \end{itemize} In our task, during testing stage, all the baseline QA models take into a machine-transcribed spoken document and a machine-transcribed spoken question as input, and the output is an extracted span from the ASR transcription of document. We train these baseline QA models on DRCD training set and compare the performance between DRCD dev set and ODSQA testing set. \subsection{Artificially Generated Corpus} It is reported that training on transcriptions with ASR errors are better than training on text.\cite{li2018spoken}, so we conduct the following procedures to generate transcriptions of spoken version DRCD. First, we used iFLYTEK Text-to-Speech system \footnote{iFLYTEK Text-to-Speech system\\ \url{ https://www.xfyun.cn/doccenter/tts}} to generate the spoken version of the articles in DRCD. Then we utilized iFLYTEK ASR system to obtain the corresponding ASR transcriptions. In this corpus, we left the questions in the text form. If the answer to a question did not exist in the ASR transcriptions of the associated article, we removed the question-answer pair from the corpus. This artificially generated corpus is called \textbf{DRCD-TTS} and its data statistics is shown in row(2) of Table~\ref{tab:statistics}. \subsection{Back-translation Corpus} To improve the robustness to speech recognition errors of QA model, we augmented DRCD training dataset with back-translation. We conduct the following procedures to generate an augmented training set. First, the DRCD training set is translated using Google Translation system into English. Then this set is translated back into Chinese through Google Translation system. We chose English as the pivot language here because English is the most common language and the translation quality is probably the best. Because the task is extraction-based QA, the ground-truth answer must exist in the document. Therefore, we removed the examples which cannot fulfill the requirement of extraction-based QA after translation. This resulting dataset is called \textbf{DRCD-backtrans} and its statistics is shown in row(3) of Table~\ref{tab:statistics}. \begin{comment} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{augmentation.png} \caption{Flow diagram of the generation procedure of DRCD-backtrans and DRCD-TTS dataset.} \label{fig:augmentation} \end{figure} \end{comment} \subsection{Result} First of all, we show the performance of the baseline models that are briefly introduced in subsection 5.2. All the QA models were trained on DRCD then test on DRCD dev set and ODSQA testing set respectively to compare the performance between text documents and spoken documents. Secondly, we compare the performance of QA models with and without the proposed pingyin sequence embedding. Thirdly, we show how co-training with \textbf{DRCD-TTS} and \textbf{DRCD-backtrans} benefit. Last but not the least, we compare the performance between spoken question and text question. \textbf{Investigating the Impact of ASR Errors}. We trained five reading comprehension models mentioned in Section 5.2 on the DRCD training set and these five models are tested on DRCD dev set and ODSQA testing set. In the following experiments, we do not consider the spoken documents whose answers do not exist in the ASR transcriptions because the model can never obtain the correct answers in these cases. % To make the comparison fair, the DRCD dev set are filtered to contain only the same set of examples. As shown in Table~\ref{tab:stateoftheart}, across the five models with char-based input, the average F1 score on the text document is 81.05\%. The average F1 score fell to 63.67\% when there are ASR errors. Similar phenomenon is observed on EM. The impact of ASR errors is significant for machine comprehension models. Since the author of BiDAF released its source code\footnote{BiDAF:Bi-directional Attention Flow for Machine Comprehension \\\url{https://github.com/allenai/bi-att-flow}} and its decent performance over ODSQA testing set, we use it as the base model with char-based input for further experiments. \begin{table}[] \centering \vspace{-3mm} \caption{Experiment results for state-of-the-art QA models demonstrating degrading performance under spoken data. All models were trained on the full DRCD training set. FusionNet is denoted by F-NET. DRCD dev set and ODSQA testing set are denoted by DRCD-dev and ODSQA-test, respectively. } \label{tab:stateoftheart} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{MODEL}}} & \multicolumn{2}{|c|}{\textbf{DRCD-dev}} & \multicolumn{2}{|c|}{\textbf{ODSQA-test}} \\ \cline{2-3}\cline{4-5} \multicolumn{1}{|c|}{} & EM & F1 & EM & F1 \\ \hline \hline BiDAF-word(a) & 56.45 & 70.57 & 39.38 & 55.1 \\ BiDAF-char(b) & 70.23 & 81.65 & 55.29 & 67.16 \\ \hline R-NET-word & 70.38 & 79.25 & 36.68 & 46.55 \\ R-NET-char & 69.90 & 79.49 & 43.44 & 55.83 \\ \hline QAnet-word & 69.83 & 78.33 & 49.80 & 59.35 \\ QAnet-char & 70.78 & 80.83 & 46.52 & 59.11 \\ \hline Dr.QA-word & 63.21 & 74.11 & 41.39 & 54.28 \\ Dr.QA-char & 70.24 & 81.19 & 56.22 & 68.99 \\ \hline F-Net-word & 57.54 & 70.86 & 45.39 & 57.40 \\ F-Net-char & 71.33 & 82.12 & 47.98 & 67.26 \\ \hline \hline \textbf{Average-word} & 63.48 & 74.62 & 42.52 & 54.53 \\ \textbf{Average-char} & 70.49 & 81.05 & 49.89 & 63.67 \\ \hline \end{tabular} \vspace{-3mm} \end{table} \textbf{Mitigating ASR errors by Subword Units}. We utilized an open-sourced chinese mandarin lexicon tool \footnote{DaCiDian : an open-sourced chinese mandarin lexicon for automatic speech recognition(ASR) \\ \url{https://github.com/aishell-foundation/DaCiDian}} to convert each word into sequence of pingyin-tokens and then fed the ping-yin tokens into Pingyin-CNN network to obtain pingyin-token sequence embedding. In this work, we didn't utilize tone information in pingyin-tokens. We leave it as a future work. The network details are listed as follow: pingyin-token embedding size 6, filter size 3x6 and numbers of filters 100. Different from~\cite{li2016phoneme} using one-hot vector, we choose distributed representation vectors to represent sub-word units. The experimental results with and without the proposed sub-word unit approach are in Table \ref{tab:approaches}. We see from Table \ref{tab:approaches}, using the combination of word embedding and the proposed pingyin sequence embedding is better than just word embedding (row (b)(d)(f)(h)(j)(l) vs. (a)(c)(e)(g)(i)(k)). The average EM score is improved by 1.3 by using pingyin sequence embedding over ODSQA testing set. \textbf{Data augmentation}. To improve the robustness to speech recognition errors of QA models, we augmented training data DRCD with DRCD-TTS and DRCD-backtrans. The experiment results are shown in Table \ref{tab:approaches}. We can see from Table \ref{tab:approaches}, training with the combination of DRCD and DRCD-backtrans or training with the combination of DRCD and DRCD-TTS are all better than training with only DRCD (row (g)(i) vs. (a) and row(h)(j) vs. row(b)). And finally training with the combination of DRCD, DRCD-TTS and DRCD-backtrans with pingyin sequence embedding obtains the best results (row(l)) which is better than baseline (row (a)) by almost 4 F1 score. Therefore, data augmentation proves to be helpful in boosting performance. \begin{table}[] \centering \vspace{-5mm} \caption{Comparison experiments demonstrating that the proposed sub-word units improved EM/F1 scores over both DRCD-dev and ODSQA-test. We use BiDAF as our base model in all experiments. Furthermore, augmenting DRCD with DRCD-TTS and DRCD-backtrans also gain improvements. Training with the combination of DRCD and DRCD-backtrans, the combination of DRCD and DRCD-TTS and the combination of DRCD, DRCD-TTS and DRCD-bakctrans are denoted as DRCD+back, DRCD+TTS and DRCD+TTS+back respectively.} \label{tab:approaches} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{MODEL}}} & \multicolumn{2}{|c|}{\textbf{DRCD-dev}} & \multicolumn{2}{|c|}{\textbf{ODSQA-test}} \\ \cline{2-3}\cline{4-5} \multicolumn{1}{|c|}{} & EM & F1 & EM & F1 \\ \hline \hline DRCD (a) & 70.23 & 81.65 & 55.29 & 67.16 \\ +pingyin (b) & 71.05 & 81.82 & 55.49 & 68.79 \\ \hline DRCD-TTS (c) & 59.24 & 72.64 & 50.64 & 63.65 \\ +pingyin (d) & 61.36 & 74.22 & 51.74 & 64.59 \\ \hline DRCD-back (e) & 58.56 & 72.31 & 46.55 & 61.52 \\ +pingyin (f) & 58.63 & 72.97 & 48.2 & 62.82 \\ \hline \hline DRCD+TTS (i) & 70.51 & 81.85 & 55.97 & 69.31 \\ +pingyin (j) & 71.53 & 82.42 & 56.65 & 69.45 \\ \hline DRCD+back (g) & 71.39 & 82.28 & 55.29 & 68.49 \\ +pingyin (h) & 71.8 & 82.4 & 57.6 & 69.26 \\ \hline \hline DRCD+TTS+back (k) & 72.21 & 82.8 & 57.61 & 70.29 \\ +pingyin (l) & \textbf{72.76} & \textbf{83.15} & \textbf{59.52} & \textbf{71.01} \\ \hline \hline \textbf{Average (m)} & 67.02 & 78.92 & 53.55 & 66.73 \\ \textbf{Average-pingyin (n)} & 67.85 & 79.49 & 54.86 & 67.65 \\ \hline \end{tabular} \vspace{-5mm} \end{table} \textbf{Comparison Between Text Question and ASR Transcribed Question}. ASR errors on question will affect the reasoning of a QA model. In this part, we compare the performance between input with text questions and input with ASR-transcribed questions. We can see from Table \ref{tab:textq}, the average F1 score fell from 71.61\% to 66.73\% when there are ASR errors in question. Similar phenomenon is observed on EM. Once again, we can see that using pingyin sequence embedding brings improvement (row(h) vs (g)) even with text question as input. \begin{table}[] \centering \vspace{-8mm} \caption{Comparison experiments between input with text question and input with transcribed question. We use BiDAF as our base model in all experiments.} \label{tab:textq} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{MODEL}}} & \multicolumn{2}{|c|}{\textbf{Text-Q}} & \multicolumn{2}{|c|}{\textbf{Spoken-Q}} \\ \cline{2-3}\cline{4-5} \multicolumn{1}{|c|}{} & EM & F1 & EM & F1 \\ \hline \hline DRCD (a) & 59.63 & 72.02 & 55.29 & 67.16 \\ +pingyin (b) & 61.47 & 72.93 & 55.49 & 68.79 \\ \hline DRCD-TTS (c) & 54.43 & 67.18 & 50.64 & 63.65 \\ +pingyin (d) & 55.39 & 68.12 & 51.74 & 64.59 \\ \hline DRCD-back (a) & 52.45 & 67.13 & 46.55 & 61.52 \\ +pingyin (b) & 53.41 & 68.57 & 48.2 & 62.82 \\ \hline \hline DRCD+TTS (i) & 61.95 & 73.78 & 55.97 & 69.31 \\ +pingyin (j) & 62.43 & 74.3 & 56.65 & 69.45 \\ \hline DRCD+back (c) & 62.22 & 74.33 & 55.29 & 68.49 \\ +pingyin (d) & 62.7 & 74.81 & 57.6 & 69.26 \\ \hline \hline DRCD+TTS+back (e) & 63.11 & 75.27 & 58.29 & 69.94 \\ +pingyin (f) & 64.54 & 75.63 & 59.52 & 70.95 \\ \hline \hline \textbf{Average (g)} & 58.96 & 71.61 & 53.55 & 66.73\\ \textbf{Average-pingyin (h)} & 59.99 & 72.39 & 54.86 & 67.65 \\ \hline \end{tabular} \vspace{-5mm} \end{table} \begin{comment} \begin{table}[] \centering \caption{EM/F1 scores over ODSQA testing set. Row (a) is the word-based baseline BIDAF model trained on DRCD training set. All the proposed approaches were trained jointly using the training examples of DRCD and DRCD-TTS. } \label{tab:approaches} \begin{tabular}{c|c|c|c|c} \hline \multicolumn{2}{|c|}{{\textbf{Approaches}}} & \multicolumn{1}{|c|}{} &\textbf{EM} & \textbf{F1} \\ \hline \hline \multicolumn{2}{|c|}{{Baseline}} & (a) & 56.27 & 74.49 \\ \hline\hline CHAR & share & (f) & 00 & 00 \\ CHAR & share +syllable & (g) & 00 & 00 \\ CHAR & share + GAN & (h) & 00 & 00 \\ CHAR & share +syllable+ GAN & (i) & 00 & 00 \\ \hline \end{tabular} \end{table} \end{comment} \section{Concluding Remarks} \vspace{-1.5mm} We release an SQA dataset, ODSQA. By testing several models, we found that ASR errors have catastrophic impact on SQA. We found that subword units bring consistent improvements over all the models. We also found that using back-translation and TTS to augment the text-based QA training examples can help SQA. In the future work, we are collecting more data for SQA. \bibliographystyle{IEEEbib}
\section{Introduction} It is well known that a set $\{ \varphi_m \}_{ m = 1 }^M$ of $M\,(\geqslant N)$ unit vectors in $\mathbb{R}^N$ satisfies \begin{equation*} \max_{ m \ne n } | \langle \varphi_m ,\varphi_n \rangle | \geqslant \sqrt{ \frac{ M - N }{ N ( M - 1 ) } }, \end{equation*} and equality is attained if and only if $\{ \varphi_m \}_{ m = 1 }^M$ is \emph{equiangular}, i.e., $| \langle \varphi_m ,\varphi_n \rangle |$ is constant for $m\ne n$, and it is a \emph{tight frame}, i.e., there is a constant $c > 0$ such that \begin{equation*} \sum_{ m = 1 }^M \langle \varphi, \varphi_m \rangle^2 = c \| \varphi \|^2 \end{equation*} for all $\varphi \in \mathbb{R}^N$; see, e.g., \cite{Welch1974IEEE}, \cite[\S10.6.2]{BH2012B}. It seems that the concept of real equiangular tight frames (real ETFs) first appeared in Van Lint and Seidel \cite{LS1966IM}; see also Lemmens and Seidel \cite{LS1973JA}. We refer the reader to \cite{CRT2008P,FJMP2018JCTA,FM2015pre,FMT2012LAA,IJM2020DCG,IM2019pre,Singh2010LAA,Waldron2009LAA} for some of the recent activities on ETFs. This paper presents all nontrivial real ETFs obtained as spherical embeddings of primitive rank $3$ graphs on $M$ vertices, and those such that one of their associated $M$ strongly regular graphs on $M-1$ vertices is a primitive rank $3$ graph. This research was started when Ei.B. received an email from Bob Griess dated July 29, 2012. The content of his email was essentially as follows. \begin{quote} ``There are interesting examples of spherical codes which come from VOA theory. For instance, there is a code in $156$-dimensional Euclidean space with $496$ unit vectors and group $O^+_{ 10 } ( 2 )$. If $x$ is one unit vector, it has inner product $0$ with $255$ others and inner product $1/8$ with $240$ others. (The points of the spherical code are rescales of the so-called conformal vectors of central charge $1/2$). Do you know anything about this code? For example, is it close to being extremal in any sense?'' \end{quote} \noindent (He also mentioned that the above procedure is applied to the Monster simple group acting in $196884$ dimensions.) The reply to his inquiry was essentially as follows. \begin{quote} ``The $496$ points are on the hyperplane in $\mathbb{R}^{156}$ which is at distance $1/4$ from the origin. If we regard this $496$ point set as a spherical code on the unit sphere in the $155$-dimentional Euclidean space, then the inner products become $1/15$ and $-1/15$. So, we have an equiangular line system. Moreover, this gives an extremal property of being a real equiangular tight frame. Also, this construction of real equiangular tight frames is generalized for $O^+_{ 2 m } ( 2 )$ and $O^-_{ 2 m } ( 2 )$ (for all $m$) acting as transitive rank $3$ groups (i.e., strongly regular graphs) on the set of non-isotropic points.'' \end{quote} Then, Ei.B., Et.B., and H.T. quickly found four other infinite families by looking at the eigenmatrices of various primitive rank $3$ graphs but did not publish the results. In Winter 2021, C.-Y.L. and W.-H.Y. joined the project of finding \emph{all} nontrivial real ETFs obtained as spherical embeddings of primitive rank $3$ graphs. Our work was greatly facilitated by the very recent book written by Brouwer and Van Maldeghem \cite{BVM2022B}, which includes descriptions of all primitive rank $3$ graphs. Besides the above $2+4=6$ infinite families, it turns out that there are only two sporadic examples. Associated with every nontrivial real ETF $\{ \varphi_m \}_{ m = 1 }^M$ are $M$ strongly regular graphs on $M-1$ vertices having the same parameters. As a related problem, we also determine all nontrivial real ETFs such that one of these strongly regular graphs is a primitive rank $3$ graph. Our main results are Theorems \ref{classification} and \ref{rank 3 Waldron}. This paper is organized as follows. Section \ref{sec: SRGs} recalls the necessary facts about strongly regular graphs and their spherical embeddings. In Section \ref{sec: rank 3 graphs}, we summarize the classification of primitive rank $3$ graphs based on Brouwer and Van Maldeghem \cite{BVM2022B} and identify all those whose spherical embeddings give rise to real ETFs. Section \ref{sec: real ETFs from SRGs} describes these six infinite families and two sporadic graphs. Finally, Section \ref{sec: Waldron ETFs} discusses nontrivial real ETFs, one of whose associated strongly regular graphs is a primitive rank $3$ graph. \section{Spherical embeddings of strongly regular graphs} \label{sec: SRGs} Let $\Gamma$ be a finite simple graph with vertex set $X$, which is neither complete nor edgeless. Recall that $\Gamma$ is called a \emph{strongly regular graph} (\emph{SRG}) \emph{with parameters} $(v,k,\lambda,\mu)$ if it has $v$ vertices and is $k$-regular, and every pair of adjacent (resp.~distinct and nonadjacent) vertices has exactly $\lambda$ (resp.~$\mu$) common neighbors. For the rest of this section, suppose that $\Gamma$ is an SRG with parameters $(v,k,\lambda,\mu)$. The complement $\overline{\Gamma}$ of $\Gamma$ is an SRG with parameters $(v,\bar{k},\bar{\lambda},\bar{\mu})$, where $\bar{k}=v-k-1$, $\bar{\lambda}=v-2k+\mu-2$, and $\bar{\mu}=v-2k+\lambda$; cf.~\cite[\S 1.1.2]{BVM2022B}. We also assume that $\Gamma$ is \emph{primitive}, i.e., $\Gamma$ is neither a complete multipartite graph $(\mu=k)$ nor a union of complete graphs $(\mu=0)$. Let $A$ and $\overline{A}=J-I-A$ be the adjacency matrices of $\Gamma$ and $\overline{\Gamma}$, respectively, where $J$ is the all one's matrix. Then $\bm{A} := \mathrm{span} \{ I, A, \overline{A} \}$ is a three-dimensional algebra since $A^2=kI+\lambda A+\mu \overline{A}$. The algebra $\bm{A}$ is known as the \emph{Bose--Mesner algebra} of $\Gamma$; cf.~\cite[\S1.3.2]{BVM2022B}. The matrix $A$ has eigenvalue $k$ with multiplicity one since $\Gamma$ is connected and $k$-regular. It has two other distinct eigenvalues $r>s$ and these are the roots of the quadratic equation $\xi^2+(\mu-\lambda)\xi+(\mu-k)=0$; cf.~\cite[\S 1.1.1]{BVM2022B}. Let $E_k, E_r, E_s$ be the orthogonal projections onto the eigenspaces $V_k, V_r, V_s$ of $A$ with eigenvalues $k, r, s$, respectively. We note that $E_k = v^{ - 1 } J$ and that $E_k, E_r$, and $E_s$ form a basis of $\bm{A}$. Define the $3\times 3$ matrices $P$ and $Q$ by $( I, A, \overline{A} ) = ( E_k, E_r, E_s ) P$ and $v ( E_k, E_r, E_s ) = ( I, A, \overline{A} ) Q$. We call $P$ and $Q$ the \emph{first} and \emph{second eigenmatrices} of $\Gamma$, respectively. We have \begin{equation*} P = \begin{pmatrix} 1 & k & \bar{k} \\ 1 & r &\bar{s} \\ 1 & s & \bar{r} \end{pmatrix}\!, \end{equation*} where $\bar{r}=-s-1$ and $\bar{s}=-r-1$ are the distinct eigenvalues of $\overline{A}$ other than $\bar{k}$. Moreover, it follows that \begin{equation*} Q = \begin{pmatrix} 1 & f & g \\ 1 & fr/k & gs/k \\ 1 & f\bar{s}/\bar{k} & g\bar{r}/\bar{k} \end{pmatrix}\!, \end{equation*} where $f=\dim V_r$ and $g = \dim V_s=v-f-1$. Observe in particular that \begin{equation}\label{E_r} E_s = \frac{ g }{ v } \left( I + \frac{ s }{ k } A + \frac{ \bar{r} }{ \bar{k} } \overline{A} \right). \end{equation} For every $x \in X$, let $( E_s )_x$ denote the $x^{ \mathrm{th} }$ column of $E_s$, and consider the set $\{ \varphi_x : x \in X \}$ of $v$ (column) vectors in the Euclidean space $V_s \, ( \cong \mathbb{R}^g )$ by \begin{equation*} \varphi_x = \sqrt{ \frac{ v }{ g } } ( E_s )_x \qquad ( x \in X ). \end{equation*} Then it follows from \eqref{E_r} that the $\varphi_x$ are unit vectors in $V_s$ and that \begin{equation}\label{angles} \langle \varphi_x, \varphi_y \rangle = \begin{cases} s/k & \text{if $x, y$ are adjacent}, \\ \bar{r}/\bar{k} & \text{if $x, y$ are nonadjacent}, \end{cases} \qquad ( x, y \in X, \ x \ne y ). \end{equation} In this paper, we call $\{ \varphi_x : x \in X \}$ the \emph{spherical embedding} of $\Gamma$. Since \begin{equation*} \sum_{x\in X} \langle \varphi, \varphi_x \rangle^2 = \frac{v}{g} \varphi^{\mathsf{T}} E_s \varphi = \frac{v}{g} \| \varphi \|^2 \qquad (\varphi\in V_s), \end{equation*} the spherical embedding of $\Gamma$ gives a real ETF with $(M,N)=(v,g)$ if and only if it is equiangular, i.e., \begin{equation}\label{equiangular} \frac{s}{k}=-\frac{\bar{r}}{\bar{k}}. \end{equation} If this is the case, then the columns of the matrix \begin{equation*} \sqrt{ \frac{ v }{ v-g } } ( I - E_s ) = \sqrt{ \frac{ v }{ v-g } } ( E_k + E_r ) \end{equation*} also define a real ETF with $(M,N)=(v,v-g)$ in $V_k+V_r \, ( \cong \mathbb{R}^{v-g} )$, known as the \emph{Naimark complement} of $\{ \varphi_x : x \in X \}$. We note that the spherical embedding of $\overline{\Gamma}$ consists of the normalized columns of $E_r$. If $g=1$, then it follows from \eqref{angles} that $\Gamma$ is a complete bipartite graph, which we exclude here. Hence we have $g\geqslant 2$. Likewise, we have $f\geqslant 2$. It follows that the real ETFs obtained in this way are always nontrivial, i.e., the pairs $(M,N)$ satisfy $2\leqslant N\leqslant M-2$.\footnote{The \emph{trivial} real ETFs are those with (a) $N=M$ (orthonormal bases); (b) $N=M-1$ (vertices of regular simplices); or (c) $N=1$. Cases (b) and (c) are the Naimark complements of each other.} It is known (cf.~\cite[\S8.14]{BVM2022B}) that every dependent (i.e., $M>N$) real ETF gives rise to a \emph{regular two-graph}, that is to say, a $2$-design with block size three such that every $4$-set of vertices (or points) contains an even number of blocks. Specifically, the vertices of the latter are indexed by the vectors of the former, where a $3$-set of vertices is a block if and only if an odd number of angles between the corresponding three vectors are obtuse angles. Conversely, every regular two-graph is obtained in this way; see, e.g., \cite[\S\S10.3, 10.6]{BH2012B}, \cite[\S11.4]{GR2001B}. By \cite[Proposition 10.3.2]{BH2012B}, one of the spherical embeddings of $\Gamma$ and $\overline{\Gamma}$ is a real ETF if and only if \begin{equation}\label{regular two-graph} v=2(2k-\lambda-\mu)\,(=2(2\bar{k}-\bar{\lambda}-\bar{\mu})). \end{equation} \section{Primitive rank \texorpdfstring{$3$}{3} graphs} \label{sec: rank 3 graphs} Brouwer and Van Maldeghem \cite{BVM2022B} described all \emph{primitive rank $3$ graphs}, i.e., those SRGs admitting primitive rank $3$ automorphism groups. Recall that the \emph{socle} of a group $G$ is the subgroup generated by the minimal normal subgroups of $G$. \begin{theorem}[{\cite[Theorem 11.1.1]{BVM2022B}}] Let $\Gamma$ be a primitive strongly regular graph on $v$ vertices, and let $G$ be a primitive rank $3$ permutation group acting as an automorphism group of $\Gamma$. Then one of the following holds. \begin{enumerate} \item[(i)] $T\times T \lhd G \leq T_0\wr 2$, where $T_0$ is a $2$-transitive group of degree $v_0$, the socle $T$ of $T_0$ is simple, and $v=v_0^2$. \item[(ii)] The socle $L$ of $G$ is (nonabelian) simple. \item[(iii)] The group $G$ is an affine group, i.e., $G$ has a regular elementary abelian normal subgroup and $v$ is a power of a prime. \end{enumerate} \end{theorem} See also \cite{Fawcett2009M}. Case (i) is discussed in \cite[\S11.2]{BVM2022B}. There are $25$ kinds of groups, but all the groups generate the same class of the lattice graphs $L_2(q)$ (cf.~\cite[\S 1.1.8]{BVM2022B}). Case (ii) has further four subcases: alternating socle, classical simple socle, exceptional simple socle, and sporadic simple socle; cf.~\cite[\S11.3]{BVM2022B}. Case (iii) has further three subcases: infinite classes, extraspecial classes, and exceptional classes; cf.~\cite[\S11.4]{BVM2022B}. The infinite families from Cases (ii) and (iii) are summarized in Tables \ref{Case (ii) families} and \ref{Case (iii) families}, respectively.\footnote{For some families, there are additional conditions to be of rank $3$.} Most\footnote{The sporadic examples missing in \cite[\S11.5]{BVM2022B} are those in the classical simple socle subcase of Case (ii), i.e., Theorem 11.3.2\,(v), (vi), (vii), (x), and Theorem 11.3.3\,(iii) in \cite[\S11.3.2]{BVM2022B}.} of the sporadic examples are listed in \cite[\S11.5]{BVM2022B}. \begin{table}[h] \begin{minipage}{5.5cm} \begin{center} \begin{tabular}{c|c} \hline\hline Graph& Reference \\ \hline $T(n)$ & \cite[\S1.1.7]{BVM2022B} \\ $\mathsf{Sp}_{2n}(q)$ & \cite[\S2.5]{BVM2022B} \\ $\mathsf{O}_{2n+1}(q)$ & \cite[\S2.6]{BVM2022B} \\ $\mathsf{O}_{2n}^+(q)$ & \cite[\S2.6]{BVM2022B} \\ $\mathsf{O}_{2n+2}^-(q)$ & \cite[\S2.6]{BVM2022B} \\ $\mathsf{U}_{2n}(\sqrt q)$ & \cite[\S2.7]{BVM2022B} \\ $\mathsf{U}_{2n+1}(\sqrt q)$ & \cite[\S2.7]{BVM2022B} \\ dual polar & \cite[\S2.2.11]{BVM2022B} \\ half dual polar & \cite[\S2.2.12]{BVM2022B} \\ $\mathit{NU}_n(q)$ & \cite[\S3.1.6]{BVM2022B} \\ $\mathit{NO}_{2n}^{\varepsilon}(2)$ & \cite[\S3.1.2]{BVM2022B} \\ $\mathit{NO}_{2n}^{\varepsilon}(3)$ & \cite[\S3.1.3]{BVM2022B} \\ $\mathit{NO}_{2n+1}^{\varepsilon}(q)$ & \cite[\S3.1.4]{BVM2022B} \\ $J_q(n,2)$ & \cite[\S3.5.1]{BVM2022B} \\ $\mathsf E_{6,1}(q)$ & \cite[\S4.9]{BVM2022B} \\ \hline\hline \end{tabular} \end{center} \caption{Case (ii) families}\label{Case (ii) families} \end{minipage}\!\! \begin{minipage}{6cm} \begin{center} \begin{tabular}{c|c} \hline\hline Graph & Reference \\ \hline $P(q)$ & \cite[\S1.1.9]{BVM2022B} \\ $P^*(q)$ & \cite[\S7.3.6]{BVM2022B} \\ Van Lint-Schrijver & \cite[\S7.3.1]{BVM2022B} \\ $L_2(q)$ & \cite[\S1.1.8]{BVM2022B} \\ $H_q(2,e)$ & \cite[\S3.4.1]{BVM2022B} \\ $\mathit{VO}_{2n}^{\varepsilon}(q)$ & \cite[\S3.3.1]{BVM2022B} \\ alternating forms & \cite[\S3.4.2]{BVM2022B} \\ $\mathit{VD}_{5,5}(q)$ & \cite[\S3.3.3]{BVM2022B} \\ $\mathit{VSz}(q)$ & \cite[\S3.3.1]{BVM2022B} \\ \hline\hline \end{tabular} \end{center} \caption{Case (iii) families}\label{Case (iii) families} \end{minipage} \end{table} \begin{theorem}\label{classification} The primitive rank $3$ graphs whose spherical embeddings give rise to nontrivial real equiangular tight frames are given in Table \ref{classification table}. \end{theorem} \begin{table}[h] \begin{center} \begin{tabular}{c|c|c|c} \hline\hline Graph & $M$ & $N$ & $M-N$ \\ \hline $\rule{0pt}{13pt} \mathit{NO}_{2n}^+(2)$\, $(n\geqslant 3)$ & $2^{n-1}(2^n-1)$ & $\frac{ ( 2^{n - 1 } - 1 ) ( 2^n - 1 ) }{ 3 }$ & $\frac{ 2^{2n}-1 }{ 3 }$ \\[1mm] $\overline{\mathit{NO}_{2n}^-(2)}$\, $(n\geqslant 2)$ & $2^{n-1}(2^n+1)$ & $\frac{ ( 2^{n - 1 } + 1 ) ( 2^n+ 1 ) }{ 3 }$ & $\frac{ 2^{2n}-1 }{ 3 }$ \\[1mm] $\mathit{NO}_{2n+1}^+(4)$\, $(n\geqslant 1)$ & $\frac{4^n ( 4^n + 1 )}{2}$ & $\frac{ 4^{2n}-1 }{ 3 }$ & $\frac{ (4^n+1)(4^n+2) }{ 6 }$ \\[1mm] $\overline{\mathit{NO}_{2n+1}^-(4)}$ \ $(n\geqslant 2)$ & $\frac{ 4^n ( 4^n - 1 ) }{2}$ & $\frac{ 4^{2n}-1 }{ 3 }$ & $\frac{ (4^n-1)(4^n-2) }{ 6 }$ \\[1mm] $\mathit{VO}_{2n}^+(2)$\, $(n\geqslant 2)$ & $2^{ 2n }$ & $2^{n- 1 } ( 2^n- 1 )$ & $2^{n-1}(2^n+1)$ \\[1mm] $\overline{\mathit{VO}_{2n}^-(2)}$\, $(n\geqslant 2)$ & $2^{ 2n }$ & $2^{n- 1 } ( 2^n+ 1 )$ & $2^{n-1}(2^n-1)$ \\[1mm] $\rule{0pt}{13pt} \overline{\mathsf{G}_2(2)} $ & $36$ & $21$ & $15$ \\ $\rule{0pt}{13pt} \overline{\mathsf{M}_{22}} $ & $176$ & $154$ & $22$ \\ \hline\hline \end{tabular} \end{center} \caption{Primitive rank $3$ graphs yielding nontrivial real ETFs}\label{classification table} \end{table} \begin{proof} Routine verification of \eqref{equiangular} or \eqref{regular two-graph}. We also note the following isomorphisms: $T(5)\cong \overline{\mathit{NO}_4^-(2)}$ (cf.~\cite[\S10.3]{BVM2022B}), $\overline{T(8)}\cong \overline{\mathit{NO}_3^+(7)}\cong \mathit{NO}_6^+(2)$ (cf.~\cite[\S\S3.1.4, 3.6.1]{BVM2022B}), $\overline{L_2(4)}\cong \overline{\mathsf{O}_{4}^+(3)}\cong H_2(2,2)\cong \overline{\mathit{VO}_{2}^+(4)}\cong \mathit{VO}_4^+(2)$ (cf.~\cite[\S\S2.6.4, 3.3.1, 3.4.1]{BVM2022B}), $\mathit{NO}_5^-(3)\cong \overline{\mathit{NO}_6^-(2)}$ (cf.~\cite[\S10.15]{BVM2022B}), and $\mathit{VSz}(2)\cong\mathit{VO}^-_4(2)$ (cf.~\cite[\S2.5.5]{BVM2022B}). \end{proof} \section{Rank \texorpdfstring{$3$}{3} graphs yielding equiangular tight frames} \label{sec: real ETFs from SRGs} In this section, we describe the primitive rank $3$ graphs found in Theorem \ref{classification} for the convenience of the reader. \begin{example}[$\mathit{NO}^+_{ 2n } ( 2 )$]\label{example 1} Equip $\mathbb{F}_2^{2n}$ with a nondegenerate quadratic form of Witt index $n$ and let $X$ be the set of nonsingular points. Let $\Gamma=\mathit{NO}^+_{ 2n } ( 2 )$ have vertex set $X$, two vertices being adjacent when they are orthogonal. See \cite[\S3.1.2]{BVM2022B}. For $n\geqslant 3$, the graph $\mathit{NO}^+_{2n}(2)$ is a primitive rank $3$ graph with parameters $(2^{n-1}(2^n-1), 2^{2n-2}-1, 2^{2n-3}-2, 2^{n-2}(2^{n-1}+1) )$ and eigenmatrices \begin{equation*} P = \begin{pmatrix} 1 & 2^{2n-2}-1 & 2^{n-1}(2^{n-1}-1) \\ 1 & 2^{ n- 2 } - 1 & - 2^{ n- 2 } \\ 1 & - 2^{ n- 1 } - 1 & 2^{ n- 1 } \end{pmatrix}\!, \ \ Q = \begin{pmatrix} 1 & \frac{ 2^{2n}-4 }{ 3 } & \frac{ ( 2^{ n- 1 } - 1 ) ( 2^n - 1 ) }{ 3 } \\[1mm] 1 & \frac{ 2^n-4 }{ 3 } & - \frac{ 2^n - 1 }{ 3 } \\[1mm] 1 & - \frac{ 2^n+2 }{ 3 } & \frac{ 2^n - 1 }{ 3 } \end{pmatrix}\!. \end{equation*} \end{example} \begin{example}[$\overline{\mathit{NO}^-_{ 2n } ( 2 )}$] Equip $\mathbb{F}_2^{2n}$ with a nondegenerate quadratic form of Witt index $n-1$ and let $X$ be the set of nonsingular points. Let $\Gamma=\overline{\mathit{NO}^-_{ 2n } ( 2 )}$ have vertex set $X$, two vertices being adjacent when they are nonorthogonal. See \cite[\S3.1.2]{BVM2022B}. For $n\geqslant 2$, the graph $\overline{\mathit{NO}^-_{ 2n } ( 2 )}$ is a primitive rank $3$ graph with parameters $( 2^{n-1}(2^n+1), 2^{n-1}(2^{n-1}+1), 2^{n-2}(2^{n-1}+1), 2^{n-1}(2^{n-2}+1) )$ and eigenmatrices \begin{equation*} P = \begin{pmatrix} 1 & 2^{n-1}(2^{n-1}+1) & 2^{2n-2}-1 \\ 1 & 2^{ n - 2 } & - 2^{ n - 2 } - 1 \\ 1 & - 2^{ n - 1 } & 2^{ n - 1 } - 1 \end{pmatrix}\!, \ \ Q = \begin{pmatrix} 1 & \frac{ 2^{2n}-4 }{ 3 } & \frac{ ( 2^{ n - 1 } + 1 ) ( 2^n + 1 ) }{ 3 } \\[1mm] 1 & \frac{ 2^n-2 }{ 3 } & - \frac{ 2^n + 1 }{ 3 } \\[1mm] 1 & - \frac{ 2^n+4 }{ 3 } & \frac{ 2^n + 1 }{ 3 } \end{pmatrix}\!. \end{equation*} \end{example} \begin{example}[$\mathit{NO}^+_{ 2n + 1 } ( 4 )$] Equip $\mathbb{F}_4^{2n+1}$ with a nondegenerate quadratic form and let $X$ be the set of nonsingular hyperbolic hyperplanes. Let $\Gamma=\mathit{NO}^+_{ 2n+1 } ( 4 )$ have vertex set $X$, two vertices being adjacent when the restriction of the quadratic form to their intersection is degenerate. See \cite[\S3.1.4]{BVM2022B}. For $n\geqslant 1$, the graph $\mathit{NO}^+_{ 2n+1 } ( 4 )$ is a primitive rank $3$ graph with parameters $( 4^n ( 4^n + 1 )/2, ( 4^{ n - 1 } + 1 ) ( 4^n - 1 ), ( 4^{n-1}+2 ) ( 4^n-2 )/2, 4^n ( 4^{ n - 1 } + 1 )/2 )$ and eigenmatrices \begin{gather*} P = \begin{pmatrix} 1 & ( 4^{ n - 1 } + 1 ) ( 4^n - 1 ) & 4^{n-1}(4^n-1) \\ 1 & 2 \cdot 4^{ n - 1 } - 1 & - 2 \cdot 4^{ n - 1 } \\ 1 & - 4^{ n - 1 } - 1 & 4^{ n - 1 } \end{pmatrix}\!, \\ Q = \begin{pmatrix} 1 & \frac{ 2 ( 4^{ n - 1 } + 1 ) ( 4^n - 1 ) }{ 3 } & \frac{ 4^{2n}-1 }{ 3 } \\[1mm] 1 & \frac{ 4^n-2 }{ 3 } & - \frac{ 4^n + 1 }{ 3 } \\[1mm] 1 & - \frac{ 4 ( 4^{ n - 1 } + 1 ) }{ 3 } & \frac{ 4^n + 1 }{ 3 } \end{pmatrix}\!. \end{gather*} \end{example} \begin{example}[$\overline{\mathit{NO}^-_{ 2n + 1 } ( 4 )}$] Equip $\mathbb{F}_4^{2n+1}$ with a nondegenerate quadratic form and let $X$ be the set of nonsingular elliptic hyperplanes. Let $\Gamma=\overline{\mathit{NO}^-_{ 2n+1 } ( 4 )}$ have vertex set $X$, two vertices being adjacent when the restriction of the quadratic form to their intersection is nondegenerate. See \cite[\S3.1.4]{BVM2022B}. For $n\geqslant 2$, the graph $\overline{\mathit{NO}^-_{ 2n + 1 } ( 4 )}$ is a primitive rank $3$ graph with parameters $( 4^n ( 4^n - 1 )/2, 4^{ n - 1 } ( 4^n + 1 ), 4^n( 4^{n-1}+1 )/2, 4^{n-1} ( 4^n+2 )/2 )$ and eigenmatrices \begin{gather*} P = \begin{pmatrix} 1 & 4^{ n - 1 } ( 4^n + 1 ) & ( 4^{ n - 1 } - 1 ) ( 4^n + 1 ) \\ 1 & 2 \cdot 4^{ n - 1 } & - 2 \cdot 4^{ n - 1 } - 1 \\ 1 & - 4^{ n - 1 } & 4^{ n - 1 } - 1 \end{pmatrix}\!, \\ Q = \begin{pmatrix} 1 & \frac{ 2 ( 4^{ n - 1 } - 1 ) ( 4^n + 1 ) }{ 3 } & \frac{ 4^{2n}-1 }{ 3 } \\[1mm] 1 & \frac{ 4 ( 4^{ n - 1 } - 1 ) }{ 3 } & - \frac{ 4^n - 1 }{ 3 } \\[1mm] 1 & - \frac{ 4^n+2 }{ 3 } & \frac{ 4^n - 1 }{ 3 } \end{pmatrix}\!. \end{gather*} \end{example} \begin{example}[$\mathit{VO}^+_{ 2n } ( 2 )$] Equip $\mathbb{F}_2^{2n}$ with a nondegenerate quadratic form of Witt index $n$ and let $X=\mathbb{F}_2^{2n}$. Let $\Gamma=\mathit{VO}^+_{2n} ( 2 )$ have vertex set $X$, two vertices being adjacent when their difference is isotropic. See \cite[\S3.3.1]{BVM2022B}. For $n\geqslant 2$, the graph $\mathit{VO}^+_{2n} (2)$ is a primitive rank $3$ graph with parameters $( 2^{ 2 n }, ( 2^{ n - 1 } + 1 ) ( 2^n - 1 ), ( 2^{ n - 1 } + 2 ) ( 2^{ n - 1 } -1 ), 2^{ n - 1 } ( 2^{ n - 1 } + 1 ) )$ and eigenmatrices \begin{equation*} P = Q = \begin{pmatrix} 1 & ( 2^{ n - 1 } + 1 ) ( 2^n - 1 ) & 2^{ n - 1 } ( 2^n - 1 ) \\ 1 & 2^{ n - 1 } - 1 & - 2^{ n - 1 } \\ 1 & - 2^{ n - 1 } - 1 & 2^{ n - 1 } \end{pmatrix}\!. \end{equation*} \end{example} \begin{example}[$\overline{\mathit{VO}^-_{ 2n } ( 2 )}$]\label{example 6} Equip $\mathbb{F}_2^{2n}$ with a nondegenerate quadratic form of Witt index $n-1$ and let $X=\mathbb{F}_2^{2n}$. Let $\Gamma=\overline{\mathit{VO}^-_{2n} ( 2 )}$ have vertex set $X$, two vertices being adjacent when their difference is nonisotropic. See \cite[\S3.3.1]{BVM2022B}. For $n\geqslant 2$, the graph $\overline{\mathit{VO}^-_{ 2n } ( 2 )}$ is a primitive rank $3$ graph with parameters $( 2^{ 2 n }, 2^{ n - 1 } ( 2^n + 1 ), 2^{n-1}( 2^{ n - 1 } + 1 ), 2^{ n - 1 } ( 2^{ n - 1 } +1 ) )$ and eigenmatrices \begin{gather*} P = \begin{pmatrix} 1 & 2^{ n - 1 } ( 2^n + 1 ) & ( 2^{ n - 1 } - 1 ) ( 2^n + 1 ) \\ 1 & 2^{ n - 1 } & - 2^{ n - 1 } - 1 \\ 1 & - 2^{ n - 1 } & 2^{ n - 1 } - 1 \end{pmatrix}\!, \\ Q = \begin{pmatrix} 1 & ( 2^{ n - 1 } - 1 ) ( 2^n + 1 ) & 2^{ n - 1 } ( 2^n + 1 ) \\ 1 & 2^{ n - 1 } - 1 & - 2^{ n - 1 } \\ 1 & - 2^{ n - 1 } - 1 & 2^{ n - 1 } \end{pmatrix}\!. \end{gather*} \end{example} \begin{example}[$\overline{\mathsf{G}_2(2)}$] Let $X$ be the set consisting of the $7$ points, $7$ lines, and $21$ flags of the Fano plane, together with an additional element denoted by $\infty$. The graph $\overline{\mathsf{G}_2(2)}$ has vertex set $X$ and the adjacency is defined as follows: The vertex $\infty$ is adjacent to the flags. The points form a clique, and so do the lines. A point and a line are adjacent when they are incident. A point (resp.~line) is adjacent to the flags whose lines (resp.~ points) are not on it. Finally, two flags are adjacent when either they are not disjoint, or they are disjoint and the point of each of them is not on the line of the other. See \cite[\S10.14]{BVM2022B}. The graph $\overline{\mathsf{G}_2(2)}$ is a primitive rank $3$ graph with parameters $(36,21,12,12)$ and eigenmatrices \begin{equation*} P=\begin{pmatrix} 1 & 21 & 14 \\ 1 & 3 & -4 \\ 1 & -3 & 2 \end{pmatrix}\!, \quad Q=\begin{pmatrix} 1 & 14 & 21 \\[1mm] 1 & 2 & -3 \\[1mm] 1 & -4 & 3 \end{pmatrix}\!. \end{equation*} \end{example} \begin{example}[$\overline{\mathsf{M}_{22}}$] Let $X$ be the set of $176$ blocks of the unique quasi-symmetric $2$-$(22,7,16)$ design with block intersection numbers $1$ and $3$. Let $\Gamma=\overline{\mathsf{M}_{22}}$ have vertex set $X$, two vertices being adjacent when they intersect in $3$ points. See \cite[\S10.51]{BVM2022B}. The graph $\overline{\mathsf{M}_{22}}$ is a primitive rank $3$ graph with parameters $(176,105,68,54)$ and eigenmatrices \begin{equation*} P=\begin{pmatrix} 1 & 105 & 70 \\ 1 & 17 & -18 \\ 1 & -3 & 2 \end{pmatrix}\!, \quad Q=\begin{pmatrix} 1 & 21 & 154 \\[1mm] 1 & \frac{17}{5} & -\frac{22}{5} \\[1mm] 1 & -\frac{27}{5} & \frac{22}{5} \end{pmatrix}\!. \end{equation*} \end{example} \begin{remark} The six infinite families of SRGs in Examples \ref{example 1}--\ref{example 6} are part of more general families of association schemes obtained from actions of classical groups, and their eigenmatrices were extensively studied. See, e.g., \cite{Bannai1990P,BHS1990JCTA,BKS1990MFSKUA,BSHW1991JA,Kwok1991GC,Kwok1992EJC,ST2006P,Tanaka2001M,Tanaka2002EJC,Tanaka2004AG}. It may be interesting to point out that these SRGs and association schemes were used to construct Ramanujan graphs; cf.~\cite{BST2004EJC,BST2009DM}. \end{remark} Observe that some of the real ETFs in Theorem \ref{classification} have the same parameters up to taking the Naimark complements (see also Table \ref{comparison} in Section \ref{sec: Waldron ETFs}). We wonder if, for example, the real ETFs obtained from $\mathit{NO}_{2n+1}^+(4)$ and $\overline{\mathit{NO}_{4n}^-(2)}$ are equivalent, i.e., the graphs $\mathit{NO}_{2n+1}^+(4)$ and $\overline{\mathit{NO}_{4n}^-(2)}$ are switching equivalent. We also note that the real ETFs in Theorem \ref{classification} are still realized only as Gram matrices since the isomorphism $V_s\cong\mathbb{R}^N$ (where $N=g$) is not canonical. However, the families $\mathit{VO}^+_{ 2n } ( 2 )$ and $\overline{\mathit{VO}^-_{ 2n } ( 2 )}$ are Cayley graphs on elementary abelian $2$-groups, and their real ETFs can be given as explicit $N$-dimensional real unit column vectors. For $\mathit{VO}^+_{ 2n} ( 2 )$, we may assume that the quadratic form is given by \begin{equation*} q(x)=x_1x_2+\cdots+x_{2n-1}x_{2n} \end{equation*} for $x=(x_1,\dots,x_{2n})\in\mathbb{F}_2^{2n}$. The associated bilinear form is then given by \begin{equation*} B(x,y)=(x_1y_2+x_2y_1)+\cdots+(x_{2n-1}y_{2n}+x_{2n}y_{2n-1}) \end{equation*} for $x=(x_1,\dots,x_{2n}),y=(y_1,\dots,y_{2n})\in\mathbb{F}_2^{2n}$. Let $X_1$ be the set of nonisotropic points in $X=\mathbb{F}_2^{2n}$. Then $|X_1|=g=N=2^{n-1}(2^n-1)$, and the following column vectors form an orthonormal basis of $V_s$: \begin{equation*} \frac{1}{2^n}\big((-1)^{B(x,z)}\big)_{x\in X} \quad (z\in X_1). \end{equation*} See, e.g., \cite{Tanaka2004AG}. Hence, with respect to this basis, the vectors $\varphi_x$ $(x\in X)$ in the real ETF are expressed as $2^{n-1}(2^n-1)$-dimensional column vectors \begin{equation*} \frac{1}{\sqrt{2^{n-1}(2^n-1)}}\big((-1)^{B(x,z)}\big)_{z\in X_1} \quad (x\in X). \end{equation*} A similar discussion applies to $\overline{\mathit{VO}^-_{ 2n } ( 2 )}$. \section{Real equiangular tight frames having rank \texorpdfstring{$3$}{3} descendants} \label{sec: Waldron ETFs} We mentioned earlier that every dependent real ETF corresponds to a regular two-graph. If, moreover, the former is nontrivial, then the derived designs of the latter are SRGs with $v=M-1$ and $k=2\mu$. The derived designs are called the \emph{descendants} \cite[\S10.2]{BH2012B} and also the \emph{neighborhoods} \cite[\S11.5]{GR2001B} of the latter. Here, let us temporarily call them the \emph{descendants of the real ETF}. Conversely, if $\Gamma$ is an SRG with $k=2\mu$, then the matrix \begin{equation*} I +\frac{1}{1+2r} \left(\begin{array}{c|c} 0 & \bm{1}^{\mathsf{T}} \\ \hline \bm{1} & J-I-2A \end{array}\right) \end{equation*} is the Gram matrix of a nontrivial real ETF with $M=v+1$ and $N=g+1$, where $\bm{1}$ denotes the all one's vector. The graphs $\Gamma$ and $\overline{\Gamma}$ are then descendants of this real ETF and its Naimark complement, respectively. See \cite[\S10.3]{BH2012B}, \cite[\S11.6]{GR2001B}, and also \cite[\S5]{Waldron2009LAA}. \begin{theorem}\label{rank 3 Waldron} The primitive rank $3$ graphs, up to taking complements, which are descendants of nontrivial real equiangular tight frames, are given in Table \ref{Waldron table}. \end{theorem} \begin{table}[ht] \begin{center} \begin{tabular}{c|c|c|c} \hline\hline Graph & $M$ & $N$ & $M-N$ \\ \hline $\rule{0pt}{13pt} \mathsf{Sp}_{2n}(2)$\, $(n\geqslant 2)$ & $2^{2n}$ & $2^{n-1}(2^n-1)$ & $2^{n-1}(2^n+1)$ \\[1mm] $\mathsf{O}_{2n}^+(2)$\, $(n\geqslant 2)$ & $2^{n-1}(2^n+1)$ & $\frac{ 2^{2n}-1 }{ 3 }$ & $\frac{ ( 2^{ n - 1 } + 1 ) ( 2^n + 1 ) }{ 3 }$ \\[1mm] $\mathsf{O}_{2n+2}^-(2)$\, $(n\geqslant 2)$ & $2^n(2^{n+1}-1)$ & $\frac{ (2^n-1)(2^{n+1}-1) }{ 3 }$ & $\frac{ 2^{2n+2}-1 }{ 3 }$ \\[1mm] $P(q)$ & $q+1$ & $\frac{ q+1 }{2}$ & $\frac{ q+1 }{2}$ \\[1mm] $P^*(q)$ & $q+1$ & $\frac{ q+1 }{2}$ & $\frac{ q+1 }{2}$ \\[1mm] McLaughlin & $276$ & $23$ & $253$ \\[1mm] $P^{**}(529)$ & $530$ & $265$ & $265$ \\[1mm] $(2209,1104,551,552)$ & $2210$ & $1105$ & $1105$ \\ \hline\hline \end{tabular} \end{center} \caption{Primitive rank $3$ descendants of nontrivial real ETFs\\[2mm] \small For $P(q)$, $q$ is a prime power and $q\equiv 1\, (\operatorname{mod}\,4)$. For $P^*(q)$, $q$ is an even power of a prime $p\equiv 3\, (\operatorname{mod}\,4)$. For the McLaughlin graph, see \cite[\S10.61]{BVM2022B}. The graph $P^{**}(529)$ is the sporadic Peisert graph \cite[\S10.70]{BVM2022B}. There are precisely three rank $3$ graphs with the last parameters, two of which are $P(2209)$ and $P^*(2209)$; cf.~\cite[\S10.86]{BVM2022B}.}\label{Waldron table} \end{table} \begin{proof} Routine verification. We also note the following isomorphisms: $\mathsf{O}_{2n+1}(2)\cong\mathsf{Sp}_{2n}(2)$ (cf.~\cite[\S2.6.3]{BVM2022B}), $\overline{T(6)}\cong\mathit{NO}^-_4(3)\cong\overline{\mathit{NO}^+_3(5)}\cong\mathsf{Sp}_4(2)$ (cf.~\cite[\S\S3.1.3, 3.1.4, 10.5]{BVM2022B}), $J_2(4,2)\cong\mathsf{O}^+_6(2)$ (cf.~\cite[\S10.13]{BVM2022B}), and $L_2(3)\cong \mathit{VO}_2^+(3)\cong P(9)\cong\mathsf{O}_4^+(2)$ (cf.~\cite[\S10.2]{BVM2022B}). The graph $\mathit{NU}_3(3)$ satisfies $k=2\mu$ but is of rank $4$; cf.~\cite[\S10.22]{BVM2022B}. \end{proof} We do not describe the graphs in Table \ref{Waldron table} to keep the paper concise. See the references given. The Paley graphs $P(q)$ and the Peisert graphs $P^*(q)$ give rise to real ETFs having the same parameters, so we ask if these real ETFs are equivalent, i.e., the disjoint union $K_1+P(q)$ is switching equivalent to $K_1+P^*(q)$. We may ask the same question for $P^{**}(529)$ and the last graph in Table \ref{Waldron table}. \begin{table}[ht] \begin{center} \begin{tabular}{c|c|c|c} \hline\hline Thm.~\ref{classification} & Thm.~\ref{rank 3 Waldron} & $M$ & $\{N,M-N\}$ \\ \hline $\rule{0pt}{13pt} \mathit{NO}_{2n}^+(2)$ & $\mathsf{O}_{2n}^-(2)$ & $2^{n-1}(2^n-1)$ & $\left\{\frac{ ( 2^{ n - 1 } - 1 ) ( 2^n - 1 ) }{ 3 },\frac{ 2^{2n}-1 }{ 3 }\right\}$ \\[2mm] $\overline{\mathit{NO}_{2n}^-(2)}$ & $\mathsf{O}_{2n}^+(2)$ & $2^{n-1}(2^n+1)$ & $\left\{\frac{ ( 2^{ n - 1 } + 1 ) ( 2^n + 1 ) }{ 3 },\frac{ 2^{2n}-1 }{ 3 }\right\}$ \\[2mm] $\mathit{NO}_{2n+1}^+(4)$ & $\mathsf{O}_{4n}^+(2)$ & $2^{2 n-1} \left(2^{2n}+1\right) $ & $\left\{\frac{2^{4n}-1}{3} , \frac{(2^{2n-1}+1)(2^{2n}+1)}{3} \right\}$ \\[2mm] $\overline{\mathit{NO}_{2n+1}^-(4)}$ & $\mathsf{O}_{4n}^-(2)$ & $2^{2 n-1} \left(2^{2n}-1\right) $ & $\left\{\frac{2^{4n}-1}{3} ,\frac{(2^{2n-1}-1)(2^{2n}-1)}{3} \right\}$ \\[2mm] $\mathit{VO}_{2n}^+(2),\overline{\mathit{VO}_{2n}^-(2)}$ & $\mathsf{Sp}_{2n}(2)$ & $2^{ 2 n }$ & $\big\{2^{ n - 1 } ( 2^n - 1 ),2^{n-1}(2^n+1)\big\}$ \\[1mm] $\overline{\mathsf{G}_2(2)}$ & $\mathsf{O}_6^+(2)$ & $36$ & $\{21,15\}$ \\[1mm] \hline\hline \end{tabular} \end{center} \caption{Comparison of the parameters of real ETFs}\label{comparison} \end{table} Some of the real ETFs found in Theorems \ref{classification} and \ref{rank 3 Waldron} also have the same parameters; see Table \ref{comparison}. We again wonder if some of these real ETFs are equivalent. If the spherical embedding of an SRG $\Gamma$ is a real ETF, then its descendants are obtained by first removing a vertex from $\Gamma$ and then switching it with respect to the neighbors and the nonneighbors of the removed vertex. For example, if $\Gamma=\mathit{NO}_{2n}^+(2)$, then we ask if the resulting graph is isomorphic to $\mathsf{O}_{2n}^-(2)$. We may remark that if $\Gamma$ has parameters $(v,k,\lambda,\mu)$ (cf.~\eqref{regular two-graph}), then the parameters of the descendants are given by $(v-1,2(k-\mu),k+\lambda-2\mu,k-\mu)$; see \cite[Proposition 10.3.2]{BH2012B}. \section*{Acknowledgments} Eiichi Bannai and Etsuko Bannai thank National Center for Theoretical Sciences (NCTS) at National Taiwan University (NTU) for inviting them to stay for 14 months (Dec.~2020--Jan.~2022). The last stage of this work was done while they were visiting there. Hajime Tanaka was supported by JSPS KAKENHI Grant Number JP20K03551. Wei-Hsuan Yu and Chin-Yen Lee were supported by MOST under Grant MOST109-2628-M-008-002-MY4.
\section{Introduction} \label{introduction} Supernova remnants (SNRs) are the imprints of stars that died in supernova (SN) explosions on the interstellar medium (ISM). SNRs return nucleosynthesis products to the ISM, enriching and mixing it with freshly produced heavy elements. A core-collapse (CC) SN is the explosion of a massive star, and it produces large quantities of $\alpha$-group elements (e.g. O, Ne, Mg, Si, S). Thermonuclear (or type Ia) SNe mark the disruption of a carbon-oxygen white dwarf (WD) that reached the Chandrasekhar limit, \revision{although recent models suggest that sub-Chandrasekhar WDs may also explode as type Ia SNe \citep{2010ApJ...714L..52S,2010ApJ...722L.157V,2011ApJ...734...38W}}. The thermonuclear burning front in a type Ia SN incinerates most of the progenitor to Fe-group elements. Despite the essential role of type Ia SNe in cosmology as standard candles, leading to the discovery that the expansion of the Universe is accelerating \citep{1998AJ....116.1009R,1999ApJ...517..565P}, the exact nature of the progenitor system as either a white dwarf accreting from a companion or a merger of two white dwarves is still hotly debated \citep[see][for a review]{2012PASA...29..447M}. SNe of either type instantaneously release a tremendous amount of \revision{kinetic} energy ($\sim 10^{51}$~erg) in the ISM and consequently have a profound and long-lasting impact on their surrounding environment. SN ejecta are launched to velocities in excess of $10^4$~km~s$^{-1}$, producing shock waves that heat the ISM and ejecta up to X-ray emitting temperatures ($> 10^6$ K). SNe are the main source of energy for the ISM, in the form of kinetic energy \revision{and turbulence \citep[\mbox{e.\,g.}\xspace][and references therein]{2004RvMP...76..125M}} or in the form of cosmic rays that are accelerated at SNR shock fronts. X-ray observations are a powerful tool for studying SNRs \citep[see \mbox{e.\,g.}\xspace the review of][]{2012A&ARv..20...49V}. While some SNRs exhibit non-thermal \revision{X-ray} emission, originating in synchrotron-emitting electrons accelerated up to 100~TeV \citep[see][and references therein]{1995Natur.378..255K,2002ApJ...581.1116R,2005ApJ...621..793B}, \revision{most X-ray emitting SNRs have thermal spectra} dominated by highly ionised species of C, N, O, Ne, Mg, Si, S, and Fe. At the typical electron temperatures of SNR shocks ($kT \sim$~0.2~--~5~keV), all these astrophysically abundant elements have emission lines in the range accessible to X-ray space observatories. Thus, the thermal X-ray spectrum of an SNR encrypts precious information about the temperature, ionisation state, and chemical composition of the hot plasma \citep{2014IAUS..296..226S}. This, in turn, provides clues to the evolutionary state of the remnant, ambient density (of the inter- or circum-stellar medium), age, explosion energy, and the type of supernova progenitor. The distribution of these parameters, the impact of the environment on them, and their interrelations (\mbox{e.\,g.}\xspace temperature vs. size/age, luminosity vs. ambient density) are valuable information to understand the evolution of SNRs and their role in the hydrodynamical and chemical evolution of galaxies. Furthermore, SNRs are \revision{observable} for a few tens of thousands of years. Thus, even though SNe are rare events in a galaxy (typically one per century or less), there will be tens or hundreds of SNRs for us to access. In our own Galaxy, the Milky Way (MW), 294 SNRs are known \citep{2014BASI...42...47G}. However, studies of Galactic SNRs are plagued by the large distance uncertainties towards sources in the Galactic plane. In addition, many important X-ray lines of O, Ne, Mg, and Fe are emitted at energies $kT <$ 2 keV and are readily absorbed by the high column densities in front of Galactic sources. On the other hand, the Large Magellanic Cloud (LMC), our closest neighbour galaxy, offers an ideal laboratory for such (X-ray) studies: First, the distance towards the LMC is relatively small \citep[50~kpc,][]{2013Natur.495...76P} and very well studied \citep{2014AJ....147..122D}. Second, the moderate inclination angle \citep[between 25\textdegree\ and 40\textdegree, \mbox{e.\,g.}\xspace][]{2014ApJ...781..121V} and the small line-of-sight depth of the LMC \citep[between 0.3~kpc and 1.5~kpc,][]{2002AJ....124.2639V} mean that we can assume all LMC sources to be at a very similar distance. Third, the interstellar absorption by gas in the foreground is much smaller towards the LMC ($N_H < 10^{21}$ cm$^{-2}$) than towards the Galactic plane ($N_H > 10^{22}$ cm$^{-2}$), allowing detection of photons even in the soft X-ray regime, below 1~keV. Finally, a wealth of data is available for the LMC, allowing for easier detection and multi-wavelength analysis of SNRs. For all these reasons, we aim to discover and study the \emph{complete} sample of SNRs in the LMC. \revision{While several studies exist analysing the sample of LMC remnants as a whole (see references in Sect.\,\ref{compiling_literature})}, they focus either on surveys with a particular instrument (at a particular wavelength, \mbox{e.\,g.}\xspace infrared or ultraviolet), or on some specific aspects (\mbox{e.\,g.}\xspace the size distribution of SNRs). In X-rays, \citet{1981ApJ...248..925L} used the {\it Einstein}\ survey of the LMC to detect 26 SNRs. Later, \citet{1999ApJS..123..467W} compiled a list of 37 SNRs, amongst which they studied the X-ray morphology of 31 objects with ROSAT. Since 2000, more than twenty new remnants were discovered or confirmed, primarily through XMM-{\it Newton}\ observations. However, the X-ray spectral analyses of LMC SNRs were presented in a wide collection of individual papers with little consistency in the instruments, spectral models, and analysis methods used. Furthermore, several known SNRs were observed for the first time with modern X-ray instrumentation during our XMM-{\it Newton}\ LMC survey (see Sect.\,\ref{observations_observations}) and their spectral properties are as yet unpublished \revision{(see Sect.\,\ref{results_spectra_general} and Appendix~\ref{appendix_spectra}}). Because of these limitations, it is not feasible to study the spectral properties of the whole population of LMC remnants with a mere survey of the available literature. The main ambition of this work is to alleviate these limitations and provide for the first time an up-to-date study of the X-ray emission of LMC SNRs, using primarily XMM-{\it Newton}\ observations. To that end, we performed a systematic and homogeneous X-ray spectral analysis of \emph{all} LMC SNRs for which XMM-{\it Newton}\ data are available. This allows meaningful comparisons of remnants at various evolutionary stages, and provides a complete census of various spectral features, such as Fe~K or SN ejecta emission. In turn, SNRs are used as probes of their surroundings, thanks to which one can derive the chemical abundances in the hot phase of the LMC ISM, and compare those to abundances measured in older populations (globular clusters and red giant stars). In addition, we take advantage of the availability of star formation history (SFH) maps of the LMC, based on spatially resolved stellar photometry, to investigate the connection between LMC SNRs and their local environment, characterised by different SFHs. Doing so, we devise a method to tentatively type all LMC SNRs, which can then be used to retrieve the ratio of core-collapse to type Ia SN rates in the LMC. Then, via their X-ray luminosity function, we compare SNR populations in galaxies of the Local Group (M31, M33, LMC, SMC), which have different metallicities and SFHs. Finally, we study the spatial distribution of SNRs in the LMC with respect to cool gas, star-forming regions, and stars. This work is organised as follows: We start in Sect.\,\ref{observations} by presenting the X-ray observations and their processing, along with supplementary data. In Sect.\,\ref{compiling}, we compile a complete, clean sample of LMC SNRs which is used throughout the rest of the paper. The details of our data analysis methods are given in Sect.\,\ref{data}. The following Sections present the results of the systematic spectral analysis of LMC SNRs (Sect.\,\ref{results_spectra}), the SNR typing and measurement of the ratio of core-collapse to type Ia SN rates (Sect.\,\ref{results_sfh}), the comparative study of the X-ray luminosity functions of Local Group SNRs (Sect.\,\ref{results_XLF}), and the spatial distribution of SNRs in the LMC (Sect.\,\ref{results_distribution}). Finally, we summarise our findings and offer our conclusions in Sect.\,\ref{summary}. \section{Observations and data reduction} \label{observations} \begin{figure}[t] \centering \includegraphics [width=0.999\hsize]{survey_MCELS.jpg} \caption{\revision{The LMC in the light of [\ion{S}{ii}] (red), H$\alpha$ (green), and [\ion{O}{iii}] (blue), all data from MCELS (see Sect.\,\ref{observations_supplementary}). The red contours delineates the coverage of the LMC with XMM-{\it Newton}, combining archival data and observations of our large survey (see Sect.\,\ref{observations_observations}). The white contours outline a LMC \ion{H}{I} column density of $1 \times 10^{21}$~cm$^{-2}$ \citep[data from][]{2003ApJS..148..473K}.} } \label{fig_observations_survey} \end{figure} \subsection{XMM-Newton observations of the LMC} \label{observations_observations} The XMM-{\it Newton}\ space observatory \citep{2001A&A...365L...1J,2012OptEn..51a1009L} was placed in a 48~hours highly eccentric orbit by an Ariane-V on 1999 December 10. It carries three identical X-ray telescopes, each consisting of 58 gold-coated nested Wolter-I mirrors with a focal length of 7.5~m. Three CCD imaging cameras are placed at the focal points of each telescope. Two of them have Metal Oxide Semi-conductor (MOS) CCD arrays \citep{2001A&A...365L..27T} and the third uses pn-CCDs \citep{2001A&A...365L..18S}. Together, they form the European Photon Imaging Camera (EPIC). Other instruments are the two Reflection Grating Spectrometers \citep[RGS,][]{2001A&A...365L...7D} for high-resolution spectroscopy of bright on-axis point sources, and the optical monitor \citep[OM,][]{2001A&A...365L..36M}, a 30~cm Ritchey-Chr\'etien telescope observing simultaneously the central field of view in optical and ultraviolet light. However, data from RGS and OM were not used in this work. About 200 XMM-{\it Newton}\ observations of the LMC were performed since the ``first light'' image of the observatory, of the 30 Doradus region \citep{2001A&A...365L.202D}. In most cases, one specific object is placed at the focus of the telescopes (regular or ``target of opportunity'' observations). Some fields were observed several times, yielding very deep exposures. For instance, the SNR N132D is used as a calibration source and regularly observed; also SN~1987A is frequently monitored \citep{2008ApJ...676..361H,2012A&A...548L...3M}. In these two regions, the combined exposure reaches $10^6$~s. We have also carried out dedicated XMM-{\it Newton}\ observations of SNR candidates found in ROSAT, radio, and optical data \citep[][]{2012A&A...539A..15G,2014MNRAS.439.1110B,2015A&A...579A..63K}. Other programmes are raster surveys: The most ambitious such project was the survey of the LMC, proposed as a Very Large Programme (VLP) for XMM-{\it Newton}\ (PI: Frank Haberl). The survey comprises 70 pointings chosen to fill the gaps between all existing observations. This provides a contiguous field in the central region of the LMC, a strategy similar to the XMM-{\it Newton}\ survey of the SMC \citep{2012A&A...545A.128H,2012PhDT......ppppS,2013A&A...558A...3S}. \revision{The LMC coverage with XMM-{\it Newton}, including both the 70 observations of that survey \emph{and} the archival data, is shown in Fig.\,\ref{fig_observations_survey} on an optical image of the galaxy}. Because the LMC is closer and has a larger extent on the sky than the SMC, all the observations combined still cover less than half of the total extent of the galaxy. \subsection{Data reduction} \label{observations_reduction} The processing of all available XMM-{\it Newton}\ data in the LMC region, and those of the VLP survey in particular, was done with the data reduction pipeline developed in our research group over several years. This pipeline was already used for the surveys of M31 \citep{2005A&A...434..483P,2011A&A...534A..55S} and M33 \citep{2004A&A...426...11P,2006A&A...448.1247M}. It was then enhanced for the analysis of the SMC survey by Richard Sturm (\citeyear{2012PhDT......ppppS}). The data reduction pipeline is similar in essence to that used for the XMM-{\it Newton}\ Serendipitous Source Catalogue \citep{2009A&A...493..339W}, with the advantage of a better spatial accuracy (thanks to astrometric boresight corrections), and dedicated source screenings and cross-identifications. It is a collection of tasks from the XMM-{\it Newton}\ Science Analysis Software\,\footnote{SAS, \url{http://xmm.esac.esa.int/sas/}}, organised in \texttt{bash} scripts, together with other tools, in particular the FITS-file manipulation tasks of the \texttt{FTOOLS} package\,\footnote{\url{http://heasarc.gsfc.nasa.gov/ftools/}} (Blackburn \citeyear{1995ASPC...77..367B}). We summarise below the important steps of the pipeline. \paragraph{Preparing the data:} To point to the Current Calibration Files (CCFs) corresponding to each observation, a CCF index file (CIF) is created with the SAS task \texttt{cifbuild}. Then, using the task \texttt{odfingest}, the ODF summary file is extended with data extracted from the instrument housekeeping datasets. The instrument mode is also determined based on the CIF. \paragraph{Creating event lists:} The meta-tasks \texttt{epchain} and \texttt{emchain} produce EPIC-pn and MOS event lists, respectively. Raw events are first extracted from each exposure and CCD chip. Bad pixels are flagged. In the case of EPIC-pn, the task \texttt{epreject} corrects shifts in the energy scale of some pixels induced by high-energy particles hitting the detector while the offset map is calculated. Raw events are then assigned pattern and detector position information. EPIC-pn events are corrected for gain variation and charge transfer inefficiency (CTI). The calibrated events are (tangentially) projected on the sky using the task \texttt{attcalc} and an attitude history file (AHF), which records the attitude of the spacecraft during the observation. The AHF is created by the task \texttt{atthkgen} that is automatically ran before the main chain (unless the AHF already exists). EPIC-pn event times are randomised within their read-out frame. Finally, event lists from all CCDs are merged in the final list by \texttt{evlistcomb}. \paragraph{Time filtering:} Times that are useful for analysis are known as good time intervals (GTIs). In particular, periods of high background must be filtered out. The pipeline identifies the background-GTIs as times when the count rate in the (7~--15)~keV band is below a threshold of $8 \times 10^{-3}$~cts\,s$^{-1}$\,arcmin$^{-2}$ and $2.5 \times 10^{-3}$~cts\,s$^{-1}$\,arcmin$^{-2}$ for EPIC-pn and EPIC-MOS, respectively. Soft proton flares affect all detectors, so only the GTIs \emph{common} to pn and MOS are used. When one instrument starts earlier or observes longer, this interval is added to the GTIs. For instance, EPIC-pn calculates an offset map before an exposure. Thus, pn exposures usually start later than those of MOS, but these times should not be vetoed, unless the background in MOS is above the threshold. \paragraph{Images creation:} The pipeline then produces images from the calibrated, cleaned, and background-filtered event lists. The image pixels have a size of 2\arcsec$\times$~2\arcsec. All single to quadruple-pixel (\texttt{PATTERN} = 0 to 12) events with \texttt{FLAG = 0} from the MOS detectors are used. From the pn detector single and double-pixel events (\texttt{PATTERN} = 0 to 4) with \texttt{(FLAG \&\& 0xf0000) = 0} (including events next to bad pixels or bad columns) are used. Below 500~eV, only single-pixel events are selected to avoid the higher detector noise contribution from the double-pixel events. Exposure maps taking into account the telescope vignetting (which is energy-dependent) are created with the task \texttt{eexpmap}. Images and exposure maps are extracted in various energy bands for all three cameras. Out-of-time (OoT) images are created from the EPIC-pn OoT event lists, scaled by the corresponding OoT fraction $f_{\mathrm{OoT}}$\,\footnote{Values taken from the XMM-{\it Newton}\ Users Handbook.}, and subtracted from the source+background images. MOS and pn images are then merged, smoothed with a 10\arcsec\ full width at half maximum (FWHM) Gaussian kernel, and finally divided by the vignetted exposure maps. Detector-background images are also created, by using XMM-{\it Newton}\ filter wheel closed (hereafter FWC) data, obtained with the detectors shielded from astrophysical and soft-proton backgrounds by a 1.05~mm-thick aluminium filter. FWC data are collected several times per year, and the merged event lists of these observations are made available by the XMM-{\it Newton}\ Science Operations Centre\,\footnote{ \url{http://xmm2.esac.esa.int/external/xmm_sw_cal/background/filter_closed/}}. The detector corners are always shielded from the X-ray telescopes, and the count rate in the corners is used to estimate the contribution of the instrumental background $f_{\mathrm{FWC}}$ to the science image. The FWC image is scaled by $f_{\mathrm{FWC}}$ and removed from the science image to create the background-subtracted image. \paragraph{Source detection:} X-ray source detection is performed simultaneously using source+background images in all available energy bands of all three instruments with the SAS meta-task \texttt{edetectchain}. Although this work is concerned with SNRs, \mbox{i.\,e.}\xspace extended sources, detecting point sources is highly desirable: it allows us to excise unrelated point sources from spectral extraction regions and to look for central compact objects or pulsar wind nebulae inside SNRs. \paragraph{Fine-tuning for SNRs:} Several scripts for the analysis of the LMC SNRs were produced. For imaging purposes, all observations of an SNR are combined to produce an image centred on the source. The smoothing of the images (using the SAS task \texttt{asmooth}) is performed both in constant and adaptive mode. In the latter, the task calculates a library of Gaussian kernels such that the resulting images reached a minimum (Poissonian) signal-to-noise ratio of 5 everywhere. Regions of good statistics (\mbox{e.\,g.}\xspace bright sources) will be smoothed with a 10\arcsec\ FWHM kernel (the chosen minimum value), whereas fainter regions (diffuse emission, rims of the field of view) will be smoothed with wider kernels. The (minimum) kernel size for (adaptive) smoothing was chosen depending on the available data and brightness of the SNR under investigation. Moderately bright and faint SNRs (\mbox{i.\,e.}\xspace most of the sample) have smoothing kernel sizes of $\gtrsim 10$\arcsec\ or $\gtrsim 20$\arcsec. The bright objects and SNRs in very deep fields (\mbox{e.\,g.}\xspace the field around SNR~1987A) only need shallow smoothing (kernels $\gtrsim 3$\arcsec\ or $\gtrsim 6$\arcsec). Images were produced in a set of energy bands tailored to the thermal spectrum of SNRs: A soft band from 0.3~keV to 0.7~keV includes strong lines from oxygen; a medium band from 0.7~keV to 1.1~keV comprises Fe L-shell lines as well as Ly$\alpha$ lines from \ion{Ne}{IX} and \ion{Ne}{X}; and a hard band (1.1~--~4.2~keV) which includes lines from Mg, Si, S, Ca, Ar, and possibly non-thermal continuum. Thus, the composite images of SNRs provide a visual evaluation of their temperature: evolved objects with a relatively cool plasma (0.2~keV~$\lesssim kT\lesssim$~0.4~keV) are most prominent in the soft band, those with higher temperatures (0.4~keV~$\lesssim kT\lesssim$~1~keV) in the medium band. Only (young) SNRs with a much hotter component or a non-thermal continuum will have significant emission in the hard band as well. \subsection{Supplementary data} \label{observations_supplementary} Various non-X-ray data were used to supplement the XMM-{\it Newton}\ observations. They allow us \mbox{e.\,g.}\xspace to assess the relation between the population of SNRs and large scale structure of the LMC (Sect.\,\ref{results_distribution}), or to evaluate doubtful candidates in the sample compilation (Sect.\,\ref{compiling}). Here, we present those data briefly. \paragraph{Optical data:} The Magellanic Clouds Emission Line Survey \citep[MCELS, \mbox{e.\,g.}\xspace][]{2000ASPC..221...83S} was carried out at the Cerro Tololo Inter-American Observatory (CTIO). It is a spatially complete, flux-limited survey with the 0.6/0.9~m Curtis Schmidt telescope of the University of Michigan. A 8\degr~$\times$~8\degr\ region centred on the LMC was imaged with three narrow-band filters [\ion{S}{ii}]$\lambda\lambda$6716,\,6731 \AA, \ensuremath{{\rm H}\alpha}\xspace\,\footnote{the \ensuremath{{\rm H}\alpha}\xspace filter included the [\ion{N}{ii}]$\lambda\lambda$6548,\,6584 \AA\ doublet in its bandpass.}, and [\ion{O}{iii}]$\lambda$5007 \AA. Observations with green and red broad-band filters centred at 5130~\AA\ and 6850~\AA\ were obtained to subtract stellar continua. The pixel size of the mosaicked data is 2\arcsec~$\times$~2\arcsec. For optical photometry, we used results of the Magellanic Clouds Photometric Survey \citep[MCPS,][]{2004AJ....128.1606Z}, a $UBVI$ survey of 24 million stars in the central $\sim 64$~deg$^2$ of the LMC down to $V\sim 20 -21$~mag (depending on crowding). Additionally, we used optical images (red continuum and \ensuremath{{\rm H}\alpha}\xspace) from the Southern H-Alpha Sky Survey Atlas \citep[SHASSA][]{2001PASP..113.1326G}. \paragraph{Radio\,:} The neutral hydrogen (\ion{H}{i}\xspace) content and structure of the LMC has been studied (at 21~cm) by \citet{2003MNRAS.339...87S} and \citet{2003ApJS..148..473K}. The former used data from the 64-m single-dish Parkes radio-telescope, sensitive to large-scale structures (200~pc to 10~kpc). They show the distribution of \ion{H}{i}\xspace in a well-defined disc and three ``arms'' interpreted as tidal features. Several \ion{H}{i}\xspace holes (the largest ones) are associated to supergiant shells (SGS). In \citet{2003ApJS..148..473K}, the Parkes data are merged with data from the Australia Telescope Compact Array (ATCA) interferometer, which provides a view of the smaller structures (15 pc to 500 pc). The resulting map (which we used in this work) reveal the clumpiness of the \ion{H}{i}\xspace distribution, or in their words, ``the filamentary, bubbly, and flocculent structures of the ISM in the LMC''. Finally, the molecular content of the LMC is assessed by the $\sim 30$~deg$^2$ survey with the NANTEN telescope in the \element[][12][]{CO}~$(J = 1 - 0)$ line \citep{2008ApJS..178...56F}, from which we borrowed the velocity-integrated CO map. \paragraph{Star formation history map of the LMC\,:} \label{observations_supplementary_SFH} The first studies of the LMC's stellar content in the 1960s suggested a different SFH than for the Milky Way \citep{1960ApJ...131..351H,1961ApJ...133..413H}. Most of the early studies used age-dating of LMC clusters. The most striking feature they revealed was the ``Age Gap'', \mbox{i.\,e.}\xspace the lack of clusters between ages of $\sim 5$~Gyr and $\sim 12$~Gyr \citep[\mbox{e.\,g.}\xspace][]{1991IAUS..148..183D}. Studies of \emph{field star} populations \citep[\mbox{e.\,g.}\xspace with \textit{HST},][]{1999AJ....118.2262H,2002ApJ...566..239S} reveal essentially the same results, \mbox{i.\,e.}\xspace a dearth of star formation between an initial burst ($\gtrsim 12$~Gyr) and a second episode 4--5~Gyr ago. The first truly global analysis of the LMC's SFH was conducted by \citet{2009AJ....138.1243H}. They used the results from the MCPS to perform colour-magnitude diagram fitting. They obtained a reconstruction of the star formation rate (SFR, in M$_{\sun}$\, yr$^{-1}$) in 13 time bins and four metallicity bins, for 1380 cells, most of them having a size of 12\arcmin $\times$ 12\arcmin. Although poorly sensitive to old ages because the survey does not reach the main-sequence turn-off (MSTO) in the crowded fields\,\footnote{in the Bar the old ($\gtrsim 4$~Gyr) SFH is constrained to match that obtained with \textit{HST}.}, the SFH obtained is extremely useful to study the recent and intermediate-age star formation episodes, and to compare the integrated SFH of small- and medium-scale regions. We used the SFH map to compare the local stellar populations around LMC SNRs in Sect.\,\ref{results_sfh}. \section{Compiling a complete sample of LMC SNRs} \label{compiling} Obtaining a complete and clean census of LMC remnants is a complex task, for several reasons\,:\\ \indent \textbullet\ \emph{Classification\,:} different authors may use different criteria to classify an object as a definite SNR.\\ \indent \textbullet\ \emph{Literature size\,:} with the exception of the early studies, the discovery of most new objects was reported in separate papers, building up a vast literature. \\ \indent \textbullet\ \emph{Nomenclature\,:} an additional problem related to the previous point is the inconsistencies in the naming convention for LMC SNRs. The common names of many remnants used in the literature, especially those discovered first, are an unruly collection of various surveys and catalogues in specific wavelengths. Some are referred to after the \ion{H}{ii} complex within which they are located (\mbox{e.\,g.}\xspace ``SNR in N44''), or worse, a nearby \ion{H}{ii} region (\mbox{e.\,g.}\xspace DEM~L109, though it is most likely unrelated to the remnant). Other names use B1950 coordinates, with little to no consistency in the coordinates convention. Consequently, some objects were mistakenly listed twice in SNR compilations (Sect.\,\ref{compiling_cleaning}). To bypass these shortcomings, we performed a complete literature survey to build a list of LMC SNRs, combining all papers that either \emph{i)} report the discovery or classification of one or more SNRs, \emph{ii)} give a list of LMC SNRs, or \emph{iii)} present new candidates (Sect.\,\ref{compiling_literature}). The list is then cleaned from the wrongly identified or misclassified objects (Sect.\,\ref{compiling_cleaning}). Unconfirmed candidates, particularly in light of new X-ray observations, are also removed. For the naming of all SNRs in the Magellanic Clouds, we made use of the acronym ``MCSNR'', which was pre-registered to the International Astronomical Union by R. Williams et al., who maintain the Magellanic Cloud Supernova Remnants online database\,\footnote{MCSNR, \url{http://www.mcsnr.org/Default.aspx}}. This ensures a consistent and general naming system. Therefore, all SNRs are assigned the identifier ``MCSNR JHHMM+DDMM'', although we also retained the old ``common names'' from the literature for easy cross-identifications. \subsection{Literature survey} \label{compiling_literature} The first extragalactic supernova remnants were found in the LMC in the 1960s. Combining Parkes observations with \ensuremath{{\rm H}\alpha}\xspace\ photographs, \citet*{1963Natur.199..681M} first identified N49 as an SNR, to which \citet{1966MNRAS.131..371W} soon added N63A and N132D. Less than ten years later, \citet{1973ApJ...180..725M}, using the same method, had already discovered 12 new SNRs\,\footnote{Counting the two distinct shells they identified in N135 (the remnants to be known as DEM~L316A and DEM~L316B) and including the two objects in the 30 Doradus region that they identified as candidates.}. The survey with {\it Einstein}\ allowed \citet{1981ApJ...248..925L} to list 26 SNRs detected in X-rays, confirming many previously suggested candidates (based on optical or radio data). \citet{1983ApJS...51..345M} provided a catalogue of 25 SNRs with radio, optical, and X-ray results. With more observations, \citet{1984AuJPh..37..321M} and \citet{1984ApJS...55..189M,1985ApJS...58..197M} increased the size of the sample to 32. In the 1990s, several new SNRs were discovered with ROSAT pointed observations \citep{1993ApJ...414..213C,2000AJ....119.2242C,1994AJ....108.1266S}, sometimes aided by optical spectroscopy \citep{1995AJ....109.1729C,1997PASP..109..554C}. Since then, about twenty new remnants were discovered or confirmed in a collection of papers. Some discoveries stemmed from new radio observations \citep[\mbox{e.\,g.}\xspace][]{2012MNRAS.420.2588B,2012RMxAA..48...41B,2012A&A...540A..25D}. The majority, though, used XMM-{\it Newton}\ observations, either optically selected candidates \citep{2010ApJ...725.2281K}, ROSAT-selected candidates \citep[][Kavanagh et al., in prep.]{2012A&A...539A..15G,2014MNRAS.439.1110B}, or serendipitously observed during the LMC VLP survey \citep{2012A&A...546A.109M,2014A&A...561A..76M} or other programmes \citep{2014A&A...567A.136W}. Several groups compiled lists of SNRs in the (Large) Magellanic Cloud(s), the purpose being to analyse some of their global properties. \citet{1998A&AS..130..421F} used Parkes surveys to study the radio spectral index and luminosityy distribution of 34 confirmed and 24 probable LMC SNRs. \citet{1999ApJS..123..467W} were the first to study the X-ray morphology of all known LMC SNRs at that time. They showed ROSAT images for 31 out of their list of 37 SNRs. \citet[][hereafter \citetalias{2006ApJS..165..480B}]{2006ApJS..165..480B} compiled a sample of 39 SNRs in the LMC which was observed with the \textit{Far Ultraviolet Spectroscopic Explorer (FUSE)} satellite. The goal was to study UV emission from SNRs, in particular in the light of highly ionised oxygen (\ion{O}{vi}~$\lambda 1032$). A sample of 52 confirmed and 20 candidate radio-selected SNRs was observed spectroscopically in \citet{2008MNRAS.383.1175P}, but the exact list was not given. Instead, they reported the results for the 25 objects which were detected. \citet{2010AJ....140..584D} studied the triggering of star formation by SNRs. To that end, they examined the young stellar objects and molecular clouds associated to LMC SNRs. Their census resulted in a list of 45 objects. A total of 54 SNRs was used by \citet[][hereafter \citetalias{2010MNRAS.407.1301B}]{2010MNRAS.407.1301B} to study their size distribution. The difference in numbers stems from their including objects from unpublished sources (\mbox{i.\,e.}\xspace online catalogues). \citet{2008PASJ...60S.453S,2013ApJ...779..134S} combined \textit{AKARI} and \textit{Spitzer} observatories to survey the infrared emission of LMC SNRs. They presented a list of 47 SNRs, warning that some sources in \citetalias{2010MNRAS.407.1301B} still needed confirmation. \subsection{Cleaning the sample: Objects not included} \label{compiling_cleaning} To build the final list of LMC SNRs, we combined objects from the older catalogues \citep{1973ApJ...180..725M,1981ApJ...248..925L,1983ApJS...51..345M, 1984ApJS...55..189M,1985ApJS...58..197M} with those reported in individual studies since then. We also included all sources present in the various compilations described in the previous Section. After removing all multiple occurrences of the same object, we ``cleaned'' the sample, searching for\,:\\ \indent \textbullet\ \textit{Misclassification:} the object is something else than an SNR, \mbox{e.\,g.}\xspace a superbubble. \revision{The X-ray properties of non-SNR extended sources that can be found in the field of the LMC were described in \citet{2014A&A...561A..76M}}.\\ \indent \textbullet\ \emph{Unconfirmed candidates:} new data obtained since the classification as an SNR/candidate argue against this interpretation. \revision{This includes mainly candidates observed with XMM-{\it Newton}\ for the first time in our VLP survey. The absence of coincident X-ray emission strongly disfavours an SNR nature, unless radio and optical emission typical of SNRs is found.}\\ \indent \textbullet\ \emph{Misidentification:} spurious source due to confusion (of the coordinates or nomenclature) in the literature. Below, we describe the objects erroneously classified as SNRs or candidates and the evidence motivating the decision. These objects are listed in Table~\ref{table_compiling_rejected} and were not included in our final sample. \paragraph{[BGS2006b] J0449$-$693:} This object was observed in the UV by \citet{2006ApJS..165..480B} and in optical by \citet{2008MNRAS.383.1175P}, although the latter used a different location, further to the south-east than the former. None of these studies gave conclusive evidence of an SNR nature (no UV lines detected, moderate [\ion{S}{ii}]/H$\alpha$\ ratio). \citet{2010ApJ...725.2281K} used MCELS and XMM-{\it Newton}\ to identify the true SNR in that region, that they named SNR0449$-$6921, now registered as [BMD2010]~SNR~J0449.3$-$6920 in Simbad. The X-ray emission originates from an optical shell clearly distinct from the position given for [BGS2006b] J0449$-$693. In \citetalias{2010MNRAS.407.1301B}, both sources are listed, although only [BMD2010]~SNR~J0449.3$-$6920 (SNR0449$-$6921) is the true source. This is an example of a misidentification due to coordinate confusion. \paragraph{LHA 120$-$N~185:} \citet{2006ApJS..165..480B} could not detect UV emission from this source (that they incorrectly listed as SNR~0453-672). It was not included in the compilations from \citet{2010AJ....140..584D} and \citet{2013ApJ...779..134S}. Only \citetalias{2010MNRAS.407.1301B} classified the source as an SNR. X-ray emission is detected, surrounded by the large, bright optical shell N~185. However, the nature of the source remains uncertain. Most likely, N~185 is actually a superbubble, and not the remnant of a single supernova \citep{2014ApJ...792...58Z,2014AJ....148..102R}. \paragraph{SNR J051327$-$691119:} This source is located north-westwards of SNR~ B0513$-$692 (which has the name MCSNR J0513$-$6912 in our list). \citet{2007MNRAS.378.1237B} present the optical and radio observations of this region, identifying the large (4.1\arcmin~$\times$~3.3\arcmin) shell of MCSNR J0513$-$6912. They detected a strong unresolved radio source at its north-western edge, that they classified as an unrelated \ion{H}{ii} region or background galaxy \citep[GH 6$-$2, see references in][]{2007MNRAS.378.1237B}. In addition, they observed a faint optical shell seen in both MCELS [\ion{S}{ii}] and AAO/UKST deep H$\alpha$ images. Follow-up optical spectroscopy revealed distinct, higher [\ion{S}{ii}]/H$\alpha$\ ratios from this faint shell, prompting \citep{2007MNRAS.378.1237B} to classify this shell as a new candidate SNR, J051327$-$691119. This region was covered by the XMM-{\it Newton}\ survey, revealing in details the X-ray emission of MCSNR J0513$-$6912 (Sect.\,\ref{results_spectra}, Appendix~\ref{appendix_spectra} \& \ref{appendix_images}). On the other hand, the candidate J051327$-$691119 lacks any X-ray feature. The small extent of the source (40\arcsec\ diameter in H$\alpha$) would suggest a young, X-ray bright SNR, easily detectable in observations of the XMM-{\it Newton}\ survey. With only weak optical evidence, a confused field in the radio, and a stringent non-detection in X-rays, one is forced to conclude that J051327$-$691119 is \emph{not} an SNR. \begin{figure}[t] \centering \includegraphics[angle=0,width=0.87\hsize] {J0529-6833_MCELS_contours.jpg} \caption[\ The rejected SNR candidate in DEM L203 in optical lines with soft X-ray contours]{The rejected SNR candidate in DEM L203 in optical lines ([\ion{S}{ii}] (red), H$\alpha$ (green), and [\ion{O}{iii}] (blue), data from MCELS), with soft X-ray contours (from XMM-{\it Newton}) overlaid in white. The image spans 20\arcmin\ across. The bright star seen in X-rays (lower right corner) is the Galactic star HD~269602. } \label{fig_compiling_DEML203} \end{figure} \paragraph{LHA 120$-$N 204:} It is only listed as an SNR in the compilation of \citetalias{2010MNRAS.407.1301B}. It was selected from the radio observations of \citet{2008MNRAS.383.1175P} where it appeared for the first time in the literature. Therefore, it was selected from radio catalogues. The ``SNR'' lies within the large (diameter of 14\arcmin) optical shell N~204, although a size of 1\arcmin\ was given in \citet{2008MNRAS.383.1175P}. The field 61 of the XMM-{\it Newton}\ survey covered this region, detecting no extended X-ray emission. With the small size of this source, bright emission is expected. Instead, an X-ray point source is detected in projection in N~204, which correlates with a mid-IR selected AGN \citep[MQS J052749.08$-$703641.7,][]{2012ApJ...746...27K}. The background AGN is most likely the origin of the radio emission which led to the misclassification of the target as an SNR candidate. \begin{table*}[t] \caption{LMC objects erroneously classified as SNRs or candidates, not included in the final sample.} \begin{center} \label{table_compiling_rejected} \begin{tabular}{@{\hspace{1em}}l @{\hspace{1em}} c @{\hspace{1em}} c @{\hspace{1em}} c @{\hspace{1em}}} \hline\hline \noalign{\smallskip} \multicolumn{1}{c}{Name} & \multicolumn{1}{c}{Alternative name} & \multicolumn{1}{c}{Category} & \multicolumn{1}{c}{Ref. code} \\ \noalign{\smallskip} \hline \noalign{\smallskip} {[}BGS2006b{]} J0449$-$693 & B0450$-$6927 & Wrong identification & BGS06\\ LHA 120$-$N 185 & N185 & Wrong classification (superbubble) & PWF08\\ SNR J051327$-$691119 & DEM L109 & Unconfirmed candidate & BFP07\\ LHA 120$-$N 204 & B0528$-$7038 & Wrong identification & PWF08\\ {[}BMD2010{]} SNR J0529.1$-$6833 & DEM L203 & Unconfirmed candidate & BMD10\\ \noalign{\smallskip} \multirow{2}{*}{RX J0533.5$-$6855} & X-ray arc around & \multirow{2}{*}{Unconfirmed candidate} & \multirow{2}{*}{LCG04}\\ & RX J053335$-$6854.9 & \\ \noalign{\smallskip} 30 DOR C & {[}BGS2006b{]} J0536$-$692 & Wrong classification (superbubble) & MFT85\\ SNR B0538$-$69.3 & {[}BGS2006b{]} J0538$-$693 & Unconfirmed candidate & MFD84\\ \noalign{\smallskip} \hline \end{tabular} \end{center} \tablefoot{See text in Sect.\,\ref{compiling_cleaning} for a description of each object. Reference codes\,: \citetalias{1984ApJS...55..189M}: \citet{1984ApJS...55..189M}; \citetalias{1985ApJS...58..197M}: \citet{1985ApJS...58..197M}; \citetalias{2004AJ....127..125L}: \citet{2004AJ....127..125L}; \citetalias{2006ApJS..165..480B}: \citet{2006ApJS..165..480B}; \citetalias{2007MNRAS.378.1237B}: \citet{2007MNRAS.378.1237B}; \citetalias{2008MNRAS.383.1175P}: \citet{2008MNRAS.383.1175P}; \citetalias{2010MNRAS.407.1301B}: \citet{2010MNRAS.407.1301B}. } \end{table*} \paragraph{[BMD2010] SNR J0529.1$-$6833:} The classification as an SNR candidate (in the MCSNR online database) stems from the detection of radio emission correlating with the large optical shell DEM L203. This object is however in the compilation of ``confirmed'' SNRs of \citetalias{2010MNRAS.407.1301B}. Again, X-ray observations can shed light on the nature of the source. DEM L203 has no X-ray counterpart in the ROSAT catalogue. More importantly, XMM-{\it Newton}\ covered the object on three occasions during the LMC survey. Combining $\sim 35$~ks of EPIC data, only unrelated large-scale diffuse emission is detected, without any correlation with the optical shell, as shown in Fig.\,\ref{fig_compiling_DEML203}. A very old age, as indicated by the large extent, might explain the lack of X-ray emission, although XMM-{\it Newton}\ can and did detect the largest SNRs, such as MCSNR J0450$-$7050 \citep[5.7\arcmin\ diameter,][]{2009SerAJ.179...55C} or J0506$-$6541 \citep[6.8\arcmin][]{2010ApJ...725.2281K}. Furthermore, the MCELS image reveals no clear enhanced [\ion{S}{ii}] emission, and the source was not spectroscopically observed by \citet{2008MNRAS.383.1175P}. In light of this and the absence of X-ray emission, we do not confirm the classification of this object as an SNR and did not include it in the final sample. \paragraph{RX J0533.5$-$6855:} \citet{2004AJ....127..125L} used ROSAT to study the X-ray diffuse emission around the point source RX J053335$-$6854.9 (referenced as RX J0533.5$-$6855 in Simbad) and concluded that the X-ray arc seen was a large SNR candidate; they classified the X-ray point source as a dwarf M2-M3 star in the Solar neighbourhood. This region was covered in the XMM-{\it Newton}\ survey. The diffuse emission detected with ROSAT is found to be part of larger scale structures from the hot phase of the LMC ISM. There is \emph{no} large SNR around RX J0533.5$-$6855. \paragraph{30 DOR C:} This is a large shell seen in X-rays with a non-thermal spectrum \citep{2004ApJ...602..257B,2015A&A...573A..73K}. Its nature as a superbubble rather than a standard SNR was already recognised by \citet{1985ApJS...58..197M}. It was however listed as an SNR in \citet[][with the identifier {[}BGS2006b{]} J0536$-$692]{2006ApJS..165..480B} and \citetalias[][as {[}BMD2010{]} SNR J0536.2$-$6912]{2010MNRAS.407.1301B}. Interestingly, there \emph{is} an SNR (in projection) in 30 DOR C \citep[MCSNR J0536$-$6913,][]{2015A&A...573A..73K}, but it was revealed only later and is most likely distinct from the non-thermal shell. \paragraph{SNR B0538$-$69.3:} The first classification as an SNR dates back to \citet{1984ApJS...55..189M}, based on radio and weak optical evidence. \citetalias{2010MNRAS.407.1301B} included that source with the wrong J2000 coordinates. \citet{2006ApJS..165..480B} used the correct position but did not detect UV emission from the object. B0538$-$69.3 is unusually bright in radio (Miroslav Filipovi\'c 2014, personal communication) considering the general lack of X-ray and optical emission. \citet{1984ApJS...55..189M} noted that the absence of X-ray emission might be due to the high $N_H$ towards this region of the LMC. However, other SNRs are found in that region (\mbox{e.\,g.}\xspace MCSNR J0536$-$6913, DEM L299, the Honeycomb nebula), so a negative result with XMM-{\it Newton}\ is puzzling. This objects remains at best an SNR \emph{candidate}. \subsection{The final sample} \label{compiling_final} Our compilation results in a list of 59 definite SNRs. In Table~\ref{appendix_table_snrs_sample} we list the final sample of LMC SNRs used in this work. Basic information is given for each object: MCSNR identifier and old name, position, X-ray data available, and reference. In addition, we added columns with X-ray results: X-ray luminosity (Sect.\,\ref{results_spectra} and \ref{results_XLF}), X-ray size (Sect.\,\ref{data_imaging}), and $N_H$ fraction (Sect.\,\ref{results_distribution}). Finally, we give for each SNR the values of the two metrics used to assess the local stellar environment described in Sect.\,\ref{results_sfh}. See text in Appendix~\ref{appendix_sample} for detailed description of each column. This work focuses on the X-ray emission of LMC SNRs. Therefore, there are only confirmed SNRs in the final sample (no candidate). The resulting list provides the most complete sample of SNRs in the LMC, \emph{as far as X-rays are concerned}: XMM-{\it Newton}\ observations exist for 51 SNRs out of the list of 59 SNRs defined here. Out of the eight objects without XMM-{\it Newton}\ data available, three were covered with {\it Chandra}, and two only by ROSAT. Only three objects have not any X-ray information available (yet), though their radio and optical properties warrant their classifications as SNR. In Sect.\,\ref{results_XLF} and Sect.\,\ref{summary}, we discuss the total number of LMC SNRs and the overall completeness of the sample. \section{Data analysis} \label{data} \subsection{X-ray imaging} \label{data_imaging} For each SNR, we combined the smoothed images in the soft, medium, and hard bands (obtained as described in Sect.\,\ref{observations_reduction}) into X-ray composite images. These are shown in Appendix~\ref{appendix_images}. The same images are used to obtain X-ray contours to help in defining regions for spectral extraction (Sect.\,\ref{data_spectra_extraction}). The study of the size distribution of SNRs provides clues to small-scale structures in galaxies and the energy and matter cycles in the ISM. The sample of LMC SNRs (at various levels of completeness) has been already used for such studies \citep[\mbox{e.\,g.}\xspace][\citetalias{2010MNRAS.407.1301B}]{1983ApJS...51..345M}. An SNR can appear to have different sizes depending on the wavelength (\mbox{e.\,g.}\xspace a larger radius in radio than in X-rays), or can have an asymmetric morphology that complicates the definition of its ``size''. To help future studies of the size distribution, we provide in this work the \emph{maximal} extent of each SNR in X-rays, which we measured from the X-ray images and contours. The values are listed in Table~\ref{appendix_table_snrs_sample}. The size distribution of LMC remnants, combining measurements at various wavelengths, is presented and discussed in Bozzetto et al. (in prep.). \subsection{X-ray spectra} \label{data_spectra} \subsubsection{Analysis method} \label{data_spectra_method} SNRs are \emph{extended} X-ray sources, and many of those in our sample have a low surface-brightness. Consequently, the analysis of their spectra is challenging. A careful treatment of the background, both instrumental and astrophysical, is utterly important in order to obtain meaningful fits and extract the purest possible information from the source. It is not desirable to simply subtract a background spectrum extracted from a nearby region, because of the different responses and background contributions associated to different regions, and because of the resulting loss in the statistical quality of the source spectrum. An alternative method, which we used in this work, is to extract a nearby background spectrum, define a (physically motivated) model for the background and simultaneously fit the source and background spectra. Below, we explain the method in detail. Our account of the background is detailed in Appendix~\ref{appendix_background}. The only source for which a different method was used is SNR~1987A, as described in Sect.\,\ref{results_spectra_1987A}. The spectral-fitting package XSPEC \citep{1996ASPC..101...17A} version 12.8.0m was used to perform the spectral analysis. Unless otherwise stated, spectra were rebinned with a minimum of 25 counts to allow the use of the $\chi ^2$-statistic. Interstellar absorption was reproduced by two photoelectric absorption components (\texttt{phabs} and \texttt{vphabs} in XSPEC, where the previx ``v'' indicates that abundances can vary), one with a column density $N_{H\mathrm{\ Gal}}$ and solar abundances for the foreground Galactic absorption, and another one with $N_{H\mathrm{\ LMC}}$ and LMC elemental abundances \citep{1992ApJ...384..508R} for absorption within the LMC. Cross-sections for photoelectric absorption were set to those of \citet{1992ApJ...400..699B}. The foreground column density $N_{H\mathrm{\ Gal}}$ at the location of each analysed source is taken (and fixed) from the \ion{H}{I} maps of \citet[][available online on the HEASARC pages\footnote{\url{http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl}}]{ 1990ARA&A..28..215D}. For the analysis of one extended source, two different regions are defined: \emph{i)} a source spectrum extraction region (hereafter \texttt{SRC}\xspace region), and \emph{ii)} a background spectrum extraction region (hereafter \texttt{BG}\xspace region). Two spectra are extracted per instrument (pn, MOS1, and MOS2) from each region, one from the event list of the science observation, the second from the FWC data. The FWC spectra must be extracted at the same \emph{detector} position as in the science observation, because of the strong position-dependency of the instrumental background for both pn and MOS. The \texttt{SRC}\xspace and \texttt{BG}\xspace regions are best defined in World Coordinates System (WCS, \mbox{i.\,e.}\xspace sky position\footnote{This is more practical, in particular when several observations of a source with different pointings exist.}). Therefore, we first project the FWC data at the same sky position as the science observation, using its attitude history file and the SAS task \texttt{attcalc}. One can then use the same extraction regions to select FWC spectra. The four spectra are fitted simultaneously \revision{over the 0.3~keV~--~12~keV range}. The instrumental background model is constrained by the FWC data, and included (with tied parameters) in the spectra from the science observation. The science spectrum in the \texttt{BG}\xspace region therefore allows the parameters of the astrophysical X-ray background (AXB) to be determined. It is assumed that the temperature of the thermal components and the surface brightness of the thermal and non-thermal components do not vary significantly between the \texttt{SRC}\xspace and \texttt{BG}\xspace regions. Thus, the appropriate temperature and normalisation parameters are linked. All background components are then accounted for, and one can explore the intrinsic emission from the source using several emission models (Sect.~\ref{data_spectra_models}). Regarding which instruments are used, several configurations are possible, depending on the data present. Ideally, one would use all EPIC instruments (pn+MOS1+MOS2) together. However, our analysis method requires FWC data. Those are available for all read-out modes of pn, but only for the full-frame mode of MOS, limiting the use of MOS data in some cases (\mbox{e.\,g.}\xspace MOS in Small Window mode). It also happens that the SNR is outside the MOS field of view, if it is too far off-axis or on one of the damaged chips of MOS1 \citep{2006ESASP.604..943A}. In these cases only the pn spectrum is used for analysis. The contrary (only MOS spectra available) occurs in rare cases. About 80\,\% of the SNRs in the sample were observed only once. A few were observed twice in overlapping survey observations; the deep field centred on SNR~1987A contains four SNRs in total, and a plethora of XMM-{\it Newton}\ data are at hand for those. To keep the analysis the same for most sources, we restricted the number of observations analysed simultaneously to two for the latter cases. If more than two observations are available, we selected the two deepest datasets (\mbox{i.\,e.}\xspace longest flare-filtered exposure times) for analysis. Finally, N132D is a calibration target and frequently observed. It is however too bright for the full-frame mode; only Small Window and Large Window modes have been used and thus we only used the deepest pn dataset. It was found more efficient to pre-fit the instrumental and astrophysical background of each SNR. That is, we first fitted the (FWC + AXB) EPIC-pn spectra alone and FWC MOS spectra alone. If the pre-fitting of the background components was satisfactory, their best-fit parameters were used as starting points in the final fit, which includes the SNR emission model. Doing so speeds up the process of analysing the SNR spectrum alone. It also helps, by visual examination of the background fits, to identify problematic cases, as described in Appendix~\ref{appendix_background}. \begin{figure}[t] \centering \includegraphics [width=0.99\hsize]{J0519-6902_extraction_region.jpg} \includegraphics [width=0.99\hsize]{J0547-6943_extraction_region.jpg} \caption[\ Extraction regions used to extract spectra of MCSNR J0519$-$6902 and J0547$-$6943]{\emph{Top:} Extraction regions used to extract spectra of MCSNR J0519$-$6902 from EPIC pn, MOS1, and MOS2 detectors (left to right). For the scale, we remind the reader that pn chips are 4.4\arcmin-wide. The X-ray contours (in red) are used to outline the boundary of the remnant emission and set the radius of the circular \texttt{SRC}\xspace region (in green). The \texttt{BG}\xspace regions are shown by the blue dashed rectangle. The barred blue circles show detected point sources and a ``buffer'' area around the \texttt{SRC}\xspace region. Those are excluded from the \texttt{BG}\xspace region.\\ \emph{Bottom:} Same for MCSNR J0547$-$6943, outlined by the green polygonal region. The barred blue arcs are excluded to avoid single-reflections from LMC X-1. } \label{fig_data_spectra_extraction} \end{figure} \subsubsection{Extraction of the spectra} \label{data_spectra_extraction} The first step of the analysis is to extract spectra for each SNR of the sample, as well as corresponding background spectra from nearby regions (using the same observation). Due to the spread in morphology and size of the SNRs, unequal properties of their background (diffuse emission and point source crowding), and their varying location on the EPIC detectors, \texttt{SRC}\xspace and \texttt{BG}\xspace regions cannot be created automatically. Therefore, extraction regions were manually defined for each SNR. For the \texttt{SRC}\xspace region, the constraint was simply to include all the remnant's emission and exclude unrelated point sources that might be located in projection. We used the contours taken from the X-ray image (see Sect.\,\ref{data_imaging}), combining all observations of each remnant, to identify the boundaries of the SNR emission. If the morphology of the object requires it, an arbitrary shape (polygonal region) is used instead of a circle or ellipse. The \texttt{BG}\xspace regions are chosen from different locations on the pn and MOS detectors if needed, in order to be on the same CCD chip as (most of) the SNR emission. In most cases where the remnant was the target of the observation (\mbox{i.\,e.}\xspace was observed on-axis), the same \texttt{BG}\xspace region defined for pn can also be used for MOS data, because of the chip configuration of the latter with one central chip and six peripheral chips. Detected point sources are also excluded from the \texttt{BG}\xspace regions. Two examples are shown in Fig.\,\ref{fig_data_spectra_extraction}. In the simple case (that of MCSNR J0519$-$6902), we used a circular \texttt{SRC}\xspace region and the same \texttt{BG}\xspace regions for all EPIC detectors. In the more complex case of MCSNR J0547$-$6943 (or DEM L316B), we used a polygonal \texttt{SRC}\xspace region; the \texttt{BG}\xspace region is narrower for pn than for MOS to fit on a single CCD chip. In addition to point sources, we excluded arc-shaped regions which are affected by instrumental artefacts (single-reflections from LMC X-1). Extraction regions for all LMC SNRs analysed in this work are shown in Appendix~\ref{appendix_images}. Because of the telescope vignetting, the effective area is not constant across the extent of SNRs. To take this into account, all spectra (source and background) are extracted from \emph{vignetting-weighted} event lists. These are created with the SAS task \texttt{evigweight}, applied on the calibrated event lists produced by our data reduction pipeline (as described in Sect.\,\ref{observations_reduction}). It assigns a weight to each event of energy $E_j$ at detector coordinates $(detx_j,dety_j)$, which is the inverse of the ratio of the effective area at that position to the central effective area (at the same energy): \begin{equation} w_j \; = \; \frac{A_{0,0}(E_j)}{A_{detx_j,dety_j}(E_j)} \end{equation} The corrected event lists are equivalent to that obtained with a flat instrument. For spectral analysis, a flat response file with the on-axis effective area must then be used\,\footnote{The extraction radii for the three smallest SNRs (except SNR~1987A which is handled as a point source) result in PSF losses less than 15\,\%.}. Instrumental background spectra were extracted from FWC data at the same detector position as the \texttt{SRC}\xspace and \texttt{BG}\xspace regions following the method described above. \subsubsection{Spectral models} \label{data_spectra_models} The model of the SNR emission is built iteratively, in increasing order of complexity. First, a one-component collisional ionisation equilibrium (CIE) model (\emph{vapec} in XSPEC) is tried. The elemental abundances are initially set to the values measured by \citet{1992ApJ...384..508R}. Their abundance of silicon is highly uncertain, however, and therefore we use an initial value of half solar for Si. Analysis of the residuals and goodness-of-fit reveals if and how the model can be improved, by either thawing some elemental abundances, or switching to a non-equilibrium ionisation (NEI) model (\emph{vpshock} in XSPEC). Abundance effects are manifest when all lines from one element are over- or underestimated compared to the continuum or other elements, while the signatures of NEI are in the residuals from different ions of the same element (\mbox{e.\,g.}\xspace relative strengths of \ion{O}{VII}/\ion{O}{VIII} lines). We evaluate the significance of the fit improvements (if any) with F-tests. We refer to these one-component models as ``1T'' (one temperature). A second component is added if needed; those are the ``2T'' SNRs. Again we start with CIE, then assess whether there is need for NEI or free abundances in the second component. For several SNRs, the analysis of X-ray colour images already hints at the presence of two components, with \mbox{e.\,g.}\xspace a different temperature, $N_H$, or abundance pattern. This iterative process is done until a satisfactory fit is achieved, at which point 90\,\%~C.\.L. errors are computed for all free parameters. More complicated models may be applied/needed for a handful of SNRs, particularly amongst the brightest ones. These cases are presented in Sects.\,\ref{results_spectra_brightest} and \ref{results_spectra_1987A}. \section{The X-ray spectral properties of LMC SNRs} \label{results_spectra} \subsection{General properties} \label{results_spectra_general} Out of the sample of 59 SNRs and 51 with XMM-{\it Newton}\ data available, 45 are fitted with 1T or 2T models, while 6, amongst the brightest, are fitted with more complex models (see Sects.\,\ref{results_spectra_brightest} and \ref{results_spectra_1987A}). \revision{Six previously known SNRs have been covered by our VLP survey, giving the opportunity to study their spectra with XMM-{\it Newton}\ for the first time. Among these, MCSNR J0534$-$6955 and J0547$-$7025 were observed with {\it Chandra}\ \citep{2003ApJ...593..370H}. \citet{2008AJ....136.2011R} analysed only MOS data of MCSNR J0518$-$6939 from an archival observation, while we now have EPIC-pn data. We show the XMM-{\it Newton}\ spectra from these SNRs in Appendix~\ref{appendix_spectra}.} The results of the spectral analysis for the 1T/2T sample are given in the Appendix~\ref{appendix_tables} (Table~\ref{appendix_table_spectra_all}). All relevant parameters are listed with their 90\,\%~C.\.L. uncertainties: The fitted LMC absorption column density (column 2), plasma temperature $kT$ (3), ionisation age $\tau$ (4), emission measure EM (5), and abundances (6). When a second component is used, its parameters are given in columns (7)~--~(11). The first component is the one with higher EM. The quality of the fits are evaluated by the $\chi^2 / \nu$ of column (12), where $\nu$ is the number of degrees of freedom. The reduced $\chi^2$ values ($\chi ^2 _{\mathrm{red}}$) are also listed in column (12). The median $\chi ^2 _{\mathrm{red}}$ is 1.16. 90\,\% of the fitted objects have a reduced $\chi^2$ less than 1.4. In 32 cases, the SNR is fitted with, or the available data only require, a one component model. Amongst these, 9 do not show significant NEI effects and are fitted with a CIE model. Using a NEI model for these did not result in a statistically significant improvement, neither in the goodness-of-fit sense, nor in terms of residuals. Moreover, the ionisation ages $\tau$ in these cases are high and poorly constrained. Therefore we list the CEI model parameters. In the 23 remaining ``1T'' objects, better fits are obtained with an NEI model. The plasma temperature for the ``1T SNRs'' clusters in the 0.25~keV~--~0.45~keV range. The highest values of temperature (above 1~keV) are associated with the smallest ionisation ages. In at least some cases, this could be an artefact of the analysis due to insufficient data. The ionisation age $\tau$ of this sample is broadly distributed around a median value of $1.7 \times 10^{11}$~s~cm$^{-3}$. There are 13 SNRs in the 2T sample. Two objects are fitted with two CIE component models (MCSNR J0530$-$7008 and J0517$-$6759). The rest was fitted with two NEI components, although for three SNRs the ionisation age of one of the components was unconstrained and on the high end ($\tau \gtrsim 10^{13}$~s~cm$^{-3}$), indicating a plasma close to or in CIE. The median $\tau$ of the main component (\mbox{i.\,e.}\xspace that with higher emission measure) for the 2T sample is slightly higher (5~--~7~$\times 10^{11}$~s\,cm$^{-3}$) than that of the 1T sample, but low number statistics preclude a direct comparison. The temperature distribution is bimodal: one component has a median temperature of $kT = 0.31$~keV, the second a higher median of 0.8~keV. In several cases the high-$kT$ component also requires a different abundance pattern, revealing SN ejecta (Sect.\,\ref{results_spectra_ejecta}). For nine SNRs, the data did not require or allow to fit elemental abundances. For a few cases, this happens because the spectrum is contaminated by a bright pulsar (N157B and MCSNR J0540$-$6920), or by LMC X-1 (MCSNR J0540$-$6944), and the thermal emission is not well separated by XMM-{\it Newton}. The other SNRs fitted with abundances from \citet[][RD92 in Table~\ref{appendix_table_spectra_all}]{1992ApJ...384..508R} are relatively faint. The limited available data therefore prevent the use of free abundances in the fits. Oxygen and iron are the main contributors to the 0.5~keV~--~2~keV X-ray emission for the relevant plasma temperatures. Consequently, they are the first elements for which abundances can be fitted. Out of the 45 1T/2T SNRs, 35 have at least free O and Fe abundances. Neon and magnesium also have prominent lines routinely detected below 2~keV, and their abundances were fitted in 33 and 30 SNRs, respectively. Silicon is detected and its abundance fitted, in 23 SNRs. This subset has a higher median temperature ($kT \sim 0.6$~keV) than the whole sample, as expected. Indeed, Si emission becomes prominent for higher temperatures than, say, O, Ne, or Fe. While obvious and fitted in all the brightest SNRs, which are younger/hotter, lines of sulphur are not detected in most 1T/2T SNRs. Only a handful (MCSNR J0534$-$6955, J0547$-$7025, N63A) allow to fit the S abundances. All have plasma temperatures in excess of 0.8~keV. The fitted abundance patterns can be used to type the supernova progenitor, if ejecta are detected (Sect.\,\ref{results_spectra_ejecta}), or to measure metallicity of the LMC ISM (Sect.\,\ref{results_spectra_abundance}). \subsection{The analysis of the brightest SNRs} \label{results_spectra_brightest} For six of the brightest SNRs, the simple 1T/2T models approach was clearly insufficient to satisfactorily model the spectra. This is expected, because on the one hand, the exquisite statistical quality of these spectra imply that even a two-component model is not adequate to reproduce the complex multi-phase structure in these objects. On the other hand, the very young SNRs, in addition to a small ambient medium contribution, are dominated by ejecta. Because of stratification of the ejecta heated by the reverse shock, elements synthesised at different radii in the SN explosion can have distinct spectral properties. All the ``bright SNR'' sample was observed in individual XMM-{\it Newton}\ and {\it Chandra}\ pointings. Detailed results are published in several papers (references are given below), with which our results were never at odds. Here, we used multi-temperature empirical models to reproduce the spatially integrated spectra. This allows \emph{i)} to derive accurate X-ray fluxes, so that the luminosity function (Sect.\,\ref{results_XLF}) is complete at the bright end, \emph{ii)} to measure the properties of the Fe~K emission, if present (see Sect.\,\ref{results_spectra_FeK}), and \emph{iii)} to obtain spectral properties (\mbox{e.\,g.}\xspace $N_{H}$, $kT$, $\tau$) for statistical studies and comparison of their distributions for various sub-samples (Sect.\,\ref{results_XLF}). The adopted models are described below. The spectral parameters are given in Table~\ref{appendix_table_spectra_brightest} and Table~\ref{appendix_table_spectra_1987A}. \paragraph{DEM~L71 (MCSNR J0505$-$6753):} DEM~L71 is a notorious type Ia SNR, owing to the detection of iron-rich ejecta \citep[\mbox{e.\,g.}\xspace][]{1995ApJ...444L..81H}. \citet{2003A&A...406..141V} presented the XMM-{\it Newton}\ EPIC and RGS results for this remnant, and \citet{2003ApJ...582L..95H} those obtained with {\it Chandra}\ observations. Different conditions are measured in the shell and central regions. It is then unsurprising that a 2T model as used for other SNRs did not produce acceptable fits. Instead, we obtained satisfactory results with three components: Two components (``Fe-low $kT$'' and ``Fe-high $kT$'') had Si, S, and Fe (the main nucleosynthesis products of Ia SNe) freed and common to the two components, while other metals were set to zero. These components account for the ejecta-rich emission, as well as the Si, S, and Fe contribution of the ISM. A third component, with O, Ne, and Mg abundances free and Si, S, Fe set to zero, accounts for the bulk of ISM emission. In addition, Fe~K emission is clearly detected, pointing to the presence of very hot ejecta ($kT > 2$~keV). The statistical weight of this feature remains small. Therefore, instead of adding another thermal component, we modelled the line with a Gaussian. The parameters of the Fe~K line are used in comparison with other SNRs in Sect.\,\ref{results_spectra_FeK}. The ejecta components have best-fit temperatures of $\sim 0.4$~keV and $\sim0.9$~keV (Table~\ref{appendix_table_spectra_brightest}). The ionisation age of the cooler component is twice that of the hotter one. The ISM component has a temperature of $kT = 0.46$~keV, the same as measured with {\it Chandra}\ \citep{2003ApJ...582L..95H}, and in between the two temperatures used for the shell emission by \citet{2003A&A...406..141V}. \paragraph{N103B (MCSNR J0509$-$6844):} The spectrum of N103B is remarkable because of the numerous lines from highly ionised metals: \ion{Si}{XII} and \ion{Si}{XIV}, \ion{S}{XV} and (marginally) \ion{S}{XVI}, \ion{Ar}{XVII}, and \ion{Ca}{XIX}. A strong Fe~K blend is also detected. We fit the spectrum with the same three-temperature model as for DEM~L71. One component had abundances fixed to RD92, accounting for the ISM emission. Two components with different $kT$ and $\tau$ were used to reproduce the (dominating) ejecta emission. All relevant elements (O, Ne, Mg, Si, S, Ar, Ca, and Fe) were freed, but common to both components. A Gaussian was also included to fit the Fe~K~feature. With this model, the spectrum of N103B is well reproduced across the whole 0.3~keV~--~8~keV band. The results are comparable to those of \citet[][focusing on XMM-{\it Newton}\ data]{2002A&A...392..955V} and \citet[][with {\it Chandra}]{2003ApJ...582..770L}, especially regarding: \emph{i)}~ the column density $N_H \sim 3 \times 10^{21}$~cm$^{-2}$; \emph{ii)}~the presence of one high ionisation age component (at $kT \sim 0.7$~keV) and a hotter (1.6~keV) underionised component. Because the Fe~K blend is modelled separately with a Gaussian, the fitted temperature of the hottest component is lower than in the previous references; \emph{iii)}~high abundances of S, Ar, and Ca. \paragraph{N132D (MCSNR J0525$-$6938):} \citet{2001A&A...365L.242B} presented the XMM-{\it Newton}\ observations of N132D from the Performance Verification programme. Results of the {\it Chandra}\ ACIS-S observations can be found in \citet{2007ApJ...671L..45B}. Both instruments spatially resolve the SNR into regions with different spectral properties. Therefore, though a three-temperature model can reproduce the main features of the spectrum (thus allowing to measure accurately the integrated flux of the remnant), strong residual structures are seen between 0.5~keV and 1~keV, where the strongest variations are observed (lines of O, Ne, Fe). The best fit is obtained with a cool ($\sim 0.5$~keV) component with abundances close to the normal LMC values (\mbox{i.\,e.}\xspace it represents a blast wave component) that dominates the soft emission (below 1.5~keV). A second component with $kT \sim 1$~keV is characterised by enriched levels of O, Ne, and Mg, as well as a higher column density ($\sim 10^{22}$~cm$^{-2})$. This component thus describes the bulk of the ejecta emission, and accounts for most of the Si and S emission. Finally, the presence of highly ionised iron is evident from the \revision{$6.69$~keV line (see Table~\ref{table_results_spectra_FeK}), corresponding to the K$\alpha$ energy of \ion{Fe}{XXV}}. This indicates a third, very hot component ($\sim 5$~keV). In this component only Fe, Ar, and Ca are included. The two latter elements improve the residuals around 3.1~keV (\ion{Ar}{XVII}), and 3.9/4.1~keV (\ion{Ca}{XIX} and \ion{Ca}{XX}). These K lines were already mentioned in the early XMM-{\it Newton}\ results \citep{2001A&A...365L.242B}. \paragraph{0519$-$69.0 (MCSNR J0519$-$6902):} The SNR was observed early in the {\it Chandra}\ and XMM-{\it Newton}\ missions. In addition, the LMC survey covered the source, at an off-axis angle of $\sim 9$\arcmin, adding 23~ks and 27~ks to the existing 8~ks and 46~ks of full-frame pn and MOS data, respectively. Spectra from the two observations were fitted simultaneously. 0519$-$69.0 exhibits strong lines of Si, S, Ar, and Ca, as well as prominent Fe~L and K blends. To reproduce the spectra we used the multi-component approach of \citet{2010A&A...519A..11K}, who extensively studied the XMM-{\it Newton}\ and {\it Chandra}\ data. First, one NEI component with LMC abundances accounts for circumstellar medium (CSM) or ISM emission. Then, one NEI component for each (group of) element(s) having detected lines: oxygen, silicon and sulphur, argon and calcium, and iron. In the latter case two NEI components with distinct parameters are used, as the spectrum evidently includes both medium temperature and very hot iron. Due to the low count rate, and therefore statistical weight, of the Fe~K blend, the hot iron component was driven to fit lower energy lines instead. To alleviate this issue we fitted the high-energy part of the spectrum separately with this component, then froze the best-fitting parameters in the global fits. Residuals around 0.72~keV (lines of \ion{Fe}{XVII}) were fitted with an additional Gaussian line. \paragraph{0509$-$67.5 (MCSNR J0509$-$6731):} XMM-{\it Newton}\ observed the SNR for $\approx 40$~ks in 2000, with pn operated in Large Window mode. This dataset is presented in \citet{2008A&A...490..223K}, while \citet{2004ApJ...608..261W} reported the spectral and imaging analysis of a {\it Chandra}\ observation. Finally, \citet{2008ApJ...680.1149B} attempted to reproduce spectra from both instruments using a grid of hydrodynamical models and an X-ray emission code. Inconsistencies between pn and MOS spectra were found, with lines in the pn spectrum (red-)shifted relative to those in MOS spectra by about 1~\%. This is likely a gain issue of the pn instrument. We discarded spectra from the MOS instruments, as they were operated in Small Window mode, for which no FWC data are available. To get the spectral model to match the observed energies of atomic lines, we freed the ``redshift'' parameter available in XSPEC models, which allows an \emph{ad hoc} change of the energy scale. Satisfying results were obtained for a shift of $\approx 1$~\%, which is the measured pn/MOS discrepancy \citet{2008A&A...490..223K}. As for J0519$-$6902, lines from heavy elements are prominent, and we used a multi-component model. Si, S, and Ar were grouped in a NEI component, and shared the same temperature and ionisation age. Another NEI component modelled the continuum+lines emission from the CSM/ISM. No Si, S, Ar, or Ca were included in this component. Iron was included in two NEI components, one with a medium temperature ($\sim~1.4$~keV) and a high-$kT$ one ($\sim 11$~keV) that reproduces the strong Fe~K line. The latter component also includes calcium. Even with this model, residuals remained around Fe lines (0.72~keV and 1.22~keV), which we fitted with two Gaussian lines. \revision{The very high temperature of the second Fe component is atypical for SNRs, but was also suggested in {\it Chandra}\ data by \citet{2004ApJ...608..261W}. The SNR exhibits a high-energy continuum tail, which previous studies tried to reproduce with non-thermal models. This tail can also be reproduced with the Bremsstrahlung continuum of a high-$kT$ thermal model, driving our fit to temperatures above 10~keV. Furthermore, we already noted the energy shift issue of the pn spectra, that results in a small centroid energy of the Fe K line for a given fitted $kT$. Given this caveats, it remains unclear whether the 11~keV plasma is physical. } \begin{table*}[ht] \caption{Fe~K line properties of LMC SNRs} \label{table_results_spectra_FeK} \centering \begin{tabular}{l l c c c c c} \hline\hline \noalign{\smallskip} MCSNR & Alt. name & type & \multicolumn{2}{c}{Energy centroid (eV)} & \multicolumn{2}{c}{Line luminosity ($10^{42}$~ph s$^{-1}$)} \\ & & & XMM-{\it Newton} & {\it Suzaku} & XMM-{\it Newton} & {\it Suzaku} \\ \noalign{\smallskip} \hline \noalign{\smallskip} J0509$-$6731 & B0509$-$675 & Ia & 6432$_{-27}^{+29}$ & 6425$_{-15}^{+14}$ & 0.87$\pm0.21$ & 0.96$\pm0.12$ \\ \noalign{\smallskip} J0505$-$6753 & DEM L71 & Ia & 6494$\pm58$ & --- & 0.26$_{-0.09}^{+0.08}$ & --- \\ \noalign{\smallskip} J0509$-$6844 & N103B & Ia & 6514$_{-32}^{+31}$ & 6545$\pm6$ & 5.10$\pm0.87$ & 6.43$\pm0.30$ \\ \noalign{\smallskip} J0519$-$6902 & B0519$-$690 & Ia & 6543$_{-31}^{+28}$ & 6498$_{-8}^{+6}$ & 1.71$\pm0.45$ & 2.78$\pm0.15$ \\ \noalign{\smallskip} J0526$-$6605 & N49 & CC & --- & 6628$_{-26}^{+29}$ & $<$ 4.75\tablefootmark{a} & 0.54$\pm0.12$ \\ \noalign{\smallskip} J0535$-$6916 & SNR~1987A\tablefootmark{b} & CC & 6635$\pm70$ & 6646$_{-54}^{+55}$ & 0.64$\pm0.18$ & 0.57$\pm0.24$ \\ \noalign{\smallskip} J0535$-$6602 & N63A & CC & 6683$_{-99}^{+88}$ & 6647$_{-17}^{+16}$ & 2.36$_{-1.08}^{+1.03}$ & 2.57$\pm0.36$ \\ \noalign{\smallskip} J0525$-$6938 & N132D & CC & 6685$_{-14}^{+15}$ & 6656$\pm9$ & 4.58$\pm0.58$ & 5.47$\pm0.51$ \\ \noalign{\smallskip} \hline \end{tabular} \tablefoot{{\it Suzaku}\ results are from \citet{2014ApJ...785L..27Y}. \tablefoottext{a}{$3\sigma$ upper limit.} \tablefoottext{b}{The quoted numbers are average values over the last six epochs, and the uncertainties are the RMS scatter. Note that we found the energy centroid to evolve rapidly at recent epochs (see Sect.\,\ref{results_spectra_1987A}).} } \end{table*} \subsection{Update on the monitoring of SNR~1987A} \label{results_spectra_1987A} SN~1987A, the nearest supernova in almost 400~years, was discovered in the LMC on 23 February 1987. It is exceptional in many ways and has been extensively studied ever since. We have the unique opportunity to follow the early evolution of a supernova \emph{remnant} (hence the use of the identifier ``SNR~1987A''\,\footnote{In our nomenclature SNR~1987A is also given the name MCSNR J0535$-$6916.}). Results from the many existing XMM-{\it Newton}\ observations of SNR~1987A are presented in \citet{2006A&A...460..811H}, \citet{2008ApJ...676..361H}, and \citet{2010A&A...515A...5S}. \citet[hereafter \citetalias{2012A&A...548L...3M}]{2012A&A...548L...3M} analysed data from the 2007--2011 monitoring, focusing on the rapid evolution of the X-ray light curve and the properties and evolution of the Fe~K lines, which were detected unambiguously for the first time. However, the spectral parameters (except fluxes and Fe~K line properties) were not given in \citetalias{2012A&A...548L...3M}. We take advantage of this work to give these detailed results, and include an unpublished observations (ObsID 0690510101) performed on December 2012, after \citetalias{2012A&A...548L...3M} was released. All spectra from SNR~1987A were extracted from a circular region centred on the source, with a radius of 25\arcsec. The use of spatially integrated spectra is dictated by the small radius of the source \citep[still less than 1\arcsec,][]{2013ApJ...764...11H}, which is completely unresolved by XMM-{\it Newton}. The background spectra were extracted from a nearby point-source-free region common to all observations. Only single-pixel events (\texttt{PATTERN} = 0) from the pn detector were selected. Contrary to all other SNRs in this work, the background spectra were not modelled but \emph{subtracted} from the source spectra. We used the same three-component plane-parallel shock model as in \citetalias{2012A&A...548L...3M}, with one fixed-temperature component ($kT = 1.15$~keV) and free abundances of N, O, Ne, Mg, Si, S, and Fe. EPIC-pn spectra from all seven epochs of the monitoring are fitted simultaneously between 0.2~keV and 10~keV, with common abundances and N$_{H\mathrm{\ LMC}}$. To characterise the Fe~K line, we performed separate fits in the range (5--8)~keV on the \emph{non-rebinned} spectra using the C-statistic \citep{1979ApJ...228..939C}. We used a Bremsstrahlung model for the continuum and a Gaussian for the Fe~K line complex. The simultaneous (broad-band) fit was satisfactory, with $\chi ^2 = 5114.2$ for 4109 degrees of freedom (reduced $\chi ^2 _r~=~1.24$). Spectral results are the same as in \citetalias{2012A&A...548L...3M}. We give the best-fit parameters for all seven epochs in Table\,\ref{appendix_table_spectra_1987A}. We list soft (0.5~keV~$-$~2~keV) and hard (3~keV~$-$~10~keV) X-ray fluxes at all epochs, with $3\sigma$ uncertainties (99.73\,\% confidence level, C.\,L.) in Table~\ref{appendix_table_spectra_1987A}. Echoing the findings of \citetalias{2012A&A...548L...3M}, we see that the soft X-ray flux keeps increasing after the 25$^{{\rm th}}$ anniversary of SNR~1987A. Since 2011, however, the rate of increase has dropped below 10\,\% per year, showing that subsequent observations of SNR~1987A with XMM-{\it Newton}\ are highly desirable to follow the evolution of the X-ray flux and to identify the turn-over point. The central energy, $\sigma$-width, total photon flux and equivalent width (EW) of the Fe~K feature are also listed in Table~\ref{appendix_table_spectra_1987A}. Up to December 2011 the results are the same as in \citetalias{2012A&A...548L...3M}. The new data point (December 2012) reveals a line with roughly the same flux but a significantly higher central energy ($6.78_{-0.05}^{+0.06}$~keV) than previously ($6.60\pm0.01$~keV, averaging the earlier measurements). This hardening likely indicates an increased contribution from highly ionised iron (\ion{Fe}{xxvi}) that prior to 2012 was either absent (as iron was in lower ionisation stages) or too weak to be detected. With the resolution of pn and the statistics in our hand, it is not possible to resolve the K-shell lines from various Fe ions, which will become possible with next-generation X-ray calorimeters onboard Astro-H \citep{2012SPIE.8443E..1ZT} or Athena \citep{2013arXiv1308.6784B}. \subsection{Fe~K emission from LMC SNRs} \label{results_spectra_FeK} \citet{2014ApJ...785L..27Y} used {\it Suzaku}\ to systematically search for Fe~K emission from Galactic and LMC SNRs. Fe~K$\alpha$ emission was detected in 23 SNRs, including seven remnants in the LMC. Their essential finding is that the centroid energy of the Fe~K emission, determined by the ionisation state of iron, is a powerful tool for distinguishing progenitor types. Indeed, the Fe~K emission of type Ia remnants is significantly less ionised than in CC-SNRs. Furthermore, there is a positive correlation between the Fe~K$\alpha$ line luminosity and centroid energy \emph{within each progenitor group}. Because the Fe~K blend is a promising typing tool, we extended the search for Fe~K emission of \citet{2014ApJ...785L..27Y} to all LMC SNRs observed with XMM-{\it Newton}. Compared to the {\it Suzaku}\ sample, the coverage is more complete (\mbox{i.\,e.}\xspace more SNRs observed) and more sensitive (the EPIC-pn effective area is slightly higher than that of {\it Suzaku}'s XIS, even combining all detectors), and Fe~K can potentially be detected from more SNRs. In Table~\ref{table_results_spectra_FeK}, we give the results for all LMC SNRs with detected Fe~K emission, ranked by increasing centroid energy. The XMM-{\it Newton}\ and {\it Suzaku}\ results are consistent within the uncertainties. Strikingly, we found Fe~K emission undetected with {\it Suzaku}\ for only one source, DEM~L71. Its line luminosity is smaller than from any other LMC remnant. Likely, this fact and the smaller effective area of XIS explain why it was undetected in the 100~ks-long {\it Suzaku}\ observation of the remnant (Hiroya Yamaguchi 2014, personal communication). Furthermore, the second faintest Fe~K line from LMC SNRs is found in N49. With XMM-{\it Newton}\ one does not formally detect the line. Including a Gaussian at the energy measured with {\it Suzaku}, the XMM-{\it Newton}\ spectrum allows a line flux an order of magnitude above that actually detected. This is only a statistical issue. Indeed, there are less than 10~ks of EPIC-pn data available, which is no match to the 158~ks spent by {\it Suzaku}\ on N49 when detecting the Fe~K line. The properties of the Fe~K emission from DEM~L71 fit well with its type Ia nature. Furthermore, \citet[][their Figure~1, right]{2014ApJ...785L..27Y} used simple (one-dimensional) theoretical models of type Ia SNe exploding in uniform ambient media of various densities to predict the luminosity and energy of the line. Even with this simplistic approach, they are able to reproduce all the parameter space spanned by type Ia SNRs. In this context, the location of DEM~L71 in the Fe~K luminosity~--~energy diagram is well reproduced by a delayed-detonation model with a rather high explosion energy \citep[$1.4 \times 10^{51}$~erg, DDTa in][]{2003ApJ...593..358B,2005ApJ...624..198B}, in an ambient medium of density $\rho = 2 \times 10^{-24}$~g~cm$^{-3}$, at age between 2000~yr and 5000~yr. This is in line with the measured density and age of DEM~L71 \citep{2003A&A...406..141V,2003ApJ...590..833G}. Furthermore, the DDTa model predicts a silicon-to-iron mass ratio of 0.08, close to that measured in X-rays \citep[$\sim 0.15$,][]{2003ApJ...582L..95H,2003A&A...406..141V}. Since the hot, K$\alpha$-emitting iron was previously overlooked, the $M_{{\rm Si}}/M_{{\rm Fe}}$ ratio should be even lower, closer to the prediction of the DDTa model. The dearth of Fe~K-emitting remnants, aside from the combined XMM-{\it Newton}/{\it Suzaku}\ sample (eight objects), is somehow expected. Indeed, most of the SNRs have plasma temperatures less than 1~keV (Sect.\,\ref{results_spectra_general}), which is too low to excite iron K-shell electrons, so that no emission is expected. \revision{MCSNR J0547$-$6973 is one case where a hotter spectral component ($kT \sim 2.2$~keV) is present but no Fe~K emission is detected, likely because the emission measure of that component is too low, so that it is not detectable with current instruments.} Even if a spectrally unresolved hot iron component exists in more LMC remnants, a further issue is again detectability. The LMC SNRs of \citet{2014ApJ...785L..27Y} have hard X-ray (2~keV~--~8~keV) luminosities above $10^{35}$~erg~s$^{-1}$. There are only two other SNRs in the LMC above this level, MCSNR J0540$-$6920 and N157B, which are powered by a bright pulsar and pulsar wind nebula, respectively. Despite these observational difficulties, it is very likely that the sample of LMC Fe~K-emitting remnants \citep[of][plus DEM~L71]{2014ApJ...785L..27Y} is complete, because all young SNRs ($\lesssim 5000$~yr old) are now known and observed in X-rays. Translating the fraction of remnants with Fe~K emission in the LMC (\mbox{$\approx 13$~\%}) to the Galactic population \citep[294 objects,][]{2014BASI...42...47G}, we expect more than 40 such sources in the Milky Way. This number is a lower limit, since fainter line fluxes than in the LMC can be reached. \citet{2014ApJ...785L..27Y} list 16 Galactic SNRs detected, out of 56 objects observed with {\it Suzaku}\ \citep[][online database\footnote{\url{http://www.physics.umanitoba.ca/snr/SNRcat/}}]{ 2012AdSpR..49.1313F}. About 80 more SNRs were observed and detected with {\it Chandra}\ or XMM-{\it Newton}, and 150 have not been covered in X-rays. A systematic analysis of all X-ray-detected SNRs and new/deeper observations of promising candidates with more sensitive instruments (\mbox{e.\,g.}\xspace XMM-{\it Newton}\ vs. {\it Chandra}, future missions such as Athena) will provide a better census of Fe~K lines in SNRs. This will allow to type more remnants and to study the pre-SN evolution of their progenitors. \subsection{Detection of SN ejecta} \label{results_spectra_ejecta} When SN ejecta give an observable contribution to the X-ray emission of an SNR, the fitted abundances, or rather the fitted \emph{abundance ratios}, will reflect the nucleosynthesis yields of either thermonuclear or CC SNe. To identify SNRs with detected ejecta and the origin thereof, we computed abundance ratios X/Fe, where X is O, Ne, Mg, or Si. The ratios are normalised with respect to (X/Fe)$_{\mathrm{LMC}}$, the corresponding ratios with the LMC abundances \citep[from][]{1992ApJ...384..508R}. As CC-SNRs produce large amounts of light-Z elements and little iron, high (X/Fe)/(X/Fe)$_{\mathrm{LMC}}$ ratios (in excess of one) indicate a massive star progenitor. On the contrary, the main product of thermonuclear SNe is iron, and ejecta in type Ia SNRs (if detected), are expected to have (X/Fe)/(X/Fe)$_{\mathrm{LMC}}$~$\ll 1$. In Fig.\,\ref{fig_spectra_scatter}, the abundance ratio diagrams of all SNRs with corresponding fitted abundances are shown. The samples of SNRs with a secured CC or type Ia classification (as described in Appendix~\ref{appendix_secured}) are marked. Evidently, many of the known CC SNRs are located in regions of super-LMC X/Fe. The known type Ia SNRs are unsurprisingly in the (X/Fe)/(X/Fe)$_{\mathrm{LMC}}$~$\ll 1$ regions of the diagrams, because in most cases it is this very iron-enhancement that was used to classify them. \begin{figure}[t] \begin{center} \includegraphics[width=0.530\hsize] {abundance_ratios_final_NeO.jpg} \includegraphics[width=0.455\hsize] {abundance_ratios_final_SiO.jpg} \includegraphics[width=0.635\hsize] {abundance_ratios_final_MgO.jpg} \end{center} \caption[\ Abundance ratio diagrams of LMC SNRs with fitted abundances]{Abundance ratio diagrams of LMC SNRs with fitted abundances. Sources firmly classified as type Ia or CC-SNRs are plotted in red and blue, respectively, \revision{and the rest of the sample in black}. See text (Sect.\,\ref{results_spectra_ejecta}) for details. } \label{fig_spectra_scatter} \end{figure} \begin{table}[t] \caption{Constraints used for the identification of ejecta in SNR spectra.} \begin{center} \label{table_results_spectra_flags} \vspace{-0.2cm} \begin{tabular}{@{}c c c c c@{}} \hline\hline \noalign{\smallskip} X & \multicolumn{2}{c}{``high X/Fe'' flag} & \multicolumn{2}{c}{``low X/Fe'' flag} \\ & (1) & (2) & (1) & (3) \\ \noalign{\smallskip} \hline \noalign{\smallskip} O & $> 1.0 $ & $> 0.83$ & $< 0.60$ & $< 0.83$ \\ Ne & $> 1.43$ & $> 1.30$ & $< 0.55$ & $< 1.30$ \\ Mg & $> 0.62$ & $> 0.48$ & $< 0.22$ & $< 0.48$ \\ Si & $> 2.70$ & $> 1.60$ & $< 1.30$ & $< 1.60$ \\ \noalign{\smallskip} \hline \end{tabular} \end{center} \vspace{-0.45cm} \tablefoot{(1) Constraints on the ratio (X/Fe)/(X/Fe)$_{\mathrm{LMC}}$. (2) Constraints on the lower limit (X/Fe)/(X/Fe)$_{\mathrm{LMC}} - \Delta$(X/Fe), \revision{where $\Delta$(X/Fe) is the uncertainty of the abundance ratio}. (3) Constraints on the upper limit (X/Fe)/(X/Fe)$_{\mathrm{LMC}} + \Delta$(X/Fe). } \end{table} \begin{table*}[t] \caption[\ SNRs with detected ejecta]{1T/2T SNRs with detected ejecta (top part), and used for measurements of ISM composition (bottom part)} \vspace{-0.45cm} \begin{center} \label{table_results_spectra_flagged} \begin{tabular}{l c c c c c c | c c c c} \hline\hline \noalign{\smallskip} \multicolumn{1}{c}{\multirow{2}{*}{MCSNR}} & \multicolumn{1}{c}{\multirow{2}{*}{Old name}} & \multicolumn{1}{c}{\multirow{2}{*}{SN type}} & \multicolumn{4}{c}{High X/Fe flags} & \multicolumn{4}{c}{Low X/Fe flags} \\ & & & \multicolumn{1}{c}{O} & \multicolumn{1}{c}{Ne} & \multicolumn{1}{c}{Mg} & \multicolumn{1}{c}{Si} & \multicolumn{1}{c}{O} & \multicolumn{1}{c}{Ne} & \multicolumn{1}{c}{Mg} & \multicolumn{1}{c}{Si} \\ \noalign{\smallskip} \hline \noalign{\smallskip} J0453-6829 & B0453-685 & CC & --- & --- & Y & Y & --- & --- & --- & ---\\ J0506-6541 & --- & & --- & Y & --- & --- & --- & --- & --- & ---\\ J0506-7026 & [HP99] 1139 & & --- & --- & --- & --- & Y & Y & Y & ---\\ J0508-6830 & --- & Ia & --- & --- & --- & --- & Y & --- & --- &---\\ J0508-6902 & [HP99] 791 & Ia & --- & --- & --- & --- & Y & --- & --- &---\\ J0511-6759 & --- & Ia & --- & --- & --- & --- & Y & --- & --- & ---\\ J0519-6926 & B0520-694 & & --- & Y & Y & Y & --- & --- & --- & ---\\ J0523-6753 & N44 & & Y & Y & Y & --- & --- & --- & --- & ---\\ J0525-6559 & N49B & CC & --- & Y & Y & Y & --- & --- & --- & ---\\ J0526-6605 & N49 & CC & Y & --- & Y & Y & --- & --- & --- & ---\\ J0529-6653 & DEM L214 & & Y & --- & --- & --- & --- & --- & --- & ---\\ J0531-7100 & N206 & & Y & --- & --- & Y & --- & --- & --- & ---\\ J0533-7202 & 1RXSJ053353.6-7204 & & --- & --- & --- & --- & --- & --- & Y &---\\ J0534$-$6955 & B0534$-$699 & Ia & --- & --- & --- & --- & Y & Y & Y & ---\\ J0534-7033 & DEM L238 & Ia & --- & --- & --- & --- & Y & Y & Y & Y\\ J0535-6602 & N63A & CC & Y & Y & --- & --- & --- & --- & --- & --- \\ J0535-6918 & Honeycomb & & --- & --- & --- & Y & --- & --- & --- & ---\\ J0536-6735 & DEM L241 & CC & Y & Y & Y & --- & --- & --- & --- & ---\\ J0536-6913 & B0536-6914 & CC & Y & --- & --- & --- & --- & --- & --- & ---\\ J0536-7039 & DEM L249 & Ia & --- & --- & --- & --- & Y & Y & Y & Y\\ J0537-6628 & DEM L256 & & --- & --- & --- & --- & --- & --- & Y & ---\\ J0547-6941 & DEM L316A & Ia & --- & --- & --- & --- & Y & Y & Y & Y\\ J0547-7025 & B0548-704 & Ia & --- & --- & --- & --- & Y & Y & Y & ---\\ \hline\hline \noalign{\smallskip} & & & \multicolumn{8}{c}{ISM abundance} \\ & & & \multicolumn{2}{c}{O \& Fe} & \multicolumn{2}{c}{Ne} & \multicolumn{2}{c}{Mg} & \multicolumn{2}{c}{Si} \\ \noalign{\smallskip} \hline \noalign{\smallskip} J0450-7050 & B0450-709 & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---}\\ J0453-6655 & N4 & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---}\\ J0453-6829 & B0453-685 & CC & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---}\\ J0454-6626 & N11L & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---}\\ J0505-6802 & N23 & CC & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y}\\ J0514-6840 & --- & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---}\\ J0518-6939 & N120 & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y}\\ J0519-6926 & B0520-694 & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---}\\ J0527-6912 & B0528-692 & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y}\\ J0528-6727 & DEM L205 & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---}\\ J0531-7100 & N206 & CC & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---}\\ J0532-6732 & B0532-675 & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y}\\ J0533-7202 & 1RXSJ053353.6-7204 & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---} & \multicolumn{2}{c}{---}\\ J0535-6918 & Honeycomb & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{---}\\ J0543-6858 & DEM L299 & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y}\\ J0547-6943 & DEM L316B & & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y} & \multicolumn{2}{c}{Y}\\ \noalign{\smallskip} \hline \end{tabular} \end{center} \vspace{-0.5cm} \tablefoot{The classification given (type Ia or core-collapse) is described in Appendix~\ref{appendix_secured}. } \end{table*} Several sources without previous classification are located in the high- and low-ratio regions of the diagrams. For typing purpose, we assign ``high X/Fe'' and ``low X/Fe'' flags to these objects, using the following scheme: For each element X, we plot the cumulative distribution of the ratio (X/Fe)/(X/Fe)$_{\mathrm{LMC}}$. We then assign a ``high X/Fe'' flag to an object if its ratio is above the 68$^{\mathrm{th}}$ percentile ($\sim 1\sigma$) of the cumulative distribution. Symmetrically, a ``low X/Fe'' flag is given if the ratio is below the 32$^{\mathrm{th}}$ percentile. Since the uncertainties in the fitted abundances can be large, it is necessary to put a second constraint using the uncertainty of the ratio, \revision{noted $\Delta$(X/Fe)}: A ``high X/Fe'' flag is only given if the lower limit (\mbox{i.\,e.}\xspace ratio minus the uncertainty) is above the median of the cumulative distribution. For a ``low X/Fe'' flag the upper limit must be below the median. This excludes all cases where the ratios are elevated (or much smaller than one) but highly uncertain. Though some of the criteria for ``low X/Fe'' flags may seem high, the selected SNRs have actual ratios well below half the LMC average (well below 0.2 times the average for Mg). The criteria for ``high X/Fe'' and ``low X/Fe'' flag are given in Table~\ref{table_results_spectra_flags}. There are 23 SNRs in the 1T/2T sample with high or low abundance ratio flags, as listed in Table~\ref{table_results_spectra_flagged}. These flags are used in Sect.\,\ref{results_sfh} to help the typing of all LMC SNRs. \subsection{Metal abundances of the LMC ISM} \label{results_spectra_abundance} When no SN ejecta is detected, the X-ray emission is dominated by the ISM swept-up by the SN blast wave. Therefore, the fitted abundances in these cases provide us with measurements of the chemical composition of the gas phase of the LMC ISM. \citet{1992ApJ...384..508R} and \citet{1998ApJ...505..732H} have used samples of SNRs to obtain the abundance of some elements (using optical and X-ray observations, respectively), but the smaller sample of known SNRs and sensitivity of the X-ray instrument used (\textit{ASCA}) at the time limited the number of SNRs eligible to measure LMC abundances. \begin{figure*}[t] \begin{center} \includegraphics[width=0.995\hsize] {ISMabund_loose.jpg} \end{center} \caption[\ LMC ISM abundances (relative to solar values) measured in a sample of 16 X-ray SNRs]{LMC ISM abundances, relative to the solar values of \citet{2000ApJ...542..914W}, measured in a sample of 16 X-ray SNRs. The selection of the sample and the measurements of abundance are described in Sect.\,\ref{results_spectra_abundance}.} \label{fig_spectra_ISMabund} \end{figure*} We first selected all 1T/2T SNRs with fitted abundances but no high or low abundance ratios. To increase that sample, we included SNRs where some abundances are enhanced but others can still be used. For example in MCSNR J0453$-$6829, the spectrum is enhanced in Mg and Si, but the fitted values for O, Ne, and Fe, are still (assumed to be) reflecting the LMC ISM abundance. Furthermore, if the abundance of a given element is too uncertain, then the SNR is not used to measure the average abundance of that element. This limits in particular the size of the SNR sample allowing the abundance of silicon to be measured. In Table~\ref{table_results_spectra_flagged} we give the list of SNRs used to measure the abundance of O, Ne, Mg, Si, and Fe, or a subset of these elements. The measured abundances for this sample are plotted relative to solar values in Fig.\,\ref{fig_spectra_ISMabund}. The final LMC abundances are obtained by taking the average values from all SNRs where an element is used; the errors given are the RMS scatter amongst the SNRs used. This method is similar to that of \citet{1998ApJ...505..732H}. Resulting abundances range from $\sim 0.2$ solar for oxygen to $\sim 0.7$ solar for silicon. The results are listed in Table~\ref{table_results_spectra_abundance}. The absolute abundances, in the form 12 + log(X/H) (by number), are given, in comparison with results from \citet{1992ApJ...384..508R} and \citet{1998ApJ...505..732H}. Abundances of Fe and Si measured with XMM-{\it Newton}\ are in good agreement with the results measured for a different sample of SNRs by \citet{1998ApJ...505..732H}\,\footnote{\revision{They used six CC SNRs (J0453-6829, N23, N49, N49B, N63A, and N132D) and one type Ia SNR (DEM~L71).}}. More recent studies of abundances in the LMC, using large samples of field stars \citep{2005AJ....129.1465C,2008A&A...480..379P,2012ApJ...761...33L, 2013A&A...560A..44V}, can be used to evaluate our results. The metallicity distributions {[Fe/H]}\footnote{using the conventional notation: {[X/Y]}$ = \log\left(\mathrm{X/Y}\right) - \log \left(\mathrm{X/Y}\right)_{\sun}$.} peak at about $-$0.5~dex for most field star samples \citep{2012ApJ...761...33L}, \revision{corresponding to 12 + log(Fe/H) $= 7$. This matches very well our value based on XMM-{\it Newton}\ SNRs (6.97$_{-0.18} ^{+0.13}$)}, indicating no metallicity difference between field stars and gas-phase ISM. \begin{table}[t] \caption{LMC abundances} \begin{center} \label{table_results_spectra_abundance} \small \begin{tabular}{@{\hspace{0.05cm}} c @{\hspace{0.1cm}} @{\hspace{0.15cm}} c @{\hspace{0.15cm}} @{\hspace{0.15cm}} c @{\hspace{0.15cm}} @{\hspace{0.15cm}} c @{\hspace{0.15cm}} @{\hspace{0.10cm}} c @{\hspace{0.10cm}} @{\hspace{0.15cm}} c @{\hspace{0.15cm}} @{\hspace{0.15cm}} c @{\hspace{0.05cm}}} \hline\hline \noalign{\smallskip} Element & X/X$_{\sun}$ & N & RMS & 12 + log(X/H) & Hughes et & RD92 \\ & (1) & (2) & (3) & & al. (1998) & \\ \noalign{\smallskip} \hline \noalign{\smallskip} O & 0.21 & 15 & 0.08 & 8.01$_{-0.21} ^{+0.14}$ & 8.21$\pm 0.07$ & 8.35$\pm0.06$ \\ \noalign{\smallskip} Ne & 0.28 & 13 & 0.08 & 7.39$_{-0.15} ^{+0.11}$ & 7.55$\pm 0.08$ & 7.61$\pm0.05$ \\ \noalign{\smallskip} Mg & 0.33 & 11 & 0.19 & 6.92$_{-0.37} ^{+0.20}$ & 7.08$\pm 0.07$ & 7.47$\pm0.13$ \\ \noalign{\smallskip} Si & 0.69 & 6 & 0.42 & 7.11$_{-0.41} ^{+0.20}$ & 7.04$\pm 0.08$ & 7.81\tablefootmark{a}\\ \noalign{\smallskip} Fe & 0.35 & 15 & 0.12 & 6.97$_{-0.18} ^{+0.13}$ & 7.01$\pm 0.11$ & 7.23$\pm0.14$ \\ \noalign{\smallskip} \hline \end{tabular} \end{center} \tablefoot{ (1) Abundance relative to the solar value of \citet{2000ApJ...542..914W}. (2) Number of SNRs used to measure X/X$_{\sun}$. (3) RMS scatter amongst the N SNRs. \tablefoottext{a}{Silicon abundance was quoted as highly uncertain in \citet{1992ApJ...384..508R}} } \end{table} However, the abundances of light $\alpha$-elements tend to be lower (by $\sim$0.15~dex~--~0.2~dex) compared to \citet{1998ApJ...505..732H}, although the results for Mg and Ne might still be reconciled given the larger uncertainties. Still, we measured a ratio {[}O/Fe{]} of $-$0.21 while \textit{ASCA}\ SNRs gave $-$0.06. The likely explanation is two-fold. First, the $\alpha$-elements abundance has an intrinsic scatter \citep[about 0.05~dex~--~0.08~dex at the relevant metallicity,][]{2013A&A...560A..44V} that can partly explain the discrepancy. The second reason is the sample used by \citet{1998ApJ...505..732H}: \revision{N132D, N49, and N49B contain regions that have been (since then) found to be enhanced in low-Z elements (\mbox{e.\,g.}\xspace this work, and references in Appendix~\ref{appendix_secured}), while the bright central regions of the type Ia SNR DEM L71 are enriched in iron. These contributions from ejecta to the integrated spectra measured with \textit{ASCA}\ affect the measured LMC abundances. Six out of the seven SNRs used by \citet{1998ApJ...505..732H} are well-established CC-SNRs, and this bias is likely to explain their higher {[}O/Fe{]} (or more generally {[}$\alpha$/Fe{]}).} On the contrary, the XMM-{\it Newton}\ sample used here is explicitly cleaned of SNRs with abnormal abundance patterns (\mbox{i.\,e.}\xspace those with ejecta detected), resulting in a purer sample better suited to the measurement of the ISM composition. However, this sample comprises SNRs fainter than used in previous studies, and the abundances thus obtained are consequently relatively uncertain. The abundance pattern of metals should reflect the past history of chemical enrichment, and in particular the relative number of CC and Ia SNRs (hereafter $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$), because their metal yields are markedly different. In Fig.\,\ref{fig_spectra_ISMabund_vsFe} we show the [O/Fe] and [Mg/Fe] vs. [Fe/H] diagrams. Abundances measured with SNRs (\mbox{i.\,e.}\xspace that of the ISM gas phase) are compared with that measured in older populations: old globular clusters from \citet[][ages~$\sim 10$~Gyr]{2006ApJ...640..801J} and Bar and disc field red giant stars from \citet[][ages $\gtrsim 1$~Gyr]{2013A&A...560A..44V}. \revision{SNRs are only found in the higher metallicity ([Fe/H]) range. There is also a clear trend for SNRs to be at lower {[}$\alpha$/Fe{]} ratios, although uncertainties from X-ray spectral fitting are large (particularly for [Mg/Fe]). A larger sample and more data would be desirable to demonstrate this result definitely. Nevertheless, this trend should} reflect the continued enrichment by type Ia SNe in the last $\sim 1$~Gyr, which inject large amounts of Fe back in the ISM and drive younger populations towards the bottom right corner of the [$\alpha$/Fe] -- [Fe/H] diagrams. \begin{figure}[t] \begin{center} \includegraphics[width=0.95\hsize] {ISMabund_vs_Fe_correct.jpg} \end{center} \vspace{-0.3cm} \caption[\ {[}O/Fe{]} and {[}Mg/Fe{]} vs. {[}Fe/H{]} diagrams for various LMC populations]{[O/Fe] and [Mg/Fe] vs. [Fe/H] diagrams for various LMC populations: Abundances measured in SNRs (ISM gas phase, this work) are shown with red pentagons. The crosses indicate median error bars. Blue open circles are the old globular clusters from \citet[][ages~$\sim 10$~Gyr]{2006ApJ...640..801J}. Chemical abundances of Bar and disc stars are marked by black squares and grey dots, respectively \citep[from][ages~$\gtrsim 1$~Gyr]{2013A&A...560A..44V}. } \label{fig_spectra_ISMabund_vsFe} \end{figure} There are SNR-to-SNR variations in the abundances, but the metallicity scatter in the ISM gas phase is less than for field stars. In particular there is no metal-poor population \revision{([Fe/H]~$\lesssim -\, 0.9$)}. We also checked that there is no clear correlation between the location of an SNR in the [$\alpha$/Fe] -- [Fe/H] diagrams and the SFH around the SNR. For instance, SNRs with relatively high [$\alpha$/Fe] are not necessarily in regions with increased recent SF, which would produce massive stars that release low-Z elements. Despite the uncertainties and the limited size of the sample, this lack of correlation likely indicates that SNe-produced elements are well mixed in the ISM. In other words, the ISM is quickly homogenised, at least at the spatial scales over which SFH is measured ($\sim 200$~pc). After LMC abundances were measured \citep[\mbox{e.\,g.}\xspace][]{1992ApJ...384..508R}, \citet{1995MNRAS.277..945T} found with chemical evolution models that the deficit of light $\alpha$-elements of the MCs (\mbox{i.\,e.}\xspace lower {[$\alpha$/Fe]} for a given {[}Fe/H{]}) compared to the Galaxy must be explained by a \emph{smaller} $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ (more type Ia SNe). They measured a Galactic ratio of 6.7, but $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$$\sim4-5$ and $\sim 3.3$ for the LMC and SMC, respectively. Our results for the LMC ISM abundance suggest an even lower ratio $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$, because the deficit of light $\alpha$-elements is wider than previously assumed by \citet{1995MNRAS.277..945T}. By tentatively typing all LMC remnants, we show in Sect.\,\ref{results_sfh} that indeed $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ is particularly low, compared to previous measurements of the ratio in the LMC or inferred from galaxy cluster X-ray observations, and discuss likely explanations. \section{Measuring the ratio of CC to type Ia SNe in the LMC using ``SFH-typing''} \label{results_sfh} \subsection{The typing of SNRs in general} \label{results_sfh_general} Because the two flavours of SNe deposit a similar amount of energy in the ISM, they produce remnants which become increasingly hard to type as they age. The most secured typing methods are the study of SN \emph{optical} light echoes (\citealt{2005Natur.438.1132R,2008ApJ...680.1137R}; \emph{infrared} light echoes can be used to probe the ISM dust, see e.g. \citealt{2012ApJ...750..155V}), the measurement of the nucleosynthesis products in the ejecta \citep[e.g. ][]{1995ApJ...444L..81H}, or the association with a neutron star/pulsar wind nebula. Optical spectroscopy can also be used: In some cases the fast-moving ejecta are detected in optical lines with highly elevated abundances of oxygen. Those so-called \emph{oxygen-rich SNRs} have massive star progenitors, see for instance \citet{1978ApJ...223..109L}, \citet{1979ApJ...233..154C}, \citet{1995AJ....109.2104M}, and references therein. On the contrary, some SNRs have prominent Balmer lines of hydrogen, but absent or weak [\ion{S}{ii}] and [\ion{O}{iii}] lines. These Balmer-dominated optical spectra are interpreted as non-radiative shocks overtaking (partially) neutral gas \citep{1978ApJ...225L..27C,1980ApJ...235..186C}. A type Ia SN progenitor is consistent with the presence of neutral gas, as massive stars would ionise their surrounding. A sample of optically bright Balmer-dominated SNRs was detected in the LMC by \citet{1982ApJ...261..473T}. These methods work best for relatively young remnants ($\lesssim 5000$ yr), leaving a significant fraction of the SNR population untyped. However, several \emph{evolved} SNRs have been discovered (in X-rays) in the Magellanic Clouds with an iron-rich, centrally bright emission \citep{2001PASJ...53...99N,2003ApJ...593..370H,2004A&A...421.1031V, 2006ApJ...640..327S,2006ApJ...652.1259B,2014MNRAS.439.1110B, 2014A&A...561A..76M}, naturally leading to their classification as type Ia remnants. In addition, studies of the X-ray and infrared morphologies of SNRs \citep{2009ApJ...706L.106L,2013ApJ...771L..38P} suggest that, as a class, type Ia and CC SNRs have distinct symmetries: type Ia remnants are statistically more spherical and mirror symmetric than the CC SNRs. However, this method cannot give definite results for \emph{individual} objects: Prominent counterexamples include SNR 1E 0102.2$-$7219, a textbook CC SNR which is highly symmetric \citep{2004ApJ...605..230F}, and MCSNR J0547$-$7025, which is a type Ia SNR \citep[based on its X-ray spectrum][this work]{2003ApJ...593..370H} with ``anomalous'' ejecta distribution \citep{2009ApJ...706L.106L}. For a decent fraction of the LMC remnants, these various methods give a secured classification. This ``secured-type'' sample is presented in Appendix~\ref{appendix_secured} and listed in Table~\ref{appendix_table_securedSNRs}. To tentatively type the rest of the sample, we devise \revision{a new way to quantify the local stellar environment of LMC, as described in Sect.\,\ref{results_sfh_environment}}. This method is calibrated with the ``secured-type'' sample and applied in Sect.\,\ref{results_sfh_typing}. We can then discuss the measured ratio of CC to type Ia SNRs in the LMC and its implications in Sect.\,\ref{results_sfh_ratio}. \subsection{Evaluating the local stellar environment} \label{results_sfh_environment} We devised two metrics to assess the local stellar environment of LMC SNRs. Both ultimately stem from the same set of data \citep[the MCPS catalogue of][]{2004AJ....128.1606Z}. Although connected, they still measure two distinct properties and are therefore complementary, as we discuss below. The two metrics are given for each SNR in Table~\ref{appendix_table_snrs_sample}. \begin{figure}[t] \includegraphics[width=0.495\hsize] {DEML205_cmd.jpg} \includegraphics[width=0.495\hsize] {J0534-6955_cmd.jpg} \caption{Colour-magnitude diagram (CMD) of the MCPS stars \citep{2004AJ....128.1606Z} within 100~pc ($\sim$6.9\arcmin) of the central position of two remnants, MCSNR J0528$-$6727 (left) and MCSNR J0534$-$6955 (right). Geneva stellar evolution tracks \citep{2001A&A...366..538L} are shown as red lines, for metallicity of 0.4~Z$_{\sun}$ and initial masses of 3, 5~M$_{\sun}$ (dashed lines) and 10, 15, 20, 25, and 40~M$_{\sun}$ (solid lines), from bottom to top. The green dashed line shows the criteria used to identify the OB stars ($V < 16$ and $B-V< 0$). Stars satisfying these criteria are shown as blue dots.} \label{fig_Nob_cmd} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.995\hsize] {closestsfh_J0528-6727.jpg} \includegraphics[width=0.995\hsize] {closestsfh_J0534-6955.jpg} \end{center} \caption[\ Star formation history around MCSNR J0528$-$6727 and J0534$-$6955]{Star formation history around MCSNR J0528$-$6727 (top) and J0534$-$6955 (bottom). Data are taken from \citet{2009AJ....138.1243H}. The star formation rate in four metallicity bins are plotted against lookback time. The errors (combining all metallicities) are shown by the grey shading. The vertical dashed line at 40 Myr indicates the maximal lifetime of a CC SN progenitor. Note the changing vertical scale.} \label{fig_sfh_examples} \end{figure} \textbullet\ \textbf{\boldmath$N_{{\rm OB}}$, the number of blue early-type stars in the immediate vicinity of the remnant:}\\ To obtain this number, we constructed a $V$ vs. ($B-V$) colour-magnitude diagram (CMD) of all stars whose projected position lies within 100~pc ($\sim$6.9\arcmin) of each SNR. This value corresponds to the drift distance for a star of age $10^7$~yr at a velocity of 10~km~s$^{-1}$ and was used by \citet{1988AJ.....96.1874C}. The upper main-sequence of stars in the LMC was identified by adding the stellar evolutionary tracks of \citet{2001A&A...366..538L}, for Z $=$ 0.4~Z$_{\sun}$ and initial masses from 3~M$_{\sun}$ to 40~M$_{\sun}$. We assumed a distance modulus of 18.49 and an extinction $A_V = 0.5$ \citep[the average extinction for ``hot'' stars,][]{2004AJ....128.1606Z}. From there, we used the criteria of $V < 16$ and $B - V < 0$ to identify OB stars. In Fig.\,\ref{fig_Nob_cmd} we show two example CMDs of the regions around MCSNR J0528$-$6727 (left) and MCSNR J0534$-$6955 (right). In the former case, a prominent upper-main sequence is obvious and the number of OB stars \ensuremath{N_{{\rm OB}}}\xspace $= 142$. By contrast, the region around MCSNR~J0534$-$6955 is devoid of young massive stars. For this remnant, \ensuremath{N_{{\rm OB}}}\xspace is only 8. \revision{Note that \ensuremath{N_{{\rm OB}}}\xspace is likely a lower limit on the actual number of massive stars due to stellar crowding (the typical seeing of the MCPS is 1.5\arcsec). This issue affects chiefly regions with higher \ensuremath{N_{{\rm OB}}}\xspace.} \medskip \begin{figure*}[t] \begin{center} \includegraphics[width=0.33\hsize] {histo_NOB_all.jpg} \includegraphics[width=0.33\hsize] {histo_r_all.jpg} \includegraphics[bb=0 0 455 400, clip,width=0.330\hsize] {r_vs_NOB.jpg} \end{center} \caption{\emph{Left and middle panel:} Count distribution of LMC SNRs as function of \ensuremath{N_{{\rm OB}}}\xspace and $r$. The distribution for the SNRs with a secured CC classification is shown with the hatched boxes; that for type Ia SNRs is outlined in red. The whole sample is shown by the solid grey histograms. \emph{Right panel:} $r$--\ensuremath{N_{{\rm OB}}}\xspace diagram of LMC SNRs. Secured Ia and CC SNRs are marked by red triangles and blue squares, respectively; the rest of the sample is shown with black dots. The arrow in the lower left corner indicates an SNR with \ensuremath{N_{{\rm OB}}}\xspace $=$ 0. The regions corresponding to different ``Hint-SFH'' and ``Hint-CMD'' are marked by the gridding.} \label{fig_typing_calibration} \end{figure*} \textbullet\ \textbf{\boldmath$r=N_{\rm CC}/N_{\rm Ia}$, the ratio of CC SNe to thermonuclear SNe expected from the observed distribution of stellar ages in the neighbourhood of the remnants:}\\ This number is obtained via the spatially resolved SFH map of \citet[][see Sect.\,\ref{observations_supplementary_SFH}]{2009AJ....138.1243H}. For each SNR we plot the SFR of the cell including the remnant as a function of lookback time and metallicity. Two example SFHs are shown in Fig.\,\ref{fig_sfh_examples} for the same SNRs of Fig.\,\ref{fig_Nob_cmd}. They are strikingly different: The SFR around J0528$-$6727 soared in the last 20~Myr, when the numerous early-type stars in the vicinity of the remnant were formed, while the star formation around J0534$-$6955 peaked (at a lower absolute rate) about 125~Myr ago and was shut down in the most recent 20~Myr. Because stars might drift away from their birth place, one potentially important caveat is that the SFH of a cell hosting an SNR may be derived from stars having no physical connection with the SNR progenitor. For a detailed discussion on the relevance of local stellar populations to the study of progenitors, we point to \citet{2009ApJ...700..727B}. However, we stress that most of the information that can be gained from the study of the local SFHs, in the context of typing remnants, is contained in the most recent time bins. Namely, the presence of recent star formation episode is a strong necessary (but not sufficient) condition to tentatively type a remnant as having a CC origin. Conversely, the lack of recent star-forming activity favours a thermonuclear origin. To approach this question in a quantitative way, we did the following: We used the delay time distribution (DTD) $\Psi _{i} (\tau)$, the SN rate at time $\tau$ following a star formation event, measured by \citet{2010MNRAS.407.1314M} in the Magellanic Clouds, with $i=1$, 2, and 3 designating the time intervals they used ($t <$ 35~Myr, 35~Myr $< t <$ 330~Myr, and 330~Myr $< t < 14$~Gyr, respectively). From timescale arguments it is reasonably assumed that $\Psi _1$ will correspond to the CC-SN rate, whilst $\Psi _2$ and $\Psi _3$ will be that of SNe Ia (regardless of their ``prompt'' or ``delayed'' nature). The SFR is integrated to obtain $M_{i}$, the stellar mass formed in each time interval. The SFH of \citet{2009AJ....138.1243H} is only given at $t = 25$~Myr and $t = 50$~Myr. To obtain $M_1$, the mass formed at $t < 35$~Myr, we approximate $M (25 < t < 35)$ as half that formed between 25~Myr and 50~Myr (the second half is included in $M_2$). Likewise, we split the mass formed between $t = 250$~Myr and $t = 400$~Myr in two and include a half in both $M_2$ and $M_3$. Then, we compute $r=N_{\rm CC}/N_{\rm Ia}$ as the ratio of the \emph{rates} of CC and Ia SNe, since the visibility times are the same for both types, \mbox{i.\,e.}\xspace: \begin{equation} \label{eq_r} r = \frac{\Psi_1 M_1}{\Psi_2 M_2 + \Psi_3 M_3} \end{equation} Over the visibility time of a remnant --- taking 100 kyr as a very conservative limit --- the stars in the SFH cell including the remnant will not drift away. In other words, the distribution of stellar ages observed \emph{now} is the same as that when the SN exploded. $r$ is therefore a measure of the relative size of the pool of possible progenitors of both types. Using the same example SNRs as in Fig.\,\ref{fig_sfh_examples}, a value of $r = 9.0_{-4.9}^{+1.9}$ is obtained for J0528$-$6727\,\footnote{ The uncertainty given for $r$ solely includes that of the mass formed $M_i$, which is computed from uncertainties of the SFR given in \citet{2009AJ....138.1243H}. The uncertainties on $\Psi_2$ and $\Psi_3$ are larger, but are the same for all SNRs in the sample, allowing to use $r$ in a comparative fashion. we adopted $\Psi_2 = 0.26$ SNe yr$^{-1}$ ($10^{10}$M$_{\sun}$)$^{-1}$ and $\Psi_3 < 0.0014$ SNe yr$^{-1}$ ($10^{10}$M$_{\sun}$)$^{-1}$. Note that because $\Psi_3$ is an upper limit, $r$ is formally a lower limit. } while for J0534$-$6955 it is only $r = 1.2\pm0.1$. \subsection{``SFH-typing'' all LMC SNRs} \label{results_sfh_typing} We now proceed to give a tentative type to the whole sample of SNRs in the LMC, using \ensuremath{N_{{\rm OB}}}\xspace and $r$. We assign two numbers called ``Hint--CMD'' and ``Hint--SFH'', depending on the \ensuremath{N_{{\rm OB}}}\xspace and $r$-value obtained for each SNR, respectively. The numbers range from 1 meaning ``strongly favours a type Ia SN origin'', to 5 meaning ``strongly favours a CC-SN origin''. We used the distribution of \ensuremath{N_{{\rm OB}}}\xspace and $r$ for the sample of ``secured-type'' SNR to establish the correspondence between their values and the hints. \begin{table}[t] \begin{center} \caption{\normalsize{Criteria and ``Hints'' attributed to SNRs as function of \ensuremath{N_{{\rm OB}}}\xspace and $r$.}} \label{table_results_sfh_hints} \small \begin{tabular}{@{}c @{\hspace{0.0cm}} c @{\hspace{1em}} c @{\hspace{1em}} c @{}} \hline\hline \noalign{\smallskip} \multicolumn{1}{c}{Value} & \multicolumn{1}{c}{Hint-CMD} & \multicolumn{1}{c}{Hint-SFH} & \multicolumn{1}{c}{Meaning} \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1 & \ensuremath{N_{{\rm OB}}}\xspace $< 5$ & $r < 1.7$ & Strongly favours a type Ia\\ 2 & $5 \leq$ \ensuremath{N_{{\rm OB}}}\xspace $< 15$ & $1.7 < r < 2.2$ & Moderately favours a type Ia\\ 3 & $15 \leq$ \ensuremath{N_{{\rm OB}}}\xspace $< 35$& $2.2 < r < 3.4$ & Undecided \\ 4 & $35 \leq$ \ensuremath{N_{{\rm OB}}}\xspace $< 80$& $3.4 < r < 5$ & Moderately favours a CC\\ 5 & $80 \leq$ \ensuremath{N_{{\rm OB}}}\xspace & $5 < r$ & Strongly favours a CC\\ \noalign{\smallskip} \hline \end{tabular} \normalsize \end{center} \end{table} This method is conceptually similar to that used by \citet{1988AJ.....96.1874C}, albeit with several improvements: Firstly, the sample in this work is twice the size of that available to \citeauthor{1988AJ.....96.1874C}. Secondly, many ($\sim 25$) SNRs have now a secured type (Appendix~\ref{appendix_secured}) and can be used to calibrate the method and evaluate the rate of erroneous classification. Then, the completeness of the census of early-type stars in the vicinity of the remnants is higher, owing to the use of the MCPS catalogue. Finally, the spatially resolved SFH reconstruction was simply unavailable before \citet{2009AJ....138.1243H}. \begin{figure*}[ht] \includegraphics[width=0.495\hsize]{histo_hints.jpg} \includegraphics[width=0.495\hsize]{histo_hints_final.jpg} \caption{\emph{Left:} Count distribution of the LMC SNRs as function of ``Hint-SF'', combining \ensuremath{N_{{\rm OB}}}\xspace and $r$. \emph{Right:} Count distribution of the LMC SNRs as function of ``Hint--final'', combining spectral \emph{and} SFH information (see text for details). Hatching and colours as in Fig.\,\ref{fig_typing_calibration}.} \label{fig_hint_sf} \end{figure*} \paragraph{Calibration of the ``SFH-typing'':} The number of OB stars in the vicinity of the secured type Ia and CC SNRs is shown in Fig.\,\ref{fig_typing_calibration} (left panel). The two samples are rather well separated: The majority of type Ia SNRs have less than 20 early-type stars in their neighbourhood, while most of the CC-SNRs have \ensuremath{N_{{\rm OB}}}\xspace~$> 30$. The single major type Ia outlier is N103B (\ensuremath{N_{{\rm OB}}}\xspace~$= 99$), which is known to be in a region with a vigorous recent star formation activity \citep[\mbox{e.\,g.}\xspace][]{2009ApJ...700..727B}. MCSNR J0453$-$6829 is the only CC-SNRs to have a moderate \ensuremath{N_{{\rm OB}}}\xspace ($< 25$). The choice of ``Hint-CMD'' is given in Table~\ref{table_results_sfh_hints} to reflect this distribution: \ensuremath{N_{{\rm OB}}}\xspace less than 5 (less than 15) strongly (moderately) favours a type Ia classification, while \ensuremath{N_{{\rm OB}}}\xspace in excess of 80 (35) strongly (moderately) favours the CC-SN case. Intuitively, any value $r > 1$ should favour a CC SN origin (conversely for a thermonuclear origin). However, an important caveat in interpreting $r$ is that the rates of \citet{2010MNRAS.407.1314M}, especially $\Psi_2$ and $\Psi_3$, are quite uncertain, due to the still limited sample of SNRs. Specifically, $\Psi_2$ has a value that changes by a factor of four depending on the tracer used to constrain the SNR visibility time. To provide a better feeling on what $r$-value to expect in either case (and to decide where is the separation), we show the count distribution of secured type Ia and CC SNRs in the $r$-domain in Fig.\,\ref{fig_typing_calibration} (middle panel). There is a stronger overlap of both types in the intermediate range ($2.2 \lesssim r \lesssim 3.5$) than with \ensuremath{N_{{\rm OB}}}\xspace. However, the lower end ($r < 2.2$) still includes most of the type Ia SNRs, without contamination by the other type. N103B is again the only outlier at $r = 6.2$; at $r > 3.4$ only CC-SNRs are found. In view of this observed distribution, the ratio $r=N_{\rm CC}/N_{\rm Ia}$ is still a useful tool to assign a type to SNRs using the observed local SFH, and should be valid in a comparative and statistical sense. The ``Hints-SFH'' attributed to the sample based on $r$ are listed in Table~\ref{table_results_sfh_hints}. $r$ and \ensuremath{N_{{\rm OB}}}\xspace are also displayed as a scatter plot for secured Ia and CC SNRs (right panel of Fig.\,\ref{fig_typing_calibration}). There, the regions corresponding to different ``Hints'' are marked. \paragraph{Caveat on the complementarity of \ensuremath{N_{{\rm OB}}}\xspace and $r$\,:} It is clear that the two metrics are connected. Both are based on the MCPS catalogue; the early-type stars detected in a cell drive the fitting of the most recent time bins in the SFH reconstruction of \citet{2009AJ....138.1243H}. However, the $r$-value of a cell can be moderate even though \ensuremath{N_{{\rm OB}}}\xspace is high, as evident from the scatter along the horizontal axis in Fig.\,\ref{fig_typing_calibration} (right panel). That is because $r$ is a \emph{relative} measure of the recent SFR compared to that at earlier epochs, while \ensuremath{N_{{\rm OB}}}\xspace gives a measure of the \emph{absolute} strength of the recent star formation. In the (high \ensuremath{N_{{\rm OB}}}\xspace -- moderate $r$) case, there are many available progenitors of both CC- and type Ia SN; these are typically cases where the classification is inconclusive. \begin{table*}[t] \caption{``Hint-spec'' attributed to SNRs as function of spectral results.} \begin{center} \label{table_results_sfh_hints_spec} \begin{tabular}{@{}c c @{}} \hline\hline \noalign{\smallskip} \noalign{\smallskip} \multicolumn{1}{c}{Hint-spec} & \multicolumn{1}{c}{Criteria} \\ \noalign{\smallskip} \noalign{\smallskip} \hline \noalign{\smallskip} 1 & at least three ``low X/Fe'' flags, no ``high X/Fe'' flag \\ 1.5 & two ``low X/Fe'' flags or low O/Fe, no ``high X/Fe'' flag \\ 2 & one ``low X/Fe'' flag (except O/Fe), no ``high X/Fe'' flag \\ 2.5 & low Si/Fe, no ``high X/Fe'' flag \\ 3 & ISM abundances, unfitted abundances, or no XMM-{\it Newton}\ data \\ 3.5 & high Si/Fe, no ``low X/Fe'' flag \\ 4 & one `high X/Fe'' flag (except O/Fe), no ``low X/Fe'' flag \\ 4.5 & two ``high X/Fe'' flags or high O/Fe, no ``low X/Fe'' flag \\ 5 & at least three ``high X/Fe'' flags, no ``low X/Fe'' flag; pulsar/PWN detected \\ \noalign{\smallskip} \hline \end{tabular} \end{center} \end{table*} \paragraph{Results for the whole sample:} The count distributions for all LMC SNRs in the \ensuremath{N_{{\rm OB}}}\xspace and $r$ spaces are shown in Fig.\,\ref{fig_typing_calibration} (grey histograms, left and middle panels), and as a scatter plot in the right panel. They are similar, with larger numbers, to the distributions of the secured-type SNRs. About twenty remnants are in regions with a low number of early-type stars (\ensuremath{N_{{\rm OB}}}\xspace $< 15$) and not dominated by recent SF ($r \lesssim 2$). There is a peak at $r \sim 6$ with a dozen remnants. Those are SNRs in star-forming regions which are widely spread across the LMC. They are often associated with giant \ion{H}{II} complexes (\mbox{e.\,g.}\xspace LHA-120 N4, N11, N44). The objects with extreme values for $r$ ($\gtrsim 8$) also have the largest \ensuremath{N_{{\rm OB}}}\xspace. Those are located in the two most intensively star-forming regions of the LMC: 30 Doradus, and the rim of the supergiant shell LMC 4 \citep[which embeds the ``Constellation III'' region,][]{2008PASA...25..116H,2009AJ....138.1243H}. To combine the two ``Hints'' into one, we computed the arithmetic mean of Hint-CMD and Hint-SFH. The resulting ``star-formation Hint'' (Hint-SF) again ranges from 1 to 5. Its distribution for the whole sample and the secured-type SNRs is shown in Fig.\,\ref{fig_hint_sf}. There are 19 remnants with Hint-SF $\leq 2$; they most likely all result from a type Ia SN. We call this sample ``likely-Ia''. Likewise, the 28 objects above Hint-SF $\geq 4$ are probably most of the CC-SNR population. They form the ``likely-CC'' sample. The single type Ia SNR contaminating the sample (N103B) allows to estimate a false-positive rate of 5~\%~--~10~\%. The false-positive rate of the ``likely-Ia'' sample is probably lower: The massive stars formed at (roughly) the same time as the progenitor of a CC-SN can hardly be missed by photometric survey, because they would form the bright end of the population. There are 12 SNRs in between $2.5 \leq \mathrm{Hint_{SF}} \leq 3.5$, for which the local stellar environment cannot be used to decisively type the origin; they form the ``SFH-untyped'' sample. Interestingly, two and five of these remnants can be classified from other indicators as type Ia (the iron-rich MCSNR J0508$-$6830 and DEM L71) or CC-SNRs (\mbox{e.\,g.}\xspace the oxygen-rich N132D or MCSNR J0453$-$6829, which hosts a PWN), respectively. \paragraph{Including the spectral results for typing purpose} The spectral analysis of Sect.\,\ref{results_spectra} revealed the presence of ejecta-enhanced plasma in almost half of the sample (Tables~\ref{table_results_spectra_flags} and \ref{appendix_table_spectra_brightest}). One should take advantage of this for the typing of the remnant, in combination with the SFH-based method we just presented. We assign another number, ``Hint--spec'', which depends on the high- or low-abundance flags of each SNR (Sect.\,\ref{results_spectra_ejecta}). The numbers range from 1 (strongly favouring a type Ia origin) if ``low X/Fe'' flags are raised (\mbox{i.\,e.}\xspace the SNR is iron-rich), to 5 (strongly favouring a CC origin) when ``high X/Fe'' flags are raised (\mbox{i.\,e.}\xspace CC nucleosynthesis pattern is detected). The choice of ``Hint-spec'' is given in Table~\ref{table_results_sfh_hints_spec}. Note that a bigger impact is given to the low or high O/Fe ratio, as these elements are the most abundant. Therefore, this ratio is easier to fit, more reliable, and sometimes the only one available. A value of 5 is also attributed to remnants where a pulsar/PWN is detected in the remnant. \revision{The values of ``Hint--spec'' for each SNR are given in Table~\ref{appendix_table_snrs_sample}.} We combined ``Hint--SF'' and ``Hint--spec'' by taking their arithmetic mean. The distribution of the resulting ``Hint--final'' is shown in Fig.\,\ref{fig_hint_sf} (right panel). The contamination (\mbox{i.\,e.}\xspace misclassification of N103B) is slightly alleviated, while a better separation of the ``secured-type'' SNRs is evident. There are 23 SNRs with ``Hint--final'' $\leq 2.5$ which are likely of type Ia, and 31 SNRs where ``Hint--final'' $\geq 3.5$ which are likely attributed to CC, although N103B (Hint--final$=$3.5) is still contaminating the sample. There are five sources with inconclusive ``Hint--final'', including one secured-CC (N23). \subsection{Ratio of CC to type Ia SNe and implications} \label{results_sfh_ratio} The observed number of SNRs of both types provides a measurement of the ratio of CC to type Ia SNe that exploded in the LMC over the last few $10^4$~yr, \mbox{i.\,e.}\xspace very close to the current ratio of CC/Ia SN \emph{rates}. Based on the ``star formation Hint'', the numbers of SNRs in the ``likely Ia'' and ``likely CC'' samples translate in $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$~$=$~1.47 (28/19). Assuming all ``SFH-untyped'' SNRs which do not have a secured type are of type Ia then the ratio $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$ is 1.27 (33/26). Conversely, if the `SFH-untyped'' are all CC, the ratio is 1.81 (38/21). Even correcting for N103B, $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$ is conservatively in the range 1.2 to 1.8. Including the spectral results (detection of ejecta in X-rays), we have a ratio $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$~$= 1.35$ (31/23). Correcting for N103B and N23 (with wrong and uncertain classifications), the ratio CC:Ia based on SFH \emph{and} X-ray spectroscopy is between 1.11 (31/28) and 1.46 (35/24), depending on the type assigned to the remaining four objects. This range is compatible with that derived from the ``SFH-typing'' alone, albeit narrower because of the greater amount of information included in the calculation. This ratio can be compared to two kinds of measurements: First, to the observed ratio of current rates, obtained from SNe search. For instance, \citet{2011MNRAS.412.1441L} measured a ratio of about 3:1 in a volume-limited sample. Second, to the ratio $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ derived from intracluster medium (ICM) abundances. Galaxy clusters retain all the metals produced by SNe. The X-ray spectrum of the ICM reveals the elemental abundances, which are used to constrain the \emph{time-integrated} numbers of CC and Ia SNe. From {\it Suzaku}\ observations of a small sample of clusters and groups of galaxies, \citet{2007ApJ...667L..41S} estimated $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$~$\sim 3.5$ (ranging between 2 and 4, depending on the type Ia SN model assumed). With XMM-{\it Newton}\ and a larger cluster sample, \citet{2007A&A...465..345D} measured a $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ between 1.7 and 3.5, again depending on the adopted SN Ia models. However, none of the explored SN models could reproduce the Ar/Ca ratio. \citet{2011A&A...528A..60L} derived $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$~$\sim1.5-3$. Therefore, \textbf{the current ratio of CC/Ia SNe in the LMC is significantly lower than that measured in local SN surveys and in galaxy clusters.} One possible caveat could be that we are missing CC-SNRs. For instance, SNe exploding in superbubbles \citep[SBs, see \mbox{e.\,g.}\xspace][for a review]{2008IAUS..250..341C} will not be directly recognised as SNRs. \citet{1991ApJ...373..497W} and \citet{2001ApJS..136..119D} found a dozen LMC SBs with an X-ray luminosity, measured with {\it Einstein}\ and ROSAT, brighter than theoretically expected for a wind-blown bubble, and possibly energised by interior SNRs. The limited spatial resolution of the instruments used may result in \emph{distinct} SNRs to have been overlooked and the X-ray emission of the SB overestimated (\mbox{e.\,g.}\xspace MCSNR J0523$-$6753, near the \ion{H}{II} region/SB N44 in \citealt{1991ApJ...373..497W}, see also \citealt{2011ApJ...729...28J}). With a dozen extra CC SNRs, the ratio $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ is pushed to $\sim 1.5 - 2$. However, the number of type Ia SNRs currently known in the LMC is also expected to be below the actual number (see Sect.\,\ref{results_XLF} for a discussion on sample completeness). Therefore, it is unlikely that the ratio $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ is significantly underestimated. Furthermore, the abundance pattern of the LMC, with its low {[}$\alpha$/Fe{]} (Sect.\,\ref{results_spectra_abundance}), lends support to such a low $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$. This should be lower than the value of $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$$\sim4-5$ estimated by \citet{1995MNRAS.277..945T} using metallicity alone. The low $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ ratio measured in the LMC therefore has to be a consequence of the different SFH of the Cloud compared to that in other nearby galaxies or galaxy clusters. The local SN rates depend on the summed SFH of galaxies included in the SN surveys. The higher ratio measured by \mbox{e.\,g.}\xspace \citet{2011MNRAS.412.1441L} simply indicates that many star-forming galaxies are included in the local volume explored. The SFHs of galaxy clusters are relatively simple, with short episodes of star-formation at a redshift of $z \sim 3$ \citep{2008ApJ...684..905E}, so that the integrated numbers of type Ia and CC SNe inferred from X-ray observations correspond to the fractions of stars formed that end their lives as SN of either type. In the LMC, star formation occurred during several episodes. In addition to many regions with recent or ongoing star formation where, unsurprisingly, the CC-SNRs are found (see Sect.\,\ref{results_distribution}), the LMC had enhanced star formation episodes 100~Myr, 500~Myr, and 2~Gyr ago as well \citep{2009AJ....138.1243H}. The SN Ia DTD follows fairly well a $t^{-1}$ power law for delays $t > 1$~Gyr, and appear to keep increasing below 1~Gyr \citep[for a review of SN Ia DTD measurements, see][]{2012PASA...29..447M}. The majority of type Ia SNe explode within 2~Gyr after star-forming episodes. We are therefore coincidentally observing the LMC at a time when the rate of type Ia SN from the stellar populations formed 500~Myr to 2~Gyr ago is high. Integrated over an SNR lifetime (a few $10^4$~yr), it results in the relatively large number of type Ia SNRs. It is not possible to use $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ to estimate $\eta$, the fraction of stars that eventually explode as Ia SNe \citep{2008MNRAS.384..267M}, because of the complex SFH of the LMC: stars exploding now (as either SN types) were created at different epochs. Furthermore, $\eta$ is also dependent on the initial mass function (IMF), over which one has little freedom, since the SFH-reconstruction already assumes a particular form (the Salpeter IMF). \revision{Currently, the only other galaxy with which to compare the $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ of the LMC is the SMC}. In our own Milky Way, there remain too many untyped SNRs. More problematic are the distance uncertainties and line-of-sight confusion that prevent associating remnants to regions of star formation (\mbox{e.\,g.}\xspace spiral arms). In the Local Group (M31, M33) and beyond \citep[\mbox{e.\,g.}\xspace M83,][]{2010ApJ...710..964D}, the problem is again the lack of secured typing methods, and generally the absence of spatially resolved SFH. The situation is likely to improve in the near future with more sensitive X-ray observatories (\mbox{e.\,g.}\xspace \emph{Athena}), and large observing programmes of M31 and M33 with \emph{Hubble} which allow to build SFH maps \citep[so far, this was done in the few archival field available,][]{2012ApJ...761...26J,2014ApJ...795..170J}. The SMC is the only obvious target remaining where a similar study can be currently performed, although the smaller sample of SNRs and inclination of the galaxy (and corresponding line-of-sight confusion) might complicate direct comparisons to the LMC. \section{X-ray luminosity function of SNRs in the Local Group} \label{results_XLF} X-ray luminosity functions (XLFs) are valuable tools for the study of X-ray sources and comparisons between populations. The XMM-{\it Newton}\ dataset is ideally suited to derive the XLF of LMC SNRs. Out of the 59 objects in the sample, XMM-{\it Newton}\ covered 51 of them, to which we fitted spectral models (Sect.\,\ref{results_spectra}). For all these, the X-ray fluxes in various bands are obtained from the best-fit models (Tables~\ref{appendix_table_spectra_all} and \ref{appendix_table_spectra_brightest}) with the XSPEC command \texttt{flux}. The final results are presented in the ``broad'' band, from 0.3~keV to 8~keV (the effect of including the high-energy part is very minor and discussed below). Three SNRs have been covered with {\it Chandra}\ but not XMM-{\it Newton}: For MCSNR J0454$-$6713 (N9), we used the spectral results of \citet{2006ApJ...640..327S} to measure the flux. MCSNR J0459$-$7008 (N186D) was covered in the {\it Chandra}\ observation of the SB DEM~L50. \citet{2011ApJ...729...28J} published the results from these data. We used their best-fit NEI model for the SNR emission, which is spatially resolved from the SB, to derive the X-ray flux. Finally for MCSNR J0550$-$6823, we used the spectral parameters given in the entry of the {\it Chandra}\ SNR catalogue\,\footnote{Maintained by Fred Seward\,: \url{http://hea-www.cfa.harvard.edu/ChandraSNR/index.html}}. Two SNRs have neither XMM-{\it Newton}\ nor {\it Chandra}\ observations available, but were covered with ROSAT. \citet{1999ApJ...514..798W} present a spectral analysis of MCSNR J0455$-$6839 (in N86). We obtained the X-ray flux of the SNR using their best-fit model. MCSNR J0448$-$6700 corresponds to the ROSAT PSPC source {[HP99]}~460, with a count rate of $1.41 \times 10^{-2}$~cts~s$^{-1}$ \citep{1999A&AS..139..277H}. With the multi-mission count rate simulator WebPIMMS\,\footnote{ \url{http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl}}, we calculated the flux in several energy bands for various temperatures of an APEC model, assuming a total absorbing column $N_H = 7 \times 10^{20}$~cm$^{-2}$ towards the source and a mean abundance of 0.4~solar. The observed hardness ratios could be reproduced for $kT = 0.97$~keV. These spectral parameters and normalisation can be converted into fluxes in the same bands as used for the rest of the sample. In total, 56 objects have well-defined X-ray fluxes and make it into the XLF. The adopted values are listed in Table~\ref{appendix_table_snrs_sample}. Only three SNRs have no X-ray data available. The cumulative XLF is shown in Fig.\,\ref{fig_results_XLF}. The sample spans almost four orders of magnitude in luminosity, from the brightest (N132D) at $L_X = 3.15 \times 10^{37}$~erg~s$^{-1}$ down to $\sim 7\times 10^{33}$~erg~s$^{-1}$. The $L_X$ used is the observed value, uncorrected for (LMC) absorption, because the fitted column densities can be quite uncertain, in particular in the faint end. \revision{We checked the $N_{H\mathrm{\ LMC}}$-corrected XLF and found no significant variations in its shape compared to that of Fig.\,\ref{fig_results_XLF}, except the shift to higher $L_X$.} \paragraph{SNR XLF from other Local Group galaxies:} The LMC XLF can be best compared to other Local Group galaxies such as M31, M33, and the SMC. \citet{2012A&A...544A.144S} studied M31 SNRs and candidates identified in the XMM-{\it Newton}\ Large Programme survey of the Andromeda galaxy \citep{2011A&A...534A..55S}. They converted EPIC count rates into 0.35~keV~--~2~keV luminosities assuming a thermal (APEC) spectrum with $kT = 0.2$~keV, $N_{H\ \mathrm{M31}} = 10^{21}$~cm$^{-2}$, and $N_{H\ \mathrm{MW}} = 0.7 \times 10^{21}$~cm$^{-2}$. The quoted values, however, are corrected for $N_{H\ \mathrm{MW}}$, while for the LMC we give the observed luminosities. For consistency with the LMC XLF, we re-included $N_{H\ \mathrm{MW}} = 0.7 \times 10^{21}$~cm$^{-2}$ in the results of \citet{2012A&A...544A.144S} and converted the luminosity in the 0.3~keV~--~8~keV by scaling their $L_X$ by 0.577 (a factor derived from simulating their assumed spectrum with and without $N_{H\ \mathrm{MW}}$). Note that the effect of the foreground absorption should be very minor, since $N_{H\ \mathrm{MW}}$ values are very similar in the directions of M31, M33, and the LMC ($5 - 7 \times 10^{21}$~cm$^{-2}$). 26 objects were confirmed SNRs in \citet{2012A&A...544A.144S}, and another 20 were candidate SNRs. \citet{2010ApJS..187..495L} present a large catalogue of 82 confirmed SNRs in M33, based on the {\it Chandra}\ ACIS survey of M33 \citep[ChASeM33,][]{2011ApJS..193...31T}. They give $L_X$ in the 0.35~keV~--~2~keV band, converted from ACIS count rates, assuming a thermal plasma at 0.5 solar, $kT = 0.6$~keV, and total $N_{H} = 10^{21}$~cm$^{-2}$ (\mbox{i.\,e.}\xspace the spectrum found for the brightest source). We obtained the corresponding 0.3~keV~--~8~keV luminosity by scaling up the values of \citet{2010ApJS..187..495L} by 4~\%. Recently, \citet{2015ApJS..218....9W} published the results of a deep XMM-{\it Newton}\ survey of M33 with a larger coverage than ChASeM33, up to the $D_{25}$ isophote of the galaxy. They recovered most of the SNRs of \citet{2010ApJS..187..495L}, except in the central region where source confusion is an issue for XMM-{\it Newton}, and detected or confirmed eight new SNRs, including three sources far in the outskirts of M33. We converted the unabsorbed 0.2--4.5~keV EPIC flux of the new inclusions of \citet{2015ApJS..218....9W}, obtained assuming a power-law spectrum, to the 0.3--8~keV luminosity for the same spectrum as the SNRs of \citet{2010ApJS..187..495L}. The final, concatenated list of M33 SNRs thus comprises 90 objects. \begin{figure}[t] \centering \includegraphics [width=0.999\hsize]{XLF_final_corrected_log_allM31_updatedM33.jpg} \caption[\ Cumulative X-ray luminosity function of SNRs in Local Group galaxies]{Cumulative X-ray luminosity function of SNRs in Local Group galaxies. See text for details and references on how $L_X$ was measured for each sample. The brightest SNR in each galaxy is marked by a dot. The thin dotted lines are nonlinear least-square fits of a power law ($N (> L_X) \propto L_X\ ^{\alpha}$). Slopes $\alpha$ are given in the legend. These fits are only used to characterise the slopes and illustrate the differences between galaxies; they do not represent a physical fit of the population. } \label{fig_results_XLF} \end{figure} Converting count rate to luminosity in different energy bands assuming a single temperature might affect the slope of the XLF. For instance, from a count rate in the 0.35~keV~--~2~keV band, the luminosity in the broad band is 25~\% higher with $kT = 0.6$~keV than if it is 0.2~keV. The two studies have limited knowledge of the actual spectrum of each remnant, because the larger distances prohibit more complex spectral fits, and they have to assume a particular spectrum, regardless of luminosity. This is not the case in the LMC. We indeed found a trend for brighter remnants to have higher plasma temperatures (Sect.\,\ref{results_spectra_general}). Quantitatively, the median temperatures are $kT = 0.31$~keV for luminosities less than $10^{35}$~erg~s$^{-1}$, 0.55~keV between $10^{35}$~erg~s$^{-1}$ and $10^{36}$~erg~s$^{-1}$, and 0.8~keV above $10^{36}$~erg~s$^{-1}$. The luminosities of M31 SNRs were given assuming $kT = 0.2$~keV; we scaled the 0.3~keV~--~8~keV luminosity up by 1.05, 1.20, and 1.35 for sources with $L_X < 10^{35}$, $10^{35} < L_X < 10^{36}$, and $L_X > 10^{36}$~erg~s$^{-1}$, respectively. M33 SNRs were assumed to have a higher temperature (0.6~keV), which means that the luminosity of objects below $\sim 10^{35}$~erg~s$^{-1}$ was overestimated by about 15~\%, while for those above $10^{36}$~erg~s$^{-1}$ it was underestimated by $\sim 8$~\%. Correcting for this effect ensures a meaningful comparison between M31, M33, and the LMC. The SMC SNR population is comparatively smaller. \citet{2004A&A...421.1031V} presented an X-ray spectral analysis of all SNRs in the SMC known at that time. We used their best-fit models to measure the observed (\mbox{i.\,e.}\xspace absorbed) X-ray luminosity in the same 0.3~keV~--~8~keV band\,\footnote{The luminosity given in \citet{2004A&A...421.1031V}, Table~3, for IKT~22 (1E0102$-$7219, the brightest SMC SNR) was mistyped. Instead of the $150\times10^{27}$~W, it should read $1500\times10^{27}$~W ($1.5 \times 10^{37}$~erg~s$^{-1}$).}, except for IKT~16. For this SNR we used results from \citet{2011A&A...530A.132O}, which included more data from subsequent XMM-{\it Newton}\ observations. Three additional SNRs were covered with XMM-{\it Newton}; the results were published in \citet{2008A&A...485...63F}, from which we borrowed the best-fit spectral models. The latter study also reported a new SNR, HFPK~334. For this one, we used the best-fit model from \citet{2014AJ....148...99C}, which combined XMM-{\it Newton}\ and {\it Chandra}\ observations. Also included is the SNR XMMU~J0056.5$-$7208 identified during the SMC survey \citep{2012A&A...545A.128H,2012PhDT......ppppS}. Finally, the Be/X-ray binary pulsar SXP~1062 was found to be associated to an SNR, of which it is most likely the progenitor \citep{2012MNRAS.420L..13H}. The thermal emission from the SNR was studied by \citet{2012A&A...537L...1H}. This sample of 19 SMC SNRs is the most up to date. \paragraph{Comparative study of SNR XLFs:} The cumulative XLFs of M31 and M33 in the 0.3~keV~--~8~keV band, corrected for the $kT$~--~$L_X$ trend, are shown along that of the SMC and LMC in Fig.\,\ref{fig_results_XLF}. \textbf{In terms of depth}, the LMC XLF dominates. There is a single SNR at $L_X < 2\times 10^{34}$~erg~s$^{-1}$ in M33 and in the SMC, but the bright interior pulsar in the SMC case (SXP~1062) makes the measurement of the thermal emission luminosity uncertain. In contrast, there are eight SNRs with $L_X \lesssim 2\times 10^{34}$~erg~s$^{-1}$ in the LMC, of which seven were discovered or confirmed thanks to XMM-{\it Newton}\ observations. \textbf{In terms of number}, the largest population so far is found in M33 (90 SNRs in X-rays), probably owing to the depth of the {\it Chandra}\ survey (using 100~ks pointings) in the central 15\arcmin, the overlap with a deep XMM-{\it Newton}\ survey up to the $D_{25}$ isophote, and the favourable (face-on) orientation of M33. However, the population of M31 SNRs is larger than any other at $L_X \lesssim 5\times 10^{35}$~erg~s$^{-1}$ and is only limited by the depth of the survey ($\sim 10^{35}$~erg~s$^{-1}$). The ratio of M31-to-M33 SNRs in the $10^{35}$~--~$10^{36}$~erg~s$^{-1}$ range is at most 1.5, \mbox{i.\,e.}\xspace substantially smaller than the mass ratio of the galaxies \citep[10--20,][]{2003MNRAS.342..199C,2014MNRAS.443.2204P}. This shows the effect of the higher (recent) SFR in M33 compared to M31 \citep[0.45~M$_{\sun}$~yr$^{-1}$ vs. 0.27~M$_{\sun}$~yr$^{-1}$,][]{2009A&A...493..453V, 2010A&A...517A..77T} leading to a larger production of CC SNRs in M33. In the same luminosity range, the number of LMC SNRs is comparable to that in M33. This is expected because the LMC is only slightly less massive than M33. Furthermore, the recent SFR of the LMC is high, 0.3--0.4~M$_{\sun}$~yr$^{-1}$ in the last 40 Myr \citep{2009AJ....138.1243H}. This conspires with the high current type Ia SN rate (Sect.\,\ref{results_sfh_ratio}) to build up the large population of SNRs in the LMC. Finally, the ``feather-weight'' SMC (about ten times less massive than the LMC, \citep{2004ApJ...604..176S,2006AJ....131.2514H} has a smaller, yet decent population of remnants, likely owing to its recent star formation activity \citep[0.08--0.3~M$_{\sun}$~yr$^{-1}$,][]{2004AJ....127.1531H}. \textbf{In terms of shape}, the XLF of M31 SNRs is the most uniform, following a power law ($N (> L_X) \propto L_X\ ^{\alpha}$) with $\alpha = -0.86\pm0.04$ down to $\sim 2\times 10^{35}$~erg~s$^{-1}$. This holds with or without including the candidates, which means that most are indeed bona-fide SNRs. The M33 remnants follow mostly the same distribution, with $\alpha = -0.76\pm0.05$. Towards the faint end, the M33 XLF flattens and diverges from the power law below $10^{35}$~erg~s$^{-1}$, indicating incompleteness. \citet{2010ApJS..187..495L} concluded that no SNR brighter than $4\times 10^{35}$~erg~s$^{-1}$ was missed across the surveyed field. It is likely that they were over-conservative and that missing SNRs are only those which have luminosity below $10^{35}$~erg~s$^{-1}$. The combined ChASeM33 and XMM-{\it Newton}\ surveys cover the total extent of the galaxy \citep{2008ApJS..174..366P,2015ApJS..218....9W}, so the missing SNRs are either too X-ray-faint (below the surveys' detection limits), or absent/undetected at radio and optical wavelengths, precluding their identifications as SNRs. We performed Kolmogorov-Smirnov (KS) tests to compare the different populations. Using a bootstrapping method, we produced 1000 luminosity functions from the original data. We checked that similar results were obtained when increasing that number to $10^6$. Restricting the analysis to SNRs brighter than $3\times 10^{35}$~erg~s$^{-1}$ to ensure completeness of the samples, we found that the XLFs of M31 and M33 SNRs follow the same distribution at the $3\sigma$ confidence level. There was a marginal indication that the M33 distribution was steeper than that of M31 \citep{2012A&A...544A.144S}, but this difference essentially disappears once the $kT$--$L_X$ trend is taken into account. In the SMC, although the population is limited to about 20 objects, the distribution is relatively uniform. The XLF is however flatter ($\alpha = -0.5\pm0.05$), and KS tests confirm that the SMC population is different from those of M31 and M33. This might indicate that SMC remnants evolve faster (and fade earlier) than in M31 and M33, due to a lower ISM density. The lower metallicity in the SMC \citep[about 0.2 solar,][]{1992ApJ...384..508R} may also participates in the lower luminosities of the SMC SNR, \revision{since the emissivity of hot plasmas is smaller for lower metallicities}. In contrast to the other galaxies, the luminosity function of SNRs in the LMC exhibits a complex behaviour and does not obey a smooth power-law distribution over most of the dynamical range. Regardless of the lower luminosity cut used, the KS tests reject the null hypothesis that LMC SNRs have the same XLF than those in M31, M33, or the SMC. The most striking and robust result is the very prominent bright end of the LMC XLF. There are 13 SNRs with $L_X > 10^{36}$~erg~s$^{-1}$, more than in M31 and M33. Amongst these, there are two SNRs hosting bright pulsars/PWNe and a harder non-thermal spectrum. Even restricting the XLF to the soft band or excluding these two objects, the population of bright LMC SNRs is still above the other ones. This bright population is not a clearly distinct group. In particular, it is not made up of remnants from only one SN type. There are four type Ia SNRs and nine CC-SNRs, so the $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ ratio is higher than overall (Sect.\,\ref{results_sfh_ratio}), but not exceedingly so. Higher luminosities are expected from SNRs interacting with denser ISM. We compared the average LMC \ion{H}{I} column density \citep[from the map of][]{2003ApJS..148..473K} around the position of remnants in various luminosity bins, but no trend could be found. However, the line-of-sight integrated column density might not be a good indicator of the ISM density \emph{local} to the remnant, considering that the SNR could be in front of or behind the regions where most of the neutral hydrogen is (see Sect.\,\ref{results_distribution}). A possible explanation for the population of bright SNRs in the LMC stems from its lower metallicity. Massive stars lose a considerable amount of mass in the form of winds \citep[\mbox{e.\,g.}\xspace][]{2000ARA&A..38..613K}. The stellar winds blow low-density cavities, bordered by dense shells, around the stars that eventually explode as (core-collapse) SNe. The interaction of the SN shocks with the modified CSM results in a different evolution compared to that in a constant-density ISM. \citet[][and references therein]{2005ApJ...630..892D,2007ApJ...667..226D} explored the evolution of remnants in wind-blown cavities. It was shown that it critically depends on one parameter (coined $\Lambda$), the ratio of the mass of the dense shell to that of the ejected material. For low values ($\Lambda < 1$) the X-ray luminosity increases sharply when the shock reaches the dense shell early on ($t < 10^3$~yr). If instead the shell is more massive compared to the ejecta material, the shock propagates in the very low density of the (much larger) bubble, producing less X-ray emission. The increase of X-ray luminosity upon impact (after a few thousand years) is also smaller than in the low-$\Lambda$ case \citep[][his Figs. 7 and 12]{2005ApJ...630..892D}. The properties of the cavities around massive stars are determined by the mass loss rate $\dot{M}$ during their various evolutionary stages. This in turn is affected by the elemental abundance (\mbox{i.\,e.}\xspace metallicity), because the main driving mechanism of stellar winds is the transfer of momentum from photons to the star atmospheric gas by line interactions\,\footnote{The product abundance $\times$ ionisation fraction $\times$ number of available lines for metals is comparable to that of hydrogen and helium.} \citep{2000ARA&A..38..613K,2001A&A...369..574V}. By measuring mass-loss rates of early-type stars in the Galaxy, LMC, and SMC, \citet{2007A&A...473..603M} could quantify the dependence of $\dot{M}$ on the metallicity as $\dot{M} \propto Z^{0.83}$. It is therefore expected that in lower metallicity environment (\mbox{e.\,g.}\xspace LMC) massive stars explode in wind-blown cavities with lower $\Lambda$, and are more likely to produce \revision{\emph{young}} remnants that are brighter in X-rays. \revision{Since the SMC has the lowest metallicity of our sample of galaxies (0.2 solar), one would also expect an excess of bright sources. However, the SMC does not host SNRs as young (less than a few thousand years) as the LMC \citep{2004A&A...421.1031V}, and it is likely that the smaller wind-blown bubbles do not affect the X-ray luminosities of more evolved remnants. Furthermore, the small number of SNRss in the SMC hinders conclusions regarding a possible excess of bright sources.} Finally, there are also four type Ia SNRs amongst the bright end of the population, to which the explanation discussed above does not apply. If we exclude, these however, there is still an excess. Because they are prominently young (three are less than a thousand years old), it appears that the high current type Ia SN rate in the LMC (Sect.\,\ref{results_sfh_ratio}) will also contribute to a larger population of bright remnants. Between $\sim 1\times 10^{35}$~erg~s$^{-1}$ and $5\times 10^{35}$~erg~s$^{-1}$, where many SNRs reside (a third of the sample), the LMC XLF is comparable in shape to the M31 and M33 XLFs, with a power-law distribution (consistent with $\alpha$ between $-1$ and $-0.8$), and in number to M33 (M31 begins to have more sources below $\sim 8\times 10^{35}$~erg~s$^{-1}$). Towards the fainter end, the LMC XLF is again remarkable via its significant flattening. It is unlikely that this represents an overall flatter distribution (at least not as strongly as in the SMC), because it would imply that a lot of SNRs with $L_X \sim$(5--8)$\times 10^{35}$~erg~s$^{-1}$ (thus easy to identify) have been missed. It is more plausible that the flattening of the XLF is almost exclusively due to incompleteness. The majority of the remnants at $L_X < 8\times 10^{34}$~erg~s$^{-1}$ (15/22) were identified/confirmed thanks to (pointed or serendipitous) XMM-{\it Newton}\ observations. Though many were already detected with ROSAT, the combination of the large effective area and resolution of XMM-{\it Newton}\ is usually needed to confirm the extent and thermal emission of candidates. Even with the VLP survey, the area of the LMC covered by XMM-{\it Newton}\ is less than 20 square degrees, \mbox{i.\,e.}\xspace about a third of the whole galaxy. \revision{In particular, X-ray coverage to the south-west of the LMC Bar is sparse (see Fig.\,\ref{fig_observations_survey})}. Extending the covered fraction warrants to find the missing remnants. The M31 survey with XMM-{\it Newton}\ exemplifies how a full coverage results in a high completeness: the M31 SNR XLF is uniform down to the sensitivity limit of the survey, which fully covers the $D_{25}$ ellipse of M31 \citep{2012A&A...544A.144S}. In the LMC, the situation could easily be improved with more X-ray observations. We briefly discuss possible strategies in Sect.\,\ref{summary}. \section{3D spatial distribution} \label{results_distribution} \subsection{Comparison with other wavelengths} \label{results_distribution_comparison} The positions of SNRs in the LMC are plotted on the \ion{H}{I} column density map of \citet{2003ApJS..148..473K}, showing the LMC gas disc (Fig.\,\ref{fig_results_distribution_HI}). The population exhibits correlations with neutral hydrogen structures. The most striking example is the many SNRs (a dozen) around the supergiant shell (SGS) LMC~4 \citep[][SGS~11 in the notation of \citealt{1999AJ....118.2797K}]{1980MNRAS.192..365M}. SGSs are formed by the combined action of multiple generations of massive star formation. Their expansions shock and sweep up the ISM, which can trigger further star formation along the SGS rims \citep[][and references therein]{1998ASPC..148..150E}. The impact of SGSs on star formation, particularly in the LMC, was demonstrated by \citet{2001ApJ...553L.185Y,2001PASJ...53..959Y}. They found that the concentration of molecular clouds and young star clusters is enhanced by a factor of 1.5--2 near the SGS rims, and most of these clusters are on the side of the molecular clouds facing the interior of the SGSs. \citet{2009AJ....137.3599B} added massive young stellar objects and \ion{H}{II} regions/OB associations to the list of tracers of recent star formation that are well correlated with the shell peripheries. \begin{figure}[t] \centering \includegraphics [width=0.999\hsize]{SNRs_HI.jpg} \caption[\ Positions of LMC SNRs on an \ion{H}{I} column density map]{Positions of LMC SNRs (red circles) on the \ion{H}{I} column density map of \citet{2003ApJS..148..473K}, displayed on a linear scale ranging from 0 to $6 \times 10^{21}$~cm$^{-2}$. Black and blue contours indicate levels of 1 and $3 \times 10^{21}$~cm$^{-2}$, respectively. The green contours are the 3$\sigma$ level (1.2~K~km~s$^{-1}$) of the velocity-integrated map of $^{12}$CO $(J = 1 - 0)$ from the NANTEN survey \citep{2008ApJS..178...56F}. The position of the SGS LMC~4 is marked with a dashed black circle. } \label{fig_results_distribution_HI} \end{figure} \begin{figure*}[ht] \centering \includegraphics [width=0.495\hsize]{SNRs_Halpha_MCELS_typed.jpg} \includegraphics [width=0.495\hsize]{SNRs_R_SHASSA_typed.jpg} \caption[\ Location of LMC SNRs relative to H$\alpha$ and red continuum emission.]{ \emph{Left:} Location of LMC SNRs on the MCELS H$\alpha$ mosaic, displayed logarithmically in greyscale. ``Likely-Ia'' and ``secured-Ia'' SNRs are marked by red circles and squares, respectively, while ``likely-CC'' and ``secured-CC'' SNRs are shown in blue. Green circles are SNRs with undecided type. \emph{Right:} Same as left on a red continuum image from the SHASSA survey. } \label{fig_results_distribution_optical} \end{figure*} Because (core-collapse) SNRs are themselves very good indicators of recent star formation, the distribution of many SNRs around the edge of LMC~4 is a further sign of the important role played by SGSs in triggering star formation. In turn, this could be used to look for \emph{new} SNRs: The high number of remnants around LMC~4 is explained in part by the large size of the SGS ($\sim 1.2$~kpc), but also by the good X-ray coverage (only two out of twelve SNRs around LMC~4 were not observed with XMM-{\it Newton}). Exploring SGSs less well studied, \mbox{e.\,g.}\xspace in the west and south-west regions of the LMC, is promising, as we discuss in Sect.\,\ref{summary}. Another prominent \ion{H}{I} feature is the density enhancement in the east that extends southwards into ``arms B and E'' \citep[see Fig.~1 of ][]{2003MNRAS.339...87S}, which are interpreted as tidal features. Most of the SNRs in the south-east of the LMC are associated to the 30~Doradus complex (which itself might be a manifestation of tidal shear). Only a handful of sources are known in the regions of the B and E arms \citep[and a single SNR is confirmed south of a declination of $-71$\textdegree,][]{2013MNRAS.432.2177B}. The southern region of the LMC is poorly studied in X-rays, preventing conclusions regarding the dearth of SNRs observed there. However, it could be an interesting target for future studies (Sect.\,\ref{summary}). In Fig.\,\ref{fig_results_distribution_optical}, we show the position of SNRs relative to H$\alpha$ (left, MCELS data), and to a red continuum image from the SHASSA survey \citep{2001PASP..113.1326G}. \revision{The association of many CC-SNRs with large \ion{H}{II} regions, which trace regions of active star formation, is evident: Out of 31 secured or ``likely'' CC SNRs, 25 correlate with strong H$\alpha$ emission, three with moderate H$\alpha$ emission, and only three are not associated to large optical nebulosities.} In contrast, many SNRs are not associated to H$\alpha$ emission, \mbox{e.\,g.}\xspace in the Bar, or south-east and north-west of it. These are the type Ia SNRs\,: \revision{22 out of 24 secured or ``likely'' Ia SNRs have no coincident H$\alpha$ emission (except the remnant's emission itself). Only N103B is spatially associated to strong H$\alpha$ emission, although it is plausibly a projection effect, with the SNR on the far side of the LMC (as suggested by its large ``$N_H$ fraction'', see Sect.\,\ref{results_distribution_adding}). This would resolve the long-standing issue of the association of a type Ia SNR with a region of intense star formation \citep{1988AJ.....96.1874C,2009ApJ...700..727B}. The type Ia SNRs are} in regions of relatively high stellar density (\mbox{e.\,g.}\xspace the Bar, as traced in the red continuum image) but are also present in more isolated, less active regions, where intermediate- and old-age stellar populations dominate. \subsection{Adding the third dimension} \label{results_distribution_adding} So far, we discussed the 2-D distribution of SNRs, projected on the sky. It is possible to gain a rudimentary sense of depth, by comparing the absorbing column density derived from X-ray observations (hereafter $N_{H} ^{\ X}$), to the line-of-sight integrated \ion{H}{I} column density, derived from 21~cm observations (hereafter $N_{H} ^{\mathrm{\,21\,cm}}$). We recall that $N_{H} ^{\ X}$ is an \emph{equivalent} neutral hydrogen column density assuming a given chemical composition\,\footnote{X-rays are absorbed not only by \ion{H}{I}, but also by molecular hydrogen, helium, and metals \citep{2000ApJ...542..914W}.}. The ratio $N_{H} ^{\ X} / N_{H} ^{\mathrm{\,21\,cm}}$ (hereafter ``$N_H$ fraction'') is a measurement of how deep an SNR is with respect to the \ion{H}{I} structure. Interpreting the $N_H$ fraction is made easier by the favourable orientation of the LMC. Neutral hydrogen is mainly distributed in a nearly circular disc at a moderate inclination angle\,\footnote{ Measurements of the orientation of the disc, \mbox{i.\,e.}\xspace inclination $i$ (with 0~\textdegree\ defined as face-on) and position angle of line of nodes $\Theta$ (the intersection of the disc and sky planes, measured eastwards of north), are widely scattered but are in the range 25~\textdegree$< i <$ 40~\textdegree\ and 120~\textdegree$< \Theta <$ 155~\textdegree\ \citep[][]{1997macl.book.....W,2013A&A...552A.144S,2014ApJ...781..121V}. } , with a thickness of $\sim 360$~pc \citep{1999AJ....118.2797K}. Small $N_H$ fractions ($\lesssim 0.3$, \mbox{e.\,g.}\xspace when $N_{H} ^{\ X}$ is consistent with zero) indicate that the SNR is well in front of the disc; intermediate values (0.3 to 0.8) are expected from sources within the disc; high fractions (0.8--1.2; a value of 1.23 is expected when including contributions of neutral and singly ionised helium, \citealt{1999ApJ...510..806A}) are associated to remnants on the far side, or behind, the disc. Values significantly above 1.2 are discussed below. \begin{figure}[t] \includegraphics [width=0.95\hsize]{nH_fraction_Lx.jpg} \caption[\ $N_H$ fraction $= N_{H} ^{\ X} / N_{H} ^{\mathrm{\,21\,cm}}$ as function of broad-band X-ray luminosity]{$N_H$ fraction $= N_{H} ^{\ X} / N_{H} ^{\mathrm{\,21\,cm}}$ as function of broad-band X-ray luminosity (see text for details). Downward pointing arrows indicate upper limits, for objects with $N_{H} ^{\ X}$ consistent with 0. SNRs covered with {\it Chandra}\ are shown in red. } \label{fig_results_distribution_nH_Lx} \end{figure} \begin{figure*}[ht] \centering \includegraphics [width=0.85\hsize]{spatial_nH_fraction_label.jpg} \vspace{-0.2cm} \caption[\ ``Pseudo-3D'' distribution of LMC SNRs, using $N_H$ fractions as indicators of location along the line of sight]{``Pseudo-3D'' distribution of LMC SNRs, using $N_H$ fractions (quantified by the colour bar) as indicators of location along the line of sight. Objects ``in front of the disc'' ($N_H$ fraction $< 0.3$) are marked by upward pointing triangles; downward pointing triangles are used for those ``behind the disc ($N_H$ fraction $> 0.8$). Objects within the disc (0.3 to 0.8) are marked by dots. The black and blue contours delineate \ion{H}{I} column densities of 1 and $3 \times 10^{21}$~cm$^{-2}$, respectively (same as in Fig.\,\ref{fig_results_distribution_HI}). Prominent LMC structures are labelled. } \label{fig_results_distribution_nH} \end{figure*} $N_{H} ^{\ X}$ is taken from the spectral results of Sect.\,\ref{results_spectra}. For the 1T/2T sample, the adopted value is simply that in Table~\ref{appendix_table_spectra_all}. Only two 2T remnants have two different absorption components: For MCSNR~J0517$-$6759 we used the higher value. For MCSNR~J0535$-$6602 (N63A), the highly absorbed component is ejecta-rich and has a lower EM; we therefore adopted the (lower) $N_{H}$ of the ISM component, which is more representative. For the brightest SNRs, we adopted the best-fit values given in Table~\ref{appendix_table_spectra_brightest} and in Sect.\,\ref{results_spectra_1987A} (for SNR~1987A). For three SNRs with {\it Chandra}\ data only, we obtained $N_{H} ^{\ X}$ from the same references as in Sect.\,\ref{results_XLF}. Five remaining SNRs have either no or only ROSAT data available, and are not used in this analysis. $N_{H} ^{\mathrm{\,21\,cm}}$ is measured from the map of \citet{2003ApJS..148..473K}, by averaging the column density around each SNR over a 5\arcmin\ radius (the resolution of the map is about 1\arcmin, \mbox{i.\,e.}\xspace $\sim 15$~pc). We checked that using a smaller averaging radius, closer to the typical SNRs size, gave essentially the same results. We then computed the ratio, propagating only the uncertainties on $N_{H} ^{\ X}$ since they should dominate the error budget in most cases. $N_H$ fractions are plotted against $L_X$ in Fig.\,\ref{fig_results_distribution_nH_Lx}. No correlation is evident, as expected: $L_X$ depends mostly on the evolutionary state of the remnant, while the depth within the LMC does not. At lower luminosities, however, there are more remnants with only upper limits on $N_{H} ^{\ X}$ (and thus on the $N_H$ fraction). This likely stems from the difficulty of deriving $N_{H} ^{\ X}$ from limited X-ray statistics. For the same reason, the error bars are larger in the handful of cases below a few $10^{34}$~erg~s$^{-1}$, and the sense of depth provided by the $N_H$ fraction becomes blurry. In Fig.\,\ref{fig_results_distribution_nH}, the $N_H$ fraction is projected on the sky, on the same field of view as showed in Figs.\,\ref{fig_results_distribution_HI} and \ref{fig_results_distribution_optical}. Prominent LMC structures are labelled, including the LMC Bar: The bar is traced by the stars, both in young and intermediate-age populations \citep[][respectively]{1972VA.....14..163D,2001AJ....122.1827V}. It was found to be on the near side of the LMC, ``floating'' $\sim 0.5$~kpc to 5~kpc above the plane of the disc, as evidenced from near-infrared star count maps and distances to Cepheids, red clump, and RR Lyrae stars \citep{2000ApJ...545L..35Z,2004ApJ...601..260N, 2009AJ....138....1K,2012AJ....144..106H}. This interpretation is challenged by red clump stars distance measurements of \citet{2009ApJ...703L..37S} and \citet{2013A&A...552A.144S}. \citet{2004ApJ...614L..37Z} proposed an alternative model, where the Bar is a stellar bulge (with a $z$ scale height of 2.5--3~kpc) whose south-eastern part is obscured by the gas disc. Consequently, the photometric centre is offset (in the plane of the sky), and distance measurements are biased to stars in the near side of the bulge. As can be seen in Fig.\,\ref{fig_results_distribution_nH}, SNRs in the Bar region are primarily on the near side (low $N_H$ fraction). Since some of these remnants must originate from the stellar population of the Bar, this lends support to previous findings that the Bar is indeed ``floating'' in front of the disc. One advantage of our method is that it does not need distance measurements of both disc and Bar objects; it directly gives locations \emph{relative} to the disc. In the bulge model of \citet{2004ApJ...614L..37Z}, SNRs in the Bar, but behind the disc\footnote{The obscuring effect by the disc on X-rays is moderate, not sufficient to mask SNRs as it does on stars in optical surveys.}, should have large $N_H$ fractions, while some scatter should be found along the line of nodes, where the disc and bulge intersect. Unfortunately, there are too few SNRs known in the Bar region to adequately test this alternative model. The remnants in the 30 Doradus region and directly south of it (MCSNR~J0540$-$6920 and J0540$-$6944) are the most absorbed, both in absolute and relative terms (largest $N_{H} ^{\ X}$ and largest $N_H$ fractions). From distance measurements with red clump stars, \citet{2009AJ....138....1K} found that 30 Dor was further away, although it was noted that this could be an effect of 30 Dor being next to the Bar floating in front of the disc. With our analysis it is confirmed that not only does 30 Dor lie at a larger distance compared to neighbouring features, but is indeed \emph{behind} the plane of the gas disc. Finally, it is striking from Figs.~\ref{fig_results_distribution_nH_Lx} and \ref{fig_results_distribution_nH} that a few SNRs have an $N_H$ fraction in excess of 1.2, and up to 2.3. The extra absorption is likely to come from molecular hydrogen in front of the object \citep{1999ApJ...510..806A}. We show in Fig.\,\ref{fig_results_distribution_HI} CO contours from the NANTEN survey \citep{2008ApJS..178...56F}. CO is used as a tracer of molecular hydrogen. In the east of the LMC there are large regions of molecular gas, following the peak density in \ion{H}{I}. In most cases with large $N_H$ fractions, we find nearby (less than a few arcmin away in projection) CO clouds, using either the NANTEN catalogue or the higher resolution MAGMA survey \citep[][respectively]{2008ApJS..178...56F,2011ApJS..197...16W}. We stress that this does not imply that the remnants and the molecular clouds are physically connected, but can be merely a projection effect, with the remnant behind, and not interacting with, the molecular cloud. However, physical interactions can happen, as exemplified by the case of MCSNR J0517$-$6759, where secondary evidence hints at a physical connection \citep{2014A&A...561A..76M}. \section{Summary and outlooks} \label{summary} We have studied the X-ray emission of the rich population of SNRs in the LMC, using data from the XMM-{\it Newton}\ observatory. We compiled a sample of 59 definite SNRs, cleaned of misclassified objects and doubtful candidates. XMM-{\it Newton}\ data are available for the vast majority (51 SNRs) of the sample, which called for a homogeneous re-analysis of the X-ray spectra of the entire population. This alleviates the inconsistencies in spectral models and analysis methods used, and allows meaningful comparisons of, \mbox{e.\,g.}\xspace, temperature, chemical composition, and luminosity of SNRs. The main outcomes of this systematic spectral analysis are the following: \begin{itemize} \item First, it provides the best census of LMC remnants with an Fe~K line ($\approx 13$~\% of the sample), which is a powerful tool to retrieve the type of SN progenitor. \item Second, it reveals the contribution to the X-ray emission by hot SN ejecta for 23 SNRs ($\approx 39$~\% of the sample). Since the abundance ratios measured in the ejecta components reflect the nucleosynthesis yields of either type Ia and CC SNe, this is of great help for the typing of a substantial fraction of the sample. \item And third, it allows us to select 16 SNRs ($\approx 27$~\% of the sample) where the X-ray emission is dominated by swept-up ISM. In these objects, the fitted abundances provide a measurement of chemical abundances in the gas phase of the LMC ISM. A metallicity of {[Fe/H]} $= -0.46(_{-0.18}^{+0.13})$~dex is found based on XMM-{\it Newton}\ SNRs. Light $\alpha$-elements (O, Ne, Mg) have lower abundance ratios {[}$\alpha$/Fe{]} than in the Milky Way. Although this general result was previously known, one can now study abundance ratios within the LMC as function of age. In comparison to old clusters ($\sim 10$~Gyr) and red giant stars (1~Gyr and older), the relatively young gas phase ISM ($\lesssim 100$~Myr) has a higher metallicity [Fe/H] and lower {[}$\alpha$/Fe{]} (in particular [O/Fe]). This reflects the continued enrichment by type Ia SNe in the last $\sim 1$~Gyr, which injected large amounts of Fe back in the ISM. \end{itemize} We devised a \revision{quantitative way} to tentatively type all LMC SNRs, based on their local SFHs and stellar environments, combined with spectral information (\mbox{i.\,e.}\xspace detection of SN ejecta, when present). We calibrated this method with SNRs having a well-established type based on robust indicators. The resulting ratio of CC to type Ia SNe that exploded in the LMC over the last few $10^4$~yr (\mbox{i.\,e.}\xspace very close to the current ratio of CC/Ia \emph{rates}) is $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$~$= 1.35(_{-0.24}^{+0.11})$. This is lower than the ratio typically measured in local SNe surveys and in galaxy clusters. After arguing that SNRs of both types might be absent from the sample (\mbox{i.\,e.}\xspace the current sample is not biased towards one type only), we concluded that the low $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ ratio is a consequence of the specific SFH of the LMC, and particularly the enhanced star formation episodes that occurred 500~Myr and 2~Gyr ago. Because the majority of type Ia SNe explode within 2~Gyr after star-forming episodes, we are coincidentally observing the LMC at a time when the type Ia SN rate is high. Integrated over an SNR lifetime, this results in the relatively low $N_{\mathrm{CC}}/N_{\mathrm{Ia}}$\ observed. We also assessed the spatial distribution of SNRs with respect to cool gas (traced by \ion{H}{I} and molecular emission), star-forming regions (H$\alpha$), and stars (red continuum). A concentration of SNRs around the edge of the SGS LMC~4 exemplifies the role of SGSs in triggering star formation. The column density $N_{H} ^{\ X}$ obtained during the X-ray spectral analysis of the whole sample, when compared to the \ion{H}{I} column density, provides a measurement of the position of each SNR relative to the \ion{H}{I} structure. Since most of the neutral gas lies in a well-defined thin disc seen at a moderate inclination angle, the fraction $N_{H} ^{\ X} / N_{H} ^{\mathrm{\,21\,cm}}$ is a good indicator of the depth along the line-of-sight, revealing the ``pseudo-3D'' distribution of SNRs in the LMC. Previous studies found that the Bar is ``floating'' in front of the disc, but this statement was challenged by some authors. Our analysis shows that SNRs in the Bar regions are primarily on the near side (low $N_H$ fraction), lending support to the foreground location of the Bar. Finally, we compared the populations of SNRs in Local Group galaxies via their X-ray luminosity function. The XLF of SNRs in the SMC, M31, and M33 are relatively homogeneous over all the observed luminosity range, although that of the SMC is flatter. The LMC XLF is remarkable by its prominent bright end. The largest population of SNRs brighter than $L_X > 10^{36}$~erg~s$^{-1}$ is found in the LMC (13 SNRs vs. 8 and 7 in M31 and M33, respectively). This is possibly an effect of the lower metallicity in the LMC: Massive stars have smaller mass loss rates (less heavy elements to drive stellar winds) and the interaction of SN ejecta with less massive CSM shells produce brighter remnants. The number of SNRs brighter than $10^{35}$~erg~s$^{-1}$ in the LMC is comparable to that in M31 and M33, likely owing to its high recent SFR and high current type Ia SN rate. The LMC XLF flattens significantly because of incompleteness: Many X-ray-faint SNRs have been missed so far, due to the incomplete coverage of the LMC with sensitive X-ray instruments (\mbox{i.\,e.}\xspace\ {\it Chandra}\ or XMM-{\it Newton}). This work presents the state of the art on X-ray emission of SNRs in the LMC. However, it is clear that the \emph{current} sample is incomplete, as evidenced by the flattening of the X-ray luminosity function of LMC SNRs (Sect.\,\ref{results_XLF}). In the last 15 years, new SNRs were confirmed or discovered in the LMC at an almost constant rate (one or few per year), principally using X-ray observations. There is no indication that this trend will stop in the near future, so that more observations of the LMC will increase the sample of SNRs. Nevertheless, the observing time of major observatories is limited and expensive. We conclude this work by offering several strategies to maximise the chance of finding ``missing'' SNRs: \begin{itemize} \item As shown in Sect.\,\ref{results_distribution}, star formation is intense around the SGS LMC~4, and the edges of the shell abound in SNRs. Many LMC SGS have not been (fully) surveyed by XMM-{\it Newton}, for instance \citep[in the notation of][]{1999AJ....118.2797K} SGS~3 and 6 in the north, SGS~2 and 5 in the west, and SGS~4 in the \revision{south}. Targeting in particular SGSs associated to star formation (\mbox{e.\,g.}\xspace with \ion{H}{II} region along the rims) warrants successful SNR searches. \item The follow-up observations of X-ray-selected candidates (usually ROSAT sources) with XMM-{\it Newton}\ have been extremely successful. Such programmes should be continued until completion of the list of candidates. \item Even the ROSAT (targeted) survey of the LMC was not covering the LMC up to its outskirts. To find SNRs in these regions, the future \emph{eROSITA} survey \citep{2012arXiv1209.3114M} will be most useful, covering the full sky in the 0.5~keV~--~8~keV band. The LMC is located close to the South Ecliptic Pole and will be observed with a deeper exposure than the rest of the sky. Looking for new SNR candidates, especially evolved X-ray-only SNRs, will be of special interest. \end{itemize} Even in \emph{existing} data, some SNRs might be as yet unrecognised. There is significant diffuse emission from large-scale structures of the hot ISM in the LMC \citep{2002A&A...392..103S}, which is seen in greater spatial and spectral detail by XMM-{\it Newton}. By looking for ejecta-enhancement, it might be possible to distinguish old SNRs with low surface brightness hiding in the diffuse emission. Finding new SNRs is desirable. Individual objects of special interest are often found serendipitously, without prior knowledge of their exciting nature. The evolved type Ia SNRs presented in \citet{2014A&A...561A..76M} and \citet{2014MNRAS.439.1110B}, are good examples; the discovery of the SNR around the Be/X-ray binary SXP~1062 is another one \citep{2012MNRAS.420L..13H,2012A&A...537L...1H}. Furthermore, as demonstrated in this work, SNRs are powerful probes of the ISM of their host galaxies. With more SNRs where metallicity can be measured, we will obtain a more accurate knowledge of the chemical composition of the hot ISM or better assess its homogeneity. \begin{acknowledgements} \revision{We thank the anonymous referee for carefully reading this rather long manuscript and providing us with comments and suggestions to improve it.} The XMM-{\it Newton}\ project is supported by the Bundesministerium f\"ur Wirtschaft und Technologie\,/\,Deutsches Zentrum f\"ur Luft- und Raumfahrt (BMWi/DLR, FKZ 50 OX 0001) and the Max-Planck Society. \begin{comment} Cerro Tololo Inter-American Observatory (CTIO) is operated by the Association of Universities for Research in Astronomy Inc. (AURA), under a cooperative agreement with the National Science Foundation (NSF) as part of the National Optical Astronomy Observatories (NOAO). We gratefully acknowledge the support of CTIO and all the assistance which has been provided in upgrading the Curtis Schmidt telescope. The MCELS is funded through the support of the Dean B. McLaughlin fund at the University of Michigan and through NSF grant 9540747. We used the {\sc karma} software package developed by the ATNF. The Australia Telescope Compact Array is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. P.\,M. acknowledges support by the Centre National d’\'Etudes Spatiales (CNES) and from the BMWi/DLR grant FKZ 50 OR 1201. M.\,S. acknowledges support by the Deutsche Forschungsgemeinschaft through the Emmy Noether Research Grant SA 2131/1-1. P.\,K. acknowledges support from the BMWi/DLR grant FKZ 50 OR 1309. This research has made use of Aladin, SIMBAD and VizieR, operated at the CDS, Strasbourg, France. \end{acknowledgements} \input{lmcsnr_arxviv.bbl}
\section{Introduction} \label{sec:intro} Tensors provide compact representations for multi-dimensional, multi-perspective data in many problem domains, including image and video processing \cite{Zhang:2008ey, Liu:2013bh, {Kang:2002iu}}, collaborative filtering \cite{Karatzoglou:2010bm,Chen:2005jn}, statistical modeling \cite{Anandkumar:2014vz, animatensor}, array signal processing \cite{Lim:2010if, Sidiropoulos:2000gb}, psychometrics \cite{Wold:1987iw,Smilde:2005wf}, neuroscience \cite{Beckmann:2005fg, MartnezMontes:2004ic}, and large-scale data analysis \cite{Papalexakis:2013tu, Sun:2009tc, Sun:2006ga, Anandkumar:2013va, Cichocki:2014vy}. In this paper we consider the problem of tensor recovery - given partial information of a tensor via linear measurements, one wishes to learn the entire tensor. While this inverse problem is ill-posed in general, we will focus on the setting where the underlying tensor is simple. The notion of simplicity that we adopt is based on the \emph{(Kruskal) rank} of the tensor, which much like the matrix rank is of fundamental importance - tensors of lower rank have fewer constituent components and are hence simple. For example, video sequences are naturally modeled as tensors, and these third order tensors have low rank as a result of homogeneous variations in the scene \cite{wang2014low}. Unlike the matrix case, however, computational tasks related to the tensor rank such as spectral decompositions, rank computation, and regularization are fraught with computational intractability \cite{tensorhard,tensorKolda} in the worst case. We focus on linear inverse problems involving tensors. Linear measurements of an unknown tensor $\vct{X}$ are specified by $y = \mathcal{L}(\vct{X})$ where $\mathcal{L}$ is a linear operator and $y \in \mathbb{R}^{m}$. Here the quantity $m$ refers to the number of measurements, and the minimum number of measurements \footnote{We use the terms measurements and samples interchangeably.} $m$ required to reliably recover $\vct{X}$ (called the \emph{sample complexity}) is of interest. While in general, such problems are ill-posed and unsolvable when $m$ is smaller than the dimensionality of $\vct{X}$, the situation is more interesting when the underlying signal (tensor) is structured, and the sensing mechanism $\mathcal{L}(\cdot)$ is able to exploit this structure. For instance, similar ill-posed problems are solvable, even if $m$ is substantially lower than the ambient dimension, when the underlying signal is a sparse vector, or a low-rank matrix, provided that $\mathcal{L} (\cdot)$ has appropriate properties. We focus for the most part on tensors of order $3$, and show later that all our results extend to the higher order case in a straightforward way. We introduce a class of measurement operators known as \emph{separable measurements}, and present an algorithm for low-rank tensor recovery for the same. We focus on two specific measurement mechanisms that are special cases of separable mechanisms: \begin{itemize} \item \emph{Separable random projections:} For tensors of order $3$, we consider observations where the $i^{th}$ measurement is of the form $\mathcal{L}_i(\vct{X}):=\langle a\otimes A_i, \vct{X} \rangle$, where $a$ is a random unit vector, $A_i$ is a random matrix, and $\otimes$ represents an outer product of the two. For higher order tensors, the measurements are defined in an analogous manner. Here $\langle \cdot, \cdot \rangle$ is the tensor inner product (to be made clear in the sequel). \item \emph{Completion:} The measurements here are simply a subset of the entries of the true tensor. The entries need to be restricted to merely four slices of the tensor, and can be random within these slices. \end{itemize} For both the random projection and completion settings, we analyze the performance of our algorithm and prove sample complexity bounds. The random sampling mechanisms mentioned above are of relevance in practical applications. For instance, the Gaussian random projection mechanism described above is a natural candidate for compressively sampling video and multi-dimensional imaging data. For applications where such data is ``simple'' (in the sense of low rank), the Gaussian sensing mechanism may be a natural means of compressive encoding. The completion framework is especially relevant to machine learning applications. For instance, it is useful in the context of multi-task learning \cite{icml2013_romera-paredes13}, where each individual of a collection of inter-related tasks corresponds to matrix completion. Consider the tasks of predicting ratings assigned by users for different clothing items, this is naturally modeled as a matrix completion problem \cite{candesmatcomp}. Similarly, the task of predicting ratings assigned by the same set of users to accessories is another matrix completion problem. The multi-task of jointly predicting the ratings assigned by the users to baskets of items consisting of both clothing items and accessories is a tensor completion problem. Another application of tensor completion is that of extending the matrix completion framework for contextual recommendation systems. In such a setup, one is given a \emph{rating matrix} that is indexed by users and items, and the entries correspond to the ratings given by different users to different items. Each user provides ratings for only a fraction of the items (these constitute the sensing operator $\mathcal{L}\left( \cdot \right))$, and one wishes to infer the ratings for all the others. Assuming that such a rating matrix is low rank is equivalent to assuming the presence of a small number of latent variables that drive the rating process. An interesting twist to this setup which requires a tensor based approach is \emph{contextual} recommendation - i.e. where different users provide ratings in different contexts (e.g., location, time, activity). Such a setting is naturally modeled via tensors; the three modes of the tensor are indexed by users, items, and contexts. The underlying tensor may be assumed to be low rank to model the small number of latent variables that influence the rating process. In this setting, our approach would need a few samples about users making decisions in two different contexts (this corresponds to two slices of the tensor along the third mode), and enough information about two different users providing ratings in a variety of different contexts (these are two slices along the first mode). Once the completion problem restricted to these slices is solved, one can complete the entire tensor by performing simple linear algebraic manipulations. Of particular note concerning our algorithm and the performance guarantees are the following: \begin{itemize} \item \textbf{Sample complexity: } In the absence of noise, our algorithm, named T-ReCs (Tensor Recovery via Contractions), provably and exactly recovers the true tensor and achieves an \emph{order-optimal} sample complexity for exact recovery of the underlying tensor in the context of random sensing, and order optimal modulo logarithmic factors in the context of tensor completion. Specifically, for a third order tensor of rank $r$ and largest dimension $n$, the achieved sample complexity is $O(nr)$ for recovery from separable random projections, and $O(nr \log^2n)$ for tensor completion. (These correspond to Theorems \ref{thm:exact_rec_sense} and \ref{thm:exact_rec_comp} respectively.) More generally, for order $K$ tensors the corresponding sample complexities are $O(Knr)$ and $O(Knr \log^2n)$ respectively (Theorems \ref{thm:exact_rec_high} and \ref{thm:exact_rec_highcomp}). \item \textbf{Factorization:} Equally important is the fact that our method recovers a minimal rank factorization in addition to the unknown tensor. This is of importance in applications such as dimension reduction and also latent variable models \cite{animatensor} involving tensors where the factorization itself holds meaningful interpretational value. \item \textbf{Absence of strong assumptions:} Unlike some prior art, our analysis relies only on relatively weak assumptions - namely that the rank of the tensor be smaller than the (smallest) dimension, that the factors in the rank decomposition be linearly independent, non-degenerate, and (for the case of completion) other standard assumptions such as incoherence between the factors and the sampling operator. We do not, for instance, require orthogonality-type assumptions of the said factors, as is the case in \cite{animatensor,Oh_tensor}. \item \textbf{Computational efficiency:} Computationally, our algorithm essentially reduces to linear algebraic operations and the solution of matrix nuclear norm (convex) optimization sub-problems, and is hence extremely tractable. Furthermore, our nuclear norm minimization methods deal with matrices that are potentially much smaller, up to factors of $n$, than competing methods that ``matricize" the tensor via unfolding \cite{squaredeal, tomioka}. In addition to recovering the true underlying tensor, it also produces its unique rank decomposition. \item \textbf{Simplicity:} Our algorithm is conceptually simple - both to implement as well as to analyze. Indeed the algorithm and its analysis follow in a transparent manner from Leurgans' algorithm (a simple linear algebraic approach for tensor decomposition) and standard results for low-rank matrix recovery and completion. We find this intriguing, especially considering the ``hardness'' of most tensor problems \cite{tensorhard,tensorKolda}. Recent work in the area of tensor learning has focused on novel regularization schemes and algorithms for learning low rank tensors; the proposed approach potentially obviates the need for developing these in the context of separable measurements. \end{itemize} The fundamental insight in this work is that while solving the tensor recovery problem directly may seem challenging (for example we do not know of natural tractable extensions of the ``nuclear norm'' for tensors), very important information is encoded in a two-dimensional matrix ``sketch'' of the tensor which we call a contraction. (This idea seems to first appear in \cite{unique1}, and is expanded upon in \cite{Moitra_tensor,Vempala_tensor} in the context of tensor decomposition.) These sketches are formed by taking linear combinations of two-dimensional slices of the underlying tensor - indeed the slices themselves may be viewed as ``extremal'' contractions. For the Gaussian random projections case, the contractions will be random linear combinations of slices, whereas for the completion setting the contractions we work with will be the slices themselves, randomly subsampled. Our method focuses on recovering these contractions efficiently (using matrix nuclear norm regularization) as a first step, followed by additional processing to recover the true tensor. \subsection{Related Work and Key Differences} With a view to computational tractability, the notion of \emph{Tucker rank} of a tensor has been explored; this involves \emph{matricizations} along different modes of the tensor and the ranks of the associated matrices. Based on the idea of Tucker rank, Tomioka et al. \cite{tomioka} have proposed and analyzed a nuclear norm heuristic for tensor completion, thereby bringing tools from matrix completion \cite{candesmatcomp} to bear for the tensor case. Mu et al. \cite{squaredeal}, have extended this idea further by studying reshaped versions of tensor matricizations. However, to date, the sample complexity associated to matrix-based regularization seem to be orders far from the anticipated sample complexity (for example based on a count of the degrees of freedom in the problem) \cite{squaredeal}. In this paper we resolve this conundrum by providing an efficient algorithm that provably enjoys order optimal sample complexity in the order, dimension, and rank of the tensor. In contrast to the matricization approach, alternative approaches for tensor completion with provable guarantees have appeared in the literature. In the restricted setting when the tensor has a symmetric factorization \cite{MOITRA} (in contrast we are able to work in the general non-symmetric setting), the authors propose employing the Lasserre hierarchy via a semidefinite programming based approach. Unfortunately, the method proposed in \cite{MOITRA} is not scalable - it requires solving optimization problems at the $6^{th}$ level of the Lasserre hierarchy which makes solving even moderate-sized problems numerically impractical as the resulting semidefinite programs grow rapidly with the dimension. Furthermore, the guarantees provided in \cite{MOITRA} are of a different flavor - they provide error bounds in the noisy setting, whereas we provide exact recovery results in the noiseless setting. Alternate methods based on thresholding in the noisy setting have also been studied in \cite{Aswani}. An alternating minimization approach for tensor completion was proposed in \cite{Oh_tensor}. Their approach relies on the restrictive assumptions also - that the underlying tensor be symmetric and orthogonally decomposable (we make no such assumptions), and neither do the sample complexity bounds scale optimally with the dimensions or the rank. Unlike alternating minimization schemes that are efficient but rely on careful initializations, our method directly solves convex optimization programs followed by linear algebraic manipulations. Also relevant is \cite{yuan}, where the authors propose solving tensor completion using the tensor nuclear norm regularizer; this approach is not known to be computationally tractable (no polynomial time algorithm is known for minimizing the tensor nuclear norm) and the guarantees they obtain do not scale optimally with the dimension and rank. Finally a method based on the tubal rank and $t$-SVD of a tensor \cite{Aeron} has also recently been proposed, however the sample complexity does not scale optimally. As a final point of contrast to the aforementioned work, our method is also conceptually very simple - both to implement and analyze. In Table \ref{table:comparisons}, we provide a brief comparison of the relevant approaches, their sample complexities in both the third order and higher order settings as well as a few key features of each approach. \small \begin{table}[H] \label{table:comparisons} \begin{center} \begin{tabular}{ccc p{4.5cm}} \hline \textbf{Reference} & \textbf{Sample Complexity} & \textbf{Sample Complexity} & \textbf{Key Features} \\ &\textbf{($3^{rd}$ order)} & \textbf{($K$th order)} & \\ \hline \\ \cite{tomioka} & $O(rn^2)$ & $O(rn^{K-1})$ & Tucker rank, tensor unfolding \\ &&& \\ \cite{squaredeal} & $O(rn^2)$ & $O(rn^{\lfloor{\frac{K}{2}}\rfloor})$ & Tucker rank, tensor unfolding \\ &&& \\ \cite{Oh_tensor} & $O(r^5n^{\frac{3}{2}}\log^{5}n)$ & - & Kruskal rank, alternating minimization, orthogonally decomposable tensors, symmetric setting, completion only. \\ &&& \\ \cite{Aeron} & $O(rn^2\log n)$ & - & Tensor tubal rank, completion only \\ &&& \\ \cite{yuan} & $O(r^{\frac{1}{2}}(n \log n)^{\frac{3}{2}})$ & $O(n^{\frac{K}{2}}\text{polylog}(n))$ & Kruskal rank, Exact tensor nuclear norm minimization, computationally intractable, completion only. \\ &&&\\ {Our Method} & $O(nr)$ (random projection) & $O(Knr)$ (random projection) & Kruskal rank, separable \\ & $O(nr\log^2 n)$ (completion) & $O(Knr\log^2 n)$ (completion) & measurements, Leurgans' algorithm\\ &&& \\ \hline \label{table:comparisons} \end{tabular} \caption{Table comparing sample complexities of various approaches.} \end{center} \end{table} \normalsize The rest of the paper is organized as follows: in Section \ref{sec:prelim}, we introduce the problem setup and describe the approach and result in the most general setting. We also describe Leurgans' algorithm, an efficient linear algebraic algorithm for tensor decomposition, which our results build upon. In Section \ref{sec:mainresults} we specialize our results for both the random projections and the tensor completion cases. We extend these results and our algorithm to higher order tensors in Section \ref{sec:higher_order}. We perform experiments that validate our theoretical results in Section \ref{sec:exp}. In Section \ref{sec:conclusion}, we conclude the paper and outline future directions. \section{Approach and Basic Results} \label{sec:prelim} \vspace{-2mm} In this paper, vectors are denoted using lower case characters (e.g. $x, y, a, b,$ etc.), matrices by upper-case characters (e.g. $X, Y,$ etc,) and tensors by upper-case bold characters (e.g. $\vct{X}, \vct{T}, \vct{A}$ etc.). Given two third order tensors $\vct{A}, \vct{B}$, their inner product is defined as: \[ \langle \vct{A}, \vct{B} \rangle = \sum_{i, j, k} \vct{A}_{ijk} \vct{B}_{ijk}. \] The Euclidean norm of a tensor $\vct{A}$ is generated by this inner product, and is a straightforward extension of the matrix Frobenius norm: \[ \| \vct{A} \|_{F}^2:= \langle \vct{A}, \vct{A} \rangle. \] We will work with tensors of third order (representationally to be thought of as three-way arrays), and the term mode refers to one of the axes of the tensor. A slice of a tensor refers to a two dimensional matrix generated from the tensor by varying indices along two modes while keeping the third mode fixed. For a tensor $\vct{X}$ we will refer to the indices of the $i^{th}$ mode-$1$ slice (i.e., the slice corresponding to the indices $\left\{i \right\} \times [n_2] \times [n_3]$) by $S_i^{(1)}$, where $[n_2] = \{1, 2, \ldots, n_2\}$ and $[n_3]$ is defined similarly. We denote the matrix corresponding to $S_i^{(1)}$ by $X^{1}_i$. Similarly the indices of the $k^{th}$ mode-$3$ slice will be denoted by $S_k^{(3)}$ and the matrix by $X^{3}_k$. Given a tensor of interest $\vct{X}$, consider its decomposition into rank one tensors \begin{equation} \label{eq:decomp0} \vct{X}=\sum_{i=1}^r u_i \otimes v_i \otimes w_i, \end{equation} where $\left\{ u_i\right\}_{i=1, \ldots, r} \subseteq \mathbb{R}^{n_1}$, $\left\{ v_i\right\}_{i=1, \ldots, r} \subseteq \mathbb{R}^{n_2}$, and $\left\{ w_i\right\}_{i=1, \ldots, r} \subseteq \mathbb{R}^{n_3}$. Here $\otimes$ denotes the tensor product, so that $\vct{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is a tensor of order $3$ and dimension $n_1\times n_2 \times n_3$. Without loss of generality, throughout this paper we assume that $n_1 \leq n_2 \leq n_3$. We will first present our results for third order tensors, and analogous results for higher orders follow in a transparent manner. We will be dealing with \emph{low-rank} tensors, i.e. those tensors with $r \leq n_1$. Tensors can have rank larger than the dimension, indeed $r \geq n_3$ is an interesting regime, but far more challenging and will not be dealt with here. Kruskal's Theorem \cite{Kruskal1} guarantees that tensors satisfying Assumption \ref{assump:1} below have a unique minimal decomposition into rank one terms of the form \eqref{eq:decomp0}. The minimal number of terms is called the (Kruskal) rank\footnote{The Kruskal rank is also known as the CP rank in the literature.} of the tensor $\vct{X}$ \begin{assumption} \label{assump:1} The sets $\left\{ u_i\right\}_{i=1, \ldots, r} \subseteq \mathbb{R}^{n_1}$ and $\left\{ v_i\right\}_{i=1, \ldots, r} \subseteq \mathbb{R}^{n_2}$ are sets of linearly independent vectors and the set $\left\{ w_i\right\}_{i=1, \ldots, r} \subseteq \mathbb{R}^{n_3}$ is a set of \emph{pairwise} independent vectors \end{assumption} While rank decomposition of tensors in the worst case is known to be computationally intractable \cite{tensorhard}, it is known that the (mild) assumption stated in Assumption \ref{assump:1} above suffices for an algorithm known as Leurgans' algorithm \cite{unique1,Moitra_tensor} to correctly identify the factors in this unique decomposition. In this paper, we will work with the following, somewhat stronger assumption: \begin{assumption} \label{assump:2} The sets $\left\{ u_i\right\}_{i=1, \ldots, r} \subseteq \mathbb{R}^{n_1}$, $\left\{ v_i\right\}_{i=1, \ldots, r} \subseteq \mathbb{R}^{n_2}$, and $\left\{ w_i\right\}_{i=1, \ldots, r} \subseteq \mathbb{R}^{n_3}$ are sets of linearly independent vectors. \end{assumption} \subsection{Separable Measurement Mechanisms} \label{sec:separable} As indicated above in the preceding discussions, we are interested in tensor linear inverse problems where, given measurements of the form $y_i= \mathcal{L}_i\left( \vct{X} \right) ~\ i = 1, 2, \cdots, m$, we recover the unknown tensor $\vct{X}$. We focus on a class of measurement mechanisms $\mathcal{L}\left( \cdot \right)$ which have a special property which we call \emph{separability}. We define the notion of separable measurements formally: \begin{definition} \label{def:separable} Consider a linear operator $\mathcal{L}: \mathbb{R}^{n_1 \times n_2 \times n_3} \rightarrow \mathbb{R}^n$. We say that $\mathcal{L}$ is separable with respect to the third mode if there exist $w \in \mathbb{R}^{n_3}$ and a linear operator $\mathcal{T}: \mathbb{R}^{n_1 \times n_2 } \rightarrow \mathbb{R}^n$, such that for every $\vct{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$: $$ \mathcal{L}\left( \vct{X} \right) = \sum_{i=1}^{n_3} w_i \mathcal{T} \left( X^3_i \right). $$ \end{definition} This definition extends in a natural way for separability of operators with respect to the second and first modes. In words, separability means that the effect of the linear operator $\mathcal{L}\left( \cdot \right)$ on a tensor can be decomposed into the (weighted) sum of actions of a \emph{single} linear operator $\mathcal{T}\left( \cdot \right)$ acting on \emph{slices} of the tensor along a particular mode. In several applications involving inverse problems, the design of appropriate measurement mechanisms is itself of interest. Indeed sensing methods that lend themselves to recovery from a small number of samples via efficient computational techniques has been intensely studied in the signal processing, compressed sensing, and machine learning literature \cite{Candes:2006eq,CandesTao2, recht2010guaranteed, Ben, CovSketch, Tang:2013fo}. In the context of tensors, we argue, separability of the measurement operator is a desirable property for precisely these reasons; because it lends itself to recovery up to almost optimal sample complexity via scalable computational methods (See for example \cite{kueng2014low} for rank one measurement operators in the matrix case). We now describe a few interesting measurement mechanisms that are separable. \begin{enumerate} \item \textbf{Separable random projections:} Given a matrix $M \in \mathbb{R}^{n_1 \times n_2}$ and vector $v \in \mathbb{R}^{n_3}$, we define the following two notions of ``outer products'' of $M$ and $v$: \begin{align*} \left[ M\otimes v \right]_{ijk}:= M_{ij}v_k \qquad \left[ v \otimes M \right]_{ijk}:= v_i M_{jk}. \end{align*} Hence, the $k^{th}$ mode $3$ slice of the tensor $M\otimes v $ is the matrix $v_k M$. Similarly, the $i^{th}$ mode $1$ slice of the tensor $v\otimes M $ is the matrix $v_i M$. A typical random separable projection is of the form: \begin{align} \label{eq:rand_proj} \mathcal{L}\left( \vct{X} \right) = \left[ \begin{array}{c} \langle A_1 \otimes a, \vct{X} \rangle \\ \vdots \\ \langle A_m \otimes a, \vct{X} \rangle \end{array}\right] \end{align} where $A_i \in \mathbb{R}^{n_1 \times n_2}$ is a random matrix drawn from a suitable ensemble such as the Gaussian ensemble with each entry drawn independently and identically from $\mathcal{N}(0,1)$, and $a \in \mathbb{R}^{n_3}$ is also a random vector, for instance distributed uniformly on the unit sphere in $n_3$ dimensions (i.e. with each entry drawn independently and identically from $\mathcal{N}(0,1)$ and then suitably normalized). To see that such measurements are separable, note that: \begin{align*} \langle A_i \otimes a, \vct{X} \rangle = \sum_{k=1}^{n_3} a_k \langle A_i, X_k^3 \rangle, \end{align*} so that the operator $\mathcal{T}\left( \cdot \right)$ from Definition \ref{def:separable} in this case is simply given by: $$ \mathcal{T}\left( X \right) = \left[ \begin{array}{c} \langle A_1, X \rangle \\ \vdots \\ \langle A_m, X \rangle \end{array}\right]. $$ Random projections are of basic interest in signal processing, and have played a key role in the development of sparse recovery and low rank matrix recovery literature \cite{recht2010guaranteed,CandesTao2}. From an application perspective they are relevant because they provide a method of compressive and lossless coding of ``simple signals'' such as sparse vectors \cite{CandesTao2} and low rank matrices \cite{recht2010guaranteed}. In subsequent sections we will establish that separable random projections share this desirable feature for low-rank tensors. \item \textbf{Tensor completion:} In tensor completion, a subset of the entries of the tensor $\vct{X}$ are revealed. Specifically, given a tensor $\vct{X} $, a subset of the entries $\vct{X}_{ijk}$ for $i,j,k \in \Omega$ are revealed for some index set $\Omega \subseteq [n_1] \times [n_2] \times [n_3]$ (we denote this by $\left( \vct{X}\right) _{\Omega}$). Whether or not the measurements are separable depends upon the nature of the set $\Omega$. For the $i^{th}$ mode-$1$ slice let us define \begin{align*} \Omega^{(1)}_i:=\Omega \cap S^{(1)}_i \qquad m^{(1)}_i:=\left| \Omega \cap S_i^{(1)} \right|. \end{align*} Measurements derived from entries within a \emph{single slice} of the tensor are separable. This follows from the fact that for $\mathcal{L} \left( \vct{X} \right):=\left( \vct{X} \right)_{\Omega^{(1)}_{i}}$, we have: $$ \mathcal{L} \left( \vct{X} \right) = \sum_{j=1}^{n_1} \left( \delta_{i} \right)_j \mathcal{M}_{\Omega^{(1)}_{i}} \left( X^{(1)}_j\right) $$ where $ \delta_{i} \in \mathbb{R}^{n_1}$ is a vector with a one is the $i$ index and zero otherwise, and $ \mathcal{M}_{\Omega}$ is the operator that acts on a matrix $X$, extracts the indices corresponding to the index $\Omega$, and returns the resulting vector. Comparing to Definition \ref{def:separable}, we have $w=\delta_{i}$ and $\mathcal{T}=\mathcal{M}_{\Omega^{(1)}_{i}}$. As a trivial extension, measurements obtained from parallel slices where the index set restricted to these slices is identical are also separable. Analogous to matrix completion, tensor completion is an important problem due to its applications to machine learning; the problems of multi-task learning and contextual recommendation are both naturally modeled in this framework as described in Section \ref{sec:intro}. \item \textbf{Rank one projections} Another separable sensing mechanism of interest is via rank-one projections of the tensor of interest. Specifically, measurements of the form: $$ \mathcal{L}\left( \vct{X} \right) = \left[ \begin{array}{c} \langle a_1 \otimes b_1 \otimes c, \vct{X} \rangle \\ \vdots \\ \langle a_m \otimes b_m \otimes c, \vct{X} \rangle \end{array} \right] $$ are also separable. Mechanisms of this form have recently gained interest in the context of low rank (indeed rank-one) matrices due to their appearance in the context of phase retrieval problems \cite{PhaseLift} and statistical estimation \cite{kueng2014low,cai2015}. We anticipate that studying rank one projections in the context of tensors will give rise to interesting applications in a similar spirit. \item \textbf{Separable sketching} The notion of covariance sketching (and more generally, matrix sketching) \cite{CovSketch} allows for the possibility of compressively acquiring a matrix $X$ via measurements $Y=AXB^T$, where $A \in \mathbb{R}^{m_1 \times p}$ and $B \in \mathbb{R}^{m_2 \times p}$, $X \in \mathbb{R}^p$, and $m_1, m_2 <p$. The problem of recovering $X$ from such measurements is of interest in various settings such as when one is interested in recovering a covariance matrix from compressed sample paths, and graph compression \cite{CovSketch}. In a similar spirit, we introduce the notion of separable sketching of tensors defined via: $$ Y_{qs}=\mathcal{L}\left(\vct{X}\right)=\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} \sum_{k=1}^{n_3}A_{qi} B_{sj} c_{k}\vct{X}_{ijk}. $$ In the above $A \in \mathbb{R}^{m_1 \times n_1}$, $B \in \mathbb{R}^{m_2 \times n_2}$, $c \in \mathbb{R}^{n_3}$, and $Y \in \mathbb{R}^{m_1 \times m_2}$. Note that $\mathcal{T}\left( Z\right) = AZB^T$, i.e. precisely a matrix sketch of tensor slices. The problem of recovering $\vct{X}$ from $Y$ is thus a natural extension of matrix sketching to tensors. Finally, we note that while a variety of separable sensing mechanisms are proposed above, many sensing mechanisms of interest are not separable. For instance, a measurement of the form $\mathcal{L}(\vct{X})=\langle \vct{A}, \vct{X} \rangle$ where $\vct{A}$ is a full rank tensor is \emph{not} separable. Similarly, completion problems where entries of the tensor are revealed randomly and uniformly throughout the tensor (as apposed to from a single slice) are also not separable (although they may be thought of as a union of separable measurements). In Section \ref{sec:third_order}, we will provide sample complexity bounds for exact recovery for the first two aforementioned measurement mechanisms (i.e. random projections and tensor completion); the arguments extend in a natural manner to other separable sensing mechanisms. \end{enumerate} \subsubsection{Diversity in the Measurement Set} \label{sec:diversity} In order to recover the low rank tensor from a few measurements using our algorithm, we need the set of measurements to be a \emph{union} of separable measurements which satisfy the following: \begin{enumerate} \item \textbf{Diversity across modes:} Measurements of the form \eqref{eq:rand_proj} are separable with respect to the third mode. For the third order case, we also need an additional set of measurements separable with respect to the first mode \footnote{Any two modes suffice. In this paper we will focus on separability w.r.t the first and third modes.}. This extends naturally also to the higher order case. \item \textbf{Diversity across separable weights:} Recalling the notion of weight vectors, $w$, from Definition: \ref{def:separable}, we require that for both modes $1$ and $3$, each mode has two distinct sets of separable measurements with distinct weight vectors. \end{enumerate} To make the second point more precise later, we introduce the formal notation we will use in the rest of the paper for the measurement operators: $$ y_k^{(i)}=\mathcal{L}^{(i)}_k\left( \vct{X} \right) = \sum_{j=1}^{n_3} \left(w_k^{(i)}\right)_j \mathcal{T}^{(i)}_k \left( X^3_j \right) $$ In the above, the index $i \in \left\{1, 3\right\}$ refers to the mode with respect to which that measurement is separable. For each mode, we have two distinct sets of measurements corresponding to two different weight vectors $w_k^{(i)}$, with $k \in \left\{1,2 \right\}$. For each $k$ and $i$, we may have potentially different operators $\mathcal{T}_k^{(i)}$ (though they need not be different). To simplify notation, we will subsequently assume that $\mathcal{T}_1^{(i)} = \mathcal{T}_2^{(i)} = \mathcal{T}^{(i)}$. Collectively, all these measurements will be denoted by: $$ y = \mathcal{L} \left( \vct{X} \right), $$ where it is understood that $y$ is a concatenation of the vectors $y^{(i)}_k$ and similarly $\mathcal{L}\left( \cdot \right)$ is a concatenation of $\mathcal{L}^{(i)}_k\left( \cdot \right)$. We will see in the subsequent sections that when we have diverse measurements across different modes and different weight vectors, and when the $\mathcal{T}^{(i)}$ are chosen suitably, one can efficiently recover an unknown tensor from an (almost) optimal number of measurements of the form $y = \mathcal{L}\left( \vct{X} \right)$. \subsection{Tensor Contractions} A basic ingredient in our approach is the notion of a tensor contraction. This notion will allow us to form a bridge between inverse problems involving tensors and inverse problems involving matrices, thereby allowing us to use matrix-based techniques to solve tensor inverse problems. For a tensor $\vct{X}$, we define its mode-$3$ \emph{contraction} with respect to a contraction vector $a \in \mathbb{R}^{n_3}$, denoted by $X^{3}_a\in \mathbb{R}^{n_1 \times n_2}$, as the following matrix: \begin{equation} \label{eq:contr_def} \left[ X^{3}_a \right]_{ij} = \sum_{k=1}^{n_3} \vct{X}_{ijk} a_k, \end{equation} so that the resulting matrix is a weighted sum of the mode-$3$ slices of the tensor $\vct{X}$. We similarly define the mode-$1$ contraction with respect to a vector $c \in \mathbb{R}^{n_{1}}$ as \begin{equation} \left[ X^{1}_c \right]_{jk} = \sum_{k=1}^{n_1} \vct{X}_{ijk} c_i, \end{equation} Note that when $a=e_k$, a standard unit vector, $X_a^3=X_k^{3}$, i.e. a tensor slice. We will primarily be interested in two notions of contraction in this paper: \begin{itemize} \item \emph{Random Contractions}, where $a$ is a random vector distributed uniformly on the unit sphere. These will play a role in our approach for recovery from random projections. \item \emph{Coordinate Contractions}, where $a$ is a canonical basis vector, so that the resulting contractions is a tensor slice. These will play a role in our tensor completion approach. \end{itemize} We now state a basic result concerning tensor contractions. \begin{lemma} \label{lemma:contr_rank} Let $\vct{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, with $n_1 \leq n_2 \leq n_3$ be a tensor of rank $r \leq n_1$. Then the rank of $X^{3}_a$ is at most $r$. Similarly, if $r \leq \min \left\{ n_2, n_3 \right\}$ then the rank of $X^1_c$ is at most $r$. \end{lemma} \begin{proof} Consider a tensor $\vct{X} = \sum_{i=1}^r u_i \otimes v_i \otimes w_i$. The reader may verify in a straightforward manner that $X^{3}_a$ enjoys the decomposition: \begin{equation} \label{eq:decomp} X^{3}_a=\sum_{i=1}^r \langle w_i, a \rangle u_i v_i^{T}. \end{equation} The proof for the rank of $X^1_c$ is analogous. \end{proof} Note that while \eqref{eq:decomp} is a matrix decomposition of the contraction, it is not a singular value decomposition (the components need not be orthogonal, for instance). Indeed it does not seem ``canonical'' in any sense. Hence, given contractions, resolving the components is a non-trivial task. A particular form of degeneracy we will need to avoid is situations where $\langle w_i, a \rangle =0$ for \eqref{eq:decomp}. It is interesting to examine this in the context of coordinate contractions, i.e. when $a=e_k$, we have $X^3_{e_k}=X^k_3$ (i.e. the $k^{th}$ mode $3$ slice), by Lemma \ref{lemma:contr_rank}, we see that the tensor slices are also of rank at most $r$. Applying $a=e_k$ in the decomposition \eqref{eq:decomp}, we see that if for some vector $w_i \in \mathbb{R}^{n_3}$ in the above decomposition we have that the $k^{th}$ component of $w_i$ (i.e. $(w_i)_{k}$) is zero then $\langle w_i, e_k \rangle =0$, and hence this component is missing in the decomposition of $X^{3}_k$. As a consequence the rank of $X^3_k$ drops, and in a sense information about the factors $u_k, v_k$ is ``lost'' from the contraction. We will want to avoid such situations and thus introduce the following definition: \begin{definition} \label{def:degen} Let $\vct{X}= \sum_{i=1}^r u_i \otimes v_i \otimes w_i$. We say that the contraction $X_a^3$ is non-degenerate if $\langle w_i, a \rangle \neq 0$, for all $i=1, \ldots, r$. \end{definition} We will extend the terminology and say that the tensor $\vct{X}$ is \emph{non-degenerate} at mode $3$ and component $k$ if the $k^{th}$ tensor slice is non-degenerate, i.e. component $k$ of the vectors $w_i$, $i=1, \ldots, r$ are all non-zero. The above definition extends in a natural way to other modes and components. The non-degeneracy condition is trivially satisfied (almost surely) when: \begin{enumerate} \item The vector $a$ with respect to which the contraction is computed is suitably random, for instance random normal. In such situations, non-degeneracy holds almost surely. \item When $a=e_k$ (i.e. the contraction is a slice), and the tensor factors are chosen from suitable random ensembles, e.g. when the low rank tensors are picked such that the rank one components $u_i, v_i, w_i$ are Gaussian random vectors, or random orthogonal vectors \footnote{the latter is known as random orthogonal model in the matrix completion literature \cite{Ben}}. \end{enumerate} We will also need the following definition concerning the genericity of a pair of contractions: \begin{definition} Given a tensor $\vct{X} = \sum_{i=1}^r u_i \otimes v_i \otimes w_i$, a pair of contractions $X^3_a, X^3_b$ are \emph{pairwise generic} if the diagonal entries of the (diagonal) $D_a D_b^{-1}$ are all distinct, where $D_a = \mathrm{diag} \left( \langle w_1, a \rangle, \ldots, \langle w_r, a \rangle \right)$, $D_b = \mathrm{diag} \left( \langle w_1, a \rangle, \ldots, \langle w_r, b\rangle \right)$. \end{definition} \begin{remark} We list two cases where pairwise genericity conditions hold in this paper. \begin{enumerate} \item In the context of random contractions, for instance when the contraction vectors $a, b$ are sampled uniformly and independently on the unit sphere. In this case pairwise genericity holds almost surely. \item In the context of tensor completion where $a=e_{k_{1}}, b=e_{k_{2}}$, the two diagonal matrices $D_a = \text{diag}\left( \left(w_1\right)_{k_1}, \ldots, \left(w_r\right)_{k_1} \right)$, and $D_b = \text{diag}\left( \left(w_1\right)_{k_2}, \ldots, \left(w_r\right)_{k_2} \right)$. Thus the pairwise genericity condition is a genericity requirement of the tensor factors themselves, namely that the ratios $ \frac{\left(w_i\right)_{k_1}}{\left(w_i\right)_{k_2}}$ all be distinct for $i=1, \ldots, r$. We will abuse terminology, and call such a tensor pairwise generic with respect to mode $3$ slices $k_1, k_2$. This form of genericity is easily seen to hold, for instance when the tensor factors are drawn from suitable random ensembles such as random normal and random uniformly distributed on the unit sphere. \end{enumerate} \end{remark} The next lemma, a variation of which appears in \cite{Moitra_tensor,unique1} shows that when the underlying tensor is non-degenerate, it is possible to decompose a tensor from pairwise generic contractions. \begin{lemma}\cite{Moitra_tensor,unique1} \label{lemma:fund_lemma} Suppose we are given an order 3 tensor $\vct{X} = \sum_{i = 1}^r u_i \otimes v_i \otimes w_i$ of size $n_1 \times n_2 \times n_3$ satisfying the conditions of Assumption \ref{assump:1}. Suppose the contractions $X_a^{3}$ and $X_b^3$ are non-degenerate, and consider the matrices $M_1$ and $M_2$ formed as: \begin{align*} M_1= X_a^3 (X_b^3)^\dagger \qquad M_2= (X_b^3)^\dagger X_a^3. \end{align*} Then the eigenvectors of $M_1$ (corresponding to the non-zero eigenvalues) are $\left\{ u_i \right\}_{i=1, \ldots, r}$, and the eigenvectors of $M_2^T$ are $\left\{ v_i \right\}_{i=1, \ldots, r}$. \end{lemma} \begin{proof} Suppose we are given an order 3 tensor $\vct{X} = \sum_{i = 1}^r u_i \otimes v_i \otimes w_i \in \mathbb{R}^{n_1 \times n_2 \times n_3}$. From the definition of contraction \eqref{eq:contr_def}, it is straightforward to see that \[ X_a^3 = UD_aV^T ~\ D_a = \mbox{diag}(a^Tw_1, \ldots, a^Tw_r) \] \[ X_b^3 = UD_bV^T ~\ D_b = \mbox{diag}(b^Tw_1, \ldots, b^Tw_r). \] In the above decompositions, $U \in \mathbb{R}^{n_1 \times r}$, $V \in \mathbb{R}^{n_2 \times r}$, and the matrices $D_a, D_b\in \mathbb{R}^{r \times r}$ are diagonal and non-singular (since the contractions are non-degenerate). Now, \begin{align} \notag M_1 &:= X_a^3 (X_b^3)^\dagger \\ \notag &= UD_aV^T (V^{\dagger})^{T}D_b^{-1}U^{\dagger} \\ \label{udiag} &= UD_aD_b^{-1}U^\dagger \end{align} and similarly we obtain \begin{equation} \label{vdiag} M_2^T = VD_b^{-1}D_aV^\dagger. \end{equation} Since we have $M_1U=U D_aD_b^{-1} $ and $M_2^TV=VD_b^{-1} D_a$, it follows that the columns of $U$ and $V$ are eigenvectors of $M_1$ and $M_2^T$ respectively (with corresponding eigenvalues given by the diagonal matrices $D_aD_b^{-1}$ and $D_b^{-1}D_a$). \end{proof} \begin{remark} Note that while the eigenvectors $\left\{u_i\right\}, \left\{v_j\right\}$ are thus determined, a source of ambiguity remains. For a fixed ordering of the $u_i$ one needs to determine the order in which the $v_j$ are to be arranged. This can be (generically) achieved by using the (common) eigenvalues of $M_1$ and $M_2$ for pairing. If the contractions $X_a^3, X_b^3$ satisfy pairwise genericity, we see that the diagonal entries of the matrix $D_a D_b^{-1}$ are distinct. It then follows that the eigenvalues of $M_1, ~ M_2$ are distinct, and can be used to pair the columns of $U$ and $V$. \end{remark} \subsection{Leurgans' algorithm} We now describe Leurgans' algorithm for tensor decomposition in Algorithm \ref{alg: Leurgans}. In the next section, we build on this algorithm to solve tensor inverse problems to obtain optimal sample complexity bounds. In words, Algorithm \ref{alg: Leurgans} essentially turns a problem involving decomposition of tensors into that of decomposition of matrices. This is achieved by first computing mode $3$ contractions of the given tensor $\vct{X}$ with respect to two non-degenerate and pairwise generic vectors $a, b$ (e.g. randomly uniformly distributed on the unit sphere). Given these contractions, one can compute matrices $M_1$ and $M_2$ as described in Lemma \ref{lemma:fund_lemma} whose eigenvectors turn out to be \emph{precisely} (up to scaling) the vectors $u_i$ and $v_i$ of the required decomposition. Finally the $w_i$ can be obtained by inverting an (overdetermined) system of linear equations, giving a unique and exact solution. The correctness of the algorithm follows directly from Lemma \ref{lemma:fund_lemma}. In this paper, we extend this idea to solving ill-posed linear inverse problems of tensors. The key idea is that since the contractions preserve information about the tensor factors, we focus on recovering the contractions first. Once those are recovered, we simply need to compute eigendecompositions to recover the factors themselves. \begin{algorithm}[!ht] \caption{Leurgans' algorithm for tensor decomposition} \label{alg: Leurgans} \begin{algorithmic}[1] \STATE {\bfseries Input:} Tensor $\vct{X}$ \STATE Generate contraction vectors $a, b \in \mathbb{R}^{n_{3}}$ (such that non-degeneracy and pairwise genericity holds). \STATE Compute mode $3$ contractions $X_a^3$ and $X_b^3$ respectively. \STATE Compute eigen-decomposition of $M_1 := X_a^{3}(X_b^3)^{\dagger}$ and $M_2 :=(X_b^3)^{\dagger}X_a$. Let $U$ and $V$ denote the matrices whose columns are the eigenvectors of $M_1$ and $M_2^T$ respectively corresponding to the non-zero eigenvalues, in sorted order. (Let $r$ be the (common) rank of $M_1$ and $M_2$.) The eigenvectors, thus arranged are denoted as $\left\{ u_i \right\}_{i=1, \ldots, r}$ and $\left\{ v_i \right\}_{i=1, \ldots, r}$. \label{step:order} \STATE Solve for $w_i$ in the (over-determined) linear system $\vct{X}=\sum_{i=1}^{r} u_i \otimes v_i \otimes w_i , i=1, \ldots, m$. \STATE {\bfseries Output:} Decomposition $\vct{X} = \sum_{i=1}^r u_i \otimes v_i \otimes w_i$. \end{algorithmic} \end{algorithm} \begin{remark} Note that in the last step, instead of solving a linear system of equations to obtain the $w_i$, there is an alternative approach whereby one may compute mode $1$ contractions and then obtain the factors $v_i$ and $w_i$. However, there is one minor caveat. Suppose we denote the factors obtained from the modal contractions $X_a^3$ and $X_b^3$ by $U$ and $V_1$ (we assume that these factors are normalized, i.e. the columns have unit Euclidean norm). Now, we can repeat the procedure with two more random vectors $c, d$ to compute the contractions $X_c^{1}$ and $X_d^{1}$. We can perform similar manipulations to construct matrices whose eigenvectors are the tensor factors of interest, and thence obtain (normalized) factors $V_2$ and $W$. While $V_1$ and $V_2$ essentially correspond to the same factors, the matrices themselves may (i) have their columns in different order, and (ii) have signs reversed relative to each other. Hence, while the modal contractions preserve information about the tensor factors, they may need to be properly aligned by rearranging the columns and performing sign reversals, if necessary. \end{remark} \subsection{High Level Approach} The key observation driving the methodology concerns the separability of the measurements. Given a set of separable measurements $y=\mathcal{L} \left( \vct{X} \right)$, from the definition of separability we have: \begin{align*} y &=\mathcal{L}\left( \vct{X} \right) = \sum_{i=1}^{n_3} w_i \mathcal{T} \left( X^3_i \right) = \mathcal{T} \left( \sum_{i=1}^{n_3} w_iX^3_i \right) = \mathcal{T} \left( X^3_w \right). \end{align*} In words, each separable measurement $\mathcal{L}$ acting on the tensor can also be interpreted as a measurement $\mathcal{T}$ acting on a \emph{contraction} of the tensor. Since these contractions are low rank (Lemma \ref{lemma:contr_rank}), when the underlying tensor is low-rank, the following nuclear norm minimization problem represents a principled, tractable heuristic for recovering the contraction: \begin{align*} {\operatorname*{minimize}}_Z\ \|Z\|_* \qquad \text{subject to} \qquad y= \mathcal{T} \left( Z\right). \end{align*} Let us informally define $\mathcal{T}$ to be ``faithful'' if nuclear norm minimization succeeds in exactly recovering the tensor contractions. Provided we correctly recover two contractions each along modes $1$ and $3$, and furthermore these contractions are non-degenerate and pairwise generic, we can apply Leurgans' algorithm to the recovered contractions to exactly recover the unknown tensor. This yields the following meta-theorem: \begin{metatheorem} Given a low rank tensor $\vct{X}$ and separable measurements $$ y_k^{(i)}=\mathcal{L}^{(i)}_k\left( \vct{X} \right) = \sum_{j=1}^{n_3} \left(w_k^{(i)}\right)_j \mathcal{T}^{(i)} \left( X^3_j \right), \qquad i\in \left\{1,3 \right\}, \;k \in \left\{1,2 \right\}. $$ Suppose the $\mathcal{T}^{(i)}$ are faithful and for the vectors $w_k^{(i)}$, the contractions $X^{i}_{w_k^{(i)}}$ are non-degenerate and pairwise generic. Then the proposed approach succeeds in exactly recovering the unknown tensor. \end{metatheorem} In the next section, we will make the above meta-theorem more precise, and detail the precise sample complexities for the separable random projections and tensor completion settings. We will see that faithfulness, non-degeneracy and pairwise genericity hold naturally in these settings. \section{Sample Complexity Results: Third Order Case} \label{sec:third_order} \label{sec:mainresults} \subsection{Tensor Recovery via Contractions} \label{sec:trecs} We start by describing the main algorithm of this paper more precisely: Tensor Recovery via Contractions (T-ReCs). We assume that we are given separable measurements $y^{(3)}_1=\mathcal{L}^{(3)}_1\left( \vct{X} \right)$, $y^{(3)}_2=\mathcal{L}^{(3)}_2 \left( \vct{X} \right)$, $y^{(1)}_1=\mathcal{L}^{(1)}_1\left( \vct{X} \right)$, $y^{(1)}_2=\mathcal{L}^{(1)}_2 \left( \vct{X} \right)$. We further assume that the measurements are separable as: \begin{equation} \label{eq:rand_proj} \begin{split} \mathcal{L}^{(3)}_1\left( \vct{X} \right) = \sum_{i=1}^{n_3} a_i \mathcal{T}^{(3)} \left( X^3_i \right) \qquad \mathcal{L}^{(3)}_2\left( \vct{X} \right) = \sum_{i=1}^{n_3} b_i \mathcal{T}^{(3)} \left( X^3_i \right) \\ \mathcal{L}^{(1)}_1\left( \vct{X} \right) = \sum_{i=1}^{n_1} c_i \mathcal{T}^{(1)} \left( X^1_i \right) \qquad \mathcal{L}^{(1)}_2\left( \vct{X} \right) = \sum_{i=1}^{n_1} d_i \mathcal{T}^{(1)} \left( X^1_i \right). \end{split} \end{equation} where $a,b,c,d$ and $\mathcal{T}^{(3)}$ and $\mathcal{T}^{(1)}$ are known in advance. Given these measurements our algorithm will involve the solution of the following convex optimization problems. \begin{equation} \label{eq:opt1} \underset{Z_1}{\text{minimize}} \qquad \|Z_1\|_* \qquad \text{s.t.} \qquad y^{(3)}_1=\mathcal{T}^{(3)} \left( Z_1 \right) \end{equation} \begin{equation} \label{eq:opt2} \underset{Z_2}{\text{minimize}} \qquad \|Z_2\|_* \qquad \text{s.t.} \qquad y^{(3)}_2=\mathcal{T}^{(3)} \left( Z_2 \right) \end{equation} \begin{equation} \label{eq:opt3} \underset{Z_3}{\text{minimize}} \qquad \|Z_3\|_* \qquad \text{s.t.} \qquad y^{(1)}_1=\mathcal{T}^{(1)} \left( Z_3 \right) \end{equation} \begin{equation} \label{eq:opt4} \underset{Z_4}{\text{minimize}} \qquad \|Z_4\|_* \qquad \text{s.t.} \qquad y^{(1)}_2=\mathcal{T}^{(1)} \left( Z_4 \right) \end{equation} Efficient computational methods have been extensively studied in recent years for solving problems of this type \cite{nuc_norm}. These matrices form the ``input matrices" in the next step which is an adaptation of Leurgans' method. In this step we form eigendecompositions to reconstruct first the pair of factors $u_i, v_i$, and then the pairs $v_i, w_i$ (the factors are normalized). Once these are recovered the last step involves solving a linear system of equations for the weights $\lambda_i$ in \[ \mathcal{L}^{(3)}_1(\vct{X}) = \sum_{i=1}^r \lambda_i \mathcal{L}^{(3)}_1(u_i \otimes v_i \otimes w_i) = y_1^{(3)} \] The pseudocode for T-ReCs is detailed in Algorithm \ref{alg:trecs}. \begin{algorithm}[!ht] \caption{Tensor-Recovery via Contractions \hspace{5mm}(T-ReCs)} \label{alg:trecs} \begin{algorithmic}[1] \STATE {\bfseries Input:} Separable measurements $y^{(3)}_1=\mathcal{L}^{(3)}_1\left( \vct{X} \right)$, $y^{(3)}_2=\mathcal{L}^{(3)}_2 \left( \vct{X} \right)$, $y^{(1)}_1=\mathcal{L}^{(1)}_1\left( \vct{X} \right)$, $y^{(1)}_2=\mathcal{L}^{(1)}_2 \left( \vct{X} \right)$. \STATE Solve convex optimization problems \eqref{eq:opt1} and \eqref{eq:opt2} to obtain optimal solutions $Z_1^*$ and $Z_2^*$ respectively. \STATE Compute eigen-decomposition of $M_1 := Z_1^{*}(Z_2^*)^{\dagger}$ and $M_2 := (Z_2^*)^{\dagger}Z_1$. Let $U$ and $V$ denote the matrices whose columns are the eigenvectors of $M_1$ and $M_2^T$ respectively corresponding to the non-zero eigenvalues, in sorted order. (Let $r$ be the (common) rank of $M_1$ and $M_2$.) The eigenvectors, thus arranged are denoted as $\left\{ u_i \right\}_{i=1, \ldots, r}$ and $\left\{ v_i \right\}_{i=1, \ldots, r}$. \STATE Solve convex optimization problems \eqref{eq:opt3} and \eqref{eq:opt4} to obtain optimal solutions $Z_3^*$ and $Z_4^*$ respectively. \STATE Compute eigen-decomposition of $M_3 := Z_3^{*}(Z_4^*)^{\dagger}$ and $M_4 := (Z_4^*)^{\dagger}Z_3$. Let $\tilde{V}$ and $\tilde{W}$ denote the matrices whose columns are the eigenvectors of $M_3$ and $M_4^T$ respectively corresponding to the non-zero eigenvalues, in sorted order. (Let $r$ be the (common) rank of $M_3$ and $M_4$.) The eigenvectors, thus arranged are denoted as $\left\{ \tilde{v}_k \right\}_{k=1, \ldots, r}$ and $\left\{ \tilde{w}_k \right\}_{k=1, \ldots, r}$. \STATE Simultaneously reorder the columns of $\tilde{V}, \tilde{W}$, also performing simultaneous sign reversals as necessary so that the columns of $V$ and $\tilde{V}$ are equal, call the resulting matrix $W$ with columns $\left\{ w_{i} \right\}_{i=1, \ldots, r}$. \STATE Solve for $\lambda_i$ in the (over-determined) linear system $$y_i= \sum_{i=1}^{r} \lambda_i \mathcal{L} \left( u_i \otimes v_i \otimes w_i \right).$$ \STATE {\bfseries Output:} Recovered tensor $\vct{X}=\sum_{i=1}^r \lambda_i \, u_i \otimes v_i \otimes w_i$. \end{algorithmic} \end{algorithm} We now focus on the case of recovery from random Gaussian measurements, and then move on to the case of recovery from partially observed samples - in these situations not only are the measurements separable but one can also obtain provable sample complexity bounds which are almost optimal. \subsection{Separable Random Projections}\label{sec:random_sensing} Recall that from the discussion in Section \ref{sec:prelim} and the notation introduced in Section \ref{sec:diversity}, we have the following set of measurements: \begin{align*} \mathcal{L}_{1}^{(3)}\left( \vct{X} \right) = \left[ \begin{array}{c} \langle A_1 \otimes a, \vct{X} \rangle \\ \vdots \\ \langle A_{m_1} \otimes a, \vct{X} \rangle \end{array} \right], \qquad \mathcal{L}_{2}^{(3)}\left( \vct{X} \right) = \left[ \begin{array}{c} \langle A_{1} \otimes b, \vct{X} \rangle \\ \vdots \\ \langle A_{m_1} \otimes b, \vct{X} \rangle \end{array} \right], \end{align*} \begin{align*} \mathcal{L}_{1}^{(1)}\left( \vct{X} \right) = \left[ \begin{array}{c} \langle c \otimes B_{1}, \vct{X} \rangle \\ \vdots \\ \langle c \otimes B_{m_2}, \vct{X} \rangle \end{array} \right], \qquad \mathcal{L}_{2}^{(1)}\left( \vct{X} \right) = \left[ \begin{array}{c} \langle d \otimes B_{1}, \vct{X} \rangle \\ \vdots \\ \langle d \otimes B_{m_2}, \vct{X} \rangle \end{array} \right]. \end{align*} In the above, each $A_{i}, B_i \in \mathbb{R}^{n_2 \times n_3}$ is a random Gaussian matrix with i.i.d $\mathcal{N}(0,1)$ entries, and $a,b \in \mathbb{R}^{n_3}, ~\ c,d \in \mathbb{R}^{n_1}$ are random vectors distributed uniformly on the unit sphere. Finally, collecting all of the above measurements into a single operator, we have $y=\mathcal{L} \left( \vct{X} \right)$, and the total number of samples is thus $m = 2m_1+2m_2$. In the context of random tensor sensing, \eqref{eq:opt1}, \eqref{eq:opt2}, \eqref{eq:opt3} and \eqref{eq:opt4} reduce to solving low rank matrix recovery problems from random Gaussian measurements, where the measurements are as detailed in Section \ref{sec:separable}. The following lemma shows that the observations $\mathcal{L}\left( \vct{X} \right)$ can essentially be thought of as linear Gaussian measurements of the contractions $X_a^3, X_b^3, X_c^1, X_d^1$. This is crucial in reducing the tensor recovery problem to the problem of recovering the tensor contractions, instead. \begin{lemma} \label{lemma:main_obs} For tensor $\vct{X}$, matrix $A$ and vector $a$ of commensurate dimensions, \begin{equation*} \langle A \otimes a, \vct{X} \rangle = \langle A, X^{3}_a\rangle. \end{equation*} Similarly, for a vector $c$ and matrix $B$ of commensurate dimensions \begin{equation*} \langle c \otimes B, \vct{X} \rangle = \langle B, X^{1}_c\rangle. \end{equation*} \end{lemma} \begin{proof} We only verify the first equality, the second equality is proved in an identical manner. Let us denote by $\vct{X}_k$ the $k^{th}$ mode $3$ slice of $\vct{X}$ where $k=1, \ldots, n_3$. Then we have, \begin{align*} \langle A \otimes a, \vct{X} \rangle = & \sum_{k=1}^{n_3} a_k \langle A, \vct{X}_k \rangle = \langle A, \sum_{k=1}^{n_3} a_k \vct{X}_k \rangle = \langle A, X^{3}_a\rangle. \end{align*} \end{proof} As a consequence of the above lemma, it is easy to see that $$ \langle A\otimes a, \vct{X} \rangle= \langle A, X^{3}_a\rangle = \langle A, \sum_{i=1}^{n_3} a_iX^{3}_i\rangle = \sum_{i=1}^{n_3} a_i \langle A, X^{3}_i\rangle, $$ thus establishing separability. Since $X^{3}_a$ and $X^{3}_b$ are low-rank matrices, the observation operators $\mathcal{L}_k^{(3)}\left( \vct{X} \right)$ essentially provide Gaussian random projections of $X^{3}_a$ and $X^{3}_b$, which in turn can be recovered using matrix-based techniques. The following lemma establishes ``faithfulness'' in the context of separable random projections. \begin{lemma} \label{lemma:contr_rec} Suppose $m_1 > 3r(n_1+n_2-r)$. Then the unique solutions to problems \eqref{eq:opt1} and \eqref{eq:opt2} are $X^{3}_a$ and $X^{3}_b$ respectively with high probability. Similarly, if $m_2 > 3r(n_2+n_3-r)$ then the unique solutions to problems \eqref{eq:opt3} and \eqref{eq:opt4} are $X^{1}_c$ and $X^{1}_d$ respectively with high probability. \end{lemma} \begin{proof} Again, we only prove the first part of the claim, the second follows in an identical manner. Note that by Lemma \ref{lemma:main_obs} and Lemma \ref{lemma:contr_rank}, $X^{3}_a$ and $X^{3}_b$ are feasible rank $r$ solutions to \eqref{eq:opt1} and \eqref{eq:opt2} respectively. By Proposition 3.11 of \cite{venkat}, we have that the nuclear norm heuristic succeeds in recovering rank $r$ matrices from $m_1 >3r(n_1+n_2-r)$ with high probability. \end{proof} \begin{remark} In this sub-section, we will refer to events which occur with probability exceeding $1-\exp(-C_0 n_1)$ as events that occur ``with high probability" (w.h.p.). We will transparently be able to take appropriate union bounds of high probability events since the number of events being considered is small enough that the union event also holds w.h.p. (thus affecting only the constants involved). Hence, in the subsequent results, we will not need to refer to the precise probabilities. \end{remark} Since the contractions $X^{3}_a$ and $X^{3}_b$ of the tensor $\vct{X}$ are successfully recovered and the tensor satisfies Assumption \ref{assump:1}, the second stage of Leurgans' algorithm can be used to recover the factors $u_i$ and $v_i$. Similarly, from $X^{1}_c$ and $X^{1}_d$, the factors $v_i$ and $w_i$ can be recovered. The above sequence of observations leads to the following sample complexity bound for low rank tensor recovery from random measurements: \begin{theorem} \label{thm:exact_rec_sense} Let $\vct{X} \in \mathbb{R}^{n_1\times n_2 \times n_3}$ be an unknown tensor of interest with rank $r \leq \min\left\{n_1,n_2, n_3\right\}$. Suppose we obtain samples as described by \eqref{eq:rand_proj}. Suppose $m_1 > 3r(n_1+n_2-r)$ and $m_2 > 3r(n_2+n_3-r)$. Then T-ReCs (Algorithm \ref{alg:trecs}) succeeds in exactly recovering $\vct{X}$ and its low rank decomposition \eqref{eq:decomp} with high probability. \end{theorem} \begin{proof} By Lemma \ref{lemma:contr_rank} $X^{3}_a$, $X^{3}_b$, $X^{1}_c$, $X^1_d$ are all rank at most $r$. By Lemma \ref{lemma:main_obs}, the tensor observations $y^{(3)}_1, y^{(3)}_2, y^{(1)}_1, y^{(1)}_2$ provide linear Gaussian measurements of $X^{3}_a$, $X^{3}_b$, $X^{1}_c$, $X^1_d$. By Lemma \ref{lemma:contr_rec}, the convex problems \eqref{eq:opt1}, \eqref{eq:opt2}, \eqref{eq:opt3}, \eqref{eq:opt4} correctly recover the modal contractions $X^{3}_a$, $X^{3}_b$, $X^{1}_c$, $X^1_d$. Since the vectors $a, b, c, d$ are chosen to be randomly uniformly distributed on the unit sphere, the contractions $X^3_a, X^3_b$ are non-degenerate and pairwise generic almost surely (and similarly $X^1_c, X^1_d$). Thus, Lemma \ref{lemma:fund_lemma} applies and $X^{3}_a$, $X^{3}_b$ can be used to correctly recover the factors $u_i, v_i, i=1, \ldots, r$. Again by Lemma \ref{lemma:fund_lemma} $X^{1}_c$, $X^{1}_d$ can be used to correctly recover the factors $v_i, w_i, i=1, \ldots, r$. Note that due to the linear independence of the factors, the linear system of equations involving $\lambda_i$ is full column rank, over-determined, and has an exact solution. The fact that the result holds with high probability follows because one simply needs to take the union bounds of the probabilities of failure exact recovery of the contractions via the solution of \eqref{eq:opt1}, \eqref{eq:opt2}, \eqref{eq:opt3}, \eqref{eq:opt4}. \end{proof} \begin{remarks} \hspace{10mm} \begin{enumerate} \item Theorem \ref{thm:exact_rec_sense} yields bounds that are order optimal. Indeed, consider the number of samples $m = 2m_1 + 2m_2 \sim O(r(n_1 + n_2 + n_3))$, which by a counting argument is the same as the number of parameters in an order 3 tensor of rank $r$. \item For symmetric tensors with symmetric factorizations of the form $\vct{X}=\sum_{l=1}^{3} \lambda_i v_i \otimes v_i \otimes v_i$, this method becomes particularly simple. Steps $4, 5, 6$ in Algorithm \ref{alg:trecs} become unnecessary, and the factors are revealed directly in step $3$. One then only needs to solve the linear system described in step $7$ to recover the scale factors. The sample complexity remains $O(nr)$, nevertheless. \item Note that for the method we propose, the most computationally expensive step is that of solving low-rank matrix recovery problems where the matrix is of size $n_i \times n_j$ for $i,j = 1,2,3$. Fast algorithms with rigorous guarantees exist for solving such problems, and we can use any of these pre-existing methods. An important point to note is that, other methods for minimizing the Tucker rank of a tensor by considering ``matricized" tensors solve matrix recovery problems for matrices of size $n_i \times n_j n_k$, which can be far more expensive. \item Note that the sensing operators $\langle A_i\otimes a , \cdot \rangle$ may seem non-standard (vis-a-vis the compressed sensing literature such as \cite{recht2010guaranteed}), but are very storage efficient. Indeed, one needs to only store random matrices $A_i, B_i$ and random vectors $a,b$. Storing each of these operators requires $O(n_1 n_2 + n_3)$ space, and is far more storage efficient than (perhaps the more suggestive) sensing operators of the form $\langle \vct{A}_i , \cdot \rangle$, with each $\vct{A}_i$ being a random tensor requiring $O(n_1 n_2 n_3)$ space. Similar ``low rank" sensing operators have been used for matrix recovery \cite{kueng2014low, jain2013provable}. \item While the results here are presented in the case where the $A_i, B_i$ are random Gaussian matrices and the $a,b$ are uniformly distributed on the sphere, the results are not truly dependent on these distributions. The $A_i, B_i$ need to be structured so that they enable low-rank matrix recovery (i.e., they need to be ``faithful''). Hence, for instance it would suffice if the entries of these matrices were sub-Gaussian, or had appropriate restricted isometry properties with respect to low rank matrices \cite{recht2010guaranteed}. \end{enumerate} \end{remarks} \subsection{Tensor Completion} \label{sec:third_order_completion} In the context of tensor completion, for a fixed (but unknown) $\vct{X} $, a subset of the entries $\vct{X}_\Omega$ are revealed for some index set $\Omega \subseteq [n_1] \times [n_2] \times [n_3]$. We assumed that the measurements thus revealed are in a union of four slices. For the $i^{th}$ mode-$1$ slice let us define \begin{align*} \Omega^{(1)}_i:=\Omega \cap S^{(1)}_i \qquad m^{(1)}_i:=\left| \Omega \cap S_i^{(1)} \right|. \end{align*} These are precisely the set of entries revealed in the $i^{th}$ mode-$1$ slice and the corresponding cardinality. Similarly for the $k^{th}$ mode-$3$ slice we define \begin{equation} \label{eq:comp_meas1} \Omega^{(3)}_k=\Omega \cap S^{(3)}_k \qquad m^{(3)}_k:=\left| \Omega \cap S_k^{(3)} \right|. \end{equation} We will require the existence of two distinct mode-$1$ slices (say $i^*_1$ and $i^*_2$) from which measurements are obtained. Indeed, \begin{equation} \label{eq:comp_meas2} \mathcal{L}^{(1)}_1 \left( \vct{X} \right):=\left( \vct{X} \right)_{\Omega^{(1)}_{i_1^*}} \qquad \mathcal{L}^{(1)}_2 \left( \vct{X} \right):=\left( \vct{X} \right)_{\Omega^{(1)}_{i_2^*}}. \end{equation} Similarly we will also require the existence of two different slices in mode $3$ \footnote{We choose modes 1 and 3 arbitrarily. Any two of the 3 modes suffice. } (say $k^*_1$ and $k^*_2$) from which we have measurements: $$ \mathcal{L}^{(3)}_1 \left( \vct{X} \right):=\left( \vct{X} \right)_{\Omega^{(3)}_{k_1^*}} \qquad \mathcal{L}^{(3)}_2 \left( \vct{X} \right):=\left( \vct{X} \right)_{\Omega^{(3)}_{k_2^*}}. $$ We will require the cardinalities of the measurements from mode $1$, $m^{(1)}_{i_{1}^{*}}$ and $m^{(1)}_{i_{2}^{*}}$ and from mode $3$, $m^{(3)}_{k_{1}^{*}}$ and $m^{(3)}_{k_{2}^{*}}$ to be sufficiently large so that they are faithful (to be made precise subsequently), and this will determine the sample complexity. The key aspect of the algorithm is that it \emph{only makes use of the samples in these four distinct slices}. No other samples outside these four slices need be revealed at all (so that all the other $m^{(1)}_i$ and $m^{(3)}_k$ can be zero). The indices sampled from each slice are drawn uniformly and randomly without replacement. Note that for a specified $m^{(1)}_{i_{1}^{*}}$, $m^{(1)}_{i_{2}^{*}}$, $m^{(3)}_{k_{1}^{*}}$ and $m^{(3)}_{k_{2}^{*}}$ the overall sample complexity implied is $m^{(1)}_{i_{1}^{*}} + m^{(1)}_{i_{2}^{*}} + m^{(3)}_{k_{1}^{*}} + m^{(3)}_{k_{2}^{*}}$. In the context of tensor completion, \eqref{eq:opt1}, \eqref{eq:opt2}, \eqref{eq:opt3} and \eqref{eq:opt4} reduce to solving low rank matrix completion problems for the slices $S_{i_{1}^{*}}^{(1)}$, $S_{i_{2}^{*}}^{(1)}$, $S_{k_{1}^{*}}^{(3)}, S_{k_{2}^{*}}^{(3)}$. Contraction recovery in this context amounts to obtaining complete slices, which can then be used as inputs to Leurgans' algorithm. There are a few important differences however, when compared to the case of recovery from Gaussian random projections. For the matrix completion sub-steps to succeed, we need the following standard incoherence assumptions from the matrix completion literature \cite{Ben}. Let $\mathcal{U}, \mathcal{V}$ and $\mathcal{W}$ represent the linear spans of the vectors $\left\{ u_i \right\}_{=1, \ldots, r}, \left\{ v_i \right\}_{=1, \ldots, r}, \left\{ w_i \right\}_{=1, \ldots, r}$. Let $P_{\mathcal{U}}$, $P_{\mathcal{V}}$ and $P_{\mathcal{W}}$ respectively represent the projection operators corresponding to $\mathcal{U}, \mathcal{V}$ and $\mathcal{W}$. The coherence of the subspace $\mathcal{U}$ (similarly for $\mathcal{V}$ and $\mathcal{W}$) is defined as: $$ \mu(\mathcal{U}):=\frac{n_1}{r} \max_{i=1, \ldots, n_1} \| P_{\mathcal{U}}\left( e_i \right) \|^2, $$ where $\{e_i\}$ are the canonical basis vectors. \begin{assumption}[Incoherence] \label{assump:3} $\mu_0:= \max\left\{ \mu(\mathcal{U}), \mu(\mathcal{V}), \mu(\mathcal{W}) \right\}$ is a positive constant independent of the rank and the dimensions of the tensor. \end{assumption} Such an incoherence condition is required in order to be able to complete the matrix slices from the observed data \cite{Ben}. We will see subsequently that when the tensor is of rank $r$, so are the different slices of the tensor and each slice will have a ``thin'' singular value decomposition. Furthermore, the incoherence assumption will also hold for these slices. \begin{definition} \label{def:1} Let $X_i^{1} = U \Sigma V^T$ be the singular value decomposition of the tensor slice $X_i^{1}$. We say that the tensor $\vct{X}$ satisfies the \emph{slice condition} for slice $S^{(1)}_i$ with constant $\mu^{(1)}_i$ if the element-wise infinity (max) norm $$\|UV^T\|_{\infty} \leq \mu^{(1)}_i \sqrt{\frac{r}{n_2n_3}}.$$ \end{definition} The slice condition is analogously defined for the slices along other modes, i.e. $S^{(2)}_j$ and $S^{(3)}_k$. We will denote by $\mu^{(2)}_j$ and $\mu^{(3)}_k$ the corresponding slice constants. We will require our distinct slices from which samples are obtained to satisfy these slice conditions. \begin{remark} The slice conditions are standard in the matrix completion literature, see for instance \cite{Ben}. As pointed out in \cite{Ben}, the slice conditions are not much more restrictive than the incoherence condition, because if the incoherence condition is satisfied with constant $\mu_0$ then (by a simple application of the Cauchy-Schwartz inequality) the slice condition for $S_{i}^{(1)}$ is also satisfied with constant $\mu_1(i) \leq \mu_0 \sqrt{r}$ for all $i$ (and similarly for $\mu^{(2)}_j$ and $\mu^{(3)}_k$). Hence, the slice conditions can be done away with, and using this weaker bound only increases the sample complexity bound for exact reconstruction by a multiplicative factor of $r$. \end{remark} \begin{remark} Note that the incoherence assumption and the slice condition are known to be satisfied for suitable random ensembles of models, such as the random orthogonal model, and models where the singular vectors are bounded element-wise \cite{Ben}. \end{remark} The decomposition \eqref{eq:decomp} ties factor information about the tensor to factor information of contractions. A direct corollary of Lemma \ref{lemma:contr_rank} is that contraction matrices are incoherent whenever the tensor is incoherent: \begin{corollary} \label{cor:incoherence} If the tensor satisfies the incoherence assumption, then so do the contractions. Specifically all the tensor slices satisfy incoherence. \end{corollary} \begin{proof} Consider for instance the slices $X_k^3$ for $k=1, \ldots, n_3$. By Lemma \ref{lemma:contr_rank}, the row and column-spaces of each slice are precisely $\mathcal{U}$ and $\mathcal{V}$ respectively, thus the incoherence assumption also holds for the slices. \end{proof} We now detail our result for the tensor completion problem: \begin{lemma} \label{lemma:contr_rec} Given a tensor $\vct{X}$ with rank $r \leq n_1$ which satisfies the following: \begin{itemize} \item Assumptions \ref{assump:2} and \ref{assump:3}, \item The samples are obtained as described in \eqref{eq:comp_meas1}, \eqref{eq:comp_meas2}. \item Suppose the number of samples from each slice satisfy: \begin{align*} m^{(1)}_{i_{1}^{*}} \geq 32 \max \left\{ \mu_0, \left( \mu_{i_1^{*}}^{(1)} \right)^2 \right\} r(n_2+n_3) \log^{2}n_3 \\ m^{(1)}_{i_{2}^{*}} \geq 32 \max \left\{ \mu_0, \left( \mu_{i_2^{*}}^{(1)} \right)^2 \right\} r(n_2+n_3) \log^{2}n_3 \\ m^{(3)}_{k_{1}^{*}} \geq 32 \max \left\{ \mu_0, \left( \mu_{k_1^{*}}^{(3)} \right)^2 \right\} r(n_1+n_2) \log^{2}n_2 \\ m^{(3)}_{k_{2}^{*}} \geq 32 \max \left\{ \mu_0, \left( \mu_{k_2^{*}}^{(3)} \right)^2 \right\} r(n_1+n_2) \log^{2}n_2 \\ \end{align*} \vspace{-13mm} \item The slice condition (Definition \ref{def:1}) for each of the four slices $S_{i_{1}^{*}}^{(1)}, S_{i_{2}^{*}}^{(1)}, S_{k_{1}^{*}}^{(3)}, S_{k_{2}^{*}}^{(3)}$ hold. \end{itemize} Then the unique solutions to problems \eqref{eq:opt1}, \eqref{eq:opt2}, \eqref{eq:opt3} and \eqref{eq:opt4} are $X_{k_{1}^{*}}^{3}$ $X_{k_{2}^{*}}^{3}$, $X_{i_{1}^{*}}^{1}$, and $X_{i_{2}^{*}}^{1}$ respectively with probability exceeding $1-C \log(n_2)n_2^{-\beta}$ for some constants $C, \beta >0$. \end{lemma} \begin{proof} By Lemma \ref{lemma:contr_rank} $X_{i_{1}^{*}}^{1}$, $X_{i_{2}^{*}}^{1}$, $X_{k_{1}^{*}}^{3}$, $X_{k_{2}^{*}}^{3}$ are all rank at most $r$. By Theorem 1.1 of \cite{Ben}, the convex problems \eqref{eq:opt1}, \eqref{eq:opt2}, \eqref{eq:opt3}, \eqref{eq:opt4} correctly recover the full slices $X_{k_{1}^{*}}^{3}$, $X_{k_{2}^{*}}^{3}$, $X_{i_{1}^{*}}^{1}$, $X_{i_{2}^{*}}^{1}$ with high probability. (Note that the relevant incoherence conditions in \cite{Ben} are satisfied due to Corollary \ref{lemma:contr_rank} and the slice condition assumption. Furthermore the number of samples specified meets the sample complexity requirements of Theorem 1.1 in \cite{Ben} for exact recovery.) \end{proof} \begin{remark} We note that in this sub-section, events that occur with probability exceeding $1-C \log(n_2) n_2^{-\beta}$ (recall that $n_1 \leq n_2 \leq n_3$) are termed as occurring with high probability (w.h.p.). We will transparently be able to union bound these events (thus changing only the constants) and hence we refrain from mentioning these probabilities explicitly. \end{remark} \begin{theorem} \label{thm:exact_rec_comp} Let $\vct{X} \in \mathbb{R}^{n_1\times n_2 \times n_3}$ be an unknown tensor of interest with rank $r \leq n_1$, such that the tensor slices $X_{k_{1}^{*}}^{3}$ $X_{k_{2}^{*}}^{3}$ are non-degenerate and pairwise generic, and similarly $X_{i_{1}^{*}}^{1}, X_{i_{2}^{*}}^{1}$ are non-degenerate and pairwise generic. Then, under the same set of assumptions made for Lemma \ref{lemma:contr_rec}, the procedure outlined in Algorithm \ref{alg:trecs} succeeds in exactly recovering $\vct{X}$ and its low rank decomposition \eqref{eq:decomp0} with high probability. \end{theorem} \begin{proof} The proof follows along the same lines as that of Theorem \ref{thm:exact_rec_sense}, with Lemma \ref{lemma:contr_rec} allowing us to exactly recover the slices $X_{i_{1}^{*}}^{1}, X_{i_{2}^{*}}^{1}, X_{k_{1}^{*}}^{3}, X_{k_{2}^{*}}^{3}$. Since these slices satisfy non-degeneracy and pairwise genericity, the tensor factors $u_i, v_i, w_i$, $i=1, \ldots, r$ can be exactly recovered (up to scaling) by following steps $(3)$, $(5)$ and $(6)$ of Algorithm \ref{alg:trecs}. Also, the system of equations to recover $\lambda$ is given by \[ \mtx{X}_{\Omega} = \sum_{i = 1}^r \lambda_i (u_i \otimes v_i \otimes w_i)_{\Omega}. \] \end{proof} \begin{remarks} \hspace{10mm} \begin{enumerate} \item Theorem \ref{thm:exact_rec_comp} yields bounds that are almost order optimal when $\mu_0$ and $\mu^{(i)}_k$ are constant (independent of $r$ and the dimension). Indeed, the total number of samples required is $m \sim O(rn_3 \log^2 n_3)$, which by a counting argument is nearly the same number of parameters in an order 3 tensor of rank $r$ (except for the additional logarithmic factor). \item The comments about efficiency for symmetric factorizations in the Gaussian random projections case hold here as well. \item We do not necessarily need sampling without replacement from the four slices. Similar results can be obtained for other sampling models such as with replacement \cite{Ben}, and even non-uniform sampling \cite{Sujay}. Furthermore, while the method proposed here for the task of matrix completion relies on nuclear norm minimization, a number of other approaches such as alternating minimization \cite{optspace,Sujay,Burer} can also be adopted; our algorithm relies only on the successful completion of the slices. \item Note that we can remove the slice condition altogether since the incoherence assumption implies the slice condition with $\mu_1 = \mu_0 \sqrt r$. Removing the slice condition then implies an overall sample complexity of $O(r^2n_3\log^2n_3)$. \end{enumerate} \end{remarks} \section{Extension to Higher Order Tensors} \label{sec:higher_order} The results of Section \ref{sec:third_order} can be extended to higher order tensors in a straightforward way. While the ideas remain essentially the same, the notation is necessarily more cumbersome in this section. We omit some technical proofs to avoid repetition of closely analogous arguments from the third order case, and focus on illustrating how to extend the methods to the higher order setting. Consider a tensor $\vct{X} \in \mathbb{R}^{n_{1} \times \cdots \times n_{K}}$ of order $K$ and dimension $n_1\times \cdots \times n_{K}$. Let us assume, without loss of generality, that $n_1 \leq n_2 \leq \ldots \leq n_K$. Let the rank of this tensor be $r \leq n_1$ and be given by the decomposition: \begin{align*} \vct{X} &= \sum_{l=1}^r u_l^1 \otimes \cdots \otimes u_l^K = \sum_{l=1}^r \bigotimes_{p=1}^{K} u^p_l, \end{align*} where $u_l^p \in \mathbb{R}^{n_p}$. We will be interested in slices of the given tensor that are identified by picking two \emph{consecutive} modes $(k, k+1)$, and by fixing all the indices not in those modes, i.e. $i_1 \in [n_{1}], \ldots, i_{k-1} \in [n_{k-1}], i_{k+2} \in [n_{k+2}], \ldots, i_K \in [n_K]$. Thus the indices of a slice $S$ are: \begin{align*} S:=\left\{ i_1 \right\} \times \cdots \times \left\{ i_{k-1} \right\} \times [n_k] \times [n_{k+1}] \times \left\{ i_{k+2} \right\} \times \cdots \times \left\{ i_K \right\}, \end{align*} and the corresponding slice may be viewed as a matrix, denoted by $\vct{X}_S$. While slices of tensors can be defined more generally (i.e. the modes need not be consecutive), in this paper we will only need to deal with such ``contiguous'' slices. \footnote{In general, a slice corresponding to any pair of modes $(k_1,k_2)$ suffices for our approach. However, to keep the notation simple we present the case where slices correspond to mode pairs of the form $(k,k+1)$.} We will denote the collection of all slices where modes $(k,k+1)$ are contained to be: \small $$ \mathcal{S}^{(k)} := \left\{ \left\{ i_1 \right\} \times \cdots \times \left\{ i_{k-1} \right\} \times [n_k] \times [n_{k+1}] \times \left\{ i_{k+2} \right\} \times \cdots \times \left\{ i_K \right\} \; | \; i_1 \in [n_1], \ldots, i_K \in [n_K] \right\}. $$ \normalsize Every element of $\mathcal{S}^{(k)}$ is a set of indices, and we can identify a tensor $\vct{A} \in \mathbb{R}^{n_1 \times \cdots \times n_{k-1} \times n_{k+2} \times \cdots \times n_{K}}$ with a map $\mathcal A: \mathcal{S}^{(k)} \rightarrow \mathbb{R}$. Using this identification, every element of $\vct{A}$ can thus also be referenced by $S \in \mathcal{S}^{(k)}$. To keep our notation succinct, we will thus refer to $\vct{A}_S$ as the element corresponding to $S$ under this identification. Thus if $S=\left\{ i_1 \right\} \times \cdots \times \left\{ i_{k-1} \right\} \times [n_k] \times [n_{k+1}] \times \left\{ i_{k+2} \right\} \times \cdots \times \left\{ i_K \right\}$, the element: $$ \vct{A}_S = \vct{A}_{i_1, \ldots, i_{k-1}, i_{k+2}, \ldots, i_K}. $$ Using this notation, we can define a high-order contraction. A mode-$k$ contraction of $\vct{X}$ with respect to a tensor $\vct{A}$ is thus: \begin{equation} \label{eq:contraction_ho} X_{\vct{A}}^{k} := \sum_{S \in \mathcal{S}^{(k)}}\vct{A}_S \vct{X}_S. \end{equation} Note that since $X_{\vct{A}}^{k}$ is a sum of (two-dimensional) slices, it is a matrix. As in the third order case, we will be interested in contractions where $\vct{A}$ is either random or a coordinate tensor. The analogue of Lemma \ref{lemma:fund_lemma} for the higher order case is the following: \begin{lemma} \label{lemma:decomp_high} Let $\vct{X}$ have the decomposition $\vct{X} = \sum_{l=1}^r \bigotimes_{p=1}^{K} u^p_l$. Then we have that the contraction $X^k_{\vct{A}}$ has the following matrix decomposition: \begin{equation} \label{eq:decomp_high1} X^k_{\vct{A}}=\sum_{l=1}^r \nu_l^k u_l^k \left( u_l^{k+1} \right)^{T}, \end{equation} where $\nu_l^k:= \langle \vct{A}, \underset{{p \neq k, k+1}}{\bigotimes}u^p_l\rangle $. Furthermore, if $X_{\vct{B}}^k$ is another contraction with respect to $\vct{B}$, then the eigenvectors of the matrices \begin{align} \label{eq:m_and_n} M_1=X^k_{\vct{A}} \left( X^k_{\vct{B}} \right) ^{\dag} & \qquad M_2=\left( \left( X^k_{\vct{B}} \right)^{\dag} X^k_{\vct{A}} \right)^{T} \end{align} respectively are $\{ u^k_l \}_{l=1, \ldots, r}$ and $\{ u^{k+1}_l \}_{l=1, \ldots, r}$. \end{lemma} \begin{proof} It is straightforward to verify by simply expanding the definition of $X_{\vct{A}}^k$ using the definition of contraction \eqref{eq:contraction_ho}: \begin{align*} \left[ X_{\vct{A}}^{k} \right]_{j_{k},j_{k+1}} ={\sum_{j_1, \ldots, j_{k-1}, j_{k+2}, \ldots, j_K} \sum_{l=1}^{r} \left( \prod_{p=1}^{K} \left( u_l^{p} \right)_{j_k} \right) \vct{A}_{j_1, \ldots, j_{k-1},{j_{k+2}}, \ldots, j_{K}}}. \end{align*} \normalsize Rearranging terms, we get the decomposition \eqref{eq:decomp_high1}. The eigenvalues of $M_1, M_2$ follow along similar lines to the proof of Lemma \ref{lemma:fund_lemma}. \end{proof} As a consequence of the above lemma, if $\vct{X}$ is of low rank, so are all the contractions. The notions of non-degeneracy and pairwise genericity of contractions extend in a natural way to the higher order case. We say that a contraction $X^k_{\vct{A}}$ is non-degenerate if $\nu_l^k \neq 0$ for all $l=1, \ldots, r$. Furthermore, a pair of contractions is pairwise generic if the corresponding ratios $\nu_l^k$ are all distinct for $l=1, \ldots, r$. Non-degeneracy and pairwise genericity hold almost surely when the contractions are computed with random tensors $\vct{A}$, $\vct{B}$ from appropriate random ensembles (e.g. $i.i.d.$ normally distributed entries). In much the same way as the third order case, Leurgans' algorithm can be used to perform decomposition of low-rank tensors using Lemma \ref{lemma:decomp_high}. This is described in Algorithm \ref{alg:leurgans_high}. \begin{algorithm}[!ht] \caption{Leurgans' Algorithm for Higher Order Tensors} \label{alg:leurgans_high} \begin{algorithmic}[1] \STATE {\bfseries Input:} Tensor $\vct{X}$. \FOR{ $k=1$ to $K-1$} \STATE Compute contractions $X_{\vct{A}}^k$ and $X_{\vct{B}}^k$ for some tensors $\vct{A}$ and $\vct{B}$ of appropriate dimensions, such that the contractions are non-degenerate and pairwise generic. \STATE Compute eigen-decompositions of $M_1 := X_{\vct{A}}^k \left(X_{\vct{B}}^k \right)^{\dagger}$ and $M_2 := \left(X_{\vct{B}}^k\right)^{\dagger}X_{\vct{A}}^k$. Let $\tilde{U}^k$ and $\tilde{U}^{k+1}$ denote the matrices whose columns are the eigenvectors of $M_1$ and $M_2^T$ respectively corresponding to the non-zero eigenvalues, in sorted order. (Let $r$ be the (common) rank of $M_1$ and $M_2$.) \STATE If $k=1$, let $U^1:=\tilde{U}^1$ and $U^2:= \tilde{U}^2$. \STATE If $k \geq 2$, simultaneously reorder the columns of $\tilde{U}^k$, $\tilde{U}^{k+1}$, also performing simultaneous sign reversals as necessary so that the columns of $\tilde{U}^{k}$ obtained match with the columns of $U^{k}$ (obtained in the previous iteration), call the resulting matrices $U^{k}$, $U^{k+1}$. (The eigenvectors corresponding to mode $k+1$, thus obtained are denoted as $\{ u_l^{k+1} \}_{l=1, \ldots, r}$.) \ENDFOR \STATE Solve for $\lambda_l$ in the (over-determined) linear system $$\vct{X}= \sum_{l=1}^{r} \lambda_l \bigotimes_{k=1}^{K} u^k_l. $$ \STATE {\bfseries Output:} Recovered tensor $\vct{X}=\sum_{l=1}^r \lambda_l \bigotimes_{k=1}^{K} u_l^k$. \end{algorithmic} \end{algorithm} Finally, the notion of \emph{separable measurements} can be extended to higher order tensors in a natural way. \begin{definition} Consider a linear operator $\mathcal{L}: \mathbb{R}^{n_1 \times \cdots \times n_K} \rightarrow \mathbb{R}^n$. We say that $\mathcal{L}$ is separable with respect to the $k^{th}$ mode if there exist $\vct{W} \in \mathbb{R}^{n_1 \times \cdots \times n_{k-1} \times n_{k+2} \times \cdots \times n_K}$ and a linear operator $\mathcal{T}^{(k)}: \mathbb{R}^{n_k \times n_{k+1} } \rightarrow \mathbb{R}^n$, such that for every $\vct{X} \in \mathbb{R}^{n_1 \times \cdots \times n_K}$: $$ \mathcal{L}\left( \vct{X} \right) = \sum_{S \in \mathcal{S}^{(k)}} \vct{W}_S \,\mathcal{T}^{(k)} \left( X^k_S \right). $$ \end{definition} Analogous to the third order case, we assume that we are presented with two sets of separable measurements per mode: \begin{align*} y_1^{(k)}&=\mathcal{L}^{(k)}_1\left( \vct{X} \right) = \sum_{S \in \mathcal{S}^{(k)}} \left(\vct{W}_1\right)_{S} \,\mathcal{T}^{(k)} \left( X^k_S \right)\\ y_2^{(k)} & =\mathcal{L}^{(k)}_2\left( \vct{X} \right) = \sum_{S \in \mathcal{S}^{(k)}} \left( \vct{W}_2\right)_{S} \,\mathcal{T}^{(k)} \left( X^k_S \right) \end{align*} for $k=1, \ldots, K-1$ with each of $y_1^{(k)}, y_2^{(k)} \in \mathbb{R}^{m_{k}}$. Once again, by separability we have: $$ y_1^{(k)} = \mathcal{T}^{(k)} \left( X_{{\vct{W}}_1}^k \right) \qquad y_2^{(k)} = \mathcal{T}^{(k)} \left( X_{{\vct{W}}_2}^k \right), $$ and since the contractions $X_{{\vct{W}}_1}^k$ and $ X_{{\vct{W}}_2}^k$ are low rank, nuclear norm minimization can be used to recover these contractions via: \begin{equation} \label{eq:opt_ho1} \underset{Z_1}{\text{minimize}} \qquad \|Z_1 \|_* \qquad \text{subject to} \qquad y_1^{(k)}=\mathcal{T}^{(k)}\left( Z_1 \right), \end{equation} \begin{equation} \label{eq:opt_ho2} \underset{Z_2}{\text{minimize}} \qquad \|Z_2 \|_* \qquad \text{subject to} \qquad y_2^{(k)}=\mathcal{T}^{(k)}\left( Z_2 \right), \end{equation} for each $k=1, \ldots, K-1$. After recovering the two contractions for each mode, we can then apply (the higher order) Leurgans' algorithm to recover the tensor factors. The precise algorithm is described in Algorithm \ref{alg:trecs_high}. Provided the $\mathcal{T}^{(k)}\left( \cdot \right)$ are faithful, the tensor contractions can be successfully recovered via nuclear norm minimization. Furthermore, if the contractions are non-degenerate and pairwise generic, the method can successfully recover the entire tensor. \begin{algorithm}[!ht] \caption{T-ReCs for Higher Order Tensors} \label{alg:trecs_high} \begin{algorithmic}[1] \STATE {\bfseries Input:} Measurements $y_i^{(k)} = \mathcal{L}_i^{(k)} \left( \vct{X} \right)$, for $k=1, \ldots, K$, $i=1, 2.$ \FOR{ $k=1$ to $K-1$} \STATE Solve convex optimization problems \eqref{eq:opt_ho1} and \eqref{eq:opt_ho2} to obtain optimal solutions $Z_1^*$ and $Z_2^*$ respectively. \STATE Compute eigen-decompositions of $M_1 := Z_1^{*}(Z_2^*)^{\dagger}$ and $M_2 := (Z_2^*)^{\dagger}Z_1^*$. Let $\tilde{U}^k$ and $\tilde{U}^{k+1}$ denote the matrices whose columns are the normalized eigenvectors of $M_1$ and $M_2^T$ respectively corresponding to the non-zero eigenvalues, in sorted order. (Let $r$ be the (common) rank of $M_1$ and $M_2^{T}$.) \STATE If $k=1$, let $U^1:=\tilde{U}^1$ and $U^2:= \tilde{U}^2$. \STATE If $k \geq 2$, simultaneously reorder the columns of $\tilde{U}^k$, $\tilde{U}^{k+1}$, also performing simultaneous sign reversals as necessary so that the columns of $\tilde{U}^{k}$ obtained match with the columns of $U^{k}$ (obtained in the previous iteration), call the resulting matrices $U^{k}$, $U^{k+1}$. (The eigenvectors corresponding to mode $k+1$, thus obtained are denoted as $\{ u_l^{k+1} \}_{l=1, \ldots, r}$.) \ENDFOR \STATE Solve for $\lambda_l$ in the (over-determined) linear system $$y_i^{(k)}= \sum_{l=1}^{r} \lambda_l \mathcal{L}_i^{(k)}\left(\bigotimes_{k=1}^{K} u^k_l \right), \; \; k=1, \ldots, K-1, \; i=1,2.$$ \STATE {\bfseries Output:} Recovered tensor $\vct{X}=\sum_{l=1}^r \lambda_l \bigotimes_{k=1}^{p} u_l^k$. \end{algorithmic} \end{algorithm} \subsection{Separable Random Projections} Given tensors $\vct{A} \in \mathbb{R}^{n_1 \times \cdots \times n_{K_1}}$, $\vct{B} \in \mathbb{R}^{n_{K_1+1} \times \cdots \times n_{K_1+K_2}}$, $\vct{C} \in \mathbb{R}^{n_{K_1+K_2+1} \times \cdots \times n_{K_1+K_2+K_3}}$ of orders $K_1$, $K_2$ and $K_3$ respectively with $K_1+K_2+K_3=K$, we define their outer product as: \begin{align*} \left[ \vct{A} \otimes \vct{B} \otimes \vct{C} \right]_{i_1, \ldots, i_K} := \left[ \vct{A} \right]_{i_1, \ldots, i_{K_{1}}} \left[ \vct{B} \right]_{i_{K_{1}+1}, \ldots, i_{K_{1}+K_{2}}} \left[ \vct{C} \right]_{i_{K_{1}+K_{2}+1}, \ldots, i_{K_{1}+K_{2}+K_{3}}} \end{align*} Note also that the inner-product for higher order tensors is defined in the natural way: $$ \langle\vct{T}, \vct{X} \rangle := \sum_{i_1, \ldots, i_K} \left[ \vct{T} \right]_{i_1, \ldots, i_K} \left[ \vct{X} \right]_{i_1, \ldots, i_K}. $$ In this higher order setting, we also work with specific separable random projection operators, which are defined as below: \begin{equation} \label{eq:meas} \begin{split} y_1^{(k)}=\mathcal{L}_1^{(k)} \left( \vct{X} \right) := \left[ \begin{array}{c} \langle \vct{A}_k \otimes \Gamma^{(k)}_1 \otimes \vct{B}_k, \vct{X} \rangle \\ \vdots \\ \langle \vct{A}_k \otimes \Gamma^{(k)}_{m_k} \otimes \vct{B}_k, \vct{X} \rangle \end{array} \right] \\ y_2^{(k)}=\mathcal{L}_2^{(k)} \left( \vct{X} \right) := \left[ \begin{array}{c} \langle \vct{C}_k \otimes \Gamma^{(k)}_1 \otimes \vct{D}_k, \vct{X} \rangle \\ \vdots \\ \langle \vct{C}_k \otimes \Gamma^{(k)}_{m_k} \otimes \vct{D}_k, \vct{X} \rangle \end{array} \right] \end{split} \end{equation} \small \begin{equation} \label{eq:meas} \end{equation} \normalsize In the above expressions, $\vct{A}_k, \vct{C}_k \in \mathbb{R}^{n_{1} \times \cdots \times n_{k-1}}$, and $\vct{B}_k, \vct{D}_k \in \mathbb{R}^{n_{k+2} \times \cdots \times n_{K}}$. The tensors $\vct{A}_k$, $\vct{B}_k$, $\vct{C}_k$, $\vct{D}_k$ are all chosen so that their entries are randomly and independently distributed according to $\mathcal{N}(0,1)$ and subsequently normalized to have unit Euclidean norm. The matrices $\Gamma^{(k)}_i \in \mathbb{R}^{n_k \times n_{k+1}}$ for $i=1,\ldots, m_k$ have entries randomly and independently distributed according to $\mathcal{N}(0,1)$. For each $k$ we have $2m_k$ measurements so that in total there are $2\sum_{i=1}^{K-1} m_k$ measurements. \begin{lemma} \label{lemma:high_eq} We have the following identity: $$ \langle \vct{A}_k \otimes \Gamma^{(k)}_i \otimes \vct{B}_k, \vct{X} \rangle = \langle \Gamma^{(k)}_i, X^k_{\vct{A}_k \otimes \vct{B}_k}\rangle. $$ \end{lemma} \begin{proof} The proof is analogous to that of Lemma \ref{lemma:main_obs}. \begin{align*} &\langle \vct{A}_k \otimes \Gamma^{(k)}_i \otimes \vct{B}_k, \vct{X}\rangle \\ & = \sum_{l=1}^{r} \langle \vct{A}_k \otimes \Gamma^{(k)}_i \otimes \vct{B}_k, \bigotimes_{p=1}^{K} u^p_l\rangle\\ &\stackrel{(\text{i})}{=}\sum_{l=1}^r \langle \vct{A}_k, \bigotimes_{p=1}^{k-1}u^p_l\rangle \langle \vct{B}_k, \bigotimes_{p=k+2}^{K}u^p_l\rangle \langle \Gamma^{(k)}_i ,u_l^{k} \otimes u_l^{k+1} \rangle \\ &= \sum_{l=1}^{r} \langle \vct{A}_k \otimes \vct{B}_k, \bigotimes_{p=1}^{k-1}u^p_l \bigotimes_{p=k+2}^{K}u^p_l\rangle \langle \Gamma^{(k)}_i ,u_l^{k} \otimes u_l^{k+1} \rangle \\ &\stackrel{(\text{ii})}{=} \sum_{l=1}^r \nu_l^{k} \langle \Gamma^{(k)}_i ,u_l^{k} \otimes u_l^{k+1} \rangle \; \; \; (\text{where } \nu_l^k= \langle \vct{A}_k \otimes \vct{B}_k, \underset{{p \neq, k, k+1}}{\bigotimes}u^p_l\rangle )\\ &= \langle \Gamma^{(k)}_i, \sum_{l=1}^{r} \nu^k_l u_l^{k} \otimes u_l^{k+1} \rangle \\ &= \langle \Gamma^{(k)}_i, X^k_{\vct{A}_k \otimes \vct{B}_k} \rangle. \end{align*} The equality (i) follows from the identity $\langle a \otimes b \otimes c , x \otimes y \otimes z \rangle = \langle a, x \rangle \langle b, y \rangle \langle c, z \rangle$ for $a, b, c, x, y, z$ of commensurate dimensions. The equality (ii) follows from the definition $\nu_k^l$ in Lemma \ref{lemma:decomp_high} \end{proof} It follows immediately from Lemma \ref{lemma:high_eq} that in \eqref{eq:meas}, for each $k=1, \ldots, K-1$, $\mathcal{L}_1^{(k)} \left( \cdot \right)$, $\mathcal{L}_2^{(k)} \left( \cdot \right)$ are in fact, separable so that Algorithm \ref{alg:trecs_high} is applicable. Recovering the contractions involves solving a set of nuclear norm minimization sub-problems for each $k=1, \ldots, K-1$: \begin{equation} \label{eq:opt_high1} \begin{split} \underset{Z_1}{\text{minimize}} &\qquad \|Z_1\|_* \\ \text{subject to } & \qquad y_1^{(k)}= \left[ \begin{array}{c} \langle \Gamma^{(k)}_1, Z_1 \rangle \\ \vdots \\ \langle \Gamma^{(k)}_{m_k}, Z_1 \rangle \end{array} \right] \end{split} \end{equation} \begin{equation} \label{eq:opt_high2} \begin{split} \underset{Z_2}{\text{minimize}} &\qquad \|Z_2\|_* \\ \text{subject to } & \qquad y_2^{(k)}= \left[ \begin{array}{c} \langle \Gamma^{(k)}_1, Z_2 \rangle \\ \vdots \\ \langle \Gamma^{(k)}_{m_k}, Z_2 \rangle \end{array} \right] \end{split} \end{equation} We have the following lemma concerning the solutions of these optimization problems: \begin{lemma} Suppose $m_k > 3r(n_k+n_{k+1}-r)$. Then the unique solutions to problems \eqref{eq:opt_high1} and \eqref{eq:opt_high2} are $X^{k}_{\vct{A}_{k} \otimes\vct{B}_k}$ and $X^{k}_{\vct{C}_{k}\otimes \vct{D}_k}$ respectively with high probability. \end{lemma} The proof is analogous to that of Lemma \ref{lemma:contr_rec}. We have the following theorem concerning the performance of Algorithm \ref{alg:trecs_high}. \begin{theorem} \label{thm:exact_rec_high} Let $\vct{X} \in \mathbb{R}^{n_1\times \cdots \times n_K }$ be an unknown tensor of interest with rank $r \leq \min\left\{n_1, \ldots, n_K\right\}$. Suppose $m_k > 3r(n_k+n_{k+1}-r)$ for each $k=1, \ldots, K-1$. Then the procedure outlined in Algorithm \ref{alg:trecs_high} succeeds in exactly recovering $\vct{X}$ and its low rank decomposition with high probability. \end{theorem} The proof parallels that of the proof of Theorem \ref{thm:exact_rec_sense} and is omitted for the sake of brevity. \begin{remarks} \hspace{10mm} \begin{itemize} \item Note that the overall sample complexity is $2\sum_{k=1}^{K} m_k$, i.e., $6\sum_{k=1}^{K-1} r(n_{k}+n_{k+1}-r)$. This constitutes an order optimal sample complexity because a tensor of rank $r$ and order $K$ of these dimensions has $r\sum_{k=1}^{K}n_k$ degrees of freedom. In particular, when the tensor is ``square'' i.e., $n_1= \cdots = n_K=n$ the number of degrees of freedom is $Knr$ whereas the achieved sample complexity is no larger than $12Knr$, i.e., $O(Knr)$. \item As with the third order case, the algorithm is tractable. The main operations involve solving a set of matrix nuclear norm minimization problems (i.e. convex programs), computing eigenvectors, and aligning them. All of these are routine, efficiently solvable steps (thus ``polynomial time'') and indeed enables our algorithm to be scalable. \end{itemize} \end{remarks} \subsection{Tensor Completion} \label{sec:higher_order_completion} The method described in Section \ref{sec:third_order} for tensor completion can also be extended to higher order tensors in a straightforward way. Consider a tensor $\vct{X} \in \mathbb{R}^{n_{1} \times \cdots \times n_{K}}$ of order $K$ and dimensions $n_1\times \cdots \times n_{K}$. Let the rank of this tensor be $r \leq \min \left\{ n_1, \ldots, n_K \right\}$ and be given by the decomposition: \begin{align*} \vct{X} &= \sum_{l=1}^r u_l^1 \otimes \ldots \otimes u_l^K = \sum_{l=1}^r \bigotimes_{p=1}^{K} u^p_l, \end{align*} where $u_l^p \in \mathbb{R}^{n_p}$. Extending the sampling notation for tensor completion from the third order case, we define $\Omega$ to be the set of indices corresponding to the observed entries of the unknown low rank tensor $\vct{X}$, and define: \begin{align*} \Omega^{(k)}:=S^{(k)} \cap \Omega \qquad m^{(k)}:=|\Omega^{(k)}|, \end{align*} where ${S}^{(k)} \in \mathcal{S}^{k}$. Akin to the third-order case, along each pair of consecutive modes, we will need samples from two distinguished slices. We denote the index set of these distinct slices by $S_1^{(k)}$ and $S_2^{(k)}$, the corresponding slices by $X_1^k$ and $X_2^k$, the index set of the samples revealed from these slices by $\Omega_1^{(k)}$ and $\Omega_2^{(k)}$, and their cardinality by $m_1^{(k)}$ and $m_2^{(k)}$. It is a straightforward exercise to argue that observations obtained from each slice $S_i^{(k)}$, $i=1,2$ correspond to separable measurements, so that Algorithm \ref{alg:trecs_high} applies. The first step of Algorithm \ref{alg:trecs_high} involves solving a set of nuclear norm minimization sub-problems (two problems for each $k=1, \ldots, K-1$) to recover the slices: \begin{equation} \label{eq:opt_high_comp1} \underset{Z_1}{\text{minimize}} \qquad \|Z_1\|_* \qquad \text{subject to } \qquad \vct{X}_{\Omega_1^{(k)}}=\left[ Z_1 \right]_{\Omega_1^{(k)}} \end{equation} \begin{equation} \label{eq:opt_high_comp2} \underset{Z_2}{\text{minimize}} \qquad \|Z_2\|_* \qquad \text{subject to } \qquad \vct{X}_{\Omega_2^{(k)}}=\left[ Z_2 \right]_{\Omega_2^{(k)}} \end{equation} Each of these optimization problems will succeed in recovering the corresponding slices provided incoherence, non-degeneracy and the slice conditions hold (note that these notions all extend to the higher order setting in a transparent manner). Under these assumptions, if the entries from each slice are uniformly randomly sampled with cardinality at least $m^{(i)}_k > C(n_k+n_{k+1})\log^2(n_{k+1})$, $i=1,2$ for some constant $C$, then the unique solutions to problems \eqref{eq:opt_high1} and \eqref{eq:opt_high2} will be $X^{k}_{1}$ and $X^{k}_2$ respectively with high probability. Once the slices $X^{k}_{1}$ and $X^{k}_{2}$ are recovered correctly using \eqref{eq:opt_high1}, \eqref{eq:opt_high2} for each $k=1, \ldots, K-1$ one can compute $M_1:=X^{k}_{1} \left( X^{k}_{2} \right)^{\dagger}$ and $M_2:=\left( X^{k}_{2}\right)^{\dagger}X^{k}_{1} $ and perform eigen-decompositions to obtain the factors (up to possible rescaling) $\{u_l^k \}$ and $\{u_l^{k+1} \}$. Finally, once the tensor factors are recovered, the tensor itself can be recovered exactly by solving a system of linear equations. These observations can be summarized by the following theorem: \begin{theorem} \label{thm:exact_rec_highcomp} Let $\vct{X} \in \mathbb{R}^{n_1\times \cdots \times n_K }$ be an unknown tensor of interest with rank $r \leq \min\left\{n_1, \ldots, n_K\right\}$. Suppose we obtain $m_1^{(k)}$ and $m_2^{(k)}$ random samples from each of the two distinct mode $k$ slices for each $k=1, \ldots, K-1$. Furthermore suppose the tensor $X$ is incoherent, satisfies the slice conditions for each mode, and the slices from which samples are obtained satisfy non-degeneracy and pairwise genericity for each mode. Then there exists a constant $C$ such that if $$m_i^{(k)}> C(n_k+n_{k+1})\log^2(n_{k+1}) \qquad i \in \left\{1,2 \right\}$$ the procedure outlined in Algorithm \ref{alg:trecs_high} succeeds in exactly recovering $\vct{X}$ and its low rank decomposition with high probability. \end{theorem} We finally remark that the resulting sample complexity of the entire algorithm is $\sum_{k=1}^{K-1}(m^{(k)}_1~+~m^{(k)}_2)$, which is $O(Kr n_K \log^2(n_K))$. \section{Experiments} In this section we present numerical evidence in support of our algorithm. We conduct experiments involving (suitably) random low-rank target tensors and their recovery from (a) Separable Random Projections and (b) Tensor Completion. We obtain phase transition plots for the same, and compare our performance to that obtained from the matrix-unfolding based approach proposed in \cite{squaredeal}. For the phase transition plots, we implemented matrix completion using the method proposed in \cite{rao2013conditional}, since the SDP approach for exact matrix completion of unfolded tensors was found to be impractical for even moderate-sized problems. \label{sec:exp} \subsection{Separable Random Projections : Phase Transition} In this section, we run experiments comparing T-ReCs to tensor recovery methods based on ``matricizing" the tensor via unfolding \cite{squaredeal}. \begin{figure \centering \subfigure[Tensor recovery using T-ReCs. (n=$30$)]{ \includegraphics[trim = 40mm 70mm 40mm 70mm, clip = true, scale = 0.5]{tenrec.pdf} } \subfigure[Tensor recovery using \cite{squaredeal}. (n=$30$)]{ \includegraphics[trim = 40mm 70mm 40mm 70mm, clip = true, scale = 0.5]{matrec.pdf} } \caption{Phase transition diagram for tensor recovery using our method. White indicates a probability of recovery of 1, while black indicates failure of exact recovery. Note that in the matrix unfolding case, one requires more measurements compared to our method to achieve the same probability of recovery for a given rank. } \label{phase} \end{figure} We consider a tensor of size $30 \times 30 \times 30$ whose factors $U, V, W \in \mathbb{R}^{n \times r}$ are i.i.d standard Gaussian entries. We vary the rank $r$ from 2 to 10, and look to recover these tensors from different number of measurements $m \in [2,20]*n$. For each $(r,n)$ pair, we repeated the experiment 10 times, and consider recovery a ``success" if the MSE is less than $10^{-5}$. Figure \ref{phase} shows that the number of measurements needed for accurate tensor recovery is typically less in our method, compared to the ones where the entire tensor is converted to a matrix for low rank recovery. \subsection{Tensor Completion: Phase Transition} \label{sec:exp_completion} We again considered tensors of size $30 \times 30 \times 30$, varied the rank of the tensors from $2$ to $10$, and obtained random measurements from four slices (without loss of generality we may assume they are the first 2 slices across modes 1 and 2). The number of measurements obtained varied as $n \times [2,20]$. Figure \ref{ten_rec} shows the phase transition plots of our method. We deem the method to be a ``success" if the MSE of the recovered tensor is less than $10^{-5}$. Results were averaged over $10$ independent trials. \begin{figure}[!h] \centering \subfigure[Phase transition for tensor completion using T-ReCs. (n=$30$)]{ \includegraphics[trim = 60mm 70mm 60mm 70mm, clip = true, scale = 0.5]{tencom.pdf} } \subfigure[Phase transition for tensor completion using \cite{squaredeal}. (n=$30$)]{ \includegraphics[trim = 60mm 70mm 60mm 70mm, clip = true, scale = 0.5]{matcom.pdf} } \label{ten_rec} \caption{Phase transition plots for tensor recovery. Results are averaged over 10 independent trials. White indicates success whereas black indicates failure.} \end{figure} \subsection{Speed Comparisons} \label{sec:speed} We finally compared the time taken to recover an $n \times n \times n$ tensor of rank 3. Figure \ref{time} shows that, T-ReCs with four smaller nuclear norm minimizations is far more scalable computationally as compared to the method of unfolding the tensor to a large matrix and then solving a single nuclear norm minimization program. This follows since matricizing the tensor involves solving for an $n^2 \times n$ matrix. Our method can thus be used for tensors that are orders of magnitude larger than competing methods. \begin{figure}[!h] \begin{center} \subfigure[Time taken to recover third order tensors. The numbers $5$ and $10$ in the legend refer to the cases where we obtain $5n$ and $10n$ measurements respectively]{ \includegraphics[width = 60mm, height = 50mm]{ten_rec_new.eps} \label{time}} \quad \subfigure[Time taken for tensor completion by our method (T-ReCs) to that of flattening the tensor (Matrix Unfolding).]{\includegraphics[width = 60mm, height = 50mm]{time_comp_new.eps} \label{ten_time}} \end{center} \end{figure} Along lines similar to the recovery case, we compared execution times to complete a $35 \times 35 \times 35$ sized tensor. Figure \ref{ten_time} shows again that the matrix completion approach takes orders of magnitude more time than that taken by our method. We average the results over 10 independent trials, and set $r = \frac{n}{5}, ~\ m = 3nr$ \section{Conclusion and Future Directions} \label{sec:conclusion} We introduced a computational framework for exact recovery of low rank tensors. A new class of measurements, known as \emph{separable} measurements was defined, and sensing mechanisms pf practical interest such as random projections and tensor completion with samples restricted to a few slices were shown to fit into the separable framework. Our algorithm, known as T-ReCs, built on the classical Leurgans' algorithm for tensor decomposition, was shown to be computationally efficient, and enjoy almost optimal sample complexity guarantees in both the random projection and the completion settings. A number of interesting avenues for further research follow naturally as a consequence of this work: \begin{enumerate} \item \textbf{Robustness:} Our algorithm has been analyzed in the context of \emph{noiseless} measurements. It would be interesting to study variations of the approach and the resulting performance guarantees in the case when measurements are noisy, in the spirit of the matrix completion literature \cite{matcompnoise}. \item \textbf{Non-separable measurements:} Our approach relies fundamentally on the measurements being separable. Tensor inverse problems, such as tensor completion in the setting when samples are obtained randomly and uniformly from the tensor do not fit into the separable framework. Algorithmic approaches for non-separable measurements thus remains an important avenue for further research. \item \textbf{Tensors of intermediate rank:} Unlike matrices, the rank of a tensor can be larger than its (largest) dimension, and indeed increase polynomially in the dimension. The approach described in this paper addresses inverse problems where the rank is smaller than the dimension (low-rank setting). Extending these methods to the intermediate rank setting is an interesting and challenging direction for future work. \item \textbf{Methods for tensor regularization:} Tensor inverse problems present an interesting dichotomy with regards to rank regularization. On the one hand, there is no known natural and tractable rank-regularizer (unlike the matrix case, the nuclear norm is not known to be tractable to compute). While various relaxations for the same have been proposed, the resulting approaches (while polynomial time), are neither scalable nor known to enjoy strong sample complexity guarantees. On the other hand, matrix nuclear norm has been used in the past in conjunction with matrix unfolding, but the resulting sample complexity performance is known to be weak. Our work establishes a third approach, we bypass the need for unfolding and expensive regularization, yet achieve almost optimal sample complexity guarantees and a computational approach that is also far more scalable. However, the method applies only for the case of separable measurements. This raises interesting questions regarding the need/relevance for tensor regularizers, and the possibility to bypass them altogether. \end{enumerate} \bibliographystyle{plain}
\section{\textbf{Introduction, Definitions and Notations}} \qquad Let $f$ be an entire function defined in the open complex plane \mathbb{C} .$ The\ maximum modulus function $M_{f}\left( r\right) $ corresponding to $f \ (see \cite{14}) is defined on $\left\vert z\right\vert =r$ as $M_{f}\left( r\right) =\QTATOP{\max }{\left\vert z\right\vert =r}\left\vert f\left( z\right) \right\vert $. A non-constant entire function $f$ is said have the Property (A) if for any $\sigma >1$ and for all sufficiently large $r,$ \left[ M_{f}\left( r\right) \right] ^{2}\leq M_{f}\left( r^{\sigma }\right) $ holds $\left( \text{see \cite{1}}\right) $. When $f$ is meromorphic, one may introduce another function $T_{f}\left( r\right) $ known as Nevanlinna's characteristic function of $f$ (see \cite[p.4]{6}), playing the same role as $M_{f}\left( r\right) .$ If $f$ is non-constant entire function, then its Nevanlinna's characteristic function is strictly increasing and continuous and therefore there exists its inverse functions $T_{f}^{-1}(r)$ $:$ $\left( \left\vert f\left( 0\right) \right\vert ,\infty \right) $ $\rightarrow $ \left( 0,\infty \right) $ with $\underset{s\rightarrow \infty }{\lim T_{f}^{-1}\left( s\right) =\infty .$ \qquad However, throughout this paper, we assume that the reader is familiar with the fundamental results and the standard notations of the Nevanlinna theory of meromorphic functions which are available in \cite{6, 10, 12, 13} and therefore we do not explain those in details. Now we define $\exp ^{[k]}x=\exp \left( \exp ^{[k-1]}x\right) $ and $\log ^{[k]}x=\log \left( \log ^{[k-1]}x\right) $ for $x\in \lbrack 0,\infty )$ and $k\in \mathbb{N} $ where \mathbb{N} $ be the set of all positive integers$.$ We also denote $\log ^{[0]}x=x,$ \log ^{[-1]}x=\exp x,$ $\exp ^{[0]}x=x$ and $\exp ^{[-1]}x=\log x.$ Further we assume that throughout the present paper $p$ and $q$ always denote positive integers. \qquad Mainly the growth investigation of meromorphic functions has usually been done through its Nevanlinna's characteristic function in comparison with those of exponential function. But if one is paying attention to evaluate the growth rates of any meromorphic function with respect to an entire function, the notions of relative growth indicators \cite{9} will come. Extending this notion, Debnath et. al. \cite{3} introduce the definition of relative $\left( p,q\right) $-th order and relative $\left( p,q\right) $-th lower order of a meromorphic function $f$ with respect to another entire function $g$ respectively in the light of index-pair ( detail about index-pair one may see \cite{3, 7, 8} ). For details about it, one may see \cite{3}. Extending this notion, recently Biswas \cite{xxx} introduce the definitions of relative $\left( p,q\right) $-$\varphi $ order and the relative $\left( p,q\right) $-$\varphi $ lower order of a meromorphic function $f$ with respect to another entire function $g$ as follows: \begin{definition} \label{d1}\cite{xxx} Let $\varphi $ $:$ $[0,+\infty )\rightarrow (0,+\infty ) $ be a non-decreasing unbounded function. The relative $\left( p,q\right) -$\varphi $ order and the relative $\left( p,q\right) $-$\varphi $ lower order of a meromorphic function $f$ with respect to an entire function $g$ are defined a \begin{equation*} \begin{array}{c} \rho _{g}^{\left( p,q\right) }\left( f,\varphi \right) \\ \lambda _{g}^{\left( p,q\right) }\left( f,\varphi \right \end{array =\underset{r\rightarrow \infty }{\lim \begin{array}{c} \sup \\ \in \end{array \frac{\log ^{\left[ p\right] }T_{g}^{-1}\left( T_{f}\left( r\right) \right) }{\log ^{\left[ q\right] }\varphi \left( r\right) }. \end{equation*} \end{definition} \qquad If we consider $\varphi (r)=r$, then the above definition reduce to the definitions of relative $\left( p,q\right) $-th order and relative \left( p,q\right) $-th lower order of a meromorphic $f$ with respect to an entire $g,$ introduced by Debnath et. al. \cite{3}. \qquad If the relative $\left( p,q\right) $-$\varphi $ order and the relative $\left( p,q\right) $-$\varphi $ lower order of $f$ with respect to g$ are the same, then $f$ is called a function of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g$. Otherwise, $f$ is said to be\ irregular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g$. \qquad Now in order to refine the above growth scale, one may introduce the definitions of other growth indicators, such as relative $\left( p,q\right) -$\varphi $ type and relative $\left( p,q\right) $-$\varphi $ lower typ \emph{\ }of entire or meromorphic functions with respect to another entire function which are as follows: \begin{definition} \label{d2}\cite{xxx} Let $\varphi $ $:$ $[0,+\infty )\rightarrow (0,+\infty ) $ be a non-decreasing unbounded function. The relative $\left( p,q\right) -$\varphi $ type and the relative $\left( p,q\right) $-$\varphi $ lower type of a meromorphic function $f$ with respect to another entire function $g$ having non-zero finite relative $\left( p,q\right) $-$\varphi $ order $\rho _{g}^{\left( p,q\right) }\left( f,\varphi \right) $ are defined as \begin{equation*} \begin{array}{c} \sigma _{g}^{\left( p,q\right) }\left( f,\varphi \right) \\ \overline{\sigma }_{g}^{\left( p,q\right) }\left( f,\varphi \right \end{array =\underset{r\rightarrow +\infty }{\lim \begin{array}{c} \sup \\ \in \end{array \frac{\log ^{\left[ p-1\right] }T_{g}^{-1}\left( T_{f}\left( r\right) \right) }{\left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g}^{\left( p,q\right) }\left( f,\varphi \right) }}. \end{equation*} \end{definition} \qquad Analogously, to determine the relative growth of $f$ having same non zero finite relative $\left( p,q\right) $-$\varphi $ lower order with respect to $g$, one can introduce the definition of relative $\left( p,q\right) $-$\varphi $ weak type $\tau _{g}^{\left( p,q\right) }\left( f\right) $ and the growth indicator $\overline{\tau }_{g}^{\left( p,q\right) }\left( f\right) $ of $f$ with respect to $g$ of finite positive relative \left( p,q\right) $-$\varphi $ lower order $\lambda _{g}^{\left( p,q\right) }\left( f\right) $ in the following way: \begin{definition} \label{d3}\cite{xxx} Let $\varphi $ $:$ $[0,+\infty )\rightarrow (0,+\infty ) $ be a non-decreasing unbounded function. The relative $\left( p,q\right) -$\varphi $ weak type $\tau _{g}^{\left( p,q\right) }\left( f,\varphi \right) $ and the growth indicator $\overline{\tau }_{g}^{\left( p,q\right) }\left( f,\varphi \right) $ of a meromorphic function $f$ with respect to another entire function $g$ having non-zero finite relative $\left( p,q\right) $-$\varphi $ lower order $\lambda _{g}^{\left( p,q\right) }\left( f,\varphi \right) $ are defined as \begin{equation*} \begin{array}{c} \tau _{g}^{\left( p,q\right) }\left( f,\varphi \right) \\ \overline{\tau }_{g}^{\left( p,q\right) }\left( f,\varphi \right \end{array =\underset{r\rightarrow +\infty }{\lim \begin{array}{c} \inf \\ \su \end{array \frac{\log ^{\left[ p-1\right] }T_{g}^{-1}\left( T_{f}\left( r\right) \right) }{\left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g}^{\left( p,q\right) }\left( f,\varphi \right) }}. \end{equation*} \end{definition} \qquad If we consider $\varphi (r)=r$, then $\sigma _{g}^{\left( p,q\right) }\left( f,r\right) $ and $\tau _{g}^{\left( p,q\right) }\left( f,r\right) $ are respectively known as relative $\left( p,q\right) $-th type and relative $\left( p,q\right) $-th weak type of $f$ with respect to $g$. For details about relative $\left( p,q\right) $-th type, relative $\left( p,q\right) -th weak type etc., one may see \cite{yyy}. \qquad Here, in this paper, we aim at investigating some basic properties of relative $\left( p,q\right) $-$\varphi $ order, relative $\left( p,q\right) -$\varphi $ type and relative $\left( p,q\right) $-$\varphi $ weak type of a meromorphic function with respect to an entire function under somewhat different conditions. Throughout this paper, we assume that all the growth indicators are all nonzero finite. \section{\textbf{Lemmas}} \qquad In this section we present some lemmas which will be needed in the sequel. \begin{lemma} \label{l9.2}\cite{1} Let $f$ be an entire function which satisfies the Property (A) then for any positive integer $n$ and for all sufficiently large $r, \begin{equation*} \left[ M_{f}\left( r\right) \right] ^{n}\leq M_{f}\left( r^{\delta }\right) \end{equation* holds where $\delta >1.$ \end{lemma} \begin{lemma} \label{l9.6}\cite[p. 18]{6} Let $f$ be an entire function. Then for all sufficiently large values of $r, \begin{equation*} T_{f}\left( r\right) \leq \log M_{f}\left( r\right) \leq 3T_{f}\left( 2r\right) ~. \end{equation*} \end{lemma} \section{\textbf{Main Results}} \qquad In this section we present some results which will be needed in the sequel. \begin{theorem} \label{t9.2x} Let $f_{1}$, $f_{2}$ be meromorphic functions and $g_{1}$ be any entire function such that at least $f_{1}$ or $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$. Also let $g_{1}$ has the Property (A). The \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ~. \end{equation* The equality holds when $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ where $i,j=1,2$ and i\neq j$. \end{theorem} \begin{proof} The result is obvious when $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =0$. So we suppose that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) $ $>$ $0 . We can clearly assume that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{k},\varphi \right) $ is finite for $k=1,2.$ Now let us consider that \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} =\Delta $ and $f_{2}$ is of regular relative $\left( p,q\right) $- \varphi $ growth with respect to $g_{1}.$ \qquad Now for any arbitrary $\varepsilon >0$ from the definition of \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, we have for a sequence values of $r$ tending to infinity tha \begin{equation*} T_{f_{1}}\left( r\right) \leq T_{g_{1}}\left[ \exp ^{\left[ p\right] }\left[ \left( \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] \end{equation* \begin{equation} i.e.,~T_{f_{1}}\left( r\right) \leq T_{g_{1}}\left[ \exp ^{\left[ p\right] \left[ \left( \Delta +\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] ~. \label{50.1} \end{equation} \qquad Also for any arbitrary $\varepsilon >0$ from the definition of $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \left( =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right) $, we obtain for all sufficiently large values of $r$ tha \begin{equation} T_{f_{2}}\left( r\right) \leq T_{g_{1}}\left[ \exp ^{\left[ p\right] }\left[ \left( \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) +\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] \label{91.1} \end{equation \begin{equation} i.e.,~T_{f_{2}}\left( r\right) \leq T_{g_{1}}\left[ \exp ^{\left[ p\right] \left[ \left( \Delta +\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] ~. \label{9.1c} \end{equation} \qquad Since $T_{f_{1}\pm f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) +O(1)$ for all large $r,$ so in view of \left( \ref{50.1}\right) $ , $\left( \ref{9.1c}\right) $ and Lemma \ref{l9.6 , we obtain for a sequence values of $r$ tending to infinity tha \begin{equation*} T_{f_{1}\pm f_{2}}\left( r\right) \leq 2\log M_{g_{1}}\left[ \exp ^{\left[ \right] }\left[ \left( \Delta +\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] +O(1) \end{equation* \begin{equation} i.e.,~T_{f_{1}\pm f_{2}}\left( r\right) \leq 3\log M_{g_{1}}\left[ \exp ^ \left[ p\right] }\left[ \left( \Delta +\varepsilon \right) \log ^{\left[ \right] }\varphi \left( r\right) \right] \right] ~. \label{9.2} \end{equation} \qquad Therefore in view of Lemma \ref{l9.2} and Lemma \ref{l9.6}$,$ we obtain from $\left( \ref{9.2}\right) $ for a sequence values of $r$ tending to infinity and $\sigma >1$ tha \begin{equation*} T_{f_{1}\pm f_{2}}\left( r\right) \leq \frac{1}{3}\log \left[ M_{g_{1}}\left[ \exp ^{\left[ p\right] }\left[ \left( \Delta +\varepsilon \right) \log ^ \left[ q\right] }\varphi \left( r\right) \right] \right] \right] ^{9} \end{equation* \begin{equation*} i.e.,~T_{f_{1}\pm f_{2}}\left( r\right) \leq \frac{1}{3}\log M_{g_{1}}\left[ \left[ \exp ^{\left[ p\right] }\left[ \left( \Delta +\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] ^{\sigma \right] \end{equation* \begin{equation*} i.e.,~T_{f_{1}\pm f_{2}}\left( r\right) \leq T_{g_{1}}\left[ 2\left[ \exp ^ \left[ p\right] }\left[ \left( \Delta +\varepsilon \right) \log ^{\left[ \right] }\varphi \left( r\right) \right] \right] ^{\sigma }\right] ~. \end{equation*} \qquad Now we get from above by letting $\sigma \rightarrow 1^{+} \begin{equation*} i.e.,~\underset{r\rightarrow \infty }{\lim \inf }\frac{\log ^{\left[ p\right] }T_{g_{1}}^{-1}\left( T_{f_{1}\pm f_{2}}\left( r\right) \right) }{\log ^ \left[ q\right] }\varphi \left( r\right) }<\left( \Delta +\varepsilon \right) ~.~\ \ \ \ \ \ \ \ \ \ \ \end{equation* Since $\varepsilon >0$ is arbitrary \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \Delta =\max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ~. \end{equation*} \qquad Similarly, if we consider that $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ or both $f_{1}$ and f_{2}$ are of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1},$ then one can easily verify tha \begin{equation} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \Delta =\max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ~. \label{9.3} \end{equation} \qquad Further without loss of any generality, let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ and $f=f_{1}\pm f_{2}.$ Then in view of $\left( \ref{9.3}\right) $ we get that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $\leq $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ As, $f_{2}=\pm \left( f-f_{1}\right) $ and in this case we obtain that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $\leq $ $\max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~.$ As we assume that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,$ therefore we have $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $\leq $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ and hence $\lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $=$ $\max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} .$ Therefore, $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \mid i=1,2$ provided $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ Thus the theorem is established. \end{proof} \begin{theorem} \label{t9.1x} Let $f_{1}$ and $f_{2}$ be any two meromorphic functions and g_{1}$ be an entire function such that such that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ exists . Also let $g_{1}$ has the Property (A). The \begin{equation*} \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \max \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ~. \end{equation* The equality holds when $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. \end{theorem} \qquad We omit the proof of Theorem \ref{t9.1x} as it can easily be carried out in the line of Theorem \ref{t9.2x}. \begin{theorem} \label{t9.3} Let $f_{1}$ be a meromorphic function and $g_{1}$, $g_{2}$ be any two entire functions such that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ exists. Also let $g_{1}\pm g_{2}$ has the Property (A). The \begin{equation*} \lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \min \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~. \end{equation* The equality holds when $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \end{theorem} \begin{proof} The result is obvious when $\lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\infty $. So we suppose that $\lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\infty . We can clearly assume that $\lambda _{g_{k}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ is finite for $k=1,2.$ Further let $\Psi =\min \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} . $ Now for any arbitrary $\varepsilon >0$ from the definition of $\lambda _{g_{k}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, we have for all sufficiently large values of $r$ tha \begin{equation} T_{g_{k}}\left[ \exp ^{\left[ p\right] }\left[ \left( \lambda _{g_{k}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] \leq T_{f_{1}}\left( r\right) \text{ \ where }k=1,2 \label{99.4x} \end{equation \begin{equation*} i.e,~T_{g_{k}}\left[ \exp ^{\left[ p\right] }\left[ \left( \Psi -\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] \leq T_{f_{1}}\left( r\right) \text{ \ where }k=1,2 \end{equation*} \qquad Since $T_{g_{1}\pm g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) +O(1)$ for all large $r,$, we obtain from above and Lemma \ref{l9.6} for all sufficiently large values of $r$ tha \begin{equation*} T_{g_{1}\pm g_{2}}\left[ \exp ^{\left[ p\right] }\left[ \left( \Psi -\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] \leq 2T_{f_{1}}\left( r\right) +O(1) \end{equation* \begin{equation*} i.e.,~T_{g_{1}\pm g_{2}}\left[ \exp ^{\left[ p\right] }\left[ \left( \Psi -\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] \right] <3T_{f_{1}}\left( r\right) ~. \end{equation*} \qquad Therefore in view of Lemma \ref{l9.2} and Lemma \ref{l9.6}$,$ we obtain from above for all sufficiently large values of $r$ and any $\sigma >1 $ that \begin{equation*} \frac{1}{9}\log M_{g_{1}\pm g_{2}}\left[ \frac{\exp ^{\left[ p\right] }\left[ \left( \Psi -\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] }{2}\right] <T_{f_{1}}\left( r\right) \end{equation* \begin{equation*} i.e.,~\log M_{g_{1}\pm g_{2}}\left[ \frac{\exp ^{\left[ p\right] }\left[ \left( \Psi -\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] }{2}\right] ^{\frac{1}{9}}<T_{f_{1}}\left( r\right) \end{equation* \begin{equation*} i.e.,~\log M_{g_{1}\pm g_{2}}\left[ \left( \frac{\exp ^{\left[ p\right] \left[ \left( \Psi -\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] }{2}\right) ^{\frac{1}{\sigma }}\right] <T_{f_{1}}\left( r\right) \end{equation* \begin{equation*} i.e.,~T_{g_{1}\pm g_{2}}\left[ \left( \frac{\exp ^{\left[ p\right] }\left[ \left( \Psi -\varepsilon \right) \log ^{\left[ q\right] }\varphi \left( r\right) \right] }{2}\right) ^{\frac{1}{\sigma }}\right] <T_{f_{1}}\left( r\right) \end{equation*} \qquad As $\varepsilon >0$ is arbitrary, we get from above by letting \sigma \rightarrow 1^{+} \begin{equation} \lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\geq \Psi =\min \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~. \label{99.3} \end{equation} \qquad Now without loss of any generality, we may consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $g=g_{1}\pm g_{2}.$ Then in view of $\left( \ref{99.3}\right) $ we get that $\lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Further, g_{1}=\left( g\pm g_{2}\right) $ and in this case we obtain that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\geq $ $\min \left\{ \lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~.$ As we assume that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,$ therefore we have $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and hence $\lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\min \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} .$ Therefore, $\lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2$ provided $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Thus the theorem follows. \end{proof} \begin{theorem} \label{t9.4} Let $f_{1}$ be a meromorphic function and $g_{1}$, $g_{2}$ be any two entire functions such that $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to at least any one of $g_{1}$ or $g_{2}.$ If $g_{1}\pm g_{2}$ has the Property (A), the \begin{equation*} \rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~. \end{equation* The equality holds when $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ where $i,j=1,2$ and i\neq j.$ \end{theorem} \qquad We omit the proof of Theorem \ref{t9.4} as it can easily be carried out in the line of Theorem \ref{t9.3}. \begin{theorem} \label{t9.5} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1}$, $g_{2}$ be any two entire functions. Also let $g_{1}\pm g_{2}$ has the Property (A). The \begin{eqnarray*} &&\rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \\ &\leq &\max \left[ \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ,\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \end{eqnarray* when the following two conditions holds:\newline $\left( i\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and i\neq j$; and\newline $\left( ii\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and i\neq j$.\newline The equality holds when $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ holds simultaneously for $i,1,2;$ $j=1,2\ $and i\neq j.$ \end{theorem} \begin{proof} Let the conditions $\left( i\right) $ and $\left( ii\right) $ of the theorem hold. Therefore in view of Theorem \ref{t9.1x} and Theorem \ref{t9.4} we get tha \begin{align} & \max \left[ \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ,\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \notag \\ & =\max \left[ \rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right] \notag \\ & \geq \rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) ~. \label{9.xyz} \end{align} \qquad Since $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ hold simultaneously for $i=1,2;$ $j=1,2\ $and $i\neq j,$ we obtain tha \begin{equation*} \text{either }\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} >\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \text{ or} \end{equation* \begin{equation*} \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} >\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} \text{ holds.} \end{equation*} \qquad Now in view of the conditions $\left( i\right) $ and $\left( ii\right) $ of the theorem, it follows from above tha \begin{equation*} \text{either }\rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \text{ or }\rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) >\rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \end{equation* which is the condition for holding equality in $\left( \ref{9.xyz}\right) $. \qquad Hence the theorem follows. \end{proof} \begin{theorem} \label{t9.6} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1}$, $g_{2}$ be any two entire functions. Also let $g_{1},g_{2}$ and $g_{1}\pm g_{2}$ satisfy the Property (A). The \begin{align*} & \lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \\ & \geq \min \left[ \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ,\max \left\{ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \end{align* when the following two conditions holds:\newline $\left( i\right) $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and $i\neq j$; and\newline $\left( ii\right) $ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and $i\neq j$.\newline The equality holds when $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ hold simultaneously for $i=1,2;$ $j=1,2\ $and $i\neq j.$ \end{theorem} \begin{proof} Suppose that the conditions $\left( i\right) $ and $\left( ii\right) $ of the theorem holds. Therefore in view of Theorem \ref{t9.2x} and Theorem \re {t9.3}, we obtain tha \begin{align} & \min \left[ \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ,\max \left\{ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \notag \\ & =\min \left[ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \right] \notag \\ & \geq \lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) ~. \label{9.xya} \end{align} \qquad Since $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and $i\neq j$, we get tha \begin{equation*} \text{either }\max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} <\max \left\{ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \text{ or} \end{equation* \begin{equation*} \max \left\{ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} <\max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \text{ holds.} \end{equation*} \qquad Since condition $\left( i\right) $ and $\left( ii\right) $ of the theorem holds, it follows from above that \begin{equation*} \text{either }\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \text{ or }\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \end{equation* which is the condition for holding equality in $\left( \ref{9.xya}\right) $. \qquad Hence the theorem follows. \end{proof} \begin{theorem} \label{t9.8} Let $f_{1}$, $f_{2}$ be any two meromorphic functions and g_{1} $ be any entire function such that at least $f_{1}$ or $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to g_{1}$. Also let $g_{1}$ satisfy the Property (A). The \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \leq \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ~. \end{equation* The equality holds when $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ where $i,j=1,2$ and i\neq j$. \end{theorem} \begin{proof} Since $T_{f_{1}\cdot f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) $ for all large $r,$ therefore applying the same procedure as adopted in Theorem \ref{t9.2x} we get tha \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \leq \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ~. \end{equation*} Now without loss of any generality, let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ and $f=f_{1}\cdot f_{2}.$ Then $\lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) \leq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ Further, $f_{2} \frac{f}{f_{1}}$ and $T_{f_{1}}\left( r\right) =T_{\frac{1}{f_{1}}}\left( r\right) +O(1).$ Therefore $T_{f_{2}}\left( r\right) \leq T_{f}\left( r\right) +T_{f_{1}}\left( r\right) +O(1)$ and in this case we obtain that \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \leq \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~.$ As we assume that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,$ therefore we have $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \leq \lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ and hence $\lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $=$ $\max $ $\{$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $\}.$ Therefore, $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \mid i=1,2$ provided $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ \qquad Hence the theorem follows. \end{proof} \qquad Next we prove the result for the quotient $\frac{f_{1}}{f_{2}},$ provided $\frac{f_{1}}{f_{2}}$ is meromorphic. \begin{theorem} \label{t9.8 A} Let $f_{1}$, $f_{2}$ be any two meromorphic functions and g_{1}$ be any entire function such that at least $f_{1}$ or $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to g_{1}$. Also let $g_{1}$ satisfy the Property (A). The \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) \leq \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} , \end{equation* provided $\frac{f_{1}}{f_{2}}$ is meromorphic. The equality holds when at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ and $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. \end{theorem} \begin{proof} Since $T_{_{f_{2}}}\left( r\right) =T_{_{\frac{1}{f_{2}}}}\left( r\right) +O(1)$ and $T_{_{\frac{f_{1}}{f_{2}}}}\left( r\right) \leq T_{_{f_{1}}}\left( r\right) +T_{_{\frac{1}{f_{2}}}}\left( r\right) ,$ we get in view of Theorem \ref{t9.2x} tha \begin{equation} \lambda _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) \leq \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ~. \label{50.3} \end{equation} \qquad Now in order to prove the equality conditions, we discuss the following two cases:\medskip \newline \textbf{Case I. }Suppose $\frac{f_{1}}{f_{2}}\left( =h\right) $ satisfies the following conditio \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) , \end{equation* and $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}.$ \qquad Now if possible, let $\lambda _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. Therefore from $f_{1}=h\cdot f_{2}$ we get that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ which is a contradiction. Therefore $\lambda _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) \geq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ and in view of $\left( \ref{50.3}\right) $, we get tha \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation* \medskip \newline \textbf{Case II. } Suppose $\frac{f_{1}}{f_{2}}\left( =h\right) $ satisfies the following conditio \begin{equation*} \text{ }\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) , \end{equation* and $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}.$ \qquad Now from $f_{1}=h\cdot f_{2}$ we get that either $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \leq \lambda _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) $ or $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \leq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. But according to our assumption $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \nleq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. Therefore $\lambda _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) \geq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and in view of $\left( \ref{50.3 \right) $, we get tha \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \end{equation*} \qquad Hence the theorem follows. \end{proof} \qquad Now we state the following theorem which can easily be carried out in the line of Theorem \ref{t9.8} and Theorem \ref{t9.8 A} and therefore its proof is omitted. \begin{theorem} \label{t9.7} Let $f_{1}$ and $f_{2}$ be any two meromorphic functions and g_{1}$ be any entire function such that such that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ exists. Also let $g_{1}$ satisfy the Property (A). The \begin{equation*} \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \leq \max \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ~. \end{equation* The equality holds when $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. Similar results hold for the quotient $\frac{f_{1}} f_{2}}$, provided $\frac{f_{1}}{f_{2}}$ is meromorphic. \end{theorem} \begin{theorem} \label{t9.9} Let $f_{1}$ be a meromorphic function and $g_{1}$, $g_{2}$ be any two entire functions such that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ exists. Also let $g_{1}\cdot g_{2}$ satisfy the Property (A). The \begin{equation*} \lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \min \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~. \end{equation* The equality holds when $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ where $i,j=1,2$ and $i\neq j$ and $g_{i}$ satisfy the Property (A). Similar results hold for the quotient $\frac{g_{1}}{g_{2}} , provided $\frac{g_{1}}{g_{2}}$ is entire and satisfy the Property (A). The equality holds when $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $g_{1}$ satisfy the Property (A). \end{theorem} \begin{proof} Since $T_{g_{1}\cdot g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) $ for all large $r,$ therefore applying the same procedure as adopted in Theorem \ref{t9.3} we get tha \begin{equation*} \lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \min \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~. \end{equation*} \qquad Now without loss of any generality, we may consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and g=g_{1}\cdot g_{2}.$ Then $\lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Further, $g_{1}=\frac{g}{g_{2}}$ and and T_{g_{2}}\left( r\right) =T_{\frac{1}{g_{2}}}\left( r\right) +O(1).$ Therefore $T_{g_{1}}\left( r\right) $ $\leq $ $T_{g}\left( r\right) $ $+$ T_{g_{2}}\left( r\right) $ $+$ $O(1)$ and in this case we obtain that \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \min \left\{ \lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~.$ As we assume that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,$ so we have $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and hence $\lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\min $ $\left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} .$ Therefore, $\lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2 $ provided $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $g_{1}$ satisfy the Property (A)$.$ Hence the first part of the theorem follows. \qquad Now we prove our results for the quotient $\frac{g_{1}}{g_{2}}$, provided $\frac{g_{1}}{g_{2}}$ is entire and $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Since $T_{_{g_{2}}}\left( r\right) =T_{_{\frac{1}{g_{2}}}}\left( r\right) +O(1)$ and $T_{_{\frac{g_{1 }{g_{2}}}}\left( r\right) \leq T_{_{g_{1}}}\left( r\right) +T_{_{\frac{1} g_{2}}}}\left( r\right) ,$ we get in view of Theorem \ref{t9.3} tha \begin{equation} \lambda _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \min \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~. \label{50.11} \end{equation} \qquad Now in order to prove the equality conditions, we discuss the following two cases:\medskip \newline \textbf{Case I. }Suppose $\frac{g_{1}}{g_{2}}\left( =h\right) $ satisfies the following conditio \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \end{equation*} \qquad Now if possible, let $\lambda _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore from $g_{1}=h\cdot g_{2}$ we get that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, which is a contradiction. Therefore $\lambda _ \frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \leq \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and in view of $\left( \ref{50.11}\right) $, we get tha \begin{equation*} \lambda _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \end{equation* \medskip \newline \textbf{Case II. } Suppose that $\frac{g_{1}}{g_{2}}\left( =h\right) $ satisfies the following conditio \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \end{equation*} \qquad Therefore from $g_{1}=h\cdot g_{2}$, we get that either $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \lambda _ \frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ or \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. But according to our assumption $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \ngeqslant \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore $\lambda _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \leq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and in view of $\left( \ref{50.11 \right) $, we get tha \begin{equation*} \lambda _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \end{equation*} \qquad Hence the theorem follows. \end{proof} \begin{theorem} \label{t9.10} Let $f_{1}$ be any meromorphic function and $g_{1}$, $g_{2}$ be any two entire functions such that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ exists. Further let $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to at least any one of $g_{1}$ or $g_{2}.$ Also let $g_{1}\cdot g_{2}$ satisfy the Property (A). The \begin{equation*} \rho _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ~. \end{equation* The equality holds when $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ where $i,j=1,2$ and i\neq j$ and $g_{i}$ satisfy the Property (A). \end{theorem} \begin{theorem} \label{t9.10A} Let $f_{1}$ be any meromorphic function and $g_{1}$, $g_{2}$ be any two entire functions such that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ exists. Further let $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to at least any one of $g_{1}$ or $g_{2}.$ The \begin{equation*} \rho _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} , \end{equation* provided $\frac{g_{1}}{g_{2}}$ is entire and satisfy the Property (A). The equality holds when at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$, $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $g_{1}$ satisfy the Property (A). \end{theorem} \qquad We omit the proof of Theorem \ref{t9.10} and Theorem \ref{t9.10A} as those can easily be carried out in the line of Theorem \ref{t9.9}. \qquad Now we state the following four theorems without their proofs as those can easily be carried out in the line of Theorem \ref{t9.5} and Theorem \ref{t9.6} respectively. \begin{theorem} \label{t9.11} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1} , $g_{2}$ be any two entire functions. Also let $g_{1}\cdot g_{2}$ be satisfy the Property (A). The \begin{eqnarray*} &&\rho _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \\ &\leq &\max \left[ \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ,\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] , \end{eqnarray* when the following two conditions holds:\newline $\left( i\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ and $g_{i}$ satisfy the Property (A) for $i$ =$ $1,$ $2$, $j$ $=$ $1,2$ and $i\neq j$; and\newline $\left( ii\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ and $g_{i}$ satisfy the Property (A) for $i$ =$ $1,$ $2$, $j$ $=$ $1,2$ and $i\neq j$.\newline The quality holds when $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and i\neq j.$ \end{theorem} \begin{theorem} \label{t9.12} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1} , $g_{2}$ be any two entire functions. Also let $g_{1}\cdot g_{2}$, $g_{1}$ and $g_{2}$ be satisfy the Property (A). The \begin{eqnarray*} &&\lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \\ &\geq &\min \left[ \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ,\max \left\{ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \end{eqnarray* when the following two conditions holds:\newline $\left( i\right) $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and $i\neq j$; and\newline $\left( ii\right) $ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and $i\neq j$.\newline The equality holds when $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and i\neq j.$ \end{theorem} \begin{theorem} \label{t9.11A} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1} , $g_{2}$ be any two entire functions such that $\frac{f_{1}}{f_{2}}$ is meromorphic and $\frac{g_{1}}{g_{2}}$ is entire. Also let $\frac{g_{1}}{g_{2 }$ satisfy the Property (A). The \begin{eqnarray*} &&\rho _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2} ,\varphi \right) \\ &\leq &\max \left[ \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ,\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \end{eqnarray* when the following two conditions holds:\newline $\left( i\right) $ At least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ and $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $; and\newline $\left( ii\right) $ At least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ and $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $.\newline The equality holds when $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and i\neq j.$ \end{theorem} \begin{theorem} \label{t9.12A} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1} , $g_{2}$ be any two entire functions such that $\frac{f_{1}}{f_{2}}$ is meromorphic and $\frac{g_{1}}{g_{2}}$ is entire. Also let $\frac{g_{1}}{g_{2 }$, $g_{1}$ and $g_{2}$ be satisfy the Property (A). The \begin{eqnarray*} &&\lambda _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( \frac{f_{1}} f_{2}},\varphi \right) \\ &\geq &\min \left[ \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ,\max \left\{ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \end{eqnarray* when the following two conditions hold:\newline $\left( i\right) $ At least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ and $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $; and\newline $\left( ii\right) $ At least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ and $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $.\newline The equality holds when $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and i\neq j.$ \end{theorem} \qquad Next we intend to find out the sum and product theorems of relative \left( p,q\right) $-$\varphi $ type ( respectively relative $\left( p,q\right) $-$\varphi $ lower type) and relative $\left( p,q\right) $- \varphi $ weak type of meromorphic function with respect to an entire function taking into consideration of the above theorems. \begin{theorem} \label{t9.13} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1} , $g_{2}$ be any two entire functions. Also let $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $, $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ are all non zero and finite \newline \textbf{(A) }If $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ for $i,$ $j$ $=$ $1,2$; $i\neq j,$ and $g_{1}$ has the Property (A)$,$ the \begin{equation*} \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \text{ and\ }\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \mid i=1,2. \end{equation* \textbf{(B)} If $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ for $i$, $j$ $=$ $1,2$; $i\neq j$ and g_{1}\pm g_{2}$ has the Property (A)$,$ the \begin{equation*} \sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \text{ and\ }\overline{\sigma }_{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\overline{\sigma }_{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2. \end{equation* \textbf{(C)} Assume the functions $f_{1},f_{2},g_{1}$ and $g_{2}$ satisfy the following conditions:\newline $\left( i\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and i\neq j$;\newline $\left( ii\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and i\neq j$;\newline $\left( iii\right) $ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\rho _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and $i\neq j$;\newline $\left( iv\right) \rho _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) =\max \left[ \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ,\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \mid l,m=1,2$, and $g_{1}\pm g_{2}$ has the Property (A);\newline the \begin{equation*} \sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\sigma _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \mid l,m=1,2 \end{equation* an \begin{equation*} \overline{\sigma }_{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\overline{\sigma }_{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \mid l,m=1,2. \end{equation*} \end{theorem} \begin{proof} From the definition of relative $\left( p,q\right) $-$\varphi $ type and relative $\left( p,q\right) $-$\varphi $ lower type of meromorphic function with respect to an entire function, we have for all sufficiently large values of $r$ tha \begin{equation} T_{f_{k}}\left( r\right) \leq T_{g_{l}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{l}}^{\left( p,q\right) }\left( f_{k}\right) }\right\} \right] , \label{9.15} \end{equation \begin{equation} T_{f_{k}}\left( r\right) \geq T_{g_{l}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\sigma }_{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{l}}^{\left( p,q\right) }\left( f_{k}\right) }\right\} \right] \label{9.15a} \end{equation and for a sequence of values of $r$ tending to infinity, we obtain tha \begin{equation} T_{f_{k}}\left( r\right) \geq T_{g_{l}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{l}}^{\left( p,q\right) }\left( f_{k}\right) }\right\} \right] , \label{9.20} \end{equation an \begin{equation} T_{f_{k}}\left( r\right) \leq T_{g_{l}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\sigma }_{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{l}}^{\left( p,q\right) }\left( f_{k}\right) }\right\} \right] , \label{9.20a} \end{equation where $\varepsilon >0$ is any arbitrary positive number $k=1,\,2$ and l=1,2. $\medskip \newline \textbf{Case I.} Suppose that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ hold. Also let $\varepsilon \left( >0\right) $ be arbitrary. Since $T_{f_{1}\pm f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) +O(1)$ for all large $r,$ so in view of \left( \ref{9.15}\right) ,$ we get for all sufficiently large values of $r$ tha \begin{equation*} T_{f_{1}\pm f_{2}}\left( r\right) \leq ~\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{equation* \begin{equation} T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\right) }\right\} \right] \left( 1+A\right) ~. \label{9.18} \end{equation where $A=\frac{T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) }\right\} \right] +O(1)}{T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] },$ and in view of $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $, and for all sufficiently large values of $r , we can make the term $A$ sufficiently small . Hence for any $\alpha =1+\varepsilon _{1}$, it follows from $\left( \ref{9.18}\right) $ for all sufficiently large values of $r$ tha \begin{equation*} T_{f_{1}\pm f_{2}}\left( r\right) \leq T_{g_{1}}\left[ \exp ^{\left[ p- \right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\right) }\right\} \right] \cdot \left( 1+\varepsilon _{1}\right) \end{equation* \begin{equation} i.e.,~T_{f_{1}\pm f_{2}}\left( r\right) \leq T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\right) }\right\} \right] \cdot \alpha ~. \notag \end{equation} \qquad Hence making $\alpha \rightarrow 1+,$ we get in view of Theorem \re {t9.1x}, $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ and above for all sufficiently large values of $r$ tha \begin{equation*} \underset{r\rightarrow \infty }{\lim \sup }\frac{\log ^{\left[ p-1\right] }T_{g_{1}}^{-1}\left( T_{f_{1}\pm f_{2}}\left( r\right) \right) }{\left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) }}\leq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \end{equation* \begin{equation} i.e.,~\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~.~\ \ \ \ \ \ \ \label{9.21} \end{equation} Now we may consider that $f=f_{1}\pm f_{2}.$ Since $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ hold. Then $\sigma _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Further, let $f_{1}=\left( f\pm f_{2}\right) $. Therefore in view of Theorem \ref{t9.1x} and $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $, we obtain that \rho _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds. Hence in view of $\left( \ref{9.21}\right) $ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \leq \sigma _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) .$ Therefore $\sigma _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \Rightarrow $ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad Similarly, if we consider $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,$ then one can easily verify that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $.\medskip \newline \textbf{Case II.} Let us consider that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ hold. Also let $\varepsilon \left( >0\right) $ are arbitrary. Since $T_{f_{1}\pm f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) +O(1)$ for all large $r,$ from $\left( \ref{9.15}\right) $ and $\left( \ref{9.20a}\right) ,$ we get for a sequence of values of $r$ tending to infinity tha \begin{equation*} T_{f_{1}\pm f_{2}}\left( r_{n}\right) \leq ~\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{equation* \begin{equation} T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] \left( 1+B\right) ~. \label{9.185} \end{equation where $B=\frac{T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) }\right\} \right] +O(1)}{T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q- \right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] },$ and in view of $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $, we can make the term $B$\ sufficiently small by taking $n$ sufficiently large and therefore using the similar technique for as executed in the proof of Case I we get from $\left( \ref{9.185}\right) $ that $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\overline{\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ when $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ hold. Likewise, if we consider $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,$ then one can easily verify that $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\overline{\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. \qquad Thus combining Case I and Case II, we obtain the first part of the theorem.\medskip \newline \textbf{Case III.} Let us consider that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}.$ We can make the term \linebreak $C=\frac{T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] +O(1)}{T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q- \right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] }$ sufficiently small by taking $n$ sufficiently large, since $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Hence $C<\varepsilon _{1}.$ \qquad As $T_{g_{1}\pm g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) +O(1)$ for all large $r$, we get tha \begin{equation*} T_{g_{1}\pm g_{2}}\left( \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right) \leq \end{equation* \begin{equation*} ~\ \ \ \ T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{1}}^{\left( p,\right) }\left( f_{1},\varphi \right) }\right\} \right] + \end{equation* \begin{equation*} T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] +O(1)~. \end{equation*} \qquad Therefore for any $\alpha =1+\varepsilon _{1},$ we obtain in view of C<\varepsilon _{1},$ $\left( \ref{9.15a}\right) $ and $\left( \ref{9.20 \right) $ for a sequence of values of $r$ tending to infinity tha \begin{equation*} T_{g_{1}\pm g_{2}}\left( \exp ^{\left[ p-1\right] }\left\{ \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right) \leq \alpha T_{f_{1}}\left( r_{n}\right) \end{equation*} \qquad Now making $\alpha \rightarrow 1+$, we obtain from above for a sequence of values of $r$ tending to infinity tha \begin{equation} \left( \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }<\log ^{\left[ p-1\right] }T_{g_{1}\pm g_{2}}^{-1}T_{f_{1}}\left( r_{n}\right) \notag \end{equation} \qquad Since $\varepsilon >0$ is arbitrary, we find tha \begin{equation} \sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{9.237} \end{equation} \qquad Now we may consider that $g=g_{1}\pm g_{2}.$ Also $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and at least f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$. Then $\sigma _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Further let $g_{1}=\left( g\pm g_{2}\right) $. Therefore in view of Theorem \ref{t9.4} and $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, we obtain that $\rho _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ as at least $f_{1}$ is of regular relative \left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$. Hence in view of $\left( \ref{9.237}\right) $, $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \sigma _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Therefore $\sigma _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \Rightarrow $ $\sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad Similarly if we consider $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$, then $\sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$\medskip \newline \textbf{Case IV. }In this case suppose that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}.$ we can also make the term $D=\frac{T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] +O(1)}{T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] }$ sufficiently small by taking $r$ sufficiently large as $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ So $D<\varepsilon _{1}$ for sufficiently large $r.$ As $T_{g_{1}\pm g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) +O(1)$ for all large $r$, therefore from $\left( \ref{9.15a}\right) ,$ we get for all sufficiently large values of $r$ tha \begin{equation*} T_{g_{1}\pm g_{2}}\left( \exp ^{\left[ p-1\right] }\left\{ \left( \overline \sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right) \leq \end{equation* \begin{equation*} ~\ \ \ \ T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline \sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] + \end{equation* \begin{equation*} T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] +O(1) \end{equation* \begin{equation*} i.e.,~T_{g_{1}\pm g_{2}}\left( \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\sigma }_{g_{1}}^{\left( p,q,t\right) L}\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\rho _{g_{1}}^{\left( p,q,t\right) L}\left( f_{1},\varphi \right) }\right\} \right) \end{equation* \begin{equation} ~\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \leq \left( 1+\varepsilon _{1}\right) T_{f_{1}}\left( r\right) ~., \label{9.238} \end{equation} and therefore using the similar technique for as executed in the proof of Case III we get from $\left( \ref{9.238}\right) $ that $\overline{\sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ where $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and at least f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$. \qquad Likewise if we consider $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$, then $\overline \sigma }_{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) . $ \qquad Thus combining Case III and Case IV, we obtain the second part of the theorem. \qquad The third part of the theorem is a natural consequence of Theorem \re {t9.5} and the first part and second part of the theorem. Hence its proof is omitted. \end{proof} \begin{theorem} \label{t9.14} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1} , $g_{2}$ be any two entire functions. Also let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $, $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ are all non zero and finite \newline \textbf{(A)} If $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ for $i$, $j$ $=$ $1,2$; $i\neq j$, and $g_{1}$ has the Property (A), the \begin{equation*} \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \text{ and \ }\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \mid i=1,2~. \end{equation* \textbf{(B)} If $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ for $i$, $j$ $=$ $1,2$; $i\neq j$ and $g_{1}\pm g_{2}$ has the Property (A), the \begin{equation*} \tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \text{ and \ }\overline{\tau }_{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\overline{\tau }_{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2~. \end{equation* \textbf{(C)} Assume the functions $f_{1},f_{2},g_{1}$ and $g_{2}$ satisfy the following conditions:\newline $\left( i\right) $ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ for $i$, $j$ $=$ $1,2$ and $i\neq j$;\newline $\left( ii\right) $ $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\rho _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ for $i$, $j$ $=$ $1,2$ and $i\neq j$;\newline $\left( iii\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds simultaneously for $i,$ $j=1,2\ $and $i\neq j$;\newline $\left( iv\right) \lambda _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) =\min \left[ \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ,\max \left\{ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \mid l,m=1,2$ and g_{1}\pm g_{2}$ has the Property (A)\newline then we hav \begin{equation*} \tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\tau _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \mid l,m=1,2 \end{equation* an \begin{equation*} \overline{\tau }_{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\overline{\tau }_{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \mid l,m=1,2~. \end{equation*} \end{theorem} \begin{proof} For any arbitrary positive number $\varepsilon (>0)$, we have for all sufficiently large values of $r$ tha \begin{equation} T_{f_{k}}\left( r\right) \leq T_{g_{l}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\tau }_{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) }\right\} \right] , \label{9.15x} \end{equation \begin{equation} T_{f_{k}}\left( r\right) \geq T_{g_{l}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) }\right\} \right] , \label{9.15ax} \end{equation and for a sequence of values of $r$ tending to infinity we obtain tha \begin{equation} T_{f_{k}}\left( r\right) \geq T_{g_{l}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\tau }_{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) }\right\} \right] \label{9.20x} \end{equation an \begin{equation} T_{f_{k}}\left( r\right) \leq T_{g_{l}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{l}}^{\left( p,q\right) }\left( f_{k},\varphi \right) }\right\} \right] , \label{9.20ax} \end{equation where $k=1,2$ and $l=1,2.$\medskip \newline \textbf{Case I.} Let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$. Also let \varepsilon \left( >0\right) $ be arbitrary. Since $T_{f_{1}\pm f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) +O(1)$ for all large $r,$ we get from $\left( \ref{9.15x}\right) $ and $\left( \re {9.20ax}\right) ,$ for a sequence $\left\{ r_{n}\right\} $ of values of $r$ tending to infinity tha \begin{equation*} T_{f_{1}\pm f_{2}}\left( r_{n}\right) \leq ~\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{equation* \begin{equation} T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] \left( 1+E\right) ~. \label{9.18x} \end{equation where $E=\frac{T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) }\right\} \right] +O(1)}{T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] }$ and in view of $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2}\right) $, we can make the term $E$ sufficiently small by taking $n$ sufficiently large. Now with the help of Theorem \re {t9.2x} and using the similar technique of Case I of Theorem \ref{t9.13}, we get from $\left( \ref{9.18x}\right) $ tha \begin{equation} \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{9.21x} \end{equation} \qquad Further, we may consider that $f=f_{1}\pm f_{2}.$ Also suppose that \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ and at least f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$. Then $\tau _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) . $ Now let $f_{1}=\left( f\pm f_{2}\right) $. Therefore in view of Theorem \ref{t9.2x}, $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ and at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1},$ we obtain that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds. Hence in view of $\left( \ref{9.21x \right) $, $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \leq \tau _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) .$ Therefore $\tau _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \Rightarrow $ \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad Similarly, if we consider $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ then one can easily verify that $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .\medskip \newline \textbf{Case II.} Let us consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$. Also let \varepsilon \left( >0\right) $ be arbitrary. As $T_{f_{1}\pm f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) +O(1)$ for all large $r,$ we obtain from $\left( \ref{9.15x}\right) $ for all sufficiently large values of $r$ tha \begin{equation*} T_{f_{1}\pm f_{2}}\left( r\right) \leq ~\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{equation* \begin{equation} T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] \left( 1+F\right) ~. \label{9.185x} \end{equation where $F=\frac{T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) }\right\} \right] +O(1)}{T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) +\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1,\varphi }\right) }\right\} \right] },$ and in view of $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $, we can make the term $F$ sufficiently small by taking $r$ sufficiently large and therefore for similar reasoning of Case I we get from $\left( \ref{9.185x}\right) $ that $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ when $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ and at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$. \qquad Likewise, if we consider $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ then one can easily verify that $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ \qquad Thus combining Case I and Case II, we obtain the first part of the theorem.\medskip \newline \textbf{Case III.} Let us consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore we can make the term $G=\frac{T_{g_{2} \left[ \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^ \left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] +O(1)}{T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] }$ sufficiently small by taking $r$ sufficiently large since $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ < $ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ So $G<\varepsilon _{1}.$ Since $T_{g_{1}\pm g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) +O(1)$ for all large $r,$ we get from $\left( \ref{9.15ax}\right) $ for all sufficiently large values of $r$ tha \begin{equation*} T_{g_{1}\pm g_{2}}\left( \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right) \leq \end{equation* \begin{equation*} ~\ T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] + \end{equation* \begin{equation*} T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] +O(1) \end{equation* \begin{equation*} i.e.,~T_{g_{1}\pm g_{2}}\left( \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right) \end{equation* \begin{equation} ~\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \leq \left( 1+\varepsilon _{1}\right) T_{f_{1}}\left( r\right) ~. \label{9.236x} \end{equation} \qquad Therefore in view of Theorem \ref{t9.3} and using the similar technique of Case III of Theorem \ref{t9.13}, we get from $\left( \re {9.236x}\right) $ tha \begin{equation} \tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{9.237x} \end{equation} \qquad Further, we may consider that $g=g_{1}\pm g_{2}.$ As $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, so $\tau _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Further let g_{1}=\left( g\pm g_{2}\right) $. Therefore in view of Theorem \ref{t9.3} and $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ we obtain that $\lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ holds. Hence in view of $\left( \ref{9.237x}\right) $ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \tau _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Therefore $\tau _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \Rightarrow $ $\tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad Likewise, if we consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,$ then one can easily verify that $\tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$\medskip \newline \textbf{Case IV. }In this case further we consider $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Further we can make the term $H \frac{T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline \tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] +O(1)}{T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] }$ sufficiently small by taking $n$ sufficiently large, since $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Therefore $H<\varepsilon _{1}$ for sufficiently large $n.$ As $T_{g_{1}\pm g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) +O(1)$ for all large $r,$ we obtain from \left( \ref{9.15ax}\right) $ and $\left( \ref{9.20x}\right) ,$ we obtain for a sequence $\left\{ r_{n}\right\} $ of values of $r$ tending to infinity tha \begin{equation*} T_{g_{1}\pm g_{2}}\left( \exp ^{\left[ p-1\right] }\left\{ \left( \overline \tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right) \leq \end{equation* \begin{equation*} ~\ \ \ \ T_{g_{1}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline \tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] + \end{equation* \begin{equation*} T_{g_{2}}\left[ \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right] +O(1) \end{equation* \begin{equation*} i.e.,~T_{g_{1}\pm g_{2}}\left( \exp ^{\left[ p-1\right] }\left\{ \left( \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) -\varepsilon \right) \left[ \log ^{\left[ q-1\right] }\varphi \left( r_{n}\right) \right] ^{\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) }\right\} \right) \end{equation* \begin{equation} ~\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \leq \left( 1+\varepsilon _{1}\right) T_{f_{1}}\left( r\right) , \label{9.238x} \end{equation} and therefore using the similar technique for as executed in the proof of Case IV of Theorem \ref{t9.13}, we get from $\left( \ref{9.238x}\right) $ that $\overline{\tau }_{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ when $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad Similarly, if we consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,$ then one can easily verify that $\overline{\tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ \qquad Thus combining Case III and Case IV, we obtain the second part of the theorem. \qquad The proof of the third part of the Theorem is omitted as it can be carried out in view of Theorem \ref{t9.6} and the above cases. \end{proof} \qquad In the next two theorems we reconsider the equalities in Theorem \re {t9.2x} to Theorem \ref{t9.4} under somewhat different conditions. \begin{theorem} \label{t9.15} Let $f_{1},\,f_{2}\,$be any two meromorphic functions and g_{1}$, $g_{2}$ be any two entire functions.\newline \textbf{(A) }The following condition is assumed to be satisfied:\newline $\left( i\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds and $g_{1}$ has the Property (A), the \begin{equation*} \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation* \textbf{(B)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ holds and $g_{1}\pm g_{2}$ has the Property (A);\newline $\left( ii\right) $ $f_{1}$ is of regular relative $\left( p,q\right) $- \varphi $ growth with respect to at least any one of $g_{1}$ or $g_{2}$, the \begin{equation*} \rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \end{equation*} \end{theorem} \begin{proof} Let $f_{1},\,f_{2},\,g_{1}$ and $g_{2}$ be any four entire functions satisfying the conditions of the theorem.\medskip \newline \textbf{Case I.} Suppose that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $(0<$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $<\infty )$. Now in view of Theorem \ref{t9.1x} it is easy to see that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~.$ If possible let \begin{equation} \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \label{93.1x} \end{equation} \qquad Let\textbf{\ }$\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ Then in view of the first part of Theorem \re {t9.13} and $\left( \ref{93.1x}\right) $ we obtain that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2}\mp f_{2},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ which is a contradiction. Hence $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) $ $=$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~.$ Similarly with the help of the first part of Theorem \ref{t9.13}, one can obtain the same conclusion under the hypothesis $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ This proves the first part of the theorem.\medskip \newline \textbf{Case II. }Let us consider that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $(0<$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,$ $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<\infty )$, $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to at least any one of $g_{1}$ or $g_{2}$ and $\left( g_{1}\pm g_{2}\right) $ and $g_{1}\pm g_{2}$ satisfy the Property (A). Therefore in view of Theorem \ref{t9.4}, it follows that \rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and if possible let \begin{equation} \rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{93.3x} \end{equation} \qquad Let us consider that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Then. in view of the proof of the second part of Theorem \ref{t9.13} and $\left( \ref{93.3x}\right) $ we obtain that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{1}\pm g_{2}\mp g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ which is a contradiction. Hence $\rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~.$ Also in view of the proof of second part of Theorem \ref{t9.13} one can derive the same conclusion for the condition \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and therefore the second part of the theorem is established. \end{proof} \begin{theorem} \label{t9.15A} Let $f_{1},\,f_{2}$ be any two meromorphic functions and g_{1}$, $g_{2}$ be any two entire functions.\newline \textbf{(A) }The following conditions are assumed to be satisfied:\newline $\left( i\right) $ $\left( f_{1}\pm f_{2}\right) $ is of regular relative \left( p,q\right) $-$\varphi $ growth with respect to at least any one of g_{1}$ or $g_{2}$, and $g_{1},$ $g_{2}$ , $g_{1}\pm g_{2}$ have the Property (A);\newline $\left( ii\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) $ or $\overline{\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) $;\newline $\left( iii\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $;\newline $\left( iv\right) $ Either $\sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $; the \begin{equation*} \rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation* \textbf{(B) }The following conditions are assumed to be satisfied:\newline $\left( i\right) $ $f_{1}$ and $f_{2}$ are of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to at least any one\ of $g_{1}$ or $g_{2},$ and $g_{1}\pm g_{2}$ has the Property (A);\newline $\left( ii\right) $ Either $\sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma }_{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline \sigma }_{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ \newline $\left( iii\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $;\newline $\left( iv\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $; the \begin{equation*} \rho _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation*} \end{theorem} \qquad We omit the proof of Theorem \ref{t9.15A} as it is a natural consequence of Theorem \ref{t9.15}. \begin{theorem} \label{t9.16} Let $f_{1},\,f_{2}\,$be ant two meromorphic functions and g_{1}$,$g_{2}$ be any two entire functions.\newline \textbf{(A)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ At least any one of $f_{1}$ or $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ \newline $\left( ii\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds and $g_{1}$ has the Property (A), the \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation* \textbf{(B)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ $f_{1},$ $g_{1}$ and $g_{2}$ be any three entire functions such that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ exists;\newline $\left( ii\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ holds and $g_{1}\pm g_{2}$ has the Property (A), the \begin{equation*} \lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \end{equation*} \end{theorem} \begin{proof} Let $f_{1},\,f_{2},\,g_{1}$ and $g_{2}$ be any four entire functions satisfying the conditions of the theorem.\medskip \newline \textbf{Case I.} Let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $(0<\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\infty )$ and at least $f_{1}$ or $f_{2}$ and $\left( f_{1}\pm f_{2}\right) $ are of regular relative $\left( p,q\right) $- \varphi $ growth with respect to $g_{1}$. Now, in view of Theorem \ref{t9.2x , it is easy to see that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \leq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ If possible let \begin{equation} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \label{96.19xy} \end{equation} \qquad Let\textbf{\ }$\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) . $ Then in view of the proof of the first part of Theorem \ref{t9.14} and \left( \ref{96.19xy}\right) $ we obtain that $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2}\mp f_{2},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ which is a contradiction. Hence \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~.$ Similarly in view of the proof of the first part of Theorem \ref{t9.14} , one can establish the same conclusion under the hypothesis $\overline{\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline \tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ This proves the first part of the theorem.\medskip \newline \textbf{Case II.} Let us consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $(0<\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\infty .$ Therefore in view of Theorem \ref{t9.3}, it follows that $\lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and if possible let \begin{equation} \lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{96.2t} \end{equation} \qquad Suppose\textbf{\ }$\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Then in view of the second part of Theorem \re {t9.14} and $\left( \ref{96.2t}\right) $, we obtain that $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{1}\pm g_{2}\mp g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ which is a contradiction. Hence $\lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~.$ Analogously with the help of the second part of Theorem \ref{t9.14}, the same conclusion can also be derived under the condition $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and therefore the second part of the theorem is established. \end{proof} \begin{theorem} \label{t9.16A} Let $f_{1},\,f_{2}\,$be any two meromorphic functions and g_{1}$, $g_{2}$ be any two entire functions.\newline \textbf{(A)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ At least any one of $f_{1}$ or $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ and g_{2}$. Also $g_{1},$ $g_{2},$ $g_{1}\pm g_{2}$ have satisfy the Property (A);\newline $\left( ii\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) \neq \overline{\tau _{g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) $;\newline $\left( iii\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $;\newline $\left( iv\right) $ Either $\tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $; the \begin{equation*} \lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation* \textbf{(B)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ At least any one of $f_{1}$ or $f_{2}$ are of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}\pm g_{2}$, and $g_{1}\pm g_{2}$ has satisfy the Property (A);\newline $\left( ii\right) $ Either $\tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds \newline $\left( iii\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ holds;\newline $\left( iv\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds, the \begin{equation*} \lambda _{g_{1}\pm g_{2}}^{\left( p,q\right) }\left( f_{1}\pm f_{2},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation*} \end{theorem} \qquad We omit the proof of Theorem \ref{t9.16A} as it is a natural consequence of Theorem \ref{t9.16}. \begin{theorem} \label{t9.17} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1} , $g_{2}$ be any two entire functions. Also let $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $, $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ are all non zero and finite \newline \textbf{(A)} Assume the functions $f_{1},f_{2}$ and $g_{1}$ satisfy the following conditions:\newline $\left( i\right) $ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ for $i$, $j$ $=$ $1,2$ and $i\neq j$;\newline $\left( ii\right) $ $g_{1}$ satisfies the Property (A), the \begin{equation*} \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \text{ and \ }\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \mid i=1,2\text{ }~. \end{equation* Similarly \begin{equation*} \sigma _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \text{ and \ }\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( \frac f_{1}}{f_{2}},\varphi \right) =\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \mid i=1,2 \end{equation* holds provided $\left( i\right) $ $\frac{f_{1}}{f_{2}}$ is meromorphic, \left( ii\right) $ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) $ $>$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ $\mid $ $i$, $1,2;$ $j$ $=$ $1,2;$ $i$ $\neq $ $j$ and $\left( iii\right) $ $g_{1}$ satisfy the Property (A).\newline \textbf{(B) }Assume the functions $g_{1},g_{2}$ and $f_{1}$ satisfy the following conditions:\newline $\left( i\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ for $i$, $j$ $=$ $1,2$ and $i\neq j,$ and g_{i}$ satisfy the Property (A);\newline $\left( ii\right) $ $g_{1}\cdot g_{2}$ satisfy the Property (A), the \begin{equation*} \sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \text{ and \ }\overline{\sigma }_{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\overline{\sigma }_{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2~. \end{equation* Similarly \begin{equation*} \sigma _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \text{ and \ }\overline{\sigma }_{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\overline{\sigma }_{gi}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2 \end{equation* holds provided $\left( i\right) $ $\frac{g_{1}}{g_{2}}$ is entire and satisfy the Property (A), $\left( ii\right) $ At least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$, \left( iii\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ \mid $ $i$ $=$ $1,2;$ $j$ $=$ $1,2;$ $i$ $\neq $ $j$ and $\left( iv\right) $ $g_{1}$ satisfy the Property (A).\newline \textbf{(C)} Assume the functions $f_{1},f_{2}$, $g_{1}$ and $g_{2}$ satisfy the following conditions:\newline $\left( i\right) $ $g_{1}\cdot g_{2}$ satisfy the Property (A);\newline $\left( ii\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and i\neq j$;\newline $\left( iii\right) $ $\rho _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\rho _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{j}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and i\neq j$;\newline $\left( iv\right) $ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\rho _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and $i\neq j$; \newline $\left( v\right) $ $\rho _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) =\max \left[ \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ,\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \mid l,m=1,2$; the \begin{equation*} \sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\sigma _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \text{ and }\overline{\sigma }_{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \overline{\sigma }_{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \mid l,m=1,2~. \end{equation* Similarly \begin{equation*} \sigma _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2} ,\varphi \right) =\sigma _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \text{ and }\overline{\sigma }_{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) =\overline{\sigma _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \mid l,m=1,2. \end{equation* holds provided $\frac{f_{1}}{f_{2}}$ is meromorphic function and $\frac{g_{1 }{g_{2}}$ is entire function which satisfy the following conditions:\newline $\left( i\right) $ $\frac{g_{1}}{g_{2}}$ satisfy the Property (A);\newline $\left( ii\right) $ At least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ and $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $;\newline $\left( iii\right) $ At least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ and $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $;\newline $\left( iv\right) $ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ and $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and $i\neq j$;\newline $\left( v\right) $ $\rho _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) =\max \left[ \min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \right\} ,\min \left\{ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \mid l,m=1,2$. \end{theorem} \begin{proof} Let us suppose that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) , $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and \rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ are all non zero and finite.\medskip \textbf{\newline Case I.} Suppose that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. Also let $g_{1}$ satisfy the Property (A). \ Since T_{f_{1}\cdot f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) $ for all large $r,$ therefore applying the same procedure as adopted in Case I of Theorem \ref{t9.13} we get tha \begin{equation} \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \leq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{77.3} \end{equation} \qquad Further without loss of any generality, let $f=f_{1}\cdot f_{2}$ and \rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $<$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) .$ Then in view of \left( \ref{77.3}\right) ,$ we obtain that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $\leq $ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Also $f_{1} \frac{f}{f_{2}}$ and $T_{f_{2}}\left( r\right) $ $=$ $T_{\frac{1}{f_{2} }\left( r\right) $ $+$ $O(1).$ Therefore $T_{f_{1}}\left( r\right) \leq T_{f}\left( r\right) +T_{f_{2}}\left( r\right) +O(1)$ and in this case also we obtain from $\left( \ref{77.3}\right) $ that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\leq $ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) .$ Hence $\sigma _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\Rightarrow $ \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ \qquad Similarly, if we consider $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,$ then one can verify that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ \qquad Next we may suppose that $f=\frac{f_{1}}{f_{2}}$ with $f_{1},$ $f_{2}$ and $f$ are all meromorphic functions.\medskip \newline \textbf{Sub Case I}$_{\mathbf{A}}$\textbf{.} Let $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $<$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore in view of Theorem \re {t9.7}, $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $<$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $. We have f_{1}=f\cdot f_{2}$. So, $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( \frac f_{1}}{f_{2}},\varphi \right) $.\medskip \newline \textbf{Sub Case I}$_{\mathbf{B}}$\textbf{. }Let $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $>$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore in view of Theorem \re {t9.7}, $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $=$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $. Since $T_{f}\left( r\right) =T_{\frac{1}{f}}\left( r\right) +O(1)=T_{\frac{f_{2}}{f_{1}}}\left( r\right) +O(1),$ So $\sigma _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}} f_{2}},\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $.\medskip \newline \textbf{Case II. }Let $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. Also let $g_{1}$ satisfy the Property (A). As T_{f_{1}\cdot f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) $ for all large $r,$ therefore applying the same procedure as explored in Case II of Theorem \ref{t9.13}, one can easily verify that $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\overline{\sigma _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) $ $=$ $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \mid i=1,2$ \ under the conditions specified in the theorem. \qquad Similarly, if we consider $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ,$ then one can verify that $\overline{\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $=$ \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ and $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2 },\varphi \right) $ $=$ $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ \qquad Therefore the first part of theorem follows from Case I and Case II.\medskip \newline \textbf{Case III. }Let $g_{1}\cdot g_{2}$ satisfy the Property (A) and $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}.$ Since $T_{g_{1}\cdot g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) $ for all large $r,$ therefore applying the same procedure as adopted in Case III of Theorem \re {t9.13} we get tha \begin{equation} \sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{7.6j} \end{equation} \qquad Further without loss of any generality, let $g=g_{1}\cdot g_{2}$ and \rho _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Then in view of $\left( \ref{7.6j}\right) ,$ we obtain that $\sigma _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\geq $ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Also $g_{1}=\frac{g}{g_{2}}$ and $T_{g_{2}}\left( r\right) $ $=$ $T_{\frac{1}{g_{2}}}\left( r\right) $ $+$ O(1).$ Therefore $T_{g_{1}}\left( r\right) \leq T_{g}\left( r\right) +T_{g_{2}}\left( r\right) +O(1)$ and in this case we obtain from $\left( \re {7.6j}\right) $ that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\geq $ \ $\sigma _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Hence $\sigma _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\Rightarrow $ $\sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad Similarly, if we consider $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$, then one can verify that $\sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad Next we may suppose that $g=\frac{g_{1}}{g_{2}}$ with $g_{1},$ g_{2}, $ $g$ are all entire functions satisfying the conditions specified in the theorem.\medskip \newline \textbf{Sub Case III}$_{\mathbf{A}}$\textbf{.} Let $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore in view of Theorem \re {t9.10A}, $\rho _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. We have g_{1}=g\cdot g_{2}$. So $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=\sigma _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $.\medskip \newline \textbf{Sub Case III}$_{\mathbf{B}}$\textbf{. }Let $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $>$ $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore in view of Theorem \re {t9.10A}, $\rho _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ \rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Since T_{g}\left( r\right) =T_{\frac{1}{g}}\left( r\right) +O(1)=T_{\frac{g_{2}} g_{1}}}\left( r\right) +O(1),$ So $\sigma _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $.\medskip \newline \textbf{Case IV. }Suppose $g_{1}\cdot g_{2}$ satisfy the Property (A). Also let $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}.$ As $T_{g_{1}\cdot g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) $ for all large $r,$ the same procedure as explored in Case IV of Theorem \ref{t9.13}, one can easily verify that $\overline{\sigma }_{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\overline{\sigma }_{\frac g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ \overline{\sigma }_{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2$ under the conditions specified in the theorem. \qquad Likewise, if we consider $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$, then one can verify that $\overline{\sigma }_{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\overline{\sigma }_{\frac{g_{1}}{g_{2} }^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\overline{\sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore the second part of theorem follows from Case III and Case IV. \qquad Proof of the third part of the Theorem is omitted as it can be carried out in view of Theorem \ref{t9.11} and Theorem \ref{t9.11A} and the above cases. \end{proof} \begin{theorem} \label{t9.18} Let $f_{1},f_{2}$ be any two meromorphic functions and $g_{1} , $g_{2}$ be any two entire functions. Also let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $, $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ are all non zero and finite \newline \textbf{(A)} Assume the functions $f_{1},f_{2}$ and $g_{1}$ satisfy the following conditions:\newline $\left( i\right) $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ for $i$, $j$ $=$ $1,2$ and $i\neq j$;\newline $\left( ii\right) $ $g_{1}$ satisfy the Property (A)$,$ the \begin{equation*} \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \text{ and \ }\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \mid i=1,2~. \end{equation* Similarly \begin{equation*} \tau _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \text{ and \ }\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2} ,\varphi \right) =\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) \mid i=1,2 \end{equation* holds provided $\frac{f_{1}}{f_{2}}$ is meromorphic, at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to g_{1}$ where $g_{1}$ satisfy the Property (A) and $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) $ $>$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ $\mid $ $i$ $=$ $1,2;$ $j$ $=$ 1,2;$ $i$ $\neq $ $j$.\newline \textbf{(B) }Assume the functions $g_{1},g_{2}$ and $f_{1}$ satisfy the following conditions:\newline $\left( i\right) $ $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ for $i$, $j$ $=$ $1,2$, $i\neq j$; and $g_{i}$ satisfy the Property (A)\newline $\left( ii\right) $ $g_{1}\cdot g_{2}$ satisfy the Property (A), the \begin{equation*} \tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \text{ and \ }\overline{\tau }_{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\overline{\tau }_{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2~. \end{equation* Similarly \begin{equation*} \tau _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \text{ and \ }\overline{\tau }_{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\overline{\tau }_{gi}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2 \end{equation* holds provided $\frac{g_{1}}{g_{2}}$ is entire and satisfy the Property (A), $g_{1}$ satisfy the Property (A) and $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\mid $ $i$ $=$ $1,2;$ $j$ $=$ $1,2;$ $i$ $\neq $ $j .\newline \textbf{(C)} Assume the functions $f_{1},f_{2}$, $g_{1}$ and $g_{2}$ satisfy the following conditions:\newline $\left( i\right) $ $g_{1}\cdot g_{2}$, $g_{1}$ and $g_{2}$ are satisfy the Property (A)$;$\newline $\left( ii\right) $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and $i\neq j$;\newline $\left( iii\right) $ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{i},\varphi \right) >\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{j},\varphi \right) $ with at least $f_{j}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ for $i$ $=$ $1,$ $2$, $j$ $=$ $1,2$ and $i\neq j$;\newline $\left( iv\right) $ $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and i\neq j$;\newline $\left( v\right) $ $\lambda _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) =\min \left[ \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ,\max \left\{ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \mid l,m=1,2$; the \begin{equation*} \tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\tau _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \text{ and }\overline{\tau }_{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\overline{\tau }_{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \mid l,m=1,2~. \end{equation* Similarly \begin{equation*} \tau _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2} ,\varphi \right) =\tau _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \text{ and }\overline{\tau }_{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) =\overline{\tau _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) \mid l,m=1,2~. \end{equation* holds provided $\frac{f_{1}}{f_{2}}$ is meromorphic and $\frac{g_{1}}{g_{2}}$ is entire functions which satisfy the following conditions:\newline $\left( i\right) $ $\frac{g_{1}}{g_{2}}$, $g_{1}$ and $g_{2}$ satisfy the Property (A);\newline $\left( ii\right) $ At least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ and $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $;\newline $\left( iii\right) $ At least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{2}$ and $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $;\newline $\left( iv\right) $ $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{i}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\lambda _{g_{j}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds simultaneously for $i=1,2;$ $j=1,2\ $and i\neq j$;\newline $\left( v\right) $ $\lambda _{g_{m}}^{\left( p,q\right) }\left( f_{l},\varphi \right) =\min \left[ \max \left\{ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} ,\max \left\{ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \right\} \right] \mid l,m=1,2$. \end{theorem} \begin{proof} Let us consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $, $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $, $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ are all non zero and finite.\textbf{\newline Case I.} Suppose $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ and $g_{1}$ satisfy the Property (A). Since T_{f_{1}\cdot f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) $ for all large $r,$ therefore applying the same procedure as adopted in Case I of Theorem \ref{t9.14} we get tha \begin{equation} \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \leq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{77.90} \end{equation} \qquad Further without loss of any generality, let $f=f_{1}\cdot f_{2}$ and \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $<$ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ \lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) .$ Then in view of $\left( \ref{77.90}\right) ,$ we obtain that $\tau _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $\leq $ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Also $f_{1}=\frac{f}{f_{2}}$ and $T_{f_{2}}\left( r\right) $ $=$ $T_{\frac{1}{f_{2}}}\left( r\right) $ $+$ O(1).$ Therefore $T_{f_{1}}\left( r\right) \leq T_{f}\left( r\right) +T_{f_{2}}\left( r\right) +O(1)$ and in this case we obtain from the above arguments that $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\leq $ $\tau _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) .$ Hence $\tau _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\Rightarrow $ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ \qquad Similarly, if we consider $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$, then one can easily verify that $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ \qquad Next we may suppose that $f=\frac{f_{1}}{f_{2}}$ with $f_{1},$ $f_{2}$ and $f$ are all meromorphic functions satisfying the conditions specified in the theorem.\medskip \newline \textbf{Sub Case I}$_{\mathbf{A}}$\textbf{.} Let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $<\lambda $ $_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore in view of Theorem \re {t9.8 A}, $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $<$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ = $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $. We have $f_{1}=f\cdot f_{2}$. So $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1 }{f_{2}},\varphi \right) $.\medskip \newline \textbf{Sub Case I}$_{\mathbf{B}}$\textbf{. }Let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $>$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore in view of Theorem \re {t9.8 A}, $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ = $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f,\varphi \right) $. Since T_{f}\left( r\right) =T_{\frac{1}{f}}\left( r\right) +O(1)=T_{\frac{f_{2}} f_{1}}}\left( r\right) +O(1),$ So $\tau _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}}{f_{2}},\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $.\medskip \newline \textbf{Case II. }Let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{2}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ where $g_{1}$ satisfy the Property (A). As $T_{f_{1}\cdot f_{2}}\left( r\right) \leq T_{f_{1}}\left( r\right) +T_{f_{2}}\left( r\right) $ for all large $r,$ so applying the same procedure as adopted in Case II of Theorem \ref{t9.14} we can easily verify that $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and\ $\overline{\tau }_{\frac g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\overline \tau }_{gi}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2$ under the conditions specified in the theorem. \qquad Similarly, if we consider $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ with at least $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$, then one can easily verify that $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $=$ $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ \qquad Therefore the first part of theorem follows Case I and Case II \newline \textbf{Case III. }Let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $g_{1}\cdot g_{2}$ satisfy the Property (A).Since $T_{g_{1}\cdot g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) $ for all large $r,$ therefore applying the same procedure as adopted in Case III of Theorem \ref{t9.14} we get that \begin{equation} \tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \leq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{79.0} \end{equation} \qquad Further without loss of any generality, let $g=g_{1}\cdot g_{2}$ and \lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Then in view of $\left( \ref{79.0}\right) ,$ we obtain that $\tau _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\geq $ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Also $g_{1}=\frac{g}{g_{2}}$ and $T_{g_{2}}\left( r\right) $ $=$ $T_{\frac{1}{g_{2}}}\left( r\right) $ $+$ O(1).$ Therefore $T_{g_{1}}\left( r\right) \leq T_{g}\left( r\right) +T_{g_{2}}\left( r\right) +O(1)$ and in this case we obtain from above arguments that $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\geq $ \ $\tau _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Hence $\tau _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\Rightarrow $ $\tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad If $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,$ then one can easily verify that $\tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad Next we may suppose that $g=\frac{g_{1}}{g_{2}}$ with $g_{1},$ g_{2}, $ $g$ are all entire functions satisfying the conditions specified in the theorem.\medskip \newline \textbf{Sub Case III}$_{\mathbf{A}}$\textbf{.} Let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore in view of Theorem \re {t9.9}, $\lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. We have $g_{1}=g\cdot g_{2}$. So $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\tau _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=\tau _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $.\medskip \newline \textbf{Sub Case III}$_{\mathbf{B}}$\textbf{. }Let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $>$ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Therefore in view of Theorem \re {t9.9}, $\lambda _{g}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $<$ \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. Since T_{g}\left( r\right) =T_{\frac{1}{g}}\left( r\right) +O(1)=T_{\frac{g_{2}} g_{1}}}\left( r\right) +O(1),$ So $\tau _{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $.\medskip \newline \textbf{Case IV. }Suppose $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $g_{1}\cdot g_{2}$ satisfy the Property (A). Since $T_{g_{1}\cdot g_{2}}\left( r\right) \leq T_{g_{1}}\left( r\right) +T_{g_{2}}\left( r\right) $ for all large $r,$ then adopting the same procedure as of Case IV of Theorem \ref{t9.14}, we obtain that $\overline \tau }_{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\overline{\tau }_{\frac{g_{1}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\overline{\tau }_{gi}^{\left( p,q\right) }\left( f_{1},\varphi \right) \mid i=1,2$. \qquad Similarly if we consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,$ then one can easily verify that $\overline{\tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $. \qquad Therefore the second part of the theorem follows from Case III and Case IV. \qquad Proof of the third part of the Theorem is omitted as it can be carried out in view of Theorem \ref{t9.12} , Theorem \ref{t9.12A} and the above cases. \end{proof} \begin{theorem} \label{t9.19} Let $f_{1},\,f_{2}$ be any two meromorphic functions and g_{1} $, $g_{2}$ be any two entire functions.\newline \textbf{(A)} The following condition is assumed to be satisfied:\newline $\left( i\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds;\newline $\left( ii\right) $ $g_{1}$ satisfies the Property (A), the \begin{equation*} \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation* \textbf{(B)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ holds;\newline $\left( ii\right) $ $f_{1}$ is of regular relative $\left( p,q\right) $- \varphi $ growth with respect to at least any one of $g_{1}$ or $g_{2}$. Also $g_{1}\cdot g_{2}$ satisfy the Property (A). Then we have \begin{equation*} \rho _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \end{equation*} \end{theorem} \begin{proof} Let $f_{1},\,f_{2}$ be any two meromorphic functions and $g_{1}$, $g_{2}$ be any two entire functions satisfying the conditions of the theorem.\medskip \newline \textbf{Case I.} Suppose that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $(0<\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\infty )$ and $g_{1}$ satisfy the Property (A). Now in view of Theorem \ref{t9.7}, it is easy to see that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \leq \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ If possible let \begin{equation} \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) <\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \label{20.1} \end{equation} \qquad Let\textbf{\ }$\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ Now in view of the first part of Theorem \ref{t9.17} and $\left( \ref{20.1}\right) $ we obtain that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}\cdot f_{2}}{f_{2}},\varphi \right) =\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ which is a contradiction. Hence $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $=$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ Similarly with the help of the first part of Theorem \ref{t9.17}, one can obtain the same conclusion under the hypothesis $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ This prove the first part of the theorem.\newline \textbf{Case II.} Let us consider that $\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $(0<\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\infty )$, $f_{1}$ is of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to at least any one of $g_{1}$ or $g_{2}$. Also $g_{1}\cdot g_{2}$ satisfy the Property (A). Therefore in view of Theorem \ref{t9.10}, it follows that $\rho _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \geq \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and if possible let \begin{equation} \rho _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{20.2} \end{equation} \qquad Further suppose that\textbf{\ }$\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Therefore in view of the proof of the second part of Theorem \ref{t9.17} and $\left( \ref{20.2}\right) $, we obtain that $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{\frac{g_{1}\cdot g_{2}}{g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ which is a contradiction. Hence $\rho _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ \rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~.$ Likewise in view of the proof of second part of Theorem \ref{t9.17}, one can obtain the same conclusion under the hypothesis $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ This proves the second part of the theorem. \end{proof} \begin{theorem} \label{t9.19A} Let $f_{1},\,f_{2}$ be any two meromorphic functions and g_{1}$, $g_{2}$ be any two entire functions.\newline \textbf{(A) }The following conditions are assumed to be satisfied:\newline $\left( i\right) $ $\left( f_{1}\cdot f_{2}\right) $ is of regular relative \left( p,q\right) $-$\varphi $ growth with respect to at least any one g_{1} $ or $g_{2}$;\newline $\left( ii\right) $ $\left( g_{1}\cdot g_{2}\right) $, $g_{1}$ and $g_{2}$ all satisfy the Property (A);\newline $\left( iii\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ or $\overline{\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $;\newline $\left( iv\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $;\newline $\left( v\right) $ Either $\sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $; the \begin{equation*} \rho _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation* \textbf{(B) }The following conditions are assumed to be satisfied:\newline $\left( i\right) $ $\left( g_{1}\cdot g_{2}\right) $ satisfy the Property (A);\newline $\left( ii\right) $ $f_{1}$ and $f_{2}$ are of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to at least any one $g_{1}$ or g_{2}$;\newline $\left( iii\right) $ Either $\sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $;\newline $\left( iv\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $;\newline $\left( v\right) $ Either $\sigma _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \sigma _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\sigma }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \overline{\sigma }_{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $; the \begin{equation*} \rho _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\rho _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation*} \end{theorem} \qquad We omit the proof of Theorem \ref{t9.19A} as it is a natural consequence of Theorem \ref{t9.19}. \begin{theorem} \label{t9.20.} Let $f_{1},\,f_{2}$ be any two meromorphic functions and g_{1}$, $g_{2}$ be any two entire functions.\newline \textbf{(A)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ At least any one of $f_{1}$ or $f_{2}$ are of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ \newline $\left( ii\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds.\newline $\left( iii\right) $ $g_{1}$ satisfy the Property (A), the \begin{equation*} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation* \textbf{(B)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ $f_{1}$ be any meromorphic function and $g_{1}$, $g_{2}$ be any two entire functions such that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ exist and $g_{1}\cdot g_{2}$ satisfy the Property (A);\newline $\left( ii\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ holds, the \begin{equation*} \lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \end{equation*} \end{theorem} \begin{proof} Let $f_{1},\,f_{2}$ be any two meromorphic functions and $g_{1}$, $g_{2}$ be any two entire functions satisfy the conditions of the theorem.\medskip \newline \textbf{Case I.} Let $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ $(0<\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) <\infty ),$ $g_{1}$ satisfy the Property (A) and at least $f_{1}$ or $f_{2}$ is of regular relative $\left( p,q\right) $- \varphi $ growth with respect to $g_{1}$. Now in view of Theorem \ref{t9.8} it is easy to see that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $\leq $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ If possible le \begin{equation} \lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) <\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \label{20.3} \end{equation} \qquad Also let\textbf{\ }$\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ Then in view of the proof of first part of Theorem \ref{t9.18} and $\left( \ref{20.3}\right) ,$ we obtain that $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( \frac{f_{1}\cdot f_{2}}{f_{2}},\varphi \right) =\tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ which is a contradiction. Hence $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) .$ Analogously, in view of the proof of first part of Theorem \ref{t9.18}$,$ one can derived the same conclusion under the hypothesis $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $. Hence the first part of the theorem is established.\newline \textbf{Case II.} Let us consider that $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $(0<\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ,\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) <\infty $ and $g_{1}\cdot g_{2}$ satisfy the Property (A). Therefore in view of Theorem \ref{t9.9}, it follows that $\lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $\geq $ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ \lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and if possible le \begin{equation} \lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) >\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) ~. \label{20.4} \end{equation} \qquad Further let\textbf{\ }$\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Then in view of second part of Theorem \ref{t9.18} and $\left( \ref{20.4}\right) $, we obtain that $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{\frac{g_{1}\cdot g_{2}} g_{2}}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ which is a contradiction. Hence $\lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ $=$ $\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) .$ Similarly by second part of Theorem \ref{t9.18}, we get the same conclusion when $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ and therefore the second part of the theorem follows. \end{proof} \begin{theorem} \label{t9.20A} Let $f_{1},\,f_{2}$ be any two meromorphic functions and g_{1}$, $g_{2}$ be any two entire functions.\newline \textbf{(A)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ $g_{1}\cdot g_{2}$, $g_{1}$ and $g_{2}$ satisfy the Property (A);\newline $\left( ii\right) $ At least any one of $f_{1}$ or $f_{2}$ are of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}$ and g_{2}$;\newline $\left( iii\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $ or $\overline{\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) $;\newline $\left( iv\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $;\newline $\left( v\right) $ Either $\tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $; the \begin{equation*} \lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation* \textbf{(B)} The following conditions are assumed to be satisfied:\newline $\left( i\right) $ $g_{1}\cdot g_{2}$ satisfy the Property (A);\newline $\left( ii\right) $ At least any one of $f_{1}$ or $f_{2}$ are of regular relative $\left( p,q\right) $-$\varphi $ growth with respect to $g_{1}\cdot g_{2}$;\newline $\left( iii\right) $ Either $\tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds \newline $\left( iv\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) $ holds;\newline $\left( v\right) $ Either $\tau _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \tau _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ or $\overline{\tau }_{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) \neq \overline{\tau }_{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) $ holds, the \begin{equation*} \lambda _{g_{1}\cdot g_{2}}^{\left( p,q\right) }\left( f_{1}\cdot f_{2},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{1}}^{\left( p,q\right) }\left( f_{2},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{1},\varphi \right) =\lambda _{g_{2}}^{\left( p,q\right) }\left( f_{2},\varphi \right) ~. \end{equation*} \end{theorem} \qquad We omit the proof of Theorem \ref{t9.20A} as it is a natural consequence of Theorem \ref{t9.20.}. \begin{remark} If we take $\frac{f_{1}}{f_{2}}$ instead of $f_{1}\cdot f_{2}$ and $\frac g_{1}}{g_{2}}$ instead of $g_{1}\cdot g_{2}$ where $\frac{f_{1}}{f_{2}}$ is meromorphic and $\frac{g_{1}}{g_{2}}$ is entire function, and the other conditions of Theorem \ref{t9.19}, Theorem \ref{t9.19A}, Theorem \ref{t9.20.} and Theorem \ref{t9.20A} remain the same, then conclusion of Theorem \re {t9.19}, Theorem \ref{t9.19A}, Theorem \ref{t9.20.} and Theorem \ref{t9.20A} remains valid. \end{remark}
\section{Introduction} This note is inspired by \cite{BF} where Braun and dos Santos Filho proved that every polynomial mapping $(p,q):\Rn^2\to\Rn^2$ with everywhere positive Jacobian determinant and such that $\deg p\leq 3$ is a global diffeomorphism. A pair of polynomials $p,q\in\Rn[x,y]$ such that the mapping $(p,q):\Rn^2\to\Rn^2$ has everywhere positive Jacobian will be called \emph{real Jacobian mates}. The key role in \cite{BF} plays the result that $p=x(1+xy)$ does not have a real Jacobian mate. This result is a special case of Theorem~\ref{glacial}. In Theorem~\ref{branches} a wide class of polynomials that do not have real Jacobian mates is characterized. In particular every polynomial such that its Newton polygon has an edge described in Corollary~\ref{edge} belong to this class. This gives a new proof of \cite[Theorem~5.5]{BO} that polynomials of degree~4 with at least one disconnected level set do not have real Jacobian mates. \section{Glacial tongues} \begin{Theorem}\label{glacial} Let $p$ be a real polynomial in two variables and let $B\subset A$, be the subsets of the real plane such that: \begin{itemize} \item[\emph{(i)}] the set $B$ is bounded, \item[\emph{(ii)}] for every $t\in\Rn$ the set $p^{-1}(t)\cap A$ is either empty, or is contained in $B$, or is homeomorphic to a segment and its endpoints belong to $B$, \item[\emph{(iii)}] the border of $A$ contains a half-line. \end{itemize} Then for every $q\in\Rn[x,y]$ there exists $v\in\Rn^2$, such that $\Jac(p,q)(v)=0$. \end{Theorem} \begin{proof} Suppose that there exists $q\in\Rn[x,y]$ such that $\Jac(p,q)$ vanishes nowhere. Under this assumption the mapping $\Phi=(p,q):\Rn^2\to\Rn^2$ is a local diffeomorphism. Take any $t\in\Rn$ such that the set $A_t=p^{-1}(t)\cap A$ is nonempty. If $A_t\subset B$ then $\Phi(A_t)\subset\Phi(B)$. If $A_t$ is homeomorphic to a segment with endpoints in $B$ then the restriction of $\Phi$ to $A_t$ is a locally injective continuous mapping from the source $A_t$ which is homeomorphic to a segment to a vertical line $\{t\}\times\Rn$ which is homeomorphic to $\Rn$. By the extreme value theorem and the mean value theorem such a mapping is either increasing or decreasing \footnote{Suppose that a continuous and locally injective function $f:[a,b]\to\Rn$ in neither increasing nor decreasing. Then there exist $x_1$, $x_2$ $x_3$, $a\leq x_1<x_2<x_3\leq b$ such that $f(x_1)\leq f(x_2) \geq f(x_3)$ or $f(x_1)\geq f(x_2) \leq f(x_3)$. By the extreme value theorem $f$ restricted to $[x_1,x_3]$ has a maximal or a minimal value at some point $c$ inside the interval $[x_1,x_3]$. Shrinking $[x_1,x_3]$, if necessary, we may assume that $f$ restricted to $[x_1,x_3]$ is injective. By the mean value theorem $f(x_3)\in f([x_1,c])$ or $f(x_1)\in f([c,x_3])$ which gives a contradiction.} Hence, the image $\Phi(A_t)$ is a vertical segment with endpoints that belong to $\Phi(B)$. Since $A$ is the union of the sets $A_t$ and $\Phi(B)$ is bounded, so it is $\Phi(A)$. \smallskip Let $L$ be a half-line contained in the border of $A$. Because the mapping $\Phi$ is bounded on $A$ it is also bounded on $L$. Consequently the polynomials $p$ and $q$ restricted to $L$ are constant (because they behave on $L$ like polynomials in one variable). We arrived to a contradiction with the condition that $\Phi$ is locally injective. \end{proof} \medskip Every set $A$ satisfying assumptions of Theorem~\ref{glacial} will be called \textit{a glacial tongue with a straight border}. \begin{Example} Let $p=x(1+xy)$. In~\cite{BF} it is checked (Lemma~4.1 and Remark~1) that $A=\{(x,y)\in\Rn^2: 0<x<1, -\frac{1}{x}<y\leq -1\}$ is a glacial tongue with a straight border for the polynomial $p$. Hence, $p$ does not have a Jacobian mate. \end{Example} \section{Newton polygon and branches at infinity} Let $p=\sum a_{i,j}x^iy^j$ be a nonzero polynomial. By definition the Newton polygon $\Delta(p)$ is the convex hull of the set $\{(i,j)\in\Nn^2:a_{i,j}\neq0\}$. An edge $S$ of the Newton polygon $\Delta(p)$ will be called an outer edge if it has a normal vector $\vec v=(v_1,v_2)$ directed outwards $\Delta(p)$ such that $v_1>0$ or $v_2>0$. If $v_1>0$ then $S$ will be called a right outer edge. With every right outer edge $S$ we associate a rational number $\theta(S)=v_2/v_1$ and call this number the slope of $S$. \begin{Example} The Newton polygon of $p=x+x^2+x^3y+y^2+x^3y^2+xy^4$ has $4$ outer edges. Three of them are right outer edges with slopes $-1$, $0$, and $2$. \setlength{\unitlength}{12pt} \begin{picture}(6,5)(0,0) \thinlines \put(0,0){\vector(1,0){4}} \put(0,0){\vector(0,1){4}} \put(0,0){\makebox(0,0){.}} \put(1,0){\makebox(0,0){.}} \put(2,0){\makebox(0,0){.}} \put(3,0){\makebox(0,0){.}} \put(0,1){\makebox(0,0){.}} \put(1,1){\makebox(0,0){.}} \put(2,1){\makebox(0,0){.}} \put(3,1){\makebox(0,0){.}} \put(0,2){\makebox(0,0){.}} \put(1,2){\makebox(0,0){.}} \put(2,2){\makebox(0,0){.}} \put(3,2){\makebox(0,0){.}} \put(0,3){\makebox(0,0){.}} \put(1,3){\makebox(0,0){.}} \put(2,3){\makebox(0,0){.}} \put(0,4){\makebox(0,0){.}} \put(3,3){\makebox(0,0){.}} \thicklines \put(0,2){\line(1,1){1}} \put(1,3){\line(2,-1){2}} \put(3,1){\line(0,1){1}} \put(2,0){\line(1,1){1}} \put(0,2){\line(1,-2){1}} \put(1,0){\line(1,0){1}} \end{picture} \end{Example} \medskip The objective of this section is to describe branches at infinity of a curve $p(x,y)=0$ and associate with each branch a certain outer edge of the Newton polygon of $p$. Let $V=\{(x,y)\in\Rn^2:p(x,y)=0\}$. Assume that the curve $V$ is unbounded and consider a one-point algebraic compactification $\widehat \Rn^2=\Rn^2\cup\{\infty\}$ of the real plane (see \cite[Definition~3.6.12]{BR}). Then $\infty$ belongs to the Zariski closure of $V$ in $\widehat \Rn^2$. By \cite[Lemma~3.3]{Mi} in a suitably chosen neighborhood of $\infty$ the curve $V\cup\{\infty\}$ is the union of finitely many branches which intersect only at $\infty$. Each branch is homeomorphic to an open interval under an analytic homeomorphism $p:(-\epsilon,\epsilon)\to V\cup\{\infty\}$, $p(0)=\infty$. It follows from the above that after passing to coordinates $x$ and $y$ in $\Rn^2$ and substituting $s=t^{-1}$ in $p$ we get the following characterization of branches at infinity. \begin{Lemma}\label{structure} Assume that $V=\{(x,y)\in\Rn^2:p(x,y)=0\}$ is an unbounded polynomial curve. Then in a suitably chosen neighborhood of infinity in $\Rn^2$ the curve $V$ is the union of finitely many pairwise disjoint ``branches at infinity''. Each branch at infinity is homeomorphic to a union of two open intervals $(-\infty,-R)\cup(R,+\infty)$ under a homeomorphism $(x,y)=(\tilde x(t),\tilde y(t))$ which is given by Laurent power series \begin{eqnarray} \tilde x(t)&=&a_kt^k+a_{k-1}t^{k-1}+a_{k-2}t^{k-2}+\cdots \label{eq:x} \\ \tilde y(t)&=&b_lt^l+b_{l-1}t^{l-1}+b_{l-2}t^{l-2}+\cdots \label{eq:y} \end{eqnarray} convergent for $|x|>R$. \end{Lemma} \begin{Lemma}\label{edges} Keep the assumptions an notations of Lemma~\ref{structure}. If $a_k\neq0$, $b_l\neq0$ then $(k,l)$ is a normal vector to some outer edge of the Newton polygon of~$p$. \end{Lemma} \begin{proof} Let $d=\max \{ki+lj:(i,j)\in\Delta(p)\}$. The polynomial $p$ can be written as a sum $p=\sum_{ki+lj\leq d}c_{i,j}x^iy^j$. Substituting $(x,y)=(\tilde x(t),\tilde y(t))$ to $p$ and collecting together the terms of the highest degree we get $$ 0=p(\tilde x(t),\tilde y(t))= \Bigl( \sum_{ki+lj=d}c_{i,j}a_k^ib_l^j\Bigr)t^d+\mbox{ terms of lower degrees}. $$ The necessary condition for this identity is a cancellation of terms in the sum in parenthesis. If there are at least two distinct coefficients $c_{i_1,j_1}\neq0$, $c_{i_2,j_2}\neq0$, satisfying $ki_1+lj_1=ki_2+lj_2=d$ then the straight line $\{(i,j)\in\Rn^2:ki+lj=d\}$ touches $\Delta(p)$ at a least two points, hence along the edge. Since $(x,y)=(\tilde x(t),\tilde y(t))$ is a Laurent parametrization of a branch at infinity, we have $\|(\tilde x(t),\tilde y(t))\|\to\infty$ as $t\to+\infty$ which proves that at least one of exponents $k$, $l$ is positive and shows that $\Delta(p)\cap\{(i,j)\in\Rn^2:ki+lj=d\}$ is an outer edge. \end{proof} \medskip Using Lemma~\ref{edges} we may associate to every branch at infinity of the curve $p=0$ the unique outer edge of the Newton polygon of $p$. In the next lemma we will show that the slope of the associated edge characterizes the asymptotic of the branch at infinity. \medskip For two real valued functions $g$, $h$ defined in some interval $(R,\infty)$ we will write $g(x)\sim h(x)$ if there exist constants $c>0$, $C>0$, and $r>0$ such that $c |h(x)|\leq |g(x)| \leq C|h(x)|$ for all $x>r$. \begin{Lemma}\label{structure1} Let $p(x,y)$ be a nonzero real polynomial such that for every~$x_0$ the set $X=\{\,(x,y)\in\Rn^2: x>x_0,\,y>0,\,p(x,y)=0\,\}$ is nonempty. Then for sufficiently large $x_0$ there exists a finite collection of continuous semialgebraic functions $f_k:(x_0,+\infty)\to\Rn$, $k=1,\dots, s$ such that \begin{itemize} \item[\emph{(i)}] $0<f_1(x)<\dots<f_s(x)$ for $x>x_0$, \item[\emph{(ii)}] $X$ is the union of graphs $\{\,(x,y)\in\Rn^2: y=f_k(x),\,x>x_0\,\}$, $k=1,\dots, s$, \item[\emph{(iii)}] for every $f_k$ there exists a right outer edge $S_k$ of the Newton polygon of $p(x,y)$ such that $f_k(x)\sim x^{\theta(S_k)}$. \end{itemize} \end{Lemma} \begin{proof} Part~(i) and~(ii) follow from the Cylindrical Decomposition Theorem for semialgebraic sets (see for example \cite[Theorem~2.2.1]{BR}). To prove~(iii) observe that the graph of $f_k$ is unbounded and homeomorphic to an open interval. Thus, we may assume, increasing $x_0$ if necessary, that this graph is a half-branch at infinity. By Lemma~\ref{structure} there exists a homeomorphism of an open interval $(R,+\infty)$ to the graph given by Laurent power series~(\ref{eq:x}),~(\ref{eq:y}) with $a_k\neq0$, $b_l\neq0$. Since $\tilde x(t)\to+\infty$ for $t\to+\infty$, the leading term of $\tilde x(t)$ has a positive exponent $k$. By estimations $\tilde x(t) \sim t^k$, $\tilde y(t) \sim t^l$ and identity $f_k(\tilde x(t))=\tilde y(t)$ we get $f_k(x)\sim x^{l/k}$. Finally, by Lemma~\ref{edges} there exists a right outer edge $S$ of the Newton polygon of $p$ such that $l/k=\theta(S)$. \end{proof} \section{Main result} \begin{Theorem}\label{branches} Assume that the Newton polygon of $p\in\Rn[x,y]$ has a right outer edge $S$ with endpoint $(0,1)$ and positive inclination and that the curve $p=0$ has a real branch at infinity associated with the edge $S$. Then $p$ has a glacial tongue with a straight border. \end{Theorem} \begin{proof} Without loss of generality we may assume, changing signs of variables if necessary, that one of half-branches associated with the edge $S$ lies in the positive quadrant $x>0$, $y>0$. Then, under notation of Lemma~\ref{structure1} this half-branch at infinity is a graph $y=f(x)$ where $f$ is one of the functions $f_k$, $k=1,\dots,s$. Comparing the asymptotic of these functions we see that $\theta(S_1)\leq\theta(S_2)\leq\dots\leq \theta(S_s)$. Since $S$ has the smallest slope among all right outer edges of the Newton polygon $\Delta(p)$, we have $S=S_1$ and we may assume that $f(x)=f_1(x)$. Let $$V=\{(x,y)\in\Rn^2:x>x_0,\, 0<y<f(x)\} .$$ The polynomial $p$ vanishes nowhere on $V$, hence without loss of generality we may assume that $p$ is positive on this set. \medskip \noindent \textbf{Claim~1.} For every $t\neq0$ the set $p^{-1}(t)\cap V$ is bounded. \smallskip \textbf{Proof of 1.} If not, then by the Curve Selection Lemma there exists a half-branch at infinity of a curve $p(x,y)=t$ contained in $V$. Let $y=g(x)$ be the graph of this half-branch at infinity. By Lemma~3 $g(x)\sim x^{\theta(S_1)}$, where $S_1$ is one of the right outer edges of the Newton polygon $\Delta(p-t)$. By inequalities $0<g(x)<f(x)$ we get $\theta(S_1)\leq \theta(S)$. This is impossible because all right outer edges of $\Delta(p-t)$ have slopes bigger than the slope of $S$. \medskip \noindent \textbf{Claim~2.} For $x_0$ sufficiently large, $V$ does not contain any critical point of~$p$. \smallskip \textbf{Proof of 2.} If the intersection of $V$ with the set of critical points is bounded then it is enough to enlarge $x_0$. If this intersection is unbounded then by the Curve Selection Lemma it contains an unbounded semi-algebraic arc $\Gamma\subset V$. It follows that $p$ restricted to $\Gamma$ is constant and nonzero -- contrary to Claim~1. \medskip Further, we will assume that $V$ satisfies assumptions of Claim~2. Then every level set $p^{-1}(t)$ intersected with $V$ is a one-dimensional smooth semialgebraic manifold. By Poincare-Bendixon Theorem $V_t=p^{-1}(t)\cap V$ has a finite number of connected components, each homeomorphic to a circle or to an open interval. \medskip \noindent \textbf{Claim~3.} There is no connected component of $V_t$ homeomorphic to a circle. \smallskip \textbf{Proof of 3.} Suppose there is. Then by Jordan's Theorem it cuts the set $V$ to two open regions. One of these regions is bounded. Since the function $p$ is constant on the boundary of this region, it attains an extreme value at some point inside. This is impossible because $p$ has no critical points in the set $V$. \medskip Let $h(y)=p(x_0,y)$ be the restriction of $p$ to the vertical line $\{x_0\}\times \Rn$. A function $h$ vanishes at the endpoints of the interval $[0,f(x_0)]$ and is positive inside. It is easy to find $t_0>0$ and two points $a<b$ inside the interval $[0,f(x_0)]$ such that:\\ $h'(y)\neq0$ for $y\in(0,a]\cup [b,f(x_0))$,\\ $h$ increases from $0$ to $t_0$ in the interval $[0,a]$, \\ $h(y)>t_0$ for $a<y<b$, \\ $h$ decreases from $t_0$ to $0$ in the interval $[b,f(x_0)]$. \medskip \noindent \textbf{Claim~4.} For every $t$ such that $0<t\leq t_0$ the set $V_t=p^{-1}(t)\cap V$ is connected and homeomorphic to an open interval. The topological closure of $V_t$ intersects the vertical segment $\{x_0\}\times (0,f(x_0))$ at two points. \smallskip \textbf{Proof of 4.} By the discussion proceeding Claim~4 the polynomial~$p$ attains value~$t$ precisely at two points of the boundary of $V$. These are $(x_0,y_1)$ and $(x_0,y_2)$, where $0<y_1\leq a$ and $b\leq y_2<f(x_0)$. Moreover $\partial p/\partial y$ does not vanish at these points. By Claim~2 and Claim~3 the set $V_t$ is a one-dimensional smooth manifold having a finite number of connected component; each component is semialgebraic and homeomorphic to an open interval. Thus, the closure of $V_t$ is a graph with vertexes $(x_0,y_1)$, $(x_0,y_2)$ and edges which are connected components of $V_t$. By the Implicit Function Theorem the closure of $V_t$ has in a small neighborhood of $(x_0,y_i)$, where $i=1,2$ a topological type of an interval $[0,1)$ which shows that there is exactly one edge which connects $(x_0,y_1)$ and $(x_0,y_2)$. \medskip By Claim~4 the closure of $V_{t_0}$ is a line with two endpoints: $(x_0,a)$ and $(x_0,b)$. Joining them by a vertical segment we get a non-smooth oval. By Jordan's Theorem this oval cuts the plane into two open regions. Let $B_0$ be the bounded region, let $B=B_0\cup\{x_0\}\times (0,f(x_0))$ and let $A=V\cup\{x_0\}\times (0,f(x_0))$. \smallskip If $t\leq 0$ then $A_t$ is empty. If $0<t\leq t_0$ then $A_t$ is homeomorphic to a line with endpoints at $\{x_0\}\times (0,f(x_0))$. If $t>t_0$ then either $A_t$ is empty or the closure of every connected component of $A_t$ intersects the border of $A$ along $x_0\times(a,b)$. In this case $A_t\subset B$. \end{proof} \begin{Corollary}\label{edge} Assume that the Newton polygon of a polynomial $p\in\Rn[x,y]$ has a right outer edge that: begins at $(0,1)$, has a positive inclination, and its only lattice points are the endpoints. Then $p$~does not have a real Jacobian mate. \end{Corollary} \begin{proof} It is enough to prove that there exists a branch at infinity of the curve $p=0$ associated with the edge $S$ satisfying assumptions of Corollary~\ref{edge}. Let $(0,1)$, $(a,b)$ be the endpoints of $S$. Then the polynomial $p$ has two terms $Ax^ay^b$ and $By$ corresponding to the lattice points of $S$. Multiplying $x$, $y$ and $p$ by nonzero constants, we may reduce our considerations to $A=1$ and $B=-1$. Substituting $(x(t),y(t))=(ct^{b-1},t^{-a})$ we get $p(x(t),y(t))=(c^a-1)t^{-a} + \mbox{ terms of lower degrees}$. Hence, the sign of the polynomial $p$ on the curve $(x(t),y(t))$ for large $t$ depends on the sign of $c^a-1$. The curve $(x(t),y(t))$ for $t>0$ is a graph of a function. By the appropriate choice of $c$ we can find two functions $f_1$ $f_2$ such that $0<f_1<f_2$, $f_1(x) \sim f_2(x) \sim x^{\theta(S)}$, $p$ has negative values on the graph of $f_1$ and has positive values on the graph of $f_2$. By Lemma~3 this can happen if and only if there is a half-branch at infinity of the curve $p=0$ which is a graph of a function $g$ with $g(x)\sim x^{\theta(S)}$. \end{proof} \medskip \noindent \textbf{Remark.} Using toric modifications of the real plane one can present a shorter proof of Corollary~1. \begin{Example}\label{list} Every polynomial from the list: $p_1=y+xy^2+y^4$, $p_2=y+ay^2+xy^3$, $p_3=y+x^2y^2$, $p_4=y+ay^2+y^3+x^2y^2$ satisfies assumptions of Corollary~\ref{edge}. The Newton polygons of these polynomials are drawn below. \end{Example} \setlength{\unitlength}{12pt} \begin{picture}(24,5)(0,0) \newsavebox{\sn} \thinlines \savebox{\sn}(6,5){\put(0,0){\vector(1,0){5}} \put(0,0){\vector(0,1){5}} \put(0,0){\makebox(0,0){.}} \put(1,0){\makebox(0,0){.}} \put(2,0){\makebox(0,0){.}} \put(3,0){\makebox(0,0){.}} \put(0,1){\makebox(0,0){.}} \put(1,1){\makebox(0,0){.}} \put(2,1){\makebox(0,0){.}} \put(3,1){\makebox(0,0){.}} \put(0,2){\makebox(0,0){.}} \put(1,2){\makebox(0,0){.}} \put(2,2){\makebox(0,0){.}} \put(0,3){\makebox(0,0){.}} \put(1,3){\makebox(0,0){.}} \put(0,4){\makebox(0,0){.}} } \put(0,0){\usebox{\sn}} \put(0,0) {\begin{picture}(5,4)(0,0) \put(2.5,2.5){$\Delta(p_1)$} \thicklines \put(0,1){\line(1,1){1}} \put(0,4){\line(1,-2){1}} \put(0,1){\line(0,1){3}} \end{picture}} \put(7,0){\usebox{\sn}} \put(7,0) {\begin{picture}(5,4)(0,0) \put(2.5,2.5){$\Delta(p_2)$} \thicklines \put(0,1){\line(1,2){1}} \put(0,2){\line(1,1){1}} \put(0,1){\line(0,1){1}} \end{picture}} \put(14,0){\usebox{\sn}} \put(14,0) {\begin{picture}(5,4)(0,0) \thicklines \put(2.5,2.5){$\Delta(p_3)$} \put(0,1){\line(2,1){2}} \end{picture}} \put(21,0){\usebox{\sn}} \put(21,0) {\begin{picture}(5,4)(0,0) \put(2.5,2.5){$\Delta(p_4)$} \thicklines \put(0,1){\line(2,1){2}} \put(0,3){\line(2,-1){2}} \put(0,1){\line(0,1){2}} \end{picture}} \end{picture} \bigskip The polynomials in the above example are taken from~\cite{BO}. Theorem~1.3 in the cited paper states that these polynomials are canonical forms, up to affine substitution of polynomials of degree~4 without critical points and with at least one disconnected level set. Theorem~5.5 says that none of these polynomials has a real Jacobian mate. The method of its proof uses an integration based on Green's formula and requires an analysis of each case separately.
\section{Introduction} \label{sec:intro} Sometimes, the equatorial plane of a star is grossly misaligned with the orbital plane of at least one of its planets \citep[see, e.g.,][]{WinnFabrycky2015,Triaud2018}. The reasons for these high stellar obliquities are unclear. Most of the available data are for stars with hot Jupiters. Early on, it became clear that cool stars ($T_{\rm eff} \lesssim 6000$\,K) with hot Jupiters tend to have low obliquities \citep{FabryckyWinn2009}. Among those stars, observations of high obliquities have mainly been restricted to those with wider-orbiting giant planets ($a/R_\star \gtrsim 8$). In contrast, hotter stars with hot Jupiters have a much broader obliquity distribution \citep{Schlaufman2010,Winn2010}. Many of the theories that have been offered to explain these results invoke formation pathways for hot Jupiters in which the planet's orbital plane is tilted away from the protoplanetary disk plane. Good alignment might eventually be restored by tidal dissipation, but only for cool stars with especially close-orbiting giant planets, owing to stronger tidal dissipation and more rapid magnetic braking \citep{Winn2010,Dawson2014}. It remains possible, though, that high obliquities are unrelated to hot Jupiter formation and instead reflect more general processes in star and planet formation. One way to make progress is to measure the obliquities of stars with different types of planets, including smaller and wider-orbiting planets than hot Jupiters. The {\it Kepler} survey provided a sample of about 4{,}000 transiting planets around FGKM host stars, the majority of which have sizes between 1 and 4\,$R_\oplus$ and orbital periods ranging from 1 to 100 days \citep{Borucki2017,Thompson2018}. In most cases, measuring the obliquities of individual stars via the Rossiter-McLaughlin effect is impractical, owing to the faintness of the star and the small size of the planet. But the sample is large enough for statistical probes of the obliquity distribution. In particular, since the planetary orbits are all being viewed at high inclination with respect to the line of sight (a requirement for transits to occur), any constraints on the inclination distribution of the stellar rotation axes are also constraints on the stellar obliquity distribution. \citet{Mazeh+2015} performed one such study, based on measurements of rotationally-induced photometric variability. They found clear evidence that stars cooler than about 6000\,K have low obliquities, as well as suggestive evidence that hotter stars have a broad range of obliquities, with caveats to be discussed later in this paper. \cite{Winn+2017} and \cite{MunozPerets2018} performed studies using measurements of the projected rotation velocities ($v\sin i$). Both sets of authors examined the cases in which measurements of $v\sin i$, rotation period, and stellar radius are available, to obtain constraints on $\sin i$. The results were generally consistent with low obliquities $(\lesssim\,30^\circ)$. A limitation of these studies was that the sample of stars with detected rotation periods may suffer from biases that favor low-mass stars, high inclinations, and relatively short rotation periods, all of which facilitate the detection of a photometric rotation signal. \cite{Winn+2017} also compared the $v\sin i$ distributions of planet hosts and samples of stars chosen without regard to planets. The planet hosts had systematically higher values of $v\sin i$, again suggesting low obliquities. However, the comparison stars were drawn from heterogeneous sources, some of which may have been biased against high-inclination stars and rapid rotators. This called into question the key assumptions that the comparison stars are randomly oriented and have the same distribution of rotation velocities as the planet hosts. The work presented here is a new application of this method with an improved control sample. The rest of this paper is organized as follows. Section 2 describes our observations of the candidate control stars. Section 3 compares the spectroscopic properties of the planet hosts and the control stars. Section 4 presents two statistical tests for differences between the $v\sin i$ distributions of the two samples. Section 5 describes a simple model that was used to characterize the obliquity distribution of the planet hosts. Section 6 summarizes and describes possible implications for theories of obliquity or inclination excitation. \section{Observations} \label{sec:obs} The best stars for this type of study are early-G and late-F main-sequence stars. Cooler stars typically rotate too slowly to permit reliable measurements of $v\sin i$, and hotter stars are not well represented in the {\it Kepler} sample of planet-hosting stars. We drew the data for the planet hosts from the California-{\it Kepler} Survey \citep[CKS,][]{Petigura+2017,Johnson2017}. The CKS team performed Keck/HIRES spectroscopy of 1{,}305 stars with transiting planets, of which several hundred have spectral types in the desired range. They provided precise determinations of the effective temperature ($T_{\rm eff}$), surface gravity ($\log g$), iron metallicity ([Fe/H]), and projected rotation velocity ($v\sin i$). We needed to construct a control sample as similar as possible to the {\it Kepler} planet hosts, but selected without regard to rotation rate or orientation. Only with such a sample can any systematic differences in $v\sin i$ between the planet hosts and control stars be attributed to the obliquity distribution of the planet hosts. We also wanted to observe the control stars with the same instrument as the planet hosts, and use the same software to analyze the spectra. This is important because measurements of $v\sin i$ are subject to systematic errors related to instrumental resolution and treatment of other line-broadening mechanisms. We selected candidate control stars based on low-resolution spectroscopy of the {\it Kepler} field by the LAMOST team \citep{Ren+2016}. We defined a similarity metric between two stars: \begin{equation} D^2 = \left( \frac{\Delta T_{\rm eff}}{100\,{\rm K}} \right)^2 + \left( \frac{\Delta\log g}{0.10\,{\rm dex}} \right)^2 + \left( \frac{\Delta{\rm [Fe/H]}}{0.10\,{\rm dex}} \right)^2. \end{equation} The quantities in the denominators are typical LAMOST uncertainties. We chose a trial value of $m_{\rm lim}$, the limiting apparent magnitude in the {\it Kepler} bandpass. For each CKS star in the desired range of effective temperatures, we selected the LAMOST star with the minimum $D$ and $m<m_{\rm lim}$. Then, we adjusted $m_{\rm lim}$ to be the brightest possible value for which two-sided Kolmogorov-Smirnov tests did not reject the hypotheses that the distributions of $T_{\rm eff}$, $\log g$ and [Fe/H] are the same for the CKS stars and the candidate control stars. This turned out to be $m_{\rm lim} = 11.1$, approximately 3 magnitudes brighter than the limiting magnitude of the planet hosts. We observed 188 candidate control stars with Keck/HIRES during the summer of 2018. The observations were spread out over several days, amounting to a total of about one half-night of Keck time. We used the same instrumental setup, observing protocols, data reduction software, and analysis procedures that were used by the CKS. In particular, the basic spectroscopic parameters of each star were determined with SpecMatch \citep{Petigura2015}, for which the CKS team demonstrated an internal precision of 60~K in $T_{\rm eff}$, 0.10~dex in $\log g$, 0.04~dex in [Fe/H], and 1.0 km\,s$^{-1}$ in $v\sin i$. The latest version of SpecMatch was applied to the Keck/HIRES spectra of both the planet hosts and the control stars as a single batch job, to ensure homogeneity in the analysis method.\footnote{In Paper~I of the CKS series of publications \citep{Petigura+2017}, the tabulated spectroscopic parameters are based on an average of the results obtained with two different analysis codes: SpecMatch, and {\tt SME@XSEDE}. For our study, we used only SpecMatch.} Nineteen of the candidate control stars turned out to be spectroscopic binaries and were discarded from the sample. Following the same quality control procedures as the recent CKS study by \citet{FultonPetigura2018}, we also eliminated from consideration any star for which the \citet{Gaia2018} geometric parallax is not available, has a precision lower than 10\%, or disagrees with the spectroscopic parallax by more than 4-$\sigma$. \section{Sample construction} \label{sec:samples} Because the selection of candidate control stars was based on low-resolution data, in many cases the stars turned out to have spectroscopic parameters far away from those of the planet hosts. To construct samples with overlapping properties, we restricted both the planet hosts and the control stars to have SpecMatch parameters satisfying \begin{eqnarray*} 5950\,{\rm K} < &T_{\rm eff}& < 6550\,{\rm K},\\ 3.95 < &\log g& < 4.45,~{\rm and}\\ -0.3 < &{\rm [Fe/H]}& < 0.3. \end{eqnarray*} Since we were interested in the obliquities of stars with small planets --- and not hot Jupiters --- we only included stars having at least one planet smaller than 4\,$R_\oplus$. This led to our final samples of 150 planet hosts and 101 control stars. The SpecMatch parameters of these stars are given in Tables~2 and 3, which appear at the end of the paper. Figure~1 shows the radius/period distribution of the known transiting planets for all the planet hosts in our sample. \begin{figure} \setcounter{figure}{0} \label{fig:RadiusPeriod} \begin{center} \includegraphics[width=0.45\textwidth]{RadiusPeriod.pdf} \end{center} \caption{Planetary radius and orbital period, for all the known transiting planets associated with the 153 planet hosts in our sample.} \end{figure} \begin{figure*} \label{fig:SampleComparison} \begin{center} \includegraphics[width=0.9\textwidth]{SampleComparison.pdf} \end{center} \caption{Comparison between the properties of the planet hosts (blue) and control stars (red). The precision of the measurements of $T_{\rm eff}$, $\log g$, [Fe/H], and $v\sin i$ is 60~K, 0.10~dex, 0.04~dex, and 1.0 km\,s$^{-1}$, respectively. In the the upper right panel, the vertical coordinate is the absolute magnitude based on the 2MASS apparent magnitude $m_K$ and the Gaia DR2 parallax, with no allowance for extinction. The radius, mean density, and age are from fits to MIST isochrones, and have typical internal uncertainties of 3\%, 6\%, and 30\%, respectively.} \end{figure*} We used two-sided Kolmogorov-Smirnov tests to check on the ``null hypothesis'' that the planet hosts and control stars have spectroscopic parameters drawn from the same parent distribution. The null hypothesis cannot be ruled out for $T_{\rm eff}$ ($p=0.57$), $\log g$ ($p=0.99$), or [Fe/H] ($p=0.44$). Likewise, the Anderson-Darling test cannot reject the null hypothesis for any of those three parameters ($p=0.66$, 0.99, and 0.43, respectively).\footnote{For these tests, as well as the other nonparametric tests described in this paper, the $p$-values were determined by bootstrap resampling, not by analytic approximations.} The preceding tests did not find any differences in the distributions of individual parameters, but are not capable of checking for differences in the joint distribution of two parameters. For this, we performed the two-dimensional generalization of the Kolmogorov-Smirnov test described by \cite{PressTeukolsky1988}, which they attributed to earlier work by \citet{FasanoFranceschini1987} and \citet{Peacock1983}. We tested the joint distributions of $(T_{\rm eff}, \log g)$, $(T_{\rm eff}, {\rm [Fe/H]})$, and $(\log g, {\rm [Fe/H]})$. In all three cases, the test result was compatible with the hypothesis that the parameters are drawn from the same joint distribution ($p>0.3$). While these tests are only 2-d and not 3-d, and share the same shortcomings as the original KS test \citep[see, e.g.,][]{FeigelsonBabu2012}, they give us some confidence that the control stars are similar to the planet hosts. Figure~2 shows the distributions of the spectroscopic parameters and other parameters of interest. This includes the $K$-band absolute magnitude, computed from the 2MASS apparent magnitude \citep{Cutri+2003} and the Gaia parallax \citep{Gaia2018} without any correction for extinction. The other parameters depicted are the mass, radius, mean density, and age of the stars, based on fitting the spectroscopic parameters to the MIST stellar-evolutionary models \citep{Choi2016}, using the method described by \cite{FultonPetigura2018}. \section{Model-Independent Tests} \label{sec:model-independent} \begin{figure*} \label{fig:VsiniTeff} \begin{center} \includegraphics[width=0.8\textwidth]{VsiniTeff.pdf} \end{center} \caption{Measurements of projected rotation velocity versus effective temperature, for planet hosts (blue) and control stars (orange). The diamonds are binned values of $v\sin i$. For effective temperatures cooler than 6250\,K, the planet hosts have higher mean values of $v\sin i$ than the control stars, indicating a tendency toward spin-orbit alignment.} \end{figure*} Figure~3 shows the projected rotation velocity as a function of effective temperature, both for individual stars and for averages within temperature bins. The bins were chosen to have a width of 50\,K for stars cooler than 6250\,K, and 100\,K for the less numerous hotter stars. For both samples, the average value of $v\sin i$ rises with $T_{\rm eff}$, as expected; this temperature range spans the well-known ``Kraft break'' above which stars are observed to rotate faster \citep{Struve1930,Kraft1967}. This trend is attributed to the reduced rate of magnetic braking for hot stars that lack thick outer convective envelopes. It appears from Figure~3 that the relatively cool planet-hosting stars ($T_{\rm eff} < 6250$\,K) tend to have higher $v\sin i$ values than the control stars. This is a sign that these planet hosts have systematically higher values of $\sin i$ and therefore have low obliquities. We performed two statistical tests to quantify the difference in the $v\sin i$ distributions. First, we performed the two-dimensional Kolmogorov-Smirnov test referenced earlier, using $T_{\rm eff}$ and $v\sin i$ as the two dimensions. The null hypothesis that the planet hosts and control stars have values of these two parameters drawn from the same joint distribution is assigned $p=0.028$. When applied only to the planet hosts and control stars with $T_{\rm eff} < 6250$\,K, the same test gives $p=0.0034$, representing a stronger rejection of the null hypothesis. The second test was based on the observation that the planet hosts have a mean $v\sin i$ that exceeds that of the control stars in all of the first 6 temperature bins shown in Figure~3. How often would differences at this level occur by chance, if $T_{\rm eff}$ and $v\sin i$ for all the stars were drawn from the same two-dimensional distribution? We answered this question through a Monte Carlo procedure. We quantified the difference between the two distribution with the statistic \begin{equation} S \equiv \sum_{n=1}^8 \frac{ \langle v\sin i\rangle_{{\rm p},n} - \langle v\sin i\rangle_{{\rm c},n} }{ \sqrt{ \sigma^2_{{\rm p},n} + \sigma^2_{{\rm c},n} } }, \end{equation} where $\langle v\sin i\rangle_n$ is the mean value $v\sin i$ within the $n$th temperature bin; $\sigma_n$ is the corresponding standard deviation of the mean; and ``p'' and ''c'' refer to the planet sample and the control sample, respectively. The real data have $S_{\rm obs} = 8.3$. To create simulated data sets, we combined the 150 planet hosts and 101 control stars to form a combined sample of 251 stars, and then randomly drew (with replacement) 150 members of the combined sample to serve as ``planet hosts'' and 101 members to serve as ``control stars.'' By construction, the simulated data sets have parameters that are drawn from the same joint distribution. We computed the $S$ statistic for each of $10^5$ simulated data sets; in no case did we find $S>S_{\rm obs}$. Therefore, according to this test, $p<10^5$. These model-independent tests confirmed the visual impression that the $v\sin i$ distributions of the planet hosts and control stars are significantly different, at least for the stars with $T_{\rm eff}<6250$\,K. In the following sections, we use a simple model to quantify the resulting constraints on the obliquity distribution of the planet-hosting stars. \section{A Simple Model} \label{sec:analysis} \begin{figure*} \label{fig:VsiniTeff_withModels} \begin{center} \includegraphics[width=0.8\textwidth]{VsiniTeffWithModels.pdf} \end{center} \caption{Measurements of projected rotation velocity versus effective temperature, for planet host (blue) and control stars (orange). The curves illustrate the best-fitting models. In the top panel, all the planet hosts are assumed to have the same value of $\langle \sin i\rangle$. In the bottom panel, the hosts cooler than 6250\,K were allowed to have a different value of $\langle \sin i \rangle$ from the hosts hotter than 6250\,K. In both panels, the gray dashed curve is the mean rotation velocity $\langle v\rangle$, the blue curve is $\langle v\rangle \langle \sin i\rangle$ fitted to the planet hosts, and the orange curve is $\langle v\rangle\times \pi/4$ fitted to the control stars.} \end{figure*} \subsection{Premises} Our model is based on the following premises: \begin{enumerate} \item A star's rotation velocity $v$ and inclination $i$ are independent variables. This seems uncontroversial, since the rotation velocity is an intrinsic quantity, while the inclination depends on our arbitrary position within the galaxy. \item For any value of the effective temperature, the control stars and the planet hosts have the same distribution of rotation velocities. This is justified by the sample construction and comparisons presented in Section~3. \item The mean rotation velocity $\langle v\rangle$ is a quadratic function of effective temperature. This is a simplifying assumption based on the trend observed in Figure~3. \item The measurements of $v\sin i$ for the control stars and the planet hosts are subject to the same systematic uncertainties. Ensuring this is the case was the motivation for obtaining all the spectra with the same instrument and analyzing them with the same code. \item The control stars are randomly oriented in space. \end{enumerate} To these, we add a sixth premise, and consider two different cases: \begin{itemize} \item[6a.] The obliquities of the transiting planet hosts are all drawn from the same distribution. \setcounter{enumi}{4} \item[6b.] There are two different obliquity distributions: one for hosts cooler than 6250\,K, and one for hosts hotter than 6250\,K. \end{itemize} The second case is inspired by the appearance of Figure~3, as well as the fact that the obliquity distribution of hot Jupiter hosts has been observed to broaden as the temperature is increased past 6250\,K, the approximate location of the Kraft break. The only aspect of the obliquity distribution that is well constrained by the data is $\langle \sin i\rangle$, the mean value of $\sin i$ for the planet hosts. For this reason, our models include $\langle \sin i\rangle$ as a free parameter but do not adopt a particular functional form for the obliquity distribution. A population of randomly oriented stars would have $\langle \sin i\rangle = \pi/4 \approx 0.785$, and a population of transiting-planet hosts with low obliquities would have $\langle \sin i\rangle \approx 1$. We fitted a single model to all of the stars, both the planet hosts and the control stars. For all the stars, the mean rotation velocity in the model is \begin{equation} \langle v\rangle(\tau) = c_0 + c_1 \tau + c_2 \tau^2, \end{equation} where \begin{equation} \tau \equiv \frac{T_{\rm eff} - 6250\,{\rm K}}{300\,{\rm K}} \end{equation} varies from $-1$ to $+1$, and $c_0$, $c_1$, and $c_2$ are free parameters. The mean $v\sin i$ value in the model depends on whether the star is a control star or a planet host: \begin{eqnarray} \langle v\sin i\rangle_n &=& \langle v\rangle_n \times \frac{\pi}{4}~~({\rm control~stars}) \\ \langle v\sin i\rangle_n &=& \langle v\rangle_n \times \langle \sin i\rangle~~({\rm planet~hosts}), \end{eqnarray} where we have used the fact that $v$ and $\sin i$ are uncorrelated. Thus, in this model, the polynomial coefficients are constrained by all of the stars, and the $\langle \sin i\rangle$ parameter is constrained by the planet hosts. The goodness-of-fit statistic was taken to be \begin{equation} \chi^2 = \sum_{n=1}^{251} \left( \frac{ v\sin i_{{\rm obs}, n} - \langle v\sin i\rangle_{{\rm calc}, n} } {1\,{\rm km\,s}^{-1}} \right)^2, \end{equation} where $v\sin i_{{\rm obs}, j}$ is the observed value of $v\sin i$ of the $n$th star, $\langle v\sin i\rangle_{{\rm calc}, i}$ is the mean value of $v\sin i$ calculated according to the model, and $1\,{\rm km\,s}^{-1}$ is the measurement uncertainty. \begin{figure} \label{fig:Posterior} \begin{center} \includegraphics[width=0.45\textwidth]{Posterior.pdf} \end{center} \caption{Posterior probability distributions for $\langle \sin i\rangle$, marginalized over all other parameter values. The gray curve shows the case in which the obliquities of all the planet hosts were assumed to be drawn from the same distribution. The red and blue curves shows the case in which the stars with $T_{\rm eff}<6250$\,K were allowed to have a different obliquity distribution from the stars with $T_{\rm eff}>6250$\,K.} \end{figure} \subsection{Results} For the case of a single obliquity distribution (premise 5a), the best-fitting model has $\langle \sin i \rangle = 0.856$ and $\chi^2_{\rm min}= 935$, with 247 degrees of freedom (251 data points and 4 free parameters). The model does not fit the data points to within the measurement uncertainties, nor should we expect it to fit so well. An individual measurement of $v\sin i$ departs from the calculated $\langle v\sin i\rangle$ not only because of the measurement uncertainty, but also because of the intrinsic dispersion in the rotation velocities and the dispersion in $\sin i$. These deviations are drawn from different distributions, neither of which is known well. For this reason, we used a bootstrap procedure to establish the confidence intervals for the model parameters. We created $10^5$ simulated data sets, each with the same number of planet hosts and control stars as the real data set, by drawing data points randomly (with repetitions allowed) from the real data. The model was fitted to each simulated data set by minimizing the $\chi^2$ statistic. The resulting collection of $10^5$ parameter sets was interpreted as a sampling from the joint probability density of the parameter values. For the case of a single obliquity distribution (premise 6a), the bootstrap procedure gave $\langle \sin i\rangle = 0.856\pm 0.036$, where the uncertainty interval encompasses 68\% of the bootstrap simulation results. For the case of two different obliquity distributions (premise 6b), the stars cooler than 6250\,K have $\langle \sin i\rangle = 0.928\pm 0.042$. The higher value obtained in this case implies a stronger tendency toward spin-orbit alignment; indeed, the result differs by only 1.7-$\sigma$ from the condition of perfect alignment. Conversely, the stars hotter than 6250\,K have $\langle \sin i\rangle = 0.794\pm 0.052$, which is consistent with random orientations ($\pi/4\approx 0.785$). Table 1 gives the results for all the parameters. Figure~4 show the best-fitting model curves, and Figure~5 shows the probability distributions for the key parameters. \begin{deluxetable}{ccc} \label{tbl:allparameters} \tablecaption{Parameter values.} \tablehead{ Parameter & Single obliquity & Two obliquity \\ value & distribution & distributions } \startdata \multirow{2}{*}{$\langle \sin i\rangle$} & \multirow{2}{*}{$0.856\pm 0.036$ } & $0.928\pm 0.042$, $<$6250\,K \\ & & $0.794\pm 0.052$, $>$6250\,K \\ $c_0$ & $9.57\pm 0.29$ & $9.44\pm 0.28$ \\ $c_1$ & $8.01\pm 0.54$ & $8.87\pm 0.61$ \\ $c_2$ & $3.30\pm 0.62$ & $4.05\pm 0.62$ \enddata \end{deluxetable} \subsection{von-Mises Fisher distribution} \begin{figure} \label{fig:vmfdist} \begin{center} \includegraphics[width=0.5\textwidth]{VMF_meansini.pdf} \end{center} \caption{Relationship between the concentration parameter $\kappa$ of the von-Mises Fisher distribution and the mean values of $\sin i$ (solid black line, left-side axis) and obliquity (gray dashed line, right-side axis). On the left, the colored bars indicate the 1-$\sigma$ allowed ranges of $\langle\sin i\rangle$ for the planet hosts, using the model described in Section 5.} \end{figure} Further steps are needed to obtain quantitative constraints on the obliquity distribution, because $\sin i$ is only one aspect of the obliquity. The other aspect is the position angle $\Omega$ of the projection of the spin axis onto the orbital plane. The relationship is \begin{equation} \sin i = \sqrt{1 - \sin^2\theta \cos^2\Omega}. \end{equation} Even though our model is not committed to a specific shape for the obliquity distribution, we find it useful to interpret the results with reference to a von-Mises Fisher (vMF) distribution, \begin{equation} p(\hat{n}_\star) \propto \exp(\kappa\hat{n}_\star\!\bigcdot\!\,\hat{n}_{\rm o}), \end{equation} where $\hat{n}_\star$ and $\hat{n}_{\rm o}$ are the unit vectors in the directions of the spin axis and the orbital axis, respectively, and the obliquity $\theta$ is equal to $\cos^{-1}(\hat{n}_\star \bigcdot \hat{n}_{\rm o})$. The vMF distribution is a widely-used model in directional statistics that resembles a two-dimensional Gaussian distribution wrapped around a sphere. Just as the Gaussian distribution has the maximum entropy for a given variance, the vMF distribution has the maximum entropy for a fixed value of the mean obliquity \citep{Mardia1975}. As $\kappa\rightarrow 0$, the distribution becomes isotropic, and as $\kappa \rightarrow \infty$, it approaches a delta-function centered on $\hat{n}_{\rm o}$. We numerically computed the relationship between $\kappa$ and the mean obliquity $\langle \theta\rangle$, as well as $\langle \sin i\rangle$, assuming that $\Omega$ is uniformly distributed between $0^\circ$ and $360^\circ$. The results are shown in Figure~6, along with the constraints on $\langle \theta\rangle$ obtained from our best-fitting models of the data. When all of the planet hosts are modeled together (premise 5a), the 1-$\sigma$ allowed range for $\kappa$ is from 1.7 to 4.2, and the mean obliquity ranges from 37 to 58 degrees. When the planet hosts are divided into two samples according to effective temperature (premise 5b), the stars cooler than 6250\,K have 1-$\sigma$ ranges of $\kappa=3.8$-16 and $\langle\theta\rangle = 18$-38 degrees, while the ranges for the hotter stars are $\kappa = 0$-2.3 and $\langle\theta\rangle = 49$-88 degrees. These results can be compared to previous inferences of $\kappa$ from different techniques and different samples of planet-hosting stars. \cite{FabryckyWinn2009} found $\kappa>7.6$ (95\% conf.) based on the first 11 observations of the Rossiter-McLaughlin effect, all of which were hot Jupiter hosts. Since that time, many more misaligned hot Jupiters have been found; a more up-to-date analysis by \cite{MunozPerets2018} gave $\kappa=2.2_{-0.6}^{+0.2}$. This is comparable to the obliquity distribution of the hotter half of the stars in our sample, while the cooler half of the stars have a greater tendency to be well-aligned. Previous inferences of the obliquity distribution of {\it Kepler} stars have mainly focused on the subset of stars with detected rotation periods. Such samples may be suffer from biases related to orientation and transiting planet detection, as noted in the Introduction. Nevertheless, in practice, our results are in agreement with the prior results. Since the stars in the previous studies were almost all cooler than $6250$\,K, the appropriate comparison is to cooler half of our sample, for which we obtained $\kappa=3.8$-16. \cite{MortonWinn2014} analyzed 70 {\it Kepler} stars, finding $\kappa=19_{-12}^{+73}$ for stars with multiple transiting planets, and $4.8_{-1.6}^{+2.0}$ for stars with only one detected transiting planet. To these results, \cite{Campante+2016} added asteroseismic determinations of $\sin i$ for 25 {\it Kepler} stars, finding $\kappa=11.5_{-5.7}^{+7.5}$ for the entire sample. \cite{Winn+2017} expanded the work by \cite{MortonWinn2014} to include 156 stars and found $\kappa \gtrsim 5$ regardless of transit multiplicity. Likewise, \cite{MunozPerets2018} analyzed a sample of 257 cool {\it Kepler} stars, and found $\kappa=14.5_{-6}^{+13.5}$. All of the confidence intervals of these previous studies overlap with ours, although in many cases the intervals are large. \subsection{Model validation} \label{subsec:validation} \begin{figure} \label{fig:SimulatedData} \begin{center} \includegraphics[width=0.45\textwidth]{SimulatedData.pdf} \end{center} \caption{Results of applying our modeling procedure to simulated data with different assumed values of $\langle \sin i\rangle$, as described in Section~\ref{subsec:validation}. The fitted values and uncertainties are consistent with the input values across the range of possible values.} \end{figure} To validate our modeling procedure, we fitted simulated data sets. We generated simulated data sets with different input values of $\langle \sin i\rangle$, and used our modeling procedure to ``recover'' the best-fitting value of $\langle \sin i\rangle$ and test for agreement. Each simulated data set was created as follows. The 101 control stars were assigned random orientations, and the 151 planet hosts were assigned fictitious obliquities drawn from a vMF distribution. For all the stars, a fictitious position angle was drawn from a uniform distribution. We assumed a quadratic relationship between $\langle v\rangle$ and $T_{\rm eff}$ based on the best-fitting model to the real data. Then, we assigned a $v\sin i$ value to each star: \begin{equation} v\sin i = \langle v\rangle (1 + 0.2 x_1) \sin i + (1\,{\rm km\,s}^{-1}) x_2, \end{equation} where $x_1$ and $x_2$ are independent random draws from a standard normal distribution $\mathcal{N}(0,1)$, to account for the intrinsic dispersion of rotation velocities (assumed to be 20\%) and the measurement uncertainty, respectively. Simulated data sets were created based on values of $\kappa$ ranging from 0 to 40, corresponding to nearly the full range of possible values of $\langle \sin i\rangle$. We fitted the simulated data sets using the same code that was used on the actual data. Figure~7 shows the results: the recovered values of $\langle \sin i\rangle$ agree with the input values to within the reported uncertainties, providing support for the validity of our procedure. \section{Discussion} \label{sec:end} Overall, the {\it Kepler} planet-hosting stars of spectral types from early-G to late-F have systematically higher values of the projected rotation velocity than similar stars chosen without regard to planets or spin-axis orientation. Although this trend had been seen by \citet{Winn+2017} and \citet{MunozPerets2018}, the improved control sample makes it possible to be more confident in quantitative comparisons. To explain the difference in terms of geometry, the obliquity distribution of the planet hosts must be intermediate between the limiting cases of perfect alignment and random directions. We analyzed the data using simple models for the obliquity distribution, and presented evidence that the hottest stars in the sample have a broader distribution than the less hot stars. Regardless of the details, the important point is that many of the stars in our sample (especially the late-F stars) appear to have larger obliquities than the Sun.\footnote{The Sun's obliquity is $6^\circ$ with respect to the total orbital angular momentum vector of the 8 planets, which is dominated by the contribution from Jupiter.} This has been known for a decade for the hosts of hot Jupiters, but to this point it has not been clear that it is also generally true of the hosts of other types of planets. A key assumption in our study is that the rotation velocities of the planet hosts and control stars are drawn from the same distribution. We tried to ensure this is the case through careful matching of observable spectroscopic parameters. Still, it remains possible that systematic differences exist. In principle, the control stars, being situated in a different and more nearby location in the Galaxy, may have systematically different rotation velocities than the planet hosts even for fixed values of the spectroscopic parameters, due to subtle differences in chemical composition or formation history. There might also be physical processes specific to the formation and evolution of {\it Kepler}-type planets that alter a star's rotational history. Tidal interactions with the known planets are generally too weak to affect the star's rotation, but one might speculate about previously ingested planets, or differing magnetic and accretion histories. Any such differences would be muted, though, by the fact that between one-third and one-half of the control stars also have {\it Kepler}-type planets that do not happen to be transiting. Despite these caveats, our conclusions are supported by two complementary lines of evidence. The first is the work by \citet{Mazeh+2015}, noted in Section 1. They studied the obliquities of {\it Kepler} stars using photometric variability data. Stronger variability is expected for stars viewed at high inclination, the perspective that allows spots and plages to rotate into and out of view. Therefore, if the transiting-planet hosts have low obliquities, they should show stronger variability than a sample of randomly-oriented stars, whereas for random obliquities, the planet hosts would show the same level of variability as the randomly-oriented stars. For stars cooler than about 6000\,K, \citet{Mazeh+2015} found the planet hosts to show stronger variability than stars without detected transiting planets, by approximately the factor of $4/\pi \approx 1.3$ that is expected if the planet hosts have low obliquities and the other stars are randomly oriented. They also found that this trend reverses for the hotter stars: the planet hosts display {\it weaker} variability than the randomly-oriented stars, with an amplitude ratio of 0.6. This was surprising, because even in the seemingly extreme case in which the planet hosts are randomly oriented, the amplitude ratio would be 1.0. For this ratio to fall below unity due only to differences in viewing angles, we would be led to the unexpected conclusion that the obliquities of the hot stars are preferentially near 90$^\circ$. However, \citet{Mazeh+2015} found that at least part of the difference between the variability levels of the hot planet hosts and the randomly-oriented stars is due to a selection effect. Namely, transiting planets are more readily detected around stars with intrinsically lower levels of photometric variability. Simulations of this selection effect showed that it was indeed significant, but not large enough to have reduced an intrinsic amplitude ratio of 1.3 all the way down to the observed ratio of 0.6. Thus, this study left open the possibility that the hot {\it Kepler} stars have a broader obliquity distribution than the cool stars, an interpretation that harmonizes with our findings. The second line of evidence for high obliquities among hot stars with planets other than hot Jupiters comes from recent observations of individual systems. We are aware of only two obliquity measurements for stars with effective temperatures between 5900\,K and 6450\,K that do not involve hot Jupiters, and in both cases, the obliquity is high. The first case is Kepler-408 ($T_{\rm eff} = 6088$\,K), which has an Earth-sized planet in a 2.5-day orbit. Asteroseismology revealed that the obliquity is approximately 45$^\circ$ \citep{Kamiaka+2019}. The second case is K2-290 ($T_{\rm eff} = 6302$\,K), which has a 3\,$R_\oplus$ planet in a 9.2-day orbit and a Jupiter-sized planet in a 48-day orbit. Observations of the Rossiter-McLaughlin effect show that the star's rotation is retrograde (Hjorth et al., submitted). Another relevant case is Kepler-56 \citep{Huber+2013}, which has two planets of sizes 6.5 and 9.8~$R_\oplus$ and orbital periods of 10.5 and 21 days. The host star is a subgiant with a mass of 1.3~$M_\odot$ and an effective temperature of 4840\,K, although it was probably about 6400\,K when it was on the main sequence. The stellar obliquity is at least 45$^\circ$, based on an asteroseismic analysis. Some day we may accumulate enough of these individual measurements to measure the obliquity distribution more directly. While this is not the place to evaluate specific theories in detail, we can list the previously published theories for obliquity excitation that have the desired property that they do not require the presence of a close-orbiting giant planet:\\[0.05in] \indent $\bullet$ A misalignment between the protoplanetary disk due to inhomogeneities in the molecular cloud \citep{Bate2010,Fielding+2015, Takaishi2020}, magnetic interactions \citep{Lai2011}, or a companion star \citep{Batygin2012,SpaldingBatygin2015,ZanazziLai2018}.\\[0.05in] \indent $\bullet$ Ongoing nodal precession driven by a stellar companion or wide-orbiting giant planet on a highly inclined orbit \citep{AndersonLai2018}.\\[0.05in] \indent $\bullet$ A resonance between the nodal precession rates of an inner planet and an outer planet that occurs during the dissipation of the protoplanetary disk \citep{Petrovich+2020}.\\[0.05in] \indent $\bullet$ Random tumbling of the spin-axis orientation of the photosphere due to stochastic internal gravity waves \citep{RogersLin2012}.\\[0.05in] Another desired property is that cooler stars with small planets should have low obliquities. The dividing line of about $6250$\,K is significant in stellar-evolutionary theory because the hot stars have thin or absent outer convective zones, leading to weaker or absent magnetic braking, more rapid rotation, and weaker tidal dissipation. Thus, it seems likely that a successful theory will involve these distinctions. At least two of the theories listed above make an explicit distinction between cool and hot stars: those of \cite{RogersLin2012} (which pertains only to hot stars) and \cite{SpaldingBatygin2015} (which appeals to the weaker magnetic field of hot stars). Of course, obliquities might be excited and damped by different mechanisms in different situations, including some that theoreticians have not yet identified. \acknowledgements We are grateful to the anonymous referee for a helpful critique of the manuscript, and to Subo Dong for providing the LAMOST data in a convenient format. J.N.W.\ thanks the members of the Princeton exoplanet discussion group and Heather Knutson's group for useful feedback, and Geoff Marcy for input at the outset of this project. J.N.W.\ also acknowledges support from a NASA Keck PI Data Award, administered by the NASA Exoplanet Science Institute. S.A.\ acknowledges support from the Danish Council for Independent Research through the DFF Sapere Aude Starting Grant No.\ 4181-00487B, and the Stellar Astrophysics Centre for which funding is provided by The Danish National Research Foundation (Grant agreement no.\ DNRF106). M.R.K.\ is supported by the NSF Graduate Research Fellowship grant no.\ DGE\,1339067. This work made use of data from the LAMOST (Guoshoujing) Telescope, a National Major Scientific Project built by the Chinese Academy of Sciences, for which funding was provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. We acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \facility{Keck:I (HIRES)} \facility{LAMOST} \startlongtable \begin{deluxetable}{cccccc} \label{tbl:planet_hosts} \tablecaption{Spectroscopic properties of the planet hosts.} \tablehead{ KIC no. & KOI no. & $T_\mathrm{eff}$ [K] & $\log g$ & [Fe/H] & $v\sin i$ [km~s$^{-1}$] } \startdata \input{planet_hosts} \enddata \tablecomments{The uncertainties in $T_\mathrm{eff}$, $\log g$, [Fe/H], and $v\sin i$ are 60\,K, 0.1, 0.1, and 1\,km~s$^{-1}$, respectively.} \end{deluxetable} \startlongtable \begin{deluxetable}{ccccc} \label{tbl:control_stars} \tablecaption{Spectroscopic properties of the control stars.} \tablehead{ KIC no. & $T_\mathrm{eff}$ [K] & $\log g$ & [Fe/H] & $v\sin i$ [km~s$^{-1}$] } \startdata \input{control_stars.tex} \enddata \tablecomments{The uncertainties in $T_\mathrm{eff}$, $\log g$, [Fe/H], and $v\sin i$ are 60\,K, 0.1, 0.1, and 1\,km/s, respectively.} \end{deluxetable}
\section{Introduction} We consider the one dimensional compressible Euler equations in the Lagrangian coordinates: \begin{eqnarray}\label{equ-Euler} \begin{cases} \tau_t - u_x = 0,\\ u_t+p_x=0,\\ \left(e+\frac{u^2}{2}\right)_t + (up)_x = 0, \end{cases} \end{eqnarray} where $x$ is the space variable, $t$ is the time variable, $u$ is the velocity, $\rho$ is the density, $\tau=\rho^{-1}$ is the specific volume, $p$ is the pressure, $e$ is the internal energy. Due to the second law of thermodynamics, $\tau$, $p$ and $e$ are not independent, the relation within which is determined by the state equation(\textit{c.f.} \cite{Dafermos}). Normally, another physical quantity entropy $S$ is considered, which formulates the state equation as $p=p(\tau, S)$. For $C^1$ solutions, the third equation of \eqref{equ-Euler} is equivalent to the conservation of entropy (\textit{c.f.} \cite{smoller}): \begin{equation}\label{equ-entropy} S_t =0. \end{equation} Apparently, \eqref{equ-entropy} shows that $S$ is just the function of $x$. And, the general pressure law we consider in this paper is \begin{equation}\label{equ-p} p=p(\tau, S)=p(\tau, S(x)). \end{equation} Then the system \eqref{equ-Euler} becomes \begin{eqnarray}\label{system-general-pressure} \begin{cases} \tau_t - u_x = 0,\\ u_t + p(\tau, S(x))_x = 0.\\ \end{cases} \end{eqnarray} We consider the calssical solution of initial value problem for \eqref{system-general-pressure} with initial data $$ \tau(x,t=0)=\tau_0(x), \quad u(x,t=0)= u_0(x). $$ Compressible Euler equations is one of the most important physical models for systems of hyperbolic conservation laws. It is well known that shock waves are typically formed in finite time and the analysis on the system is difficult because of the lack of regularity. The singularity formation for both the small initial data problem and the large initial data problem has long been a very serious issue for the systems of conservation laws. The well-posedness theory for systems of hyperbolic conservation laws could be found in \cite{Bressan, Dafermos, Glimm,TW}. When initial data is small, the singularity formation has been well studied for decades. Lax \cite{lax2} proved that singularity forms in finite time for the general systems of strictly hyperbolic conservation laws with two unknowns with some initial compression. For general systems of conservation laws, \cite{Fritz John, li-zhou-kong, Li daqian, Liu1} provide fairly complete results for small data. Specifically, these results prove that the shock formation happens in finite time in any truly nonlinear characteristic field if the initial data includes compression. However, the large data singularity formation theory has been finally established in very recent papers \cite{CPZ, chen-young-zhang} for isentropic Euler equations with $\gamma$-law pressure ($p=K_1\tau^{-\gamma}$) and the full compressible Euler equations of polytropic ideal gas ($p=K_2e^{S/c_\tau}\tau^{-\gamma}$), where $K_1$, $K_2$ are positive constants and $\gamma>1$ is the adiabatic gas constant. The key point in proving the finite time shock formation for large solution is to have sharp global upper and lower bounds of the density. More precisely, if we restrict our consideration on singularity formation for full compressible Euler equations, the uniform upper bound of density is needed for any $\gamma>1$, but the time-dependent lower bound of density is needed only for the most physical case $1<\gamma<3$ (\textit{c.f.} \cite{G3}). The uniform upper bound on density for $\gamma$-law pressure case has been found by \cite{chen-young-zhang} which directs to a resolution of the shock formation when $\gamma\geq 3$. The singularity formation problem when $1<\gamma<3$ was finally resolved by \cite{CPZ}, in which the authors proved a crucial time-dependent lower bound estimate on density lower bound. Later on, the time-dependent lower bound of density is improved to its optimal order $O(1/t)$ in \cite{geng chen lower bound}. Nevertheless, for the full compressible Euler equations with general pressure law, the singularity formation results for non-isentropic case are still not satisfied when the smallness assumption on the initial data is removed. In fact, a complete finite time gradient blow-up result has been showed in \cite{CPZ} when entropy $S$ is a given constant. Furthermore, \cite{chen-young} provides a singularity formation result for the non-isentropic general pressure law case. Unfortunately, in \cite{chen-young}, there are still several a priori conditions on the pressure function which are not automatically satisfied for the gas dynamics. The target of this paper is to establish a better singularity formation result on non-isentropic Euler equations without such kind of a priori assumptions. The key idea is to establish a uniform upper bound estimate on density, which was lack for general pressure law case previously. In this case, the lower bound of density is redundant. Our proof relies on the careful study on the decoupled Riccati type ordinary differential equations on gradient variables which was provided in \cite{chen-young}. Using our new estimates, we can get the constant lower bound on coefficients of the Riccati type equations, and the quadratic nonlinearity implies the derivatives must blow-up in finite time. Through out this paper, we need to propose the following assumptions on the pressure: there exists a positive function $m=m(S)$, positive constants $A$ , $k>1$, $k_1, k_2$ and $l_i$ $(i=1,2,\cdots, 8)$ such that, for $\tau\in(0, +\infty)$, \begin{eqnarray} \textbf{(H1)} && p_\tau<0, \quad p_{\tau\tau}>0,\quad \lim\limits_{\tau\rightarrow 0} p(\tau) = + \infty, \quad \lim\limits_{\tau\rightarrow + \infty} p(\tau) = 0,\label{relation-p1}\\ \textbf{(H2)} &&\int_0^1 \sqrt{-p_\tau} d\tau = + \infty,\quad\quad \int_1^{+\infty} \sqrt{-p_\tau} d\tau < + \infty,\label{relation-p2}\\ \textbf{(H3)} && l_2 p c^{\frac{7}{2}} \leq pp_{\tau\tau} \leq l_1c^{\frac{7}{2}}, \label{assumption-p-ptautau-ptau}\\ && 2(k-1)(-p_\tau)^2\geq k p p_{\tau\tau},\quad (5+A) (p_{\tau\tau})^2 - 4 p_\tau p_{\tau\tau\tau} \geq 0,\label{p-tau-k-k-1-relation-A-ptau}\\ \textbf{(H4)} && \frac{m'(x)}{k_2 m} p \leq p_\mu \leq \frac{m'(x)}{k_1 m} p, \quad \frac{m'}{l_4 m} p_\tau \leq p_{\tau\mu} \leq \frac{m'}{l_3 m} p_\tau,\label{inequ-pmu} \\ && \frac{m'}{l_6 m} p \leq p_{\mu\mu} \leq \frac{m'}{l_5 m} p, \quad \frac{m'}{l_8 m} p_{\tau\tau} \leq p_{\tau\tau\mu} \leq \frac{m'}{l_7 m} p_{\tau\tau}. \label{inequ-ptaumu-pmumu-inequ-ptaotaomu} \end{eqnarray} Here, the sound speed is \begin{equation}\label{equ-c} c=\sqrt{-p_\tau(\tau, S)}, \end{equation} and $p_\mu=\frac{\partial p(\tau,S(x))}{\partial S}S'(x)$. \begin{remark} $(\textbf{H1})$ is physically motivated for classical hydrodynamics (\textit{c.f.} \cite{courant,menikoff-plohr}). $(\textbf{H2})$ is the sound speed condition. $(\textbf{H3})$ and $(\textbf{H4})$ are the nonlinearity conditions. \end{remark} Now, we introduce the derivatives combination of $(\tau, u)$: \begin{eqnarray}\label{defi-y-q} \begin{split} y:=\sqrt{c} (u+h)_x + \frac{p_\mu}{\sqrt{c}}-I, \quad \quad q:=\sqrt{c} (u-h)_x - \frac{p_\mu}{\sqrt{c}}+I, \end{split} \end{eqnarray} where \begin{equation}\label{equ-h} h=\int_\tau^{+\infty}\sqrt{-p_\xi(\xi, S(x))}d\xi, \quad I = \int_{h_0}^h \frac{\sqrt{c}}{2}\left(\frac{p_\mu}{c}\right)_h dh, \end{equation} and $h_0$ is a constant. Under the above assumptions, we can present the main theorem of this paper. \begin{theorem}\label{main-thm} Suppose that \textbf{(H1)}-\textbf{(H4)} are satisfied, assume that the initial entropy $S=S(x)$ is $C^1$, finite piecewise monotonic and has bounded total variation, if $\left(\tau_0(x), u_0(x)\right)$ are $C^1$ functions, and there are positive constants $k_{01}$ and $k_{02}$ such that $$ \|\left(\tau_0(x), u_0(x)\right)\|_{C^1} \leq k_{01}, \quad \tau_0(x)\geq k_{02}, \quad \mbox{for} ~x\in\mathbb{R}. $$ Then, the solution of the Cauchy problem of \eqref{system-general-pressure} blows up in finite time, if \begin{equation}\label{inequ-y0-q0} \inf_{x\in \mathbb{R}}\{y(x,0), q(x,0)\}<-N. \end{equation} Here $N$ is a positive constant depending only on $k_{01}$, $k_{02}$ and initial entropy function. \end{theorem} This paper is organized as follows: in section 2, we introduce some notations and prove the properties of the pressure $p$. In section 3, we obtain the $L^\infty$ boundedness of the Riemann invariants, and this gives the upper bound of density and wave speed. In section 4, we prove the finite time singularity formation by analysising the Riccati type equations. \section{Notations and Preparations} We denote the forward and backward characteristics by $$ \frac{dx}{dt}=c \quad \mbox{and} \quad \frac{dx}{dt}=-c, $$ and the corresponding directional derivatives along the characteristics are \begin{eqnarray*} \partial_+:=\partial_t+c\partial_x \quad \mbox{and} \quad \partial_-:=\partial_t-c\partial_x. \end{eqnarray*} Then we can denote the Riemann invariants by \begin{equation}\label{rs} s:=u+h \quad \mbox{and} \quad r:=u-h. \end{equation} We can easily get the following system of $u$ and $h$ (\textit{c.f.}\cite{chen-young}): \begin{equation}\label{equ-hup} \begin{cases} h_t + cu_x =0,\\ u_t + ch_x +p_\mu = 0. \end{cases} \end{equation} Thus, direct calculation shows that \begin{eqnarray}\label{partial-sr} \begin{split} &\partial_+s =(u_t+ch_x)+(h_t+cu_x) =-\frac{\partial p(\tau,S(x))}{\partial S}S'(x),\\ &\partial_-r =(u_t+ch_x)-(h_t+cu_x) =-\frac{\partial p(\tau,S(x))}{\partial S}S'(x). \end{split} \end{eqnarray} Furthermore, \eqref{inequ-pmu} yields the following inequality: \begin{equation}\label{s+r-} -\frac{p}{k_1}\frac{m'(x)}{m(x)}\leq \partial_+s=\partial_-r \leq -\frac{p}{k_2}\frac{m'(x)}{m(x)}. \end{equation} Next, we will prove the property of the pressure $p$ which will play a vital role in the proof of Theorem \ref{main-thm}. \begin{lemma} Under the assumptions of (\textbf{H1}) and (\textbf{H3}), \begin{equation}\label{inequ-pcsr} \frac{1}{2k} c(s-r)\leq p\leq \frac{1}{2} c(s-r), \end{equation} where $k>1$. \end{lemma} \begin{proof} If we can prove that $$ p\leq ch\leq kp, $$ then, due to $s-r=2h$, we deduce \eqref{inequ-pcsr}. For the first part, we know that $-p_\tau$ is monotone decreasing in view of $(\ref{relation-p1})_2$, so we have \begin{eqnarray*} ch =\int_\tau^{+\infty}\sqrt{-p_\tau}\sqrt{-p_\xi}d\xi \geq \int_\tau^{+\infty} \sqrt{-p_\xi}\sqrt{-p_\xi}d\xi=p, \end{eqnarray*} where we have used \eqref{equ-c}, $(\ref{equ-h})_1$ and $(\ref{relation-p1})_4$, thus we get $p\leq ch$. For the second part, if there is a constant $k>1$ such that \begin{equation}\label{inequ-k-p-c-ptau} k\left(\frac{p}{c}\right)_\tau \leq -\sqrt{-p_\tau}, \end{equation} integrating both sides from $\tau$ to $+\infty$ with respect to the time variable yields $ch\leq kp$. Actually, direct calculation shows that \begin{eqnarray*} \left(\frac{p}{c}\right)_\tau=\left(\frac{p}{\sqrt{-p_\tau}}\right)_\tau =-\sqrt{-p_\tau}+\frac{1}{2}pp_{\tau\tau}(-p_\tau)^{-\frac{3}{2}}, \end{eqnarray*} this yields that \eqref{inequ-k-p-c-ptau} is equivalent to $(\ref{p-tau-k-k-1-relation-A-ptau})_1$. Thus, we prove the result of this lemma. \end{proof} \begin{remark} It is worth noticing that $p\leq ch$ is the direct conclusion of the property of $p$. So, the following property is natural \begin{equation}\label{p-leq-ch} \frac{p}{c}\leq h. \end{equation} Here $p$, $c$ and $h$ are defined by \eqref{equ-p}, \eqref{equ-c} and \eqref{equ-h} respectively. \end{remark} \section{The $L^\infty$ boundedness of $s$ and $r$} In this section, we will first prove the $L^\infty$ boundedness of the Riemann invariants $s$ and $r$. Based on this, we can get the boundedness of $|u|$ and $|h|$. Finally, the upper bound of density $\rho$ will be obtained, which is crucial to gain the singularity formation. Notice that \begin{equation}\label{m+-} \partial_+m=cm'(x) \quad \mbox{and} \quad \partial_-m=-cm'(x). \end{equation} We will discuss from two aspects according to the sign of the derivative of $m(x)$: (\textbf{i}) When $m'(x)\geq0$, using \eqref{s+r-} and \eqref{inequ-pcsr}, we have $$ -\frac{c}{2k_1} \frac{m'(x)}{m(x)}(s-r)\leq \partial_+s=\partial_-r \leq -\frac{c}{2kk_2}\frac{m'(x)}{m(x)}(s-r), $$ which means, \begin{eqnarray*}\begin{split} -\frac{1}{2k_1} \frac{\partial_+m}{m}(s-r)\leq&\partial_+s\leq-\frac{1}{2kk_2}\frac{\partial_+m}{m}(s-r),\\ \frac{1}{2k_1} \frac{\partial_-m}{m}(s-r)\leq&\partial_-r\leq\frac{1}{2kk_2}\frac{\partial_-m}{m}(s-r). \end{split}\end{eqnarray*} Introducing new variables \begin{eqnarray}\label{defi-s1r1} s_{11} = m^{\frac{1}{2k_1}}s, \quad r_{11} = m^{\frac{1}{2k_1}}r, \quad s_{12}=m^{\frac{1}{2kk_2}}s, \quad r_{12}=m^{\frac{1}{2kk_2}}r, \end{eqnarray} then \begin{equation}\label{inequ-s1} \partial_+s_{11}\geq\frac{1}{2k_1}\frac{\partial_+m}{m}r_{11}, \quad \partial_+s_{12}\leq\frac{1}{2kk_2}\frac{\partial_+m}{m}r_{12}, \end{equation} \begin{equation}\label{inequ-r1} \partial_-r_{11}\geq\frac{1}{2k_1}\frac{\partial_-m}{m}s_{11}, \quad \partial_-r_{12}\leq\frac{1}{2kk_2}\frac{\partial_-m}{m}s_{12}. \end{equation} (\textbf{ii}) When $m'(x)\leq 0$, we similarly have \begin{equation}\label{inequ-s2} \partial_+s_{21}\geq\frac{1}{2kk_1}\frac{\partial_+m}{m}r_{21},\quad \partial_+s_{22}\leq\frac{1}{2k_2}\frac{\partial_+m}{m}r_{22}, \end{equation} \begin{equation}\label{inequ-r2} \partial_-r_{21}\geq\frac{1}{2kk_1}\frac{\partial_-m}{m}s_{21},\quad \partial_-r_{22}\leq\frac{1}{2k_2}\frac{\partial_-m}{m}s_{22}, \end{equation} where \begin{eqnarray}\label{defi-s2r2} s_{21} = m^{\frac{1}{2kk_1}}s,\quad r_{21} = m^{\frac{1}{2kk_1}}r,\quad s_{22} = m^{\frac{1}{2k_2}}s,\quad r_{22} = m^{\frac{1}{2k_2}}r. \end{eqnarray} According to the assumptions on the initial entropy in Theorem \ref{main-thm}, we have \begin{equation}\label{v} V:= \frac{1}{2c_\tau}\int_{-\infty}^{+\infty} |S'(x)|dx = \int_{-\infty}^{+\infty}\left|\frac{m'}{m}(x)\right|dx <+\infty. \end{equation} Due to the assumptions on the initial data in Theorem \ref{main-thm}, from $m=m(S)$, there exist positive constants $k_{ml}$ and $k_{mr}$, such that \begin{equation}\label{m} 0<k_{ml}<m(x)<k_{mr}. \end{equation} Also, we denote positive constants $k_{s}$ and $k_{r}$, then \begin{equation}\label{sr0} |s(\cdot,0)|<k_{s}, \quad |r(\cdot,0)|<k_{r}. \end{equation} \begin{lemma}\label{lem-sr-bound} Under the assumptions of Theorem \ref{main-thm}, given a point $(x_1, t_1)$, suppose the solution of \eqref{system-general-pressure} is $C^1$ in the characteristic triangle bounded by the forward and backward characteristics through $(x_1, t_1)$ and the line $t=0$. Then, we can prove that $$ |s(x_1,t_1)|\leq n_s, \quad |r(x_1,t_1)|\leq n_r, \quad i=1,2. $$ Here $n_s$ and $n_r$ depend on the initial data and the number of piecewise monotonic regions. \end{lemma} \begin{proof} Denote the forward and backward characteristics through a point $(x_*, t_*)$ by \begin{eqnarray*}\begin{split} &\overrightarrow{\mathcal{L}}_{x_*}:=\{(x, \overrightarrow{t}(x))|x\leq x_*\} =\{(\overrightarrow{x}(t),t)|t\leq t_*\},\\ &\overleftarrow{\mathcal{L}}_{x_*}:=\{(x, \overleftarrow{t}(x))|x\geq x_*\} =\{(\overleftarrow{x}(t),t)|t\leq t_*\}. \end{split}\end{eqnarray*} First, we will prove this lemma using the following case of three piecewise monotonic regions: suppose there is a point $(x_2, t_2)$ in the forward characteristic line and $(x_3,t_3)$ in the backward characteristic line. Assume $m'\leq 0$ in the domain where the region from $x=x_2$ to $x=x_3$ intersects with the characteristic triangle, and $m'\geq 0$ in the rest of the characteristic triangle. \begin{figure}[htbp] \centering \includegraphics[width=8cm]{Figure-curve.pdf} \caption{Characteristic triangle} \label{fig-1} \end{figure} In the forward characteristic line $\overrightarrow{\mathcal{L}}_{x_1}$, due to $m'(x)\geq 0$ from $(\overrightarrow{x}_1(0),0)$ to $(x_2,t_2)$, so integrating \eqref{inequ-s1} along this part, we can get \begin{eqnarray*}\begin{split} &s_{11}(x_2,t_2)\geq s_{11}(\overrightarrow{x}_1(0),0) +\frac{1}{2k_1}\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_1(0)}^{x_2}\frac{m'}{m}(x) r_{11}\left(x,\overrightarrow{t}(x)\right)dx,\\ &s_{12}(x_2,t_2)\leq s_{12}(\overrightarrow{x}_1(0),0) +\frac{1}{2kk_2}\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_1(0)}^{x_2}\frac{m'}{m}(x) r_{12}\left(x,\overrightarrow{t}(x)\right)dx. \end{split}\end{eqnarray*} Due to the monotone increasing property of $m(x)$, we have $$\left|\frac{m(\overrightarrow{x}_1(0))}{m(x_2)}\right|\leq 1, \quad \quad \left|\frac{m(x)}{m(x_2)}\right| \leq 1\quad \mbox{for} ~ x\in\left(\overrightarrow{x}_1(0), x_2\right). $$ Then, we have \begin{equation}\label{s1-rightarrow} |s(x_2,t_2)|\leq |s(\overrightarrow{x}_1(0),0)| +k_3\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_1(0)}^{x_2}\left|\frac{m'}{m}(x)\right| \left|r\left(x,\overrightarrow{t}(x)\right)\right|dx, \end{equation} where $k_3=\max\left\{(2k_1)^{-1}, (2kk_2)^{-1}\right\}$. Since $m'(x)\leq 0$, integrating \eqref{inequ-s2} from $(x_2,t_2)$ to $(x_1,t_1)$ along the forward characteristic line $\overrightarrow{\mathcal{L}}_{x_1}$, we have \begin{eqnarray*}\begin{split} &s_{21}(x_1,t_1)\geq s_{21}(x_2, t_2) +\frac{1}{2kk_1}\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{x_2}^{x_1}\frac{m'}{m}(x)r_{21}\left(x,\overrightarrow{t}(x)\right)dx,\\ &s_{22}(x_1,t_1)\leq s_{22}(x_2, t_2) +\frac{1}{2k_2}\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{x_2}^{x_1}\frac{m'}{m}(x)r_{22}\left(x,\overrightarrow{t}(x)\right)dx. \end{split}\end{eqnarray*} Recalling \eqref{m}, one can yield $$ \left|\frac{[m(\overrightarrow{x}_1(0))]}{[m(x_1)]}\right|\leq \frac{k_{mr}}{k_{ml}}, \quad\quad \left|\frac{[m(x)]}{[m(x_1)]}\right| \leq \frac{k_{mr}}{k_{ml}}. $$ Considering these two facts, substitute \eqref{defi-s2r2} into the above two inequalities, we obtain \begin{equation}\label{s1-rightarrow<} |s(x_1,t_1)|\leq k_4|s(x_2, t_2)| +k_5\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{x_2}^{x_1}\left|\frac{m'}{m}(x)\right| \left|r\left(x,\overrightarrow{t}(x)\right)\right|dx, \end{equation} where $k_4=\max\Big\{\left(k_{mr}k_{ml}^{-1}\right)^{\frac{1}{2kk_1}},\left(k_{mr}k_{ml}^{-1}\right)^{\frac{1}{2k_2}}\Big\}$ and $ k_5=\max\Big\{(2kk_1)^{-1}\big(k_{mr}$ $k_{ml}^{-1}\big)^{\frac{1}{2kk_1}}, (2k_2)^{-1}\left(k_{mr}k_{ml}^{-1}\right)^{\frac{1}{2k_2}}\Big\} $. From the above analyses, replacing the integration variable $x$ by $x_\sigma$ in \eqref{s1-rightarrow} and \eqref{s1-rightarrow<} shows: \begin{eqnarray}\label{s1-rihghtarrow-new}\begin{split} |s(x_1,t_1)|\leq& k_4 |s(\overrightarrow{x}_1(0),0)|\\ &+(k_3k_4+k_5)\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_1(0)}^{x_1}\left|\frac{m'}{m}(x_\sigma)\right| \left|r\left(x_\sigma,\overrightarrow{t}(x_\sigma)\right)\right|dx_\sigma. \end{split}\end{eqnarray} As showed in Figure \ref{fig-1}, there are four different cases about the position of $\overleftarrow{\mathcal{L}}_{x_\sigma}$, then we will have four different results about the relationship between $r(x_\sigma,t_\sigma)$ and $r(\overleftarrow{x}_\sigma(0),0)$ by the above mentioned method. When all these four relsuts are taken into account together, we can get \begin{eqnarray}\label{r1-leftarrow}\begin{split} |r(x_\sigma,t_\sigma)|&\leq \max\{1,k_6\} |r(\overleftarrow{x}_\sigma(0),0)|\\ &+\max\{k_6k_7+k_8, k_7+k_8\} \mathop{\mathrel{\nwarrow}\!\!\!\!\!\!\mathrel{\int}}_{\overleftarrow{x}_\sigma(0)}^{x_\sigma}\left|\frac{m'}{m}(x)\right||s(x,\overleftarrow{t}_\sigma(x))|dx, \end{split}\end{eqnarray} where $k_6=\max\Big\{\left(k_{mr}k_{ml}^{-1}\right)^{\frac{1}{2k_1}},\left(k_{mr}k_{ml}^{-1}\right)^{\frac{1}{2kk_2}}\Big\}$, $k_7=\max\left\{(2kk_1)^{-1},(2k_2)^{-1}\right\}$ and $k_8=\max\Big\{(2k_1)^{-1}\left(k_{mr}k_{ml}^{-1}\right)^{\frac{1}{2k_1}}, (2kk_2)^{-1}\left(k_{mr}k_{ml}^{-1}\right)^{\frac{1}{2kk_2}}\Big\}$. Finally, denoting $k_9=k_3k_4+k_5$, $k_{10} = \max\{1,k_6\}$ and $k_{11} =\max\{k_6k_7+k_8, k_7+k_8\}$, by substituting \eqref{r1-leftarrow} into \eqref{s1-rihghtarrow-new}, we can get \begin{eqnarray}\label{s1r1}\begin{split} |s(x_1,t_1)|\leq& k_4 |s(\overrightarrow{x}_1(0),0)| +k_9k_{10}\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_1(0)}^{x_1}\left|\frac{m'}{m}(x_\sigma)\right||r(\overleftarrow{x}_\sigma(0),0)|dx_\sigma\\ &+k_9k_{11}\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_1(0)}^{x_1}\left|\frac{m'}{m}(x_\sigma)\right| \mathop{\mathrel{\nwarrow}\!\!\!\!\!\!\mathrel{\int}}_{\overleftarrow{x}_\sigma(0)}^{x_\sigma}\left|\frac{m'}{m}(x)\right||s(x,\overleftarrow{t}_\sigma(x))|dxdx_\sigma. \end{split}\end{eqnarray} The first two terms can be bounded by our initial bounds. Similarly, we also have \begin{eqnarray}\label{s1r1-new}\begin{split} |s(x_\xi,t_\xi)|\leq& k_{12} |s(\overrightarrow{x}_\xi(0),0)| +k_{10}k_{13}\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_\xi(0)}^{x_\xi}\left|\frac{m'}{m}(x_\zeta)\right||r(\overleftarrow{x}_\zeta(0),0)|dx_\zeta\\ &+k_{11}k_{13}\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_\xi(0)}^{x_\xi}\left|\frac{m'}{m}(x_\zeta)\right| \mathop{\mathrel{\nwarrow}\!\!\!\!\!\!\mathrel{\int}}_{\overleftarrow{x}_\zeta(0)}^{x_\zeta}\left|\frac{m'}{m}(x)\right||s(x,\overleftarrow{t}_\zeta(x))|dxdx_\zeta, \end{split}\end{eqnarray} where $k_{12} = \max\{1, k_4\}$ and $k_{13} = \max\{k_3k_4+k_5, k_3+k_5\}$. Multiplying \eqref{s1r1-new} by $\frac{m'}{m}(x_\xi)$, and integrating the product from $x_1$ to $\overleftarrow{x}_1(0)$ along $\overleftarrow{\mathcal{L}}_{x_1}$, we have \begin{eqnarray}\label{s1r1-m}\begin{split} &\mathop{\mathrel{\nwarrow}\!\!\!\!\!\!\mathrel{\int}}_{x_1}^{\overleftarrow{x}_1(0)}\left|\frac{m'}{m}(x_\xi)\right||s(x_\xi,t_\xi)|dx_\xi\\ &\leq k_{12} \mathop{\mathrel{\nwarrow}\!\!\!\!\!\!\mathrel{\int}}_{x_1}^{\overleftarrow{x}_1(0)}\left|\frac{m'}{m}(x_\xi)\right||s(\overrightarrow{x}_\xi(0),0)|dx_\xi\\ &\quad+k_{10}k_{13}\mathop{\mathrel{\nwarrow}\!\!\!\!\!\!\mathrel{\int}}_{x_1}^{\overleftarrow{x}_1(0)}\left|\frac{m'}{m}(x_\xi)\right| \mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_\xi(0)}^{x_\xi}\left|\frac{m'}{m}(x_\zeta)\right||r(\overleftarrow{x}_\zeta(0),0)|dx_\zeta dx_\xi\\ &\quad+k_{11}k_{13}\mathop{\mathrel{\nwarrow}\!\!\!\!\!\!\mathrel{\int}}_{x_1}^{\overleftarrow{x}_1(0)}\left|\frac{m'}{m}(x_\xi)\right| \mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_\xi(0)}^{x_\xi}\left|\frac{m'}{m}(x_\zeta)\right| \mathop{\mathrel{\nwarrow}\!\!\!\!\!\!\mathrel{\int}}_{x_\zeta}^{\overleftarrow{x}_\zeta(0)}\left|\frac{m'}{m}(x)\right|s(x,\overleftarrow{t}_\zeta(x))dxdx_\zeta dx_\xi. \end{split}\end{eqnarray} Set $$ F(x_\eta):=\mathop{\mathrel{\nwarrow}\!\!\!\!\!\!\mathrel{\int}}_{x_\eta}^{\overleftarrow{x}_\eta(0)}\left|\frac{m'}{m}(x)\right||s(x,\overleftarrow{t}_\eta(x))|dx. $$ Since $\overleftarrow{x}_\zeta(0)=\overleftarrow{x}_\sigma(0)$ and $x_\zeta>x_\sigma$ in the same characteristic line, so we have $F(x_\zeta)\leq F(x_\sigma)$. Combining with \eqref{v} and \eqref{sr0}, we can rewrite \eqref{s1r1-m} as $$ F(x_1)\leq k_{12}k_sV+k_{10}k_{13}k_rV^2+k_{11}k_{13}V\mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_{\overrightarrow{x}_1(0)}^{x_1} \left|\frac{m'}{m}(x_\sigma)\right|F(x_\sigma)dx_\sigma. $$ Now, using the Gronwall inequality, we can get $$ F(x_1)\leq \left(k_{12}k_sV+k_{10}k_{13}k_rV^2\right)e^{k_{11}k_{13}V^2}. $$ For $(x_\sigma,t_\sigma)\in \overrightarrow{\mathcal{L}}_{x_1}$, so $F(x_\sigma)$ is also bounded by the same quantity. Thus, \eqref{s1r1} yields that \begin{eqnarray*}\begin{split} |s(x_1,t_1)| \leq k_6k_s+ k_9k_{10}k_rV+k_9k_{11}V \left(k_{12}k_sV+k_{10}k_{13}k_rV^2\right)e^{k_{11}k_{13}V^2}. \end{split}\end{eqnarray*} Similarly, we can get \begin{eqnarray*}\begin{split} |r(x_1,t_1)| \leq k_6k_r+ k_9k_{10}k_sV+k_9k_{11}V \left(k_{12}k_rV+k_{10}k_{13}k_sV^2\right)e^{k_{11}k_{13}V^2}. \end{split}\end{eqnarray*} From the above analyses, we can show that the Riemann invariants $s$ and $r$ are bounded in finite piecewise monotic regions. \end{proof} \begin{cor}\label{rmk-bdd} Under the assumptions of Theorem \ref{main-thm}, we can get the $L^\infty$ bounds of $u$ and $h$. Also, we have the upper bound of $\rho$, $c$ and $p$. \end{cor} \begin{proof} First, \eqref{rs} gives the $L^\infty$ bounds of $u$ and $h$. Due to $$ \int_{\tau}^{1}c(\xi)d\xi = \int_{\tau}^{1}\sqrt{-p_\xi(\xi,S)}d\xi \leq h = \frac{1}{2}(s-r) $$ and the assumption \eqref{relation-p2}, there exists positive constants $\tau_{\min}$ and $c_{\max}$ depending only on the initial data such that $$ \tau(x,t)\geq \tau_{\min}>0, \quad c(x,t)\leq c_{\max}. $$ So we have the upper bound of $\rho$ and $p$ on account of the definition of $\tau$ and \eqref{p-leq-ch}. \end{proof} \section{Singularity formation} First, we recall the characteristic decompositions. By the definition of $y$ and $q$, we have (\textit{c.f.} \cite{chen-young}) \begin{eqnarray}\label{equ-partial+-partial-q} \partial_+ y = a_0 + a_1 y - a_2 y^2, \quad\quad \partial_-q = a_0 - a_1 q - a_2 q^2, \end{eqnarray} where \begin{eqnarray*} \begin{split} & a_0 = -c I_\mu + \frac{\sqrt{c}}{2} \left(\frac{p_\mu}{c}\right)_h p_\mu -c \left(\frac{p_\mu}{c}\right)_h I - \frac{c_h}{2\sqrt{c}}I^2,\\ & a_1 = -(2\sqrt{c} I)_h,\quad\quad a_2=\frac{c_h}{2\sqrt{c}}>0. \end{split} \end{eqnarray*} \subsection{Estimate on the root of $a_0+a_1y-a_2y^2=0$} The first major step is to prove the lower bound on the roots of \[ a_0+a_1y-a_2y^2=0, \] which is $$ y_{root} = \frac{-a_1\pm\sqrt{a_1^2 + 4 a_0 a_2}}{-2 a_2} =\frac{1}{2}\left[\frac{a_1}{a_2} \mp\sqrt{\left(\frac{a_1}{a_2}\right)^2 +4\frac{a_0}{a_2}}\right]. $$ Here $(\frac{a_1}{a_2})^2+4\frac{a_0}{a_2}\geq0$. It is noticeable that the lower bound of $y_{root}$ depends on the $L^\infty$ estimates of $s$ and $r$. \begin{lemma}\label{lemma-lower-bound} Under the assumptions of Theorem \ref{main-thm}, there exists a positive constant $N$ only depending on the initial data such that \[ \left|\frac{1}{2}\left[\frac{a_1}{a_2} \mp\sqrt{\left(\frac{a_1}{a_2}\right)^2 +4\frac{a_0}{a_2}}\right]\right|<N. \] \end{lemma} \begin{proof} We only need to show the boundedness of ${a_1}{a_2}^{-1}$ and ${a_0}{a_2}^{-1}$. First, we give elaborative calculation on $\left(p_\mu c^{-1}\right)_h$. Because $$ c_h = c_\tau \tau_h = -c^{-1} c_\tau = \frac{1}{2}(-p_\tau)^{-1} p_{\tau\tau}, $$ then \begin{equation*} \left(p_\mu c^{-1}\right)_h=c^{-1} p_{\mu h} - c^{-2} c_h p_\mu = (-p_\tau)^{-\frac{1}{2}} p_{\mu h} - \frac{1}{2} (-p_\tau)^{-2} p_{\tau\tau} p_\mu. \end{equation*} Also we have $$ c^2=-p_\tau=- p_h h_\tau=c p_h, $$ this yields $p_h = c$, $$ p_{\mu h} = p_{h\mu} = c_\mu = -\frac{1}{2} (-p_\tau)^{-\frac{1}{2}} p_{\tau \mu}, $$ which implies, \begin{equation}\label{equ-pmu-c-1} \left(p_\mu c^{-1}\right)_h = -\frac{1}{2} (-p_\tau)^{-1} p_{\tau \mu} - \frac{1}{2} (-p_\tau)^{-2} p_{\tau\tau} p_\mu. \end{equation} Then, from the definition $(\ref{equ-h})_2$ of $I$, we can get \begin{equation}\label{equ-ih} I_h = \frac{1}{2} \sqrt{c} \left(p_\mu c^{-1}\right)_h = -\frac{1}{4} (-p_\tau)^{-\frac{3}{4}} p_{\tau\mu} -\frac{1}{4} (-p_\tau)^{-\frac{7}{4}} p_{\tau\tau} p_\mu. \end{equation} Integration by parts yields that \begin{eqnarray}\label{equ-i-ibp} \begin{split} I = \frac{1}{2} \int_{h_0}^h \sqrt{c}d\left(p_\mu c^{-1}\right) & =\frac{1}{2} \left(c^{-\frac{1}{2}} p_\mu\right)\bigg|_{h_0}^h -\frac{1}{4} \int_{h_0}^h c^{-\frac{3}{2}} c_h p_\mu dh\\ & = \frac{1}{2} \left[(-p_\tau)^{-\frac{1}{4}} p_\mu \right]\bigg|_{h_0}^h -\frac{1}{8} \int_{h_0}^h (-p_\tau)^{-\frac{7}{4}} p_{\tau\tau} p_\mu dh. \end{split} \end{eqnarray} Then, we have \begin{eqnarray}\label{equ-i-mu-ibp} \begin{split} I_\mu =& \frac{1}{2} \left[\frac{1}{4} (-p_\tau)^{-\frac{5}{4}} p_{\tau\mu}p_\mu + (-p_\mu)^{-\frac{1}{4}} p_{\mu\mu}\right]\bigg|_{h_0}^h\\ & - \frac{1}{8} \int_{h_0}^h \left[\frac{7}{4}(-p_\tau)^{-2} p_{\tau\mu} p_{\tau\tau} p_\mu + (-p_\tau)^{-\frac{7}{4}} p_{\tau\tau\mu}p_\mu + (-p_\tau)^{-\frac{7}{4}} p_{\tau\tau}p_{\mu\mu}\right]dh. \end{split} \end{eqnarray} Direct calculation shows that \begin{eqnarray}\label{equ-a1/a2} \begin{split} \frac{a_1}{a_2} = -2 I - 4 c c_h^{-1} I_h =& - 2 I - 8 (-p_\tau)^{\frac{3}{2}} p_{\tau\tau}^{-1} I_h\\ =& - 2 I - 2 (-p_\tau)^{\frac{3}{4}} p_{\tau\tau}^{-1} p_{\tau\mu} + 2(-p_\tau)^{-\frac{1}{4}} p_\mu. \end{split} \end{eqnarray} and \begin{eqnarray}\label{equ-a0/a2} \begin{split} \frac{a_0}{a_2} =& -2 c^{\frac{3}{2}} c_h^{-1} I_\mu + c c_h^{-1}\left(p_\mu c^{-1}\right)_h p_\mu - 2 c^{\frac{3}{2}} c_h^{-1} \left(p_\mu c^{-1}\right)_h I -I^2\\ =& -2 c^{\frac{3}{2}} c_h^{-1}\left[I_\mu + \left(p_\mu c^{-1}\right)_h I\right] + c c_h^{-1}\left(p_\mu c^{-1}\right)_h p_\mu -I^2\\ =& -4 (-p_\tau)^{\frac{7}{4}} p_{\tau\tau}^{-1} \left\{I_\mu -\frac{1}{2} \left[(-p_\tau)^{-1} p_{\tau\mu} + (-p_\tau)^{-2} p_{\tau\tau} p_\mu\right]I\right\}\\ & - (-p_\tau)^{\frac{1}{2}} p_{\tau\tau}^{-1} p_{\tau\mu} p_\mu - (- p_\tau)^{-\frac{1}{2}} p_\mu^2-I^2\\ \end{split} \end{eqnarray} By using of \eqref{assumption-p-ptautau-ptau}, \eqref{inequ-pmu}, \eqref{inequ-ptaumu-pmumu-inequ-ptaotaomu}, \eqref{p-leq-ch} and corollary \ref{rmk-bdd}, we can get the boundedness of ${a_1}a_2^{-1}$ and $a_0a_2^{-1}$. Therefore, we prove this lemma. \end{proof} On the basis of the above lemma, it is easy to get: \begin{lemma} Under the assumptions of Theorem \ref{main-thm}, we can prove that \begin{equation}\label{ineq-yq} y(x,t)\leq Y, \quad q(x,t)\leq Q, \end{equation} where \begin{equation}\label{defi-YQ} Y = \max\left\{N, \sup_xy(x,0)\right\},\quad\quad Q=\max\left\{N, \sup_xq(x,0)\right\}. \end{equation} \end{lemma} \subsection{Time-dependent lower bound on $a_2$} To show the formation of singularity, the key step is to obtain the lower bound of $a_2$. In fact, the function $a_2$ might vanish as time tends to infinity, such as for the gas dynamic case (\textit{c.f.} \cite{CPZ,CPZ2,smoller}). \begin{lemma} Assume that the pressure satisfies the assumptions (\textbf{H1}) and (\textbf{H3}), then \begin{equation}\label{a2-infinity} \int_0^\infty a_2 \left(\tau(x,t),t\right)dt = \infty. \end{equation} \end{lemma} \begin{proof} We know that \eqref{a2-infinity} is true if we can prove \begin{equation}\label{ineq-a2} \left[a_2\left(\tau(x,t)\right)\right]^{-1} \leq k_{14} t + k_{15}. \end{equation} Direct calculation shows that $$ a_2 \left(\tau(x,t)\right) = \frac{c_h}{2\sqrt{c}} =\frac{c_\tau \tau_h}{2\sqrt{c}} =-\frac{1}{c} \frac{c_\tau}{2\sqrt{c}} =\frac{1}{4}\left(-p_\tau\right)^{-\frac{5}{4}} p_{\tau\tau}. $$ Then \begin{equation}\label{equ-a2-tau-inverse} \left[a_2\left(\tau(x,t)\right)\right]^{-1} = 4\left(-p_\tau\right)^{\frac{5}{4}} (p_{\tau\tau})^{-1}. \end{equation} From $(\ref{p-tau-k-k-1-relation-A-ptau})_3$, we can get \begin{equation}\label{ineq-Aptau-ptautau} \left[4\left(-p_\tau\right)^{\frac{5}{4}} (p_{\tau\tau})^{-1}\right]_\tau \leq A \left(-p_\tau\right)^{\frac{1}{4}}. \end{equation} And we also have \begin{eqnarray*} \begin{split} \left\{\int_{\tau_{\min}}^{\tau(x,t)} \left[-p_\xi(\xi)\right]^{\frac{1}{4}}d\xi\right\}_t &= \left[\int_{\tau_{\min}}^{\tau(x,t)} \sqrt{c(\xi)}d\xi\right]_t\\ &=\sqrt{c}\tau_t =\sqrt{c} u_x =\frac{\sqrt{c}}{2}(s_x+r_x) =\frac{1}{2}(y+q)\\ &\leq \frac{1}{2}(Y+Q). \end{split} \end{eqnarray*} Integrating the last inequality with respect to $t$ yields \begin{equation}\label{inequ-k1112} \int_{\tau_{\min}}^{\tau(x,t)} \left[-p_\xi(\xi)\right]^{\frac{1}{4}}d\xi \leq \int_{\tau_{\min}}^{\tau(x,0)}\left[-p_\xi(\xi)\right]^{\frac{1}{4}}d\xi + \frac{1}{2}(Y+Q)t \leq k_{16}t + k_{17}. \end{equation} Combining \eqref{equ-a2-tau-inverse}, \eqref{ineq-Aptau-ptautau} and \eqref{inequ-k1112}, we can complete the proof. \end{proof} \subsection{Singularity formation} In this subsection, we will prove the main theorem. \textbf{Proof of Theorem \ref{main-thm}:} We just consider the $\inf\limits_{x\in\mathbb{R}}y(x,0)<-N$ case, the other case for $q$ is similar. We can assume that $-N$ is a uniform lower bound for the roots of $a_0+a_1 y - (1-\varepsilon)a_2 y^2=0$, according to lemma \ref{lemma-lower-bound}. Then since $a_2>0$, we have \begin{equation}\label{inequ-equation-eps} a_0+a_1 y - (1-\varepsilon)a_2 y^2\leq 0, \quad \mbox{for ~every} ~y\leq -N. \end{equation} According to the definition of infimum, there exist $0<\varepsilon\ll1$ and $x_0\in\mathbb{R}$ such that \begin{equation}\label{y-x0-0} y(x_0, 0) <-(1+\varepsilon)N. \end{equation} Now we consider the forward characteristic passing $(x_0,0)$, we have $$ \partial_+ y\leq -\varepsilon a_2 y^2, $$ integrating the last inequality from $0$ to $t$ with respect to the time variable, we can get $$ \frac{1}{y(\overrightarrow{x}(t), t)} \geq \frac{1}{y(x_0,0)} + \varepsilon \mathop{\mathrel{\nearrow}\!\!\!\!\!\!\mathrel{\int}}_0^t a_2(\overrightarrow{x}(t), t) dt, $$ From \eqref{a2-infinity} and \eqref{y-x0-0}, we can show that $y$ blows up in finite time. \section*{Acknowledgments} This research was partially supported by NSFC grant \#11301293/A010801, and the China Scholarship Council No. 201406210115 as an exchange graduate student at the Georgia Institute of Technology.
\section{Proof of the First Zonklar Equation} \bibliographystyle{IEEEtran} \section{Introduction} With the proliferation of mobile internet usage, WiFi access point (AP) has become a ubiquitous infrastructure in smart environments, ranging from commercial buildings to domestic settings. By analysing the patterns of its wireless signal, today's AP has evolved beyond a pure WiFi router, but is also widely used as a type of 'sensor device' to enable new services for human sensing. Particularly, recent studies have found that WiFi signals in the form of Channel State Information (CSI)~\cite{halperin2011tool,xie2015precise} are extremely promising for a variety of device-free human sensing tasks, such as occupancy detection~\cite{zou2017non}, activity recognition~\cite{wang2014eyes,zou2018deepsense,yang2018carefi,zou2019wificv}, fall detection~\cite{wang2016rt}, gesture recognition~\cite{yang2019learning,zou2018gesture}, human identification~\cite{zou2018identification,wang2022caution}, and people counting~\cite{zou2018device,FreeCount}. Unlike the coarse-grained received signal strengths, WiFi CSI records more fine-grained information about how a signal propagates between WiFi devices and how a signal is reflected from the surrounding environment in which humans move around. On the other side, as WiFi signals (2.4GHz or 5GHz) lie in the non-visible band of the electromagnetic spectrum, WiFi CSI based human sensing is intrinsically more privacy-friendly than cameras and draws increasing attention from both academia and industry. Motivated by increasing interests needs, a new WiFi standard, 802.11bf by the IEEE 802.11bf Task Group (TGbf) will amend the current WiFi standard both at the Medium Access Control (MAC) and Physical Layer (PHY) to officially include WiFi sensing as part of a regular WiFi service by late 2024~\cite{802.11bf}. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{figures/principle.pdf} \caption{The technical contributions and summary of SenseFi.}\label{fig:principle} \end{figure} Existing WiFi sensing methods can be categorized into model-based methods and learning-based methods. Model-based methods rely on physical models that describe the WiFi signals propagation, such as Fresnel Zone~\cite{wu2017device}. Model based methods help us understand the underlying mechanism of WiFi sensing and design sensing methods for periodic or single motions, such as respiration \cite{wang2016human,8210837,9341474} and falling down~\cite{wang2016rt,9322323,9716074}. Nevertheless, model based methods fall short when it comes to the complicated human activities that consist of a series of different motions. For example, a human gait comprises the synergistic movements of arms, legs and bodies, the differences of which are hard to depict by physical models. In contrast, by feeding a massive amount of data into machine learning \cite{yang2018device} or deep learning networks, \cite{yang2019learning,zou2018deepsense}, learning based achieve remarkable performances in complicated sensing tasks. Various deep neural networks are designed to enable many applications including activity recognition \cite{zou2017multiple}, gesture recognition \cite{yang2019learning}, human identification \cite{zou2018identification,wang2022caution,zhang2020gate}, and people counting \cite{zou2018device, FreeCount}. Though deep learning models have a strong ability of function approximation, they require tremendous labeled data that is expensive to collect and suffer from the negative effect of distribution shift caused by environmental dynamics \cite{zou2018robust}. Most state-of-the-art deep learning models are developed for computer vision \cite{voulodimos2018deep} and natural language processing tasks \cite{otter2020survey}, which demonstrates the capacity of processing high-dimensional and multi-modal data problems. These approaches inspire the deep learning applications in WiFi sensing in terms of data preprocessing, network design, and learning objectives. It is seen that more and more deep models \cite{bu2021transfersense,zhang2018crosssense} for WiFi sensing come into existence and overcome the aforementioned obstacles that traditional statistical learning methods cannot address. However, current works mainly aim to achieve high accuracy on specific sensing tasks by tailoring deep neural networks but do not explore the intrinsic tension between various deep learning models and distinct WiFi sensing data collected by different devices and CSI tools. It is unclear if the remarkable results of a WiFi sensing research paper come from the deep model design or the WiFi platform. Hence, there still exist some significant gaps between current deep learning and WiFi sensing research: (i) how to customize a deep neural network for a WiFi sensing task by integrating prevailing network modules (\textit{e.g.}, fully-connected layer, convolutional layer, recurrent neural unit, transformer block) into one synergistic framework? (ii) how do the prevailing models perform when they are compared fairly on multiple WiFi sensing platforms and data modalities? (iii) how to achieve a trade-off between recognition accuracy and efficiency? To answer these questions, we propose SenseFi, a benchmark and model zoo library for WiFi CSI sensing using deep learning. Firstly, we introduce the prevalent deep learning models, including multilayer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), variants of RNN, CSI transformers, and CNN-RNN, and summarize how they are effective for CSI feature learning and WiFi sensing tasks. Then we investigate and benchmark these models on three WiFi human activity recognition data that consists of both raw CSI data and processed data collected by Intel 5300 CSI tool \cite{halperin2011tool} and Atheros CSI tool \cite{xie2015precise,yang2018device}. The accuracy and efficiency of these models are compared and discussed to show their viability for real-world applications. We further investigate how different WiFi sensing tasks can benefit each other by transfer learning, and how unsupervised learning can be used to exploit features without labels, reducing the annotation cost. These features are summarized in Figure~\ref{fig:principle}. All the source codes are written into one library so that the researchers can develop and evaluate their models conveniently. As such, the contributions are summarized as follows: \begin{itemize} \item We analyze and summarize how the widespread deep learning models in computer vision and natural language processing benefit WiFi sensing in terms of network structure and feature extraction. \item We select two public datasets (UT-HAR \cite{yousefi2017survey} and Widar \cite{zhang2021widar3}) and collect two new datasets (NTU-Fi HAR and Human-ID) using different CSI platforms, which allows us to benchmark the deep learning methods and evaluate their feasibility for WiFi sensing. \item We explore the transfer learning scheme that transfers knowledge across different sensing tasks, and benchmark it across all models. \item We investigate the unsupervised learning scheme that contrastively learns the feature extractor without data annotation, and benchmark it across all models. \item We develop the \textbf{SenseFi} library and open-source the benchmarking codes. To the best of our knowledge, this is the first work that benchmarks advanced deep models and learning schemes for WiFi sensing, which provides comprehensive and significant evidence and tools for future research. \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec:pre} introduces the fundamental knowledge on WiFi sensing and CSI data. Then we introduce the prevalent deep learning models and how they are applied to WiFi sensing in Section~\ref{sec:dl-models}. The empirical study is detailed in Section~\ref{sec:empirical-study}, and then the summaries and discussions are made in Section~\ref{sec:summary}. Finally, the paper is concluded in Section~\ref{sec:conclusion}. \section{Preliminaries of WiFi Sensing}\label{sec:pre} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figures/CSI_samples.pdf} \caption{The CSI samples of three human activities in NTU-Fi, collected by Atheros CSI Tool.}\label{fig:csi-samples} \end{figure*} \subsection{Channel State Information} In WiFi communication, channel state information reflects how wireless signals propagate in a physical environment after diffraction, reflections, and scattering, which describes the channel properties of a communication link. For modern wireless communication networks following the IEEE 802.11 standard, Multiple-Input Multiple-Output (MIMO) and Orthogonal Frequency Division Multiplexing (OFDM) at the physical layer contribute to increasing data capacity and better orthogonality in transmission channels affected by multi-path propagation. As a result, current WiFi APs usually have multiple antennas with many subcarriers for OFDM. For a pair of transmitter and receiver antennas, CSI describes the phase shift of multi-path and amplitude attenuation on each subcarrier. Compared to received signal strength, CSI data has better resolutions for sensing and can be regarded as ``WiFi images'' for the environment where WiFi signals propagate. Specifically, the Channel Impulse Response (CIR) $h(\tau)$ of the WiFi signals is defined in the frequency domain: \begin{equation} h(\tau)=\sum_{l=1}^{L}\alpha_l e^{j\phi_l} \delta(\tau-\tau_l), \end{equation} where $\alpha_l$ and $\phi_l$ denote the amplitude and phase of the $l$-th multi-path component, respectively, $\tau_l$ is the time delay, $L$ denotes the number of multi-paths and $\delta(\tau)$ is the Dirac delta function. To estimate the CIR, the OFDM receiver samples the signal spectrum at subcarrier level in the realistic implementation, which represents amplitude attenuation and phase shift via complex number. In WiFi sensing, the CSI recording functions are realized by specific tools \cite{halperin2011tool,xie2015precise}. The estimation can be represented by: \begin{equation} H_i=||H_i||e^{j \angle H_i} \end{equation} where $||H_i||$ and $\angle H_i$ are the amplitude and phase of $i$-th subcarrier, respectively. \subsection{CSI Tools and Platforms} The number of subcarriers is decided by the bandwidth and the tool. The more subcarriers one has, the better resolution the CSI data is. Existing CSI tools include Intel 5300 NIC \cite{halperin2011tool}, Atheros CSI Tool \cite{xie2015precise} and Nexmon CSI Tool \cite{nexmon2019tool}, and many realistic sensing platforms are built on them. The Intel 5300 NIC is the most commonly used tool, which is the first released CSI tool. It can record 30 subcarriers for each pair of antennas running with 20MHz bandwidth. Atheros CSI Tool increases the CSI data resolution by improving the recording CSI to 56 subcarriers for 20MHz and 114 subcarriers for 40MHz, which has been widely used for many applications \cite{zou2018deepsense,yang2018device,yang2018carefi,yang2019learning,yang2022efficientfi}. The Nexmon CSI Tool firstly enables CSI recording on smartphones and Raspberry Pi, and can capture 256 subcarriers for 80MHz. However, past works~\cite{sharma2021passive,schafer2021human} show that their CSI data is quite noisy, and there do not exist common datasets based on Nexmon. In this paper, we only investigate the effectiveness of the deep learning models trained on representative CSI data from the widely-used Intel 5300 NIC and Atheros CSI Tool. \subsection{CSI Data Transformation and Cleansing} In general, the CSI data consists of a vector of complex number including the amplitude and phase. The question is how we process these data for the deep models of WiFi sensing? We summarize the answers derived from existing works: \begin{enumerate} \item \textbf{Only use the amplitude data as input.} As the raw phases from a single antenna are randomly distributed due to the random phase offsets \cite{liu2020human}, the amplitude of CSI is more stable and suitable for WiFi sensing. A simple denoising scheme is enough to filter the high-frequency noise of CSI amplitudes, such as the wavelet denoising \cite{yang2018device}. This is the most common practice for most WiFi sensing applications. \item \textbf{Use the CSI difference between antennas for model-based methods.} Though the raw phases are noisy, the phase difference between two antennas is quite stable \cite{yang2019learning}, which can better reflect subtle gestures than amplitudes. Then the CSI ratio \cite{zeng2019farsense} is proposed to mitigate the noise by the division operation and thus increases the sensing range. These techniques are mostly designed for model-based solutions as they require clean data for selecting thresholds. \item \textbf{Use the processed doppler representation of CSI.} To eliminate the environmental dependency of CSI data, the body-coordinate velocity profile (BVP) is proposed to simulate the doppler feature \cite{zhang2021widar3} that only reflects human motions. \end{enumerate} In our benchmark, as we focus on the learning-based methods, we choose the most common data modality (\textit{i.e.,} amplitude only) and the novel BVP modality that is domain-invariant. \subsection{How Human Activities Affect CSI} As shown in Figure~\ref{fig:csi-samples}, the CSI data for human sensing is composed of two dimensions: the subcarrier and the packet number (\textit{i.e.}, time duration). For each packet or timestamp $t$, we have $X_t=N_T\times N_R \times N_{sub}$ where $N_T$, $N_R$ and $N_{sub}$ denote the number of transmitter antennas, receiver antennas and subcarriers per antenna, respectively. This can be regarded as a ``CSI image'' for the surrounding environment at time $t$. Then along with subsequent timestamps, the CSI images form a ``CSI video'' that can describe human activity patterns. To connect CSI data with deep learning models, we summarize the data properties that serve for a better understanding of deep model design: \begin{enumerate} \item \textbf{Subcarrier dimension $\to$ spatial features.} The values of many subcarriers can represent how the signal propagates after diffraction, reflections, and scattering, and thus describe the spatial environment. These subcarriers are seen as an analogy for image pixels, from which \textit{convolutional layers} can extract spatial features \cite{lecun2015deep}. \item \textbf{Time dimension $\to$ temporal features.} For each subcarrier, its temporal dynamics indicate an environmental change. In deep learning, the temporal dynamics are usually modeled by \textit{recurrent neural networks} \cite{schuster1997bidirectional}. \item \textbf{Antenna dimension $\to$ resolution and channel features.} As each antenna captures a different propagation path of signals, it can be regarded as a channel in deep learning that is similar to RGB channels of an image. If only one pair of antennas exists, then the CSI data is similar to a gray image with only one channel. Hence, the more antennas we have, the higher resolution the CSI has. The antenna features should be processed separately in convolutional layers or recurrent neurons. \end{enumerate} \section{Deep Learning Models for WiFi Sensing}\label{sec:dl-models} Deep learning enables models composed of many processing layers to learn representations of data, which is a branch of machine learning~\cite{lecun2015deep}. Compared to classic statistical learning that mainly leverages handcrafted features designed by humans with prior knowledge, deep learning aims to extract features automatically by learning massive labeled data and optimizing the model by back-propagation. The theories of deep learning were developed in the 1980s but they were not attractive due to the need of enormous computational resources. With the development of graphical processing units (GPUs), deep learning techniques have become affordable, and has been widely utilized in computer vision~\cite{voulodimos2018deep}, natural language processing~\cite{otter2020survey}, and interdisciplinary research~\cite{chen2021deep}. A standard classification model in deep learning is composed of a feature extractor and a classifier. The classifier normally consists of several fully-connected layers and can perform well, while the design of the feature extractor is the key to the success. Extensive works explore a large number of deep architectures for feature extractors, and each of them has specific advantages for one type of data. The deep learning models for WiFi sensing are built on these prevailing architectures to extract patterns of human motions. We summarize the latest works on deep models for WiFi sensing in Table~\ref{tab:survey}, and it is observed that the networks of these works are quite similar. In the following, we introduce these key architectures and how they are applied to WiFi sensing tasks. To better instantiate these networks, we define the CSI data $x\in \mathbb{R}^{N_s\times T}$ where $N_s$ denotes the total number of subcarriers across all antenna pairs, and $T$ denotes the duration. The deep learning model $f(\cdot)$ aims to map the data to the corresponding label: $y=f(x)$. Denote $\Phi_i(\cdot)$ and $z_i$ as the $i$-th layer of the deep model and the feature of the $i$-th layer. Apart from the illustration, we visualize the intuition of how to feed these CSI data into various networks in Figure~\ref{fig:framework}. \begin{figure*}[h] \centering \includegraphics[width=0.75\textwidth]{figures/Models.pdf} \caption{The illustration of how CSI data is processed by MLP, CNN, RNN and Transformer.} \label{fig:framework} \end{figure*} \subsection{Multilayer Perceptron} Multilayer perceptron (MLP) \cite{gardner1998artificial} is one of the most classic architectures and has played the classifier role in most deep classification networks. It normally consists of multiple fully-connected layers followed by activation functions. The first layer is termed the input layer that transforms the input data into the hidden latent space, and after several hidden layers, the last layer maps the latent feature into the categorical space. Each layer is calculated as \begin{equation} \Phi_i(z_{i-1})=\sigma(W_i z_{i-1}), \end{equation} where $W_i$ is the parameters of $\Phi_i$, and $\sigma(\cdot)$ is the activation function that aims to increase the non-linearity for MLP. The input CSI has to be flattened to a vector and then fed into the MLP, such that $x\in \mathbb{R}^{N_s T}$. Such a process mixes the spatial and temporal dimensions and damages the intrinsic structure of CSI data. Despite this, the MLP can still work with massive labeled data, because the MLP has a fully-connected structure with a large number of parameters, yet leading to slow convergence and huge computational costs. Therefore, though the MLP shows satisfactory performance, stacking many layers in MLP is not common for feature learning, which makes MLP usually serve as a classifier. \subsection{Convolutional Neural Network} Convolutional neural network (CNN) was firstly proposed for image recognition tasks by LeCun~\cite{lecun1998gradient}. It addresses the drawbacks of MLP by weight sharing and spatial pooling. CNN models have achieved remarkable performances in classification problems of 2D data in computer vision~\cite{khan2020survey,wang2019kervolutional} and sequential data in speech recognition~\cite{abdel2012applying} and natural language processing~\cite{yin2017comparative}. CNN learns features by stacking convolutional kernels and spatial pooling operations. The convolution operation refers to the dot product between a filter $\mathbf{k}\in\mathbb{R}^d$ and an input vector $\mathbf{v}\in\mathbb{R}^d$, defined as follows: \begin{equation} \mathbf{k} \otimes \mathbf{v}=\sigma(\mathbf{k}^T \mathbf{v}). \end{equation} The pooling operation is a down-sampling strategy that calculates the maximum (max pooling) or mean (average pooling) inside a kernel. The CNNs normally consist of several convolutional layers, max-pooling layers, and the MLP classifier. Generally speaking, increasing the depth of CNNs can lead to better model capacity. Nevertheless, when the depth of CNN is too large (\textit{e.g.}, greater than 20 layers), the gradient vanishing problem leads to degrading performance. Such degradation is addressed by ResNet~\cite{he2016deep}, which uses the residual connections to reduce the difficulty of optimization. In WiFi sensing, the convolution kernel can operate on a 2D patch of CSI data (\textit{i.e.}, Conv1D) that includes a spatial-temporal feature, or on a 1D patch of each subcarrier of CSI data (\textit{i.e.}, Conv2D). For Conv2D, a 2D convolution kernel $\mathbf{k}_{2D}\in\mathbb{R}^{h\times w}$ operates on all patches of the CSI data via the sliding window strategy to obtain the output of the feature map, while the Conv1D only extracts the spatial feature along the subcarrier dimension. The Conv2D can be applied independently as it considers both spatial and temporal features, while the Conv1D is usually used with other temporal feature learning methods. To enhance the capacity of CNN, multiple convolution kernels with a random initialization process are used. The advantages of CNNs for WiFi sensing consist of fewer training parameters and the preservation of the subcarrier and time dimension in CSI data. However, the disadvantage is that CNN has an insufficient receptive field due to the limited kernel size and thus fails to capture the dependencies that exceed the kernel size. Another drawback is that CNN stack all the feature maps of kernels equally, which has been revamped by an attention mechanism that assigns different weights in the kernel or spatial level while stacking features. These techniques have been successfully used in WiFi sensing~\cite{xue2020deepmv,ding2022wi,9275362,9721516}. \subsection{Recurrent Neural Network} Recurrent neural network (RNN) is one of the deepest network architectures that can memorize arbitrary-length sequences of input patterns. The unique advantage of RNN is that it enables multiple inputs and multiple outputs, which makes it very effective for time sequence data, such as video \cite{yang2018deep} and CSI \cite{zou2018deepsense,8918311,8761445}. Its principle is to create internal memory to store historical patterns, which are trained via back-propagation through time \cite{lipton2015rnnsurvey}. For a CSI sample $x$, we denote a CSI frame at the $t$ as $x_t \in \mathbb{R}^{N_s}$. The vanilla RNN uses two sharing matrices $W_x,W_h$ to generate the hidden state $h_t$: \begin{equation} h_t = \sigma(W_x x_t+W_h h_{t-1}), \end{equation} where the activation function $\sigma(\cdot)$ is usually Tanh or Sigmoid functions. RNN is designed to capture temporal dynamics, but it suffers from the vanishing gradient problem during back-propagation and thus cannot capture long-term dependencies of CSI data. \subsection{Variants of RNN (LSTM)} To tackle the problem of long-term dependencies of RNN, Long-short term memory (LSTM) \cite{hochreiter1997long} is proposed by designing several gates with varying purposes and mitigating the gradient instability during training. The standard LSTM sequentially updates a hidden sequence by a memory cell that contains four states: a memory state $c_t$, an output gate $o_t$ that controls the effect of output, an input gate $i_t$ and a forget gate $f_t$ that decides what to preserve and forget in the memory. The LSTM is parameterized by weight matrices $W_i,W_f,W_c,W_o,U_i,U_f,U_c,U_o$ and biases $b^i,b^f,b^c,b^o$, and the whole update is performed at each $t \in \{ 1,...,T\}$: \begin{align} & i_t=\sigma(W_i x_t + U_i h_{t-1}+b^i), \\ & f_t=\sigma(W_f x_t + U_f h_{t-1}+b^f), \\ & \tilde{c_t}=tanh(W_c x_t + U_c h_{t-1}+b^c), \\ & c_t=i_t \odot \tilde{c_t} + f_t \odot c_{t-1}, \\ & o_t=\sigma(W_o x_t + U_o h_{t-1} + b^o), \\ & h_t = o_t \odot tanh(c_t), \end{align} where $\sigma$ is a Sigmoid function. Apart from the LSTM cell \cite{9722627,9532548,9384510}, the multi-layer and bi-directional structure further boost the model capacity. The bidirectional LSTM (BiLSTM) model processes the sequence in two directions and concatenates the features of the forward input $\grave{x}$ and backward input $\acute{x}$. It has been proven that BiLSTM shows better results than LSTM in \cite{chen2018wifi,9641123}. \subsection{Recurrent Convolutional Neural Network} Though LSTM addresses the long-term dependency, it leads to a large computation overhead. To overcome this issue, Gated Recurrent Unit (GRU) is proposed. GRU combines the forget gate and input gate into one gate, and does not employ the memory state in LSTM, which simplifies the model but can still capture long-term dependency. GRU is regarded as a simple yet effective version of LSTM. Leveraging the simple recurrent network, we can integrate the Conv1D and GRU to extract spatial and temporal features, respectively. \cite{dua2021multi,zhang2021widar3} show that CNN-GRU is effective for human activity recognition. In WiFi sensing, DeepSense~\cite{zou2018deepsense} proposes Conv2D with LSTM for human activity recognition. SiaNet~\cite{yang2019learning} proposes Conv1D with BiLSTM for gesture recognition. As they perform quite similarly, we use CNN-GRU with fewer parameters in this paper for the benchmark. \subsection{Transformer} Transformer~\cite{vaswani2017attention} was firstly proposed for NLP applications to extract sequence embeddings by exploiting attention of words, and then it was extended to the computer vision field where each patch is regarded as a word and one image consists of many patches~\cite{dosovitskiy2020image}. The vanilla consists of an encoder and a decoder to perform machine translation, and only the encoder is what we need. The transformer block is composed of a multi-head attention layer, a feed-forward neural network (MLP), and layer normalization. Since MLP has been explained in previous section, we mainly introduce the attention mechanism in this section. For a CSI sample $x$, we first divide it into $P$ patches $x_p \in \mathbb{R}^{h\times w}$, of which each patch has contained spatial-temporal features. Then these patches are concatenated and added by positional embeddings that infer the spatial position of patches, which makes the input matrix $v\in \mathbb{R}^{d_k}$ where $d_k=P\times hw$. This matrix is transformed into three different matrices via linear embedding: the query $Q$, the key $K$, and the value $V$. The self-attention process is calculated by \begin{equation} \text{Attention}(Q,K,V)=\text{softmax}(\frac{Q\cdot K^T}{\sqrt{d_k}})\cdot V. \end{equation} Intuitively, such a process calculates the attention of any two patches via dot product, \textit{i.e.}, cosine similarity, and then the weighting is performed with normalization to enhance gradient stability for improved training. Multi-head attention just repeats the self-attention several times and enhances the diversity of attentions. The transformer architecture can interconnect with every patch of CSI, which makes it strong if given sufficient training data, such as THAT~\cite{li2021two}. However, transformer has a great number of parameters that makes the training cost expensive, and enormous labeled CSI data is hard to collect, which makes transformers not really attractive for the supervised learning. \subsection{Generative Models} Different from the aforementioned discriminative models that mainly conducts classification, generative models aim to capture the data distribution of CSI. Generative Adversarial Network (GAN) \cite{goodfellow2014generative} is a classic generative model that learns to generate real-like data via an adversarial game between a generative network and a discriminator network. In WiFi sensing, GAN helps deal with the environmental dependency by generating labeled samples in the new environment from the well-trained environment~\cite{xiao2019csigan,wang2021multimodal}. GAN also inspires domain-adversarial training that enables deep models to learn domain-invariant representations for the training and real-world testing environments \cite{zou2018robust,yang2021robust,yang2020mind,xu2021partial}. Variational network~\cite{kingma2013auto} is another common generative model that maps the input variable to a multivariate latent distribution. Variational autoencoder learns the data distribution by a stochastic variational inference and learning algorithm~\cite{kingma2013auto}, which has been used in CSI-based localization~\cite{kim2021multiview,chen2020fido} and CSI compression~\cite{yang2022efficientfi}. For instance, EfficientFi~\cite{yang2022efficientfi} leverages the quantized variational model to compress the CSI transmission data for large-scale WiFi sensing in the future. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{figures/Learning_strategy.pdf} \caption{The illustration of the learning strategies.} \label{fig:strategies} \end{figure} \section{Learning Methods for Deep WiFi Sensing Models} Traditional training of deep models relies on supervised learning with massive labeled data, but the data collection and annotation is a bottleneck in the realistic WiFi sensing applications. For example, to recognize human gestures, we may need the volunteers to perform gestures for a hundred times, which is not realistic. In this section, as shown in Figure~\ref{fig:strategies}, we illustrate the learning methods and how they contribute to WiFi sensing in the real world. \textbf{Supervised Learning} is an approach to training deep models using input data that has been labeled for a particular output. It is the most common learning strategy in current WiFi sensing works \cite{yousefi2017survey,zou2018deepsense,wang2018spatial,zou2019wificv}. They usually adopt cross-entropy loss between the ground truth label and the prediction for model optimization. Though supervised learning is easy to implement and achieves high performance for many tasks, its requirement of tremendous labeled data hinders its pervasive realistic applications. \textbf{Few-shot Learning} is a data-efficient learning strategy that only utilizes several samples of each category for training. This is normally achieved by contrastive learning or prototypical learning. It is firstly exploited for WiFi sensing in SiaNet~\cite{yang2019learning} that proposes a Siamese network for few-shot learning. Subsequent works~\cite{gu2021wione,wang2022caution} extend prototypical networks from visual recognition to WiFi sensing, also achieving good recognition results. Specially, when only one sample for each class is employed for training, we term it as one-shot learning. As only a few samples are required, few-shot learning contributes to WiFi-based gesture recognition and human identification in practice. \textbf{Transfer Learning} aims to transfer knowledge from one domain to another domain~\cite{tlsurvey}. When the two domains are similar, we pretrain the model on one domain and fine-tune the model in a new environment, which can lead to significant performance. When the two domains are distinct, such as the different environments of CSI data, the distribution shift hinders the performance so domain adaptation should be adopted. Domain adaptation is a category of semi-supervised learning that mitigates the domain shift for transfer learning. Cross-domain scenarios are quite common in WiFi sensing scenarios since the CSI data is highly dependent on the training environment. Many works have been developed to deal with this problem~\cite{zou2018robust,jiang2018towards,wang2021multimodal,8793019,9763693}. \textbf{Unsupervised Learning} aims to learn data representations without any labels. Then the feature extractor can facilitate down-streaming tasks by training a specific classifier. From the experience of visual recognition tasks~\cite{grill2020bootstrap}, unsupervised learning can even enforce the model to gain better generalization ability since the model is not dependent on any specific tasks. Current unsupervised learning models are based on self-supervised learning~\cite{wang2021self}. Despite its effectiveness, the unsupervised learning has not been well exploited in WiFi sensing, and only AutoFi is developed to enable model initialization for automatic user setup in WiFi sensing applications ~\cite{yang2022autofi}. \textbf{Ensemble Learning} uses multiple models to obtain better predictive performance~\cite{sagi2018ensemble}. The ensemble process can operate on feature level or prediction level. Feature-level ensemble concatenates the features from multiple models and one final classifier is trained. Prediction-level ensemble is more common, usually referring to voting or probability addition. Ensemble learning can increase the performance but the computation overhead also explodes by multiple times. CrossSense~\cite{jiang2018towards} develops a mixture-of-experts approach and only chooses the appropriate expert for a specific input, addressing the computation cost. In this paper, we empirically explore the effectiveness of supervised learning, transfer learning and unsupervised learning for WiFi CSI data, as they are the most commonly used learning strategies in WiFi sensing applications. \section{Empirical Studies of Deep Learning in WiFi Sensing: A Benchmark}\label{sec:empirical-study} In this section, we conduct an empirical study of the aforementioned deep learning models on WiFi sensing data and firstly provide the benchmarks with open-source codes in \url{http://www.github.com/}. The four datasets are illustrated first, and then we evaluate the deep models on these datasets in terms of three learning strategies. Eventually, some detailed analytics are conducted on the convergence of optimization, network depth, and network selection. \subsection{Datasets} We choose two public CSI datasets (UT-HAR~\cite{yousefi2017survey} and Widar~\cite{zhang2021widar3}) collected using Intel 5300 NIC. To validate the effectiveness of deep learning models on CSI data of different platforms, we collect two new datasets using Atheros CSI Tool~\cite{xie2015precise} and our embedded IoT system~\cite{yang2018device}, namely NTU-Fi HAR and NTU-Fi Human-ID. The statistics of these datasets are summarized in Table~\ref{tab:datasets}. \textbf{UT-HAR}~\cite{yousefi2017survey} is the first public CSI dataset for human activity recognition. It consists of seven categories and is collected via Intel 5300 NIC with 3 pairs of antennas that record 30 subcarriers per pair. All the data is collected in the same environment. However, its data is collected continuously and has no golden labels for activity segmentation. Following existing works~\cite{li2021two}, the data is segmented using a sliding window, inevitably causing many repeated data among samples. Hence, though the total number of samples reaches around 5000, it is a small dataset with intrinsic drawbacks. \textbf{Widar}~\cite{zhang2021widar3} is the largest WiFi sensing dataset for gesture recognition, which is composed of 22 categories and 43K samples. It is collected via Intel 5300 NIC with $3\times3$ pairs of antennas in many distinct environments. To eliminate the environmental dependencies, the data is processed to the body-coordinate velocity profile (BVP). \textbf{NTU-Fi} is our proposed dataset for this benchmark that includes both human activity recognition (\textbf{HAR}) and human identification (\textbf{Human ID}) tasks. Different from UT-HAR and Widar, our dataset is collected using Atheros CSI Tool and has a higher resolution of subcarriers (114 per pair of antennas). Each CSI sample is perfectly segmented. For the HAR dataset, we collect the data in three different layouts. For the Human ID dataset, we collect the human walking gaits in three situations: wearing a T-shirt, a coat, or a backpack, which brings many difficulties. The NTU-Fi data is simultaneously collected in these works~\cite{yang2022efficientfi,wang2022caution} that describe the detailed layouts for data collection. \subsection{Implementation Details} We normalize the data for each dataset and implement all the aforementioned methods using the PyTorch framework~\cite{paszke2019pytorch}. To ensure the convergence, we train the UT-HAR, Widar, and NTU-Fi for 200, 100, and 30 epochs, respectively, for all the models except RNN. As the vanilla RNN is hard to converge due to the gradient vanishing, we train them for two times of the specified epochs. We use the Adam optimizer with a learning rate of 0.001, and the beta of 0.9 and 0.999. We follow the original Adam paper~\cite{kingma2014adam} to set these hyper-parameters. The ratio of training and testing splits is 8:2 for all datasets using stratified sampling. \subsection{Baselines and Criterion} We design the baseline networks of MLP, CNN, RNN, GRU, LSTM, BiLSTM, CNN+GRU, and Transformer following the experiences learned from existing works in Table~\ref{tab:survey}. The CNN-5 is modified from LeNet-5~\cite{lecun1998gradient}. We further introduce the series of ResNet~\cite{he2016deep} that have deeper layers. The transformer network is based on the vision transformer (ViT)~\cite{dosovitskiy2020image} so that each patch can contain spatial and temporal dimensions. It is found that given sufficient parameters and reasonable depth of layers, they can converge to more than 98\% in the training split. Since the data sizes of UT-HAR, Widar and NTU-Fi are different, we use a convolutional layer to map them into a unified size, which enables us to use the same network architecture. The specific network architectures for all models are illustrated in the \textit{Appendix}. To compare the baseline models, we select three classic criteria: accuracy (Acc) that evaluates the prediction ability, floating-point operations (Flops) that evaluates the computational complexity, and the number of parameters (Params) that measures the requirement of GPU memory. As WiFi sensing is usually performed on the edge, the Flops and Params also matter with limited resources. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{figures/Cross_Dataset_Accuracy.pdf} \caption{The performance comparison across four datasets.}\label{fig:cross-dataset} \end{figure} \subsection{Evaluations of Different Deep Architectures} \textbf{Overall Comparison.} We summarize the performance of all baseline models in Table~\ref{tab:overall}. On UT-HAR, the ResNet-18 achieves the best accuracy of 98.11\% and the CNN-5 achieves the second best. The shallow CNN-5 can attain good results on all datasets but the deep ResNet-18 fails to generalize on Widar, which will be explained in Section~\ref{sec:exp-analytics}. The BiLSTM yields the best performance on two NTU-Fi benchmarks. To compare these results, we visualize them in Figure~\ref{fig:cross-dataset}, from which we can conclude the observations: \begin{itemize} \item The MLP, CNN, GRU, LSTM, and Transformer can achieve satisfactory results on all benchmarks. \item The MLP, GRU, and CNN show stable and superior performances when they are compared to others. \item The very deep networks (\textit{i.e.}, the series of ResNet) can work on simple data but cannot generalize to Widar which has more categories and multiple domains. \item The RNN is worse than LSTM and GRU. \item The transformer cannot work well when only limited training data is available in NTU-Fi Human-ID. \item The models show inconsistent performances on different datasets, as the Widar dataset is much more difficult. \end{itemize} \textbf{Computational Complexity.} The Flops value shows the computational complexity of models in Table~\ref{tab:overall}. The vanilla RNN has low complexity but cannot perform well. The GRU and CNN-5 are the second-best models and simultaneously generate good results. It is also noteworthy that the ViT (transformer) has a very large computational complexity as it is composed of many MLPs for feature embedding. Since its performance is similar to that of CNN, MLP, and GRU, the transformer is not suitable for supervised learning tasks in WiFi sensing. \textbf{Model Parameters.} The number of model parameters determines how many GPU memories are occupied during inference. As shown in Table~\ref{tab:overall}, the vanilla RNN has the smallest parameter size and then is followed by the CNN-5 and CNN-GRU. The parameter sizes of CNN-5, RNN, GRU, LSTM, BiLSTM, and CNN-GRU are all small and acceptable for model inference in the edge. Considering both the Params and Acc, CNN-5, GRU, BiLSTM, and CNN-GRU are good choices for WiFi sensing. Though the model parameters can be reduced by model pruning~\cite{chen2019metaquant}, quantization~\cite{chen2019cooperative} or fine-tuning the hyper-parameters, here we only evaluate the pure models that have the minimum parameter sizes to converge in the training split. \subsection{Evaluations of Learning Schemes} Apart from supervised learning, other learning schemes are also useful for realistic applications of WiFi sensing. Here we evaluate two prevailing learning strategies on these models. \input{tables/transfer_learning} \input{figures/0.transfer_loss} \textbf{Evaluations on Transfer Learning} The transfer learning experiments are conducted on NTU-Fi. We transfer the model from the HAR to Human-ID by pre-training the model in HAR (whole dataset) and then fine-tuning a new classifier in Human-ID (training split). This simulates the situation when we train the model using massive labeled data collected in the lab, and then use a few data to realize customized tasks for users. The human activities in HAR and human gaits in Human-ID are composed of human motions, and thus the feature extractor should learn to generalize across these two tasks. We evaluate this setting for all baseline models and the results are shown in Table~\ref{tab:transfer_learning}. It is observed that the CNN feature extractor has the best transferability, achieving 96.35\% on the Human-ID task. Similar to CNN, the MLP and BiLSTM also have such capacity. However, the RNN, CNN+GRU, and ViT only achieve 57.84\%, 51.73\%, and 66.20\%, which demonstrates their weaker capacity for transfer learning. This can be caused by the overfitting phenomenon, such as the simple RNN that only memorizes the specific patterns for HAR but cannot recognize the new patterns. This can also be caused by the mechanism of feature learning. For example, the transformer (ViT) learns the connections of local patches by self-attention, but such connections are different between HAR and Human-ID. Recognizing different activities relies on the difference of a series of motions, but most human gaits are so similar that only subtle patterns can be an indicator for gait identification. \textbf{Evaluations on Unsupervised Learning} We further exploit the effectiveness of unsupervised learning for CSI feature learning. We follow the AutoFi~\cite{yang2022autofi} to construct two parallel networks and adopt the KL-divergence, mutual information, and kernel density estimation loss to train the two networks only using the CSI data. After unsupervised learning, we train the independent classifier based on the fixed parameters of the two networks. All the backbone networks are tested using the same strategy: unsupervised training on NTU-Fi HAR and supervised learning on NTU-Fi Human-ID. The evaluation is conducted on Human-ID, and the results are shown in Table~\ref{tab:unsupervised_learning}. It is shown that CNN achieves the best accuracy of 97.62\% that is followed by MLP and ViT. The results demonstrate that unsupervised learning is effective for CSI data. It yields better cross-task evaluation results than those of transfer learning, which demonstrates that unsupervised learning helps learn features with better generalization ability. CNN and MLP-based networks are more friendly for unsupervised learning of WiFi CSI data. \input{tables/self-supervised_learning} \input{figures/0.training_loss} \subsection{Analysis}\label{sec:exp-analytics} \textbf{Convergence of Deep Models.} Though all models converge eventually, their training difficulties are different and further affect their practical usage. To compare their convergence difficulties, we show the training losses of MLP, CNN-5, ViT, and RNN in terms of epochs in Figure~\ref{fig:training-loss}. It is noted that CNN converges very fast within 25 epochs for four datasets, and MLP also converges at a fast speed. The transformer requires more epochs of training since it consists of more model parameters. In comparison, RNN hardly converges on UT-HAR and Widar, and converges slower on NTU-Fi. Then we further explore the convergence of RNN-based models, including GRU, LSTM, BiLSTM, and CNN+GRU in Figure~\ref{fig:training-loss-rnn}. Though there show strong fluctuations during the training phase of GRU, LSTM, and BiLSTM, these three models can achieve much lower training loss. Especially, GRU achieves the lowest loss among all RNN-based methods. For CNN+GRU, the training phase is more stable but its convergence loss is larger than others. \textbf{How Transfer Learning Matters.} We further draw the training losses of all models on NTU-Fi Human-ID with pre-trained parameters of NTU-Fi HAR in Figure~\ref{fig:transfer-loss}. Compared to the training procedures of randomly-initialized models in Figures ~\ref{subfig:NTU-HAR-GP1} and~\ref{subfig:NTU-HAR-GP2}, the convergence can be achieved and even become much more stable. We can draw two conclusions from these results: (a) the feature extractors of these models are transferable across two similar tasks; (b) the fluctuations of training losses are caused by the feature extractor since only the classifier is trained for the transfer learning settings. \input{figures/0.widar-accuracy} \textbf{Poor Performance of Deep CNN on Widar.} In Table~\ref{tab:overall}, a noticeable phenomenon is that the ResNet-18/50/101 cannot generalize well on Widar data, only achieving 17.91\%, 19.47\%, and 14.47\%, respectively. In visual recognition, a deeper network should perform better on large-scale datasets ~\cite{he2016deep}. Then we have the question: is the degeneration of these deep models caused by underfitting or overfitting in WiFi sensing? We seek the reason by plotting their training losses in Figure~\ref{fig:widar-acc}. Figure~\ref{subfig:widar-resnet-acc} shows that even though the training accuracy has been almost 100\%, the testing accuracy remains low, under 20\%. Whereas, other networks (MLP, CNN, GRU) have similar training accuracy while the testing accuracy is increased to over 60\%. This indicates that the degrading performances of ResNets are caused by overfitting, and different domains in Widar~\cite{zhang2021widar3} might be the main reasons. This discovery tells us that very deep networks are prone to suffer from overfitting for cross-domain tasks and may not be a good choice for current WiFi sensing applications due to their performance and computational overhead. \textbf{Choices of Optimizer.} During the training phase, we find that though Adam can help models converge at a fast speed, it also leads to much training instability, especially for the very deep neural networks. In Figure~\ref{subfig:resnet-adam}, we can see that ResNet-18 converges stably but ResNet-50 and ResNet-101 have fluctuating losses every 20-30 epochs. This might be caused by the dramatically changing values of WiFi data and its adaptive learning rate of Adam~\cite{kingma2014adam}. Then we consider changing the optimizer from Adam to a more stable optimizer, Stochastic Gradient Descent (SGD). In Figure~\ref{subfig:resnet-sgd}, we find that the training procedure becomes more stable. This implies that if a very deep model is utilized in WiFi sensing, the SGD should be a better choice. If a simple model is sufficient for the sensing task, then Adam can enforce the model to converge better and faster. \input{figures/0.optimizer} \section{Discussions and Summary}\label{sec:summary} Having analyzed the empirical results and the characteristics of deep learning models for WiFi sensing, we summarize the experiences and observations that facilitate future research on model design, model training, and real world use case: \begin{itemize} \item \textbf{Model Choices.} We recommend CNN, GRU, and BiLSTM due to their high performance, low computational cost, and small parameter size. The shallow models have achieved remarkable results for activity recognition, gesture recognition, and human identification, while the very deep models confront the overfitting issue, especially for cross-domain scenarios. \item \textbf{Optimization.} We recommend using Adam or SGD optimizer. The Adam optimizer enforces the model to converge at a fast speed but sometimes it causes instability of training. When such a situation happens, the SGD is a more secure way but the hyper-parameters of SGD (\textit{i.e.}, the learning rate and momentum) need to be manually specified and tuned. \item \textbf{Advice on Transfer Learning Applications.} We recommend applying transfer learning when the task is similar to existing applications and the same CSI sensing platform is employed. The pre-trained parameters provide a good initialization and better generalization ability. CNN, MLP, and BiLSTM have superior transferability. \item \textbf{Advice on Unsupervised Learning.} We recommend applying unsupervised learning to initialize the model for similar tasks since unsupervised learning extracts more generalizable features than transfer learning. CNN, MLP, and ViT are more suitable in the unsupervised learning framework in general. \end{itemize} \section{Grand Challenges and Future Directions} Deep learning still keeps booming in many research fields and continuously empowers more challenging applications and scenarios. Based on the new progress, we look into the future directions of deep learning for WiFi sensing and summarize them as follows. \textbf{Data-efficient learning.} As CSI data is expensive to collect, data-efficient learning methods should be further explored. Existing works have utilized few-shot learning, transfer learning, and domain adaptation, which yield satisfactory results in a new environment with limited training samples. However, since the testing scenarios are simple, the transferability of these models cannot be well evaluated. In the future, meta-learning and zero-shot learning can further help learn robust features across environments and tasks. \textbf{Model compression or lightweight model design.} In the future, WiFi sensing requires real-time processing for certain applications, such as vital sign monitoring~\cite{hu2022resfi}. To this end, model compression techniques can play a crucial role, such as model pruning~\cite{chen2019cooperative}, quantization~\cite{chen2019metaquant} and distillation~\cite{yang2020mobileda}, which decreases the model size via an extra learning step. The lightweight model design is also favorable, such as the EfficientNet~\cite{tan2019efficientnet} in computer vision that is designed from scratch by balancing network depth, width, and resolution. \textbf{Multi-modal learning.} WiFi sensing is ubiquitous, cost-effective, and privacy-preserving, and can work without the effect of illumination and part of occlusion, which is complementary to the existing visual sensing technique. To achieve robust sensing 24/7, multiple modalities of sensing data should be fused using multi-modal learning. WiVi~\cite{zou2019wificv} pioneers human activity recognition by integrating WiFi sensing and visual recognition. Multi-modal learning can learn joint features from multiple modalities and make decisions by choosing reliable modalities. \textbf{Cross-modal learning.} WiFi CSI data describes the surrounding environment that can also be captured by cameras. Cross-modal learning aims to supervise or reconstruct one modality from another modality, which helps WiFi truly ``see'' the environment and visualize them in videos. Wi2Vi~\cite{kefayati2020wi2vi} manages to generate video frames by CSI data and firstly achieves cross-modal learning in WiFi sensing. The human pose is then estimated by supervising the model by the pose landmarks of OpenPose~\cite{wang2019person}. In the future, cross-modal learning may enable the WiFi model to learn from more supervisions such as radar and Lidar. \textbf{Model robustness and security for trustworthy sensing.} When deploying WiFi sensing models in the real world, the model should be secure to use. Existing works study the accuracy of models but few pay attention to the security issue. First, during the communication, the sensing data may leak the privacy of users. Second, if any adversarial attack is made on the CSI data, the modal can perform wrongly and trigger the wrong actions of smart appliances. RobustSense seeks to overcome adversarial attacks by augmentation and adversarial training~\cite{yang2022autofi}. EfficientFi proposes a variational auto-encoder to quantize the CSI for efficient and robust communication. WiFi-ADG~\cite{zhou2019adversarial} protects the user privacy by enforcing the data not recognizable by general classifiers. More works should be focused on secure WiFi sensing and establish trustworthy models for large-scale sensing, such as federated learning. \textbf{Complicated human activities and behaviors analytics.} While current methods have shown prominent recognition accuracy for single activities or gestures, human behavior is depicted by more complicated activities. For example, to indicate if a patient may have a risk of Alzheimer’s disease, the model should record the routine and analyze the anomaly activity, which is still difficult for existing approaches. Precise user behavior analysis can contribute to daily healthcare monitoring and behavioral economics. \textbf{Model interpretability for a physical explanation.} Model-based and learning-based methods develop fast but in a different ways. Recent research has investigated the interpretability of deep learning models that looks for the justifications of classifiers. In WiFi sensing, if the model is interpreted well, there may exist a connection between the data-driven model and the physical model. The modal interpretability may inspire us to develop new theories of physical models for WiFi sensing, and oppositely, the existing model (\textit{e.g.}, Fresnel Zone) may enable us to propose new learning methods based on the physical models. It is hoped that two directions of methods can be unified theoretically and practically. \section{Conclusion}\label{sec:conclusion} Deep learning methods have been proven to be effective for challenging applications in WiFi sensing, yet these models exhibit different characteristics on WiFi sensing tasks and a comprehensive benchmark is highly demanded. To this end, this work reviews the recent progress on deep learning for WiFi human sensing, and benchmarks prevailing deep neural networks and deep learning strategies on WiFi CSI data across different platforms. We summarize the conclusions drawn from the experimental observations, which provide valuable experiences for model design in practical WiFi sensing applications. Last but not least, the grand challenges and future directions are proposed to imagine the research issues emerging from future large-scale WiFi sensing scenarios. \section{Proof of the First Zonklar Equation} \bibliographystyle{IEEEtran} \section{Introduction} With the proliferation of mobile internet usage, WiFi access point (AP) has become a ubiquitous infrastructure in smart environments, ranging from commercial buildings to domestic settings. By analysing the patterns of its wireless signal, today's AP has evolved beyond a pure WiFi router, but is also widely used as a type of 'sensor device' to enable new services for human sensing. Particularly, recent studies have found that WiFi signals in the form of Channel State Information (CSI)~\cite{halperin2011tool,xie2015precise} are extremely promising for a variety of device-free human sensing tasks, such as occupancy detection~\cite{zou2017non}, activity recognition~\cite{wang2014eyes,zou2018deepsense,yang2018carefi,zou2019wificv}, fall detection~\cite{wang2016rt}, gesture recognition~\cite{yang2019learning,zou2018gesture}, human identification~\cite{zou2018identification,wang2022caution}, and people counting~\cite{zou2018device,FreeCount}. Unlike the coarse-grained received signal strengths, WiFi CSI records more fine-grained information about how a signal propagates between WiFi devices and how a signal is reflected from the surrounding environment in which humans move around. On the other side, as WiFi signals (2.4GHz or 5GHz) lie in the non-visible band of the electromagnetic spectrum, WiFi CSI based human sensing is intrinsically more privacy-friendly than cameras and draws increasing attention from both academia and industry. Motivated by increasing interests needs, a new WiFi standard, 802.11bf by the IEEE 802.11bf Task Group (TGbf) will amend the current WiFi standard both at the Medium Access Control (MAC) and Physical Layer (PHY) to officially include WiFi sensing as part of a regular WiFi service by late 2024~\cite{802.11bf}. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{figures/principle.pdf} \caption{The technical contributions and summary of SenseFi.}\label{fig:principle} \end{figure} Existing WiFi sensing methods can be categorized into model-based methods and learning-based methods. Model-based methods rely on physical models that describe the WiFi signals propagation, such as Fresnel Zone~\cite{wu2017device}. Model based methods help us understand the underlying mechanism of WiFi sensing and design sensing methods for periodic or single motions, such as respiration \cite{wang2016human,8210837,9341474} and falling down~\cite{wang2016rt,9322323,9716074}. Nevertheless, model based methods fall short when it comes to the complicated human activities that consist of a series of different motions. For example, a human gait comprises the synergistic movements of arms, legs and bodies, the differences of which are hard to depict by physical models. In contrast, by feeding a massive amount of data into machine learning \cite{yang2018device} or deep learning networks, \cite{yang2019learning,zou2018deepsense}, learning based achieve remarkable performances in complicated sensing tasks. Various deep neural networks are designed to enable many applications including activity recognition \cite{zou2017multiple}, gesture recognition \cite{yang2019learning}, human identification \cite{zou2018identification,wang2022caution,zhang2020gate}, and people counting \cite{zou2018device, FreeCount}. Though deep learning models have a strong ability of function approximation, they require tremendous labeled data that is expensive to collect and suffer from the negative effect of distribution shift caused by environmental dynamics \cite{zou2018robust}. Most state-of-the-art deep learning models are developed for computer vision \cite{voulodimos2018deep} and natural language processing tasks \cite{otter2020survey}, which demonstrates the capacity of processing high-dimensional and multi-modal data problems. These approaches inspire the deep learning applications in WiFi sensing in terms of data preprocessing, network design, and learning objectives. It is seen that more and more deep models \cite{bu2021transfersense,zhang2018crosssense} for WiFi sensing come into existence and overcome the aforementioned obstacles that traditional statistical learning methods cannot address. However, current works mainly aim to achieve high accuracy on specific sensing tasks by tailoring deep neural networks but do not explore the intrinsic tension between various deep learning models and distinct WiFi sensing data collected by different devices and CSI tools. It is unclear if the remarkable results of a WiFi sensing research paper come from the deep model design or the WiFi platform. Hence, there still exist some significant gaps between current deep learning and WiFi sensing research: (i) how to customize a deep neural network for a WiFi sensing task by integrating prevailing network modules (\textit{e.g.}, fully-connected layer, convolutional layer, recurrent neural unit, transformer block) into one synergistic framework? (ii) how do the prevailing models perform when they are compared fairly on multiple WiFi sensing platforms and data modalities? (iii) how to achieve a trade-off between recognition accuracy and efficiency? To answer these questions, we propose SenseFi, a benchmark and model zoo library for WiFi CSI sensing using deep learning. Firstly, we introduce the prevalent deep learning models, including multilayer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), variants of RNN, CSI transformers, and CNN-RNN, and summarize how they are effective for CSI feature learning and WiFi sensing tasks. Then we investigate and benchmark these models on three WiFi human activity recognition data that consists of both raw CSI data and processed data collected by Intel 5300 CSI tool \cite{halperin2011tool} and Atheros CSI tool \cite{xie2015precise,yang2018device}. The accuracy and efficiency of these models are compared and discussed to show their viability for real-world applications. We further investigate how different WiFi sensing tasks can benefit each other by transfer learning, and how unsupervised learning can be used to exploit features without labels, reducing the annotation cost. These features are summarized in Figure~\ref{fig:principle}. All the source codes are written into one library so that the researchers can develop and evaluate their models conveniently. As such, the contributions are summarized as follows: \begin{itemize} \item We analyze and summarize how the widespread deep learning models in computer vision and natural language processing benefit WiFi sensing in terms of network structure and feature extraction. \item We select two public datasets (UT-HAR \cite{yousefi2017survey} and Widar \cite{zhang2021widar3}) and collect two new datasets (NTU-Fi HAR and Human-ID) using different CSI platforms, which allows us to benchmark the deep learning methods and evaluate their feasibility for WiFi sensing. \item We explore the transfer learning scheme that transfers knowledge across different sensing tasks, and benchmark it across all models. \item We investigate the unsupervised learning scheme that contrastively learns the feature extractor without data annotation, and benchmark it across all models. \item We develop the \textbf{SenseFi} library and open-source the benchmarking codes. To the best of our knowledge, this is the first work that benchmarks advanced deep models and learning schemes for WiFi sensing, which provides comprehensive and significant evidence and tools for future research. \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec:pre} introduces the fundamental knowledge on WiFi sensing and CSI data. Then we introduce the prevalent deep learning models and how they are applied to WiFi sensing in Section~\ref{sec:dl-models}. The empirical study is detailed in Section~\ref{sec:empirical-study}, and then the summaries and discussions are made in Section~\ref{sec:summary}. Finally, the paper is concluded in Section~\ref{sec:conclusion}. \section{Preliminaries of WiFi Sensing}\label{sec:pre} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figures/CSI_samples.pdf} \caption{The CSI samples of three human activities in NTU-Fi, collected by Atheros CSI Tool.}\label{fig:csi-samples} \end{figure*} \subsection{Channel State Information} In WiFi communication, channel state information reflects how wireless signals propagate in a physical environment after diffraction, reflections, and scattering, which describes the channel properties of a communication link. For modern wireless communication networks following the IEEE 802.11 standard, Multiple-Input Multiple-Output (MIMO) and Orthogonal Frequency Division Multiplexing (OFDM) at the physical layer contribute to increasing data capacity and better orthogonality in transmission channels affected by multi-path propagation. As a result, current WiFi APs usually have multiple antennas with many subcarriers for OFDM. For a pair of transmitter and receiver antennas, CSI describes the phase shift of multi-path and amplitude attenuation on each subcarrier. Compared to received signal strength, CSI data has better resolutions for sensing and can be regarded as ``WiFi images'' for the environment where WiFi signals propagate. Specifically, the Channel Impulse Response (CIR) $h(\tau)$ of the WiFi signals is defined in the frequency domain: \begin{equation} h(\tau)=\sum_{l=1}^{L}\alpha_l e^{j\phi_l} \delta(\tau-\tau_l), \end{equation} where $\alpha_l$ and $\phi_l$ denote the amplitude and phase of the $l$-th multi-path component, respectively, $\tau_l$ is the time delay, $L$ denotes the number of multi-paths and $\delta(\tau)$ is the Dirac delta function. To estimate the CIR, the OFDM receiver samples the signal spectrum at subcarrier level in the realistic implementation, which represents amplitude attenuation and phase shift via complex number. In WiFi sensing, the CSI recording functions are realized by specific tools \cite{halperin2011tool,xie2015precise}. The estimation can be represented by: \begin{equation} H_i=||H_i||e^{j \angle H_i} \end{equation} where $||H_i||$ and $\angle H_i$ are the amplitude and phase of $i$-th subcarrier, respectively. \subsection{CSI Tools and Platforms} The number of subcarriers is decided by the bandwidth and the tool. The more subcarriers one has, the better resolution the CSI data is. Existing CSI tools include Intel 5300 NIC \cite{halperin2011tool}, Atheros CSI Tool \cite{xie2015precise} and Nexmon CSI Tool \cite{nexmon2019tool}, and many realistic sensing platforms are built on them. The Intel 5300 NIC is the most commonly used tool, which is the first released CSI tool. It can record 30 subcarriers for each pair of antennas running with 20MHz bandwidth. Atheros CSI Tool increases the CSI data resolution by improving the recording CSI to 56 subcarriers for 20MHz and 114 subcarriers for 40MHz, which has been widely used for many applications \cite{zou2018deepsense,yang2018device,yang2018carefi,yang2019learning,yang2022efficientfi}. The Nexmon CSI Tool firstly enables CSI recording on smartphones and Raspberry Pi, and can capture 256 subcarriers for 80MHz. However, past works~\cite{sharma2021passive,schafer2021human} show that their CSI data is quite noisy, and there do not exist common datasets based on Nexmon. In this paper, we only investigate the effectiveness of the deep learning models trained on representative CSI data from the widely-used Intel 5300 NIC and Atheros CSI Tool. \subsection{CSI Data Transformation and Cleansing} In general, the CSI data consists of a vector of complex number including the amplitude and phase. The question is how we process these data for the deep models of WiFi sensing? We summarize the answers derived from existing works: \begin{enumerate} \item \textbf{Only use the amplitude data as input.} As the raw phases from a single antenna are randomly distributed due to the random phase offsets \cite{liu2020human}, the amplitude of CSI is more stable and suitable for WiFi sensing. A simple denoising scheme is enough to filter the high-frequency noise of CSI amplitudes, such as the wavelet denoising \cite{yang2018device}. This is the most common practice for most WiFi sensing applications. \item \textbf{Use the CSI difference between antennas for model-based methods.} Though the raw phases are noisy, the phase difference between two antennas is quite stable \cite{yang2019learning}, which can better reflect subtle gestures than amplitudes. Then the CSI ratio \cite{zeng2019farsense} is proposed to mitigate the noise by the division operation and thus increases the sensing range. These techniques are mostly designed for model-based solutions as they require clean data for selecting thresholds. \item \textbf{Use the processed doppler representation of CSI.} To eliminate the environmental dependency of CSI data, the body-coordinate velocity profile (BVP) is proposed to simulate the doppler feature \cite{zhang2021widar3} that only reflects human motions. \end{enumerate} In our benchmark, as we focus on the learning-based methods, we choose the most common data modality (\textit{i.e.,} amplitude only) and the novel BVP modality that is domain-invariant. \subsection{How Human Activities Affect CSI} As shown in Figure~\ref{fig:csi-samples}, the CSI data for human sensing is composed of two dimensions: the subcarrier and the packet number (\textit{i.e.}, time duration). For each packet or timestamp $t$, we have $X_t=N_T\times N_R \times N_{sub}$ where $N_T$, $N_R$ and $N_{sub}$ denote the number of transmitter antennas, receiver antennas and subcarriers per antenna, respectively. This can be regarded as a ``CSI image'' for the surrounding environment at time $t$. Then along with subsequent timestamps, the CSI images form a ``CSI video'' that can describe human activity patterns. To connect CSI data with deep learning models, we summarize the data properties that serve for a better understanding of deep model design: \begin{enumerate} \item \textbf{Subcarrier dimension $\to$ spatial features.} The values of many subcarriers can represent how the signal propagates after diffraction, reflections, and scattering, and thus describe the spatial environment. These subcarriers are seen as an analogy for image pixels, from which \textit{convolutional layers} can extract spatial features \cite{lecun2015deep}. \item \textbf{Time dimension $\to$ temporal features.} For each subcarrier, its temporal dynamics indicate an environmental change. In deep learning, the temporal dynamics are usually modeled by \textit{recurrent neural networks} \cite{schuster1997bidirectional}. \item \textbf{Antenna dimension $\to$ resolution and channel features.} As each antenna captures a different propagation path of signals, it can be regarded as a channel in deep learning that is similar to RGB channels of an image. If only one pair of antennas exists, then the CSI data is similar to a gray image with only one channel. Hence, the more antennas we have, the higher resolution the CSI has. The antenna features should be processed separately in convolutional layers or recurrent neurons. \end{enumerate} \section{Deep Learning Models for WiFi Sensing}\label{sec:dl-models} Deep learning enables models composed of many processing layers to learn representations of data, which is a branch of machine learning~\cite{lecun2015deep}. Compared to classic statistical learning that mainly leverages handcrafted features designed by humans with prior knowledge, deep learning aims to extract features automatically by learning massive labeled data and optimizing the model by back-propagation. The theories of deep learning were developed in the 1980s but they were not attractive due to the need of enormous computational resources. With the development of graphical processing units (GPUs), deep learning techniques have become affordable, and has been widely utilized in computer vision~\cite{voulodimos2018deep}, natural language processing~\cite{otter2020survey}, and interdisciplinary research~\cite{chen2021deep}. A standard classification model in deep learning is composed of a feature extractor and a classifier. The classifier normally consists of several fully-connected layers and can perform well, while the design of the feature extractor is the key to the success. Extensive works explore a large number of deep architectures for feature extractors, and each of them has specific advantages for one type of data. The deep learning models for WiFi sensing are built on these prevailing architectures to extract patterns of human motions. We summarize the latest works on deep models for WiFi sensing in Table~\ref{tab:survey}, and it is observed that the networks of these works are quite similar. In the following, we introduce these key architectures and how they are applied to WiFi sensing tasks. To better instantiate these networks, we define the CSI data $x\in \mathbb{R}^{N_s\times T}$ where $N_s$ denotes the total number of subcarriers across all antenna pairs, and $T$ denotes the duration. The deep learning model $f(\cdot)$ aims to map the data to the corresponding label: $y=f(x)$. Denote $\Phi_i(\cdot)$ and $z_i$ as the $i$-th layer of the deep model and the feature of the $i$-th layer. Apart from the illustration, we visualize the intuition of how to feed these CSI data into various networks in Figure~\ref{fig:framework}. \begin{figure*}[h] \centering \includegraphics[width=0.75\textwidth]{figures/Models.pdf} \caption{The illustration of how CSI data is processed by MLP, CNN, RNN and Transformer.} \label{fig:framework} \end{figure*} \subsection{Multilayer Perceptron} Multilayer perceptron (MLP) \cite{gardner1998artificial} is one of the most classic architectures and has played the classifier role in most deep classification networks. It normally consists of multiple fully-connected layers followed by activation functions. The first layer is termed the input layer that transforms the input data into the hidden latent space, and after several hidden layers, the last layer maps the latent feature into the categorical space. Each layer is calculated as \begin{equation} \Phi_i(z_{i-1})=\sigma(W_i z_{i-1}), \end{equation} where $W_i$ is the parameters of $\Phi_i$, and $\sigma(\cdot)$ is the activation function that aims to increase the non-linearity for MLP. The input CSI has to be flattened to a vector and then fed into the MLP, such that $x\in \mathbb{R}^{N_s T}$. Such a process mixes the spatial and temporal dimensions and damages the intrinsic structure of CSI data. Despite this, the MLP can still work with massive labeled data, because the MLP has a fully-connected structure with a large number of parameters, yet leading to slow convergence and huge computational costs. Therefore, though the MLP shows satisfactory performance, stacking many layers in MLP is not common for feature learning, which makes MLP usually serve as a classifier. \subsection{Convolutional Neural Network} Convolutional neural network (CNN) was firstly proposed for image recognition tasks by LeCun~\cite{lecun1998gradient}. It addresses the drawbacks of MLP by weight sharing and spatial pooling. CNN models have achieved remarkable performances in classification problems of 2D data in computer vision~\cite{khan2020survey,wang2019kervolutional} and sequential data in speech recognition~\cite{abdel2012applying} and natural language processing~\cite{yin2017comparative}. CNN learns features by stacking convolutional kernels and spatial pooling operations. The convolution operation refers to the dot product between a filter $\mathbf{k}\in\mathbb{R}^d$ and an input vector $\mathbf{v}\in\mathbb{R}^d$, defined as follows: \begin{equation} \mathbf{k} \otimes \mathbf{v}=\sigma(\mathbf{k}^T \mathbf{v}). \end{equation} The pooling operation is a down-sampling strategy that calculates the maximum (max pooling) or mean (average pooling) inside a kernel. The CNNs normally consist of several convolutional layers, max-pooling layers, and the MLP classifier. Generally speaking, increasing the depth of CNNs can lead to better model capacity. Nevertheless, when the depth of CNN is too large (\textit{e.g.}, greater than 20 layers), the gradient vanishing problem leads to degrading performance. Such degradation is addressed by ResNet~\cite{he2016deep}, which uses the residual connections to reduce the difficulty of optimization. In WiFi sensing, the convolution kernel can operate on a 2D patch of CSI data (\textit{i.e.}, Conv1D) that includes a spatial-temporal feature, or on a 1D patch of each subcarrier of CSI data (\textit{i.e.}, Conv2D). For Conv2D, a 2D convolution kernel $\mathbf{k}_{2D}\in\mathbb{R}^{h\times w}$ operates on all patches of the CSI data via the sliding window strategy to obtain the output of the feature map, while the Conv1D only extracts the spatial feature along the subcarrier dimension. The Conv2D can be applied independently as it considers both spatial and temporal features, while the Conv1D is usually used with other temporal feature learning methods. To enhance the capacity of CNN, multiple convolution kernels with a random initialization process are used. The advantages of CNNs for WiFi sensing consist of fewer training parameters and the preservation of the subcarrier and time dimension in CSI data. However, the disadvantage is that CNN has an insufficient receptive field due to the limited kernel size and thus fails to capture the dependencies that exceed the kernel size. Another drawback is that CNN stack all the feature maps of kernels equally, which has been revamped by an attention mechanism that assigns different weights in the kernel or spatial level while stacking features. These techniques have been successfully used in WiFi sensing~\cite{xue2020deepmv,ding2022wi,9275362,9721516}. \subsection{Recurrent Neural Network} Recurrent neural network (RNN) is one of the deepest network architectures that can memorize arbitrary-length sequences of input patterns. The unique advantage of RNN is that it enables multiple inputs and multiple outputs, which makes it very effective for time sequence data, such as video \cite{yang2018deep} and CSI \cite{zou2018deepsense,8918311,8761445}. Its principle is to create internal memory to store historical patterns, which are trained via back-propagation through time \cite{lipton2015rnnsurvey}. For a CSI sample $x$, we denote a CSI frame at the $t$ as $x_t \in \mathbb{R}^{N_s}$. The vanilla RNN uses two sharing matrices $W_x,W_h$ to generate the hidden state $h_t$: \begin{equation} h_t = \sigma(W_x x_t+W_h h_{t-1}), \end{equation} where the activation function $\sigma(\cdot)$ is usually Tanh or Sigmoid functions. RNN is designed to capture temporal dynamics, but it suffers from the vanishing gradient problem during back-propagation and thus cannot capture long-term dependencies of CSI data. \subsection{Variants of RNN (LSTM)} To tackle the problem of long-term dependencies of RNN, Long-short term memory (LSTM) \cite{hochreiter1997long} is proposed by designing several gates with varying purposes and mitigating the gradient instability during training. The standard LSTM sequentially updates a hidden sequence by a memory cell that contains four states: a memory state $c_t$, an output gate $o_t$ that controls the effect of output, an input gate $i_t$ and a forget gate $f_t$ that decides what to preserve and forget in the memory. The LSTM is parameterized by weight matrices $W_i,W_f,W_c,W_o,U_i,U_f,U_c,U_o$ and biases $b^i,b^f,b^c,b^o$, and the whole update is performed at each $t \in \{ 1,...,T\}$: \begin{align} & i_t=\sigma(W_i x_t + U_i h_{t-1}+b^i), \\ & f_t=\sigma(W_f x_t + U_f h_{t-1}+b^f), \\ & \tilde{c_t}=tanh(W_c x_t + U_c h_{t-1}+b^c), \\ & c_t=i_t \odot \tilde{c_t} + f_t \odot c_{t-1}, \\ & o_t=\sigma(W_o x_t + U_o h_{t-1} + b^o), \\ & h_t = o_t \odot tanh(c_t), \end{align} where $\sigma$ is a Sigmoid function. Apart from the LSTM cell \cite{9722627,9532548,9384510}, the multi-layer and bi-directional structure further boost the model capacity. The bidirectional LSTM (BiLSTM) model processes the sequence in two directions and concatenates the features of the forward input $\grave{x}$ and backward input $\acute{x}$. It has been proven that BiLSTM shows better results than LSTM in \cite{chen2018wifi,9641123}. \subsection{Recurrent Convolutional Neural Network} Though LSTM addresses the long-term dependency, it leads to a large computation overhead. To overcome this issue, Gated Recurrent Unit (GRU) is proposed. GRU combines the forget gate and input gate into one gate, and does not employ the memory state in LSTM, which simplifies the model but can still capture long-term dependency. GRU is regarded as a simple yet effective version of LSTM. Leveraging the simple recurrent network, we can integrate the Conv1D and GRU to extract spatial and temporal features, respectively. \cite{dua2021multi,zhang2021widar3} show that CNN-GRU is effective for human activity recognition. In WiFi sensing, DeepSense~\cite{zou2018deepsense} proposes Conv2D with LSTM for human activity recognition. SiaNet~\cite{yang2019learning} proposes Conv1D with BiLSTM for gesture recognition. As they perform quite similarly, we use CNN-GRU with fewer parameters in this paper for the benchmark. \subsection{Transformer} Transformer~\cite{vaswani2017attention} was firstly proposed for NLP applications to extract sequence embeddings by exploiting attention of words, and then it was extended to the computer vision field where each patch is regarded as a word and one image consists of many patches~\cite{dosovitskiy2020image}. The vanilla consists of an encoder and a decoder to perform machine translation, and only the encoder is what we need. The transformer block is composed of a multi-head attention layer, a feed-forward neural network (MLP), and layer normalization. Since MLP has been explained in previous section, we mainly introduce the attention mechanism in this section. For a CSI sample $x$, we first divide it into $P$ patches $x_p \in \mathbb{R}^{h\times w}$, of which each patch has contained spatial-temporal features. Then these patches are concatenated and added by positional embeddings that infer the spatial position of patches, which makes the input matrix $v\in \mathbb{R}^{d_k}$ where $d_k=P\times hw$. This matrix is transformed into three different matrices via linear embedding: the query $Q$, the key $K$, and the value $V$. The self-attention process is calculated by \begin{equation} \text{Attention}(Q,K,V)=\text{softmax}(\frac{Q\cdot K^T}{\sqrt{d_k}})\cdot V. \end{equation} Intuitively, such a process calculates the attention of any two patches via dot product, \textit{i.e.}, cosine similarity, and then the weighting is performed with normalization to enhance gradient stability for improved training. Multi-head attention just repeats the self-attention several times and enhances the diversity of attentions. The transformer architecture can interconnect with every patch of CSI, which makes it strong if given sufficient training data, such as THAT~\cite{li2021two}. However, transformer has a great number of parameters that makes the training cost expensive, and enormous labeled CSI data is hard to collect, which makes transformers not really attractive for the supervised learning. \subsection{Generative Models} Different from the aforementioned discriminative models that mainly conducts classification, generative models aim to capture the data distribution of CSI. Generative Adversarial Network (GAN) \cite{goodfellow2014generative} is a classic generative model that learns to generate real-like data via an adversarial game between a generative network and a discriminator network. In WiFi sensing, GAN helps deal with the environmental dependency by generating labeled samples in the new environment from the well-trained environment~\cite{xiao2019csigan,wang2021multimodal}. GAN also inspires domain-adversarial training that enables deep models to learn domain-invariant representations for the training and real-world testing environments \cite{zou2018robust,yang2021robust,yang2020mind,xu2021partial}. Variational network~\cite{kingma2013auto} is another common generative model that maps the input variable to a multivariate latent distribution. Variational autoencoder learns the data distribution by a stochastic variational inference and learning algorithm~\cite{kingma2013auto}, which has been used in CSI-based localization~\cite{kim2021multiview,chen2020fido} and CSI compression~\cite{yang2022efficientfi}. For instance, EfficientFi~\cite{yang2022efficientfi} leverages the quantized variational model to compress the CSI transmission data for large-scale WiFi sensing in the future. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{figures/Learning_strategy.pdf} \caption{The illustration of the learning strategies.} \label{fig:strategies} \end{figure} \section{Learning Methods for Deep WiFi Sensing Models} Traditional training of deep models relies on supervised learning with massive labeled data, but the data collection and annotation is a bottleneck in the realistic WiFi sensing applications. For example, to recognize human gestures, we may need the volunteers to perform gestures for a hundred times, which is not realistic. In this section, as shown in Figure~\ref{fig:strategies}, we illustrate the learning methods and how they contribute to WiFi sensing in the real world. \textbf{Supervised Learning} is an approach to training deep models using input data that has been labeled for a particular output. It is the most common learning strategy in current WiFi sensing works \cite{yousefi2017survey,zou2018deepsense,wang2018spatial,zou2019wificv}. They usually adopt cross-entropy loss between the ground truth label and the prediction for model optimization. Though supervised learning is easy to implement and achieves high performance for many tasks, its requirement of tremendous labeled data hinders its pervasive realistic applications. \textbf{Few-shot Learning} is a data-efficient learning strategy that only utilizes several samples of each category for training. This is normally achieved by contrastive learning or prototypical learning. It is firstly exploited for WiFi sensing in SiaNet~\cite{yang2019learning} that proposes a Siamese network for few-shot learning. Subsequent works~\cite{gu2021wione,wang2022caution} extend prototypical networks from visual recognition to WiFi sensing, also achieving good recognition results. Specially, when only one sample for each class is employed for training, we term it as one-shot learning. As only a few samples are required, few-shot learning contributes to WiFi-based gesture recognition and human identification in practice. \textbf{Transfer Learning} aims to transfer knowledge from one domain to another domain~\cite{tlsurvey}. When the two domains are similar, we pretrain the model on one domain and fine-tune the model in a new environment, which can lead to significant performance. When the two domains are distinct, such as the different environments of CSI data, the distribution shift hinders the performance so domain adaptation should be adopted. Domain adaptation is a category of semi-supervised learning that mitigates the domain shift for transfer learning. Cross-domain scenarios are quite common in WiFi sensing scenarios since the CSI data is highly dependent on the training environment. Many works have been developed to deal with this problem~\cite{zou2018robust,jiang2018towards,wang2021multimodal,8793019,9763693}. \textbf{Unsupervised Learning} aims to learn data representations without any labels. Then the feature extractor can facilitate down-streaming tasks by training a specific classifier. From the experience of visual recognition tasks~\cite{grill2020bootstrap}, unsupervised learning can even enforce the model to gain better generalization ability since the model is not dependent on any specific tasks. Current unsupervised learning models are based on self-supervised learning~\cite{wang2021self}. Despite its effectiveness, the unsupervised learning has not been well exploited in WiFi sensing, and only AutoFi is developed to enable model initialization for automatic user setup in WiFi sensing applications ~\cite{yang2022autofi}. \textbf{Ensemble Learning} uses multiple models to obtain better predictive performance~\cite{sagi2018ensemble}. The ensemble process can operate on feature level or prediction level. Feature-level ensemble concatenates the features from multiple models and one final classifier is trained. Prediction-level ensemble is more common, usually referring to voting or probability addition. Ensemble learning can increase the performance but the computation overhead also explodes by multiple times. CrossSense~\cite{jiang2018towards} develops a mixture-of-experts approach and only chooses the appropriate expert for a specific input, addressing the computation cost. In this paper, we empirically explore the effectiveness of supervised learning, transfer learning and unsupervised learning for WiFi CSI data, as they are the most commonly used learning strategies in WiFi sensing applications. \section{Empirical Studies of Deep Learning in WiFi Sensing: A Benchmark}\label{sec:empirical-study} In this section, we conduct an empirical study of the aforementioned deep learning models on WiFi sensing data and firstly provide the benchmarks with open-source codes in \url{http://www.github.com/}. The four datasets are illustrated first, and then we evaluate the deep models on these datasets in terms of three learning strategies. Eventually, some detailed analytics are conducted on the convergence of optimization, network depth, and network selection. \subsection{Datasets} We choose two public CSI datasets (UT-HAR~\cite{yousefi2017survey} and Widar~\cite{zhang2021widar3}) collected using Intel 5300 NIC. To validate the effectiveness of deep learning models on CSI data of different platforms, we collect two new datasets using Atheros CSI Tool~\cite{xie2015precise} and our embedded IoT system~\cite{yang2018device}, namely NTU-Fi HAR and NTU-Fi Human-ID. The statistics of these datasets are summarized in Table~\ref{tab:datasets}. \textbf{UT-HAR}~\cite{yousefi2017survey} is the first public CSI dataset for human activity recognition. It consists of seven categories and is collected via Intel 5300 NIC with 3 pairs of antennas that record 30 subcarriers per pair. All the data is collected in the same environment. However, its data is collected continuously and has no golden labels for activity segmentation. Following existing works~\cite{li2021two}, the data is segmented using a sliding window, inevitably causing many repeated data among samples. Hence, though the total number of samples reaches around 5000, it is a small dataset with intrinsic drawbacks. \textbf{Widar}~\cite{zhang2021widar3} is the largest WiFi sensing dataset for gesture recognition, which is composed of 22 categories and 43K samples. It is collected via Intel 5300 NIC with $3\times3$ pairs of antennas in many distinct environments. To eliminate the environmental dependencies, the data is processed to the body-coordinate velocity profile (BVP). \textbf{NTU-Fi} is our proposed dataset for this benchmark that includes both human activity recognition (\textbf{HAR}) and human identification (\textbf{Human ID}) tasks. Different from UT-HAR and Widar, our dataset is collected using Atheros CSI Tool and has a higher resolution of subcarriers (114 per pair of antennas). Each CSI sample is perfectly segmented. For the HAR dataset, we collect the data in three different layouts. For the Human ID dataset, we collect the human walking gaits in three situations: wearing a T-shirt, a coat, or a backpack, which brings many difficulties. The NTU-Fi data is simultaneously collected in these works~\cite{yang2022efficientfi,wang2022caution} that describe the detailed layouts for data collection. \subsection{Implementation Details} We normalize the data for each dataset and implement all the aforementioned methods using the PyTorch framework~\cite{paszke2019pytorch}. To ensure the convergence, we train the UT-HAR, Widar, and NTU-Fi for 200, 100, and 30 epochs, respectively, for all the models except RNN. As the vanilla RNN is hard to converge due to the gradient vanishing, we train them for two times of the specified epochs. We use the Adam optimizer with a learning rate of 0.001, and the beta of 0.9 and 0.999. We follow the original Adam paper~\cite{kingma2014adam} to set these hyper-parameters. The ratio of training and testing splits is 8:2 for all datasets using stratified sampling. \subsection{Baselines and Criterion} We design the baseline networks of MLP, CNN, RNN, GRU, LSTM, BiLSTM, CNN+GRU, and Transformer following the experiences learned from existing works in Table~\ref{tab:survey}. The CNN-5 is modified from LeNet-5~\cite{lecun1998gradient}. We further introduce the series of ResNet~\cite{he2016deep} that have deeper layers. The transformer network is based on the vision transformer (ViT)~\cite{dosovitskiy2020image} so that each patch can contain spatial and temporal dimensions. It is found that given sufficient parameters and reasonable depth of layers, they can converge to more than 98\% in the training split. Since the data sizes of UT-HAR, Widar and NTU-Fi are different, we use a convolutional layer to map them into a unified size, which enables us to use the same network architecture. The specific network architectures for all models are illustrated in the \textit{Appendix}. To compare the baseline models, we select three classic criteria: accuracy (Acc) that evaluates the prediction ability, floating-point operations (Flops) that evaluates the computational complexity, and the number of parameters (Params) that measures the requirement of GPU memory. As WiFi sensing is usually performed on the edge, the Flops and Params also matter with limited resources. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{figures/Cross_Dataset_Accuracy.pdf} \caption{The performance comparison across four datasets.}\label{fig:cross-dataset} \end{figure} \subsection{Evaluations of Different Deep Architectures} \textbf{Overall Comparison.} We summarize the performance of all baseline models in Table~\ref{tab:overall}. On UT-HAR, the ResNet-18 achieves the best accuracy of 98.11\% and the CNN-5 achieves the second best. The shallow CNN-5 can attain good results on all datasets but the deep ResNet-18 fails to generalize on Widar, which will be explained in Section~\ref{sec:exp-analytics}. The BiLSTM yields the best performance on two NTU-Fi benchmarks. To compare these results, we visualize them in Figure~\ref{fig:cross-dataset}, from which we can conclude the observations: \begin{itemize} \item The MLP, CNN, GRU, LSTM, and Transformer can achieve satisfactory results on all benchmarks. \item The MLP, GRU, and CNN show stable and superior performances when they are compared to others. \item The very deep networks (\textit{i.e.}, the series of ResNet) can work on simple data but cannot generalize to Widar which has more categories and multiple domains. \item The RNN is worse than LSTM and GRU. \item The transformer cannot work well when only limited training data is available in NTU-Fi Human-ID. \item The models show inconsistent performances on different datasets, as the Widar dataset is much more difficult. \end{itemize} \textbf{Computational Complexity.} The Flops value shows the computational complexity of models in Table~\ref{tab:overall}. The vanilla RNN has low complexity but cannot perform well. The GRU and CNN-5 are the second-best models and simultaneously generate good results. It is also noteworthy that the ViT (transformer) has a very large computational complexity as it is composed of many MLPs for feature embedding. Since its performance is similar to that of CNN, MLP, and GRU, the transformer is not suitable for supervised learning tasks in WiFi sensing. \textbf{Model Parameters.} The number of model parameters determines how many GPU memories are occupied during inference. As shown in Table~\ref{tab:overall}, the vanilla RNN has the smallest parameter size and then is followed by the CNN-5 and CNN-GRU. The parameter sizes of CNN-5, RNN, GRU, LSTM, BiLSTM, and CNN-GRU are all small and acceptable for model inference in the edge. Considering both the Params and Acc, CNN-5, GRU, BiLSTM, and CNN-GRU are good choices for WiFi sensing. Though the model parameters can be reduced by model pruning~\cite{chen2019metaquant}, quantization~\cite{chen2019cooperative} or fine-tuning the hyper-parameters, here we only evaluate the pure models that have the minimum parameter sizes to converge in the training split. \subsection{Evaluations of Learning Schemes} Apart from supervised learning, other learning schemes are also useful for realistic applications of WiFi sensing. Here we evaluate two prevailing learning strategies on these models. \input{tables/transfer_learning} \input{figures/0.transfer_loss} \textbf{Evaluations on Transfer Learning} The transfer learning experiments are conducted on NTU-Fi. We transfer the model from the HAR to Human-ID by pre-training the model in HAR (whole dataset) and then fine-tuning a new classifier in Human-ID (training split). This simulates the situation when we train the model using massive labeled data collected in the lab, and then use a few data to realize customized tasks for users. The human activities in HAR and human gaits in Human-ID are composed of human motions, and thus the feature extractor should learn to generalize across these two tasks. We evaluate this setting for all baseline models and the results are shown in Table~\ref{tab:transfer_learning}. It is observed that the CNN feature extractor has the best transferability, achieving 96.35\% on the Human-ID task. Similar to CNN, the MLP and BiLSTM also have such capacity. However, the RNN, CNN+GRU, and ViT only achieve 57.84\%, 51.73\%, and 66.20\%, which demonstrates their weaker capacity for transfer learning. This can be caused by the overfitting phenomenon, such as the simple RNN that only memorizes the specific patterns for HAR but cannot recognize the new patterns. This can also be caused by the mechanism of feature learning. For example, the transformer (ViT) learns the connections of local patches by self-attention, but such connections are different between HAR and Human-ID. Recognizing different activities relies on the difference of a series of motions, but most human gaits are so similar that only subtle patterns can be an indicator for gait identification. \textbf{Evaluations on Unsupervised Learning} We further exploit the effectiveness of unsupervised learning for CSI feature learning. We follow the AutoFi~\cite{yang2022autofi} to construct two parallel networks and adopt the KL-divergence, mutual information, and kernel density estimation loss to train the two networks only using the CSI data. After unsupervised learning, we train the independent classifier based on the fixed parameters of the two networks. All the backbone networks are tested using the same strategy: unsupervised training on NTU-Fi HAR and supervised learning on NTU-Fi Human-ID. The evaluation is conducted on Human-ID, and the results are shown in Table~\ref{tab:unsupervised_learning}. It is shown that CNN achieves the best accuracy of 97.62\% that is followed by MLP and ViT. The results demonstrate that unsupervised learning is effective for CSI data. It yields better cross-task evaluation results than those of transfer learning, which demonstrates that unsupervised learning helps learn features with better generalization ability. CNN and MLP-based networks are more friendly for unsupervised learning of WiFi CSI data. \input{tables/self-supervised_learning} \input{figures/0.training_loss} \subsection{Analysis}\label{sec:exp-analytics} \textbf{Convergence of Deep Models.} Though all models converge eventually, their training difficulties are different and further affect their practical usage. To compare their convergence difficulties, we show the training losses of MLP, CNN-5, ViT, and RNN in terms of epochs in Figure~\ref{fig:training-loss}. It is noted that CNN converges very fast within 25 epochs for four datasets, and MLP also converges at a fast speed. The transformer requires more epochs of training since it consists of more model parameters. In comparison, RNN hardly converges on UT-HAR and Widar, and converges slower on NTU-Fi. Then we further explore the convergence of RNN-based models, including GRU, LSTM, BiLSTM, and CNN+GRU in Figure~\ref{fig:training-loss-rnn}. Though there show strong fluctuations during the training phase of GRU, LSTM, and BiLSTM, these three models can achieve much lower training loss. Especially, GRU achieves the lowest loss among all RNN-based methods. For CNN+GRU, the training phase is more stable but its convergence loss is larger than others. \textbf{How Transfer Learning Matters.} We further draw the training losses of all models on NTU-Fi Human-ID with pre-trained parameters of NTU-Fi HAR in Figure~\ref{fig:transfer-loss}. Compared to the training procedures of randomly-initialized models in Figures ~\ref{subfig:NTU-HAR-GP1} and~\ref{subfig:NTU-HAR-GP2}, the convergence can be achieved and even become much more stable. We can draw two conclusions from these results: (a) the feature extractors of these models are transferable across two similar tasks; (b) the fluctuations of training losses are caused by the feature extractor since only the classifier is trained for the transfer learning settings. \input{figures/0.widar-accuracy} \textbf{Poor Performance of Deep CNN on Widar.} In Table~\ref{tab:overall}, a noticeable phenomenon is that the ResNet-18/50/101 cannot generalize well on Widar data, only achieving 17.91\%, 19.47\%, and 14.47\%, respectively. In visual recognition, a deeper network should perform better on large-scale datasets ~\cite{he2016deep}. Then we have the question: is the degeneration of these deep models caused by underfitting or overfitting in WiFi sensing? We seek the reason by plotting their training losses in Figure~\ref{fig:widar-acc}. Figure~\ref{subfig:widar-resnet-acc} shows that even though the training accuracy has been almost 100\%, the testing accuracy remains low, under 20\%. Whereas, other networks (MLP, CNN, GRU) have similar training accuracy while the testing accuracy is increased to over 60\%. This indicates that the degrading performances of ResNets are caused by overfitting, and different domains in Widar~\cite{zhang2021widar3} might be the main reasons. This discovery tells us that very deep networks are prone to suffer from overfitting for cross-domain tasks and may not be a good choice for current WiFi sensing applications due to their performance and computational overhead. \textbf{Choices of Optimizer.} During the training phase, we find that though Adam can help models converge at a fast speed, it also leads to much training instability, especially for the very deep neural networks. In Figure~\ref{subfig:resnet-adam}, we can see that ResNet-18 converges stably but ResNet-50 and ResNet-101 have fluctuating losses every 20-30 epochs. This might be caused by the dramatically changing values of WiFi data and its adaptive learning rate of Adam~\cite{kingma2014adam}. Then we consider changing the optimizer from Adam to a more stable optimizer, Stochastic Gradient Descent (SGD). In Figure~\ref{subfig:resnet-sgd}, we find that the training procedure becomes more stable. This implies that if a very deep model is utilized in WiFi sensing, the SGD should be a better choice. If a simple model is sufficient for the sensing task, then Adam can enforce the model to converge better and faster. \input{figures/0.optimizer} \section{Discussions and Summary}\label{sec:summary} Having analyzed the empirical results and the characteristics of deep learning models for WiFi sensing, we summarize the experiences and observations that facilitate future research on model design, model training, and real world use case: \begin{itemize} \item \textbf{Model Choices.} We recommend CNN, GRU, and BiLSTM due to their high performance, low computational cost, and small parameter size. The shallow models have achieved remarkable results for activity recognition, gesture recognition, and human identification, while the very deep models confront the overfitting issue, especially for cross-domain scenarios. \item \textbf{Optimization.} We recommend using Adam or SGD optimizer. The Adam optimizer enforces the model to converge at a fast speed but sometimes it causes instability of training. When such a situation happens, the SGD is a more secure way but the hyper-parameters of SGD (\textit{i.e.}, the learning rate and momentum) need to be manually specified and tuned. \item \textbf{Advice on Transfer Learning Applications.} We recommend applying transfer learning when the task is similar to existing applications and the same CSI sensing platform is employed. The pre-trained parameters provide a good initialization and better generalization ability. CNN, MLP, and BiLSTM have superior transferability. \item \textbf{Advice on Unsupervised Learning.} We recommend applying unsupervised learning to initialize the model for similar tasks since unsupervised learning extracts more generalizable features than transfer learning. CNN, MLP, and ViT are more suitable in the unsupervised learning framework in general. \end{itemize} \section{Grand Challenges and Future Directions} Deep learning still keeps booming in many research fields and continuously empowers more challenging applications and scenarios. Based on the new progress, we look into the future directions of deep learning for WiFi sensing and summarize them as follows. \textbf{Data-efficient learning.} As CSI data is expensive to collect, data-efficient learning methods should be further explored. Existing works have utilized few-shot learning, transfer learning, and domain adaptation, which yield satisfactory results in a new environment with limited training samples. However, since the testing scenarios are simple, the transferability of these models cannot be well evaluated. In the future, meta-learning and zero-shot learning can further help learn robust features across environments and tasks. \textbf{Model compression or lightweight model design.} In the future, WiFi sensing requires real-time processing for certain applications, such as vital sign monitoring~\cite{hu2022resfi}. To this end, model compression techniques can play a crucial role, such as model pruning~\cite{chen2019cooperative}, quantization~\cite{chen2019metaquant} and distillation~\cite{yang2020mobileda}, which decreases the model size via an extra learning step. The lightweight model design is also favorable, such as the EfficientNet~\cite{tan2019efficientnet} in computer vision that is designed from scratch by balancing network depth, width, and resolution. \textbf{Multi-modal learning.} WiFi sensing is ubiquitous, cost-effective, and privacy-preserving, and can work without the effect of illumination and part of occlusion, which is complementary to the existing visual sensing technique. To achieve robust sensing 24/7, multiple modalities of sensing data should be fused using multi-modal learning. WiVi~\cite{zou2019wificv} pioneers human activity recognition by integrating WiFi sensing and visual recognition. Multi-modal learning can learn joint features from multiple modalities and make decisions by choosing reliable modalities. \textbf{Cross-modal learning.} WiFi CSI data describes the surrounding environment that can also be captured by cameras. Cross-modal learning aims to supervise or reconstruct one modality from another modality, which helps WiFi truly ``see'' the environment and visualize them in videos. Wi2Vi~\cite{kefayati2020wi2vi} manages to generate video frames by CSI data and firstly achieves cross-modal learning in WiFi sensing. The human pose is then estimated by supervising the model by the pose landmarks of OpenPose~\cite{wang2019person}. In the future, cross-modal learning may enable the WiFi model to learn from more supervisions such as radar and Lidar. \textbf{Model robustness and security for trustworthy sensing.} When deploying WiFi sensing models in the real world, the model should be secure to use. Existing works study the accuracy of models but few pay attention to the security issue. First, during the communication, the sensing data may leak the privacy of users. Second, if any adversarial attack is made on the CSI data, the modal can perform wrongly and trigger the wrong actions of smart appliances. RobustSense seeks to overcome adversarial attacks by augmentation and adversarial training~\cite{yang2022autofi}. EfficientFi proposes a variational auto-encoder to quantize the CSI for efficient and robust communication. WiFi-ADG~\cite{zhou2019adversarial} protects the user privacy by enforcing the data not recognizable by general classifiers. More works should be focused on secure WiFi sensing and establish trustworthy models for large-scale sensing, such as federated learning. \textbf{Complicated human activities and behaviors analytics.} While current methods have shown prominent recognition accuracy for single activities or gestures, human behavior is depicted by more complicated activities. For example, to indicate if a patient may have a risk of Alzheimer’s disease, the model should record the routine and analyze the anomaly activity, which is still difficult for existing approaches. Precise user behavior analysis can contribute to daily healthcare monitoring and behavioral economics. \textbf{Model interpretability for a physical explanation.} Model-based and learning-based methods develop fast but in a different ways. Recent research has investigated the interpretability of deep learning models that looks for the justifications of classifiers. In WiFi sensing, if the model is interpreted well, there may exist a connection between the data-driven model and the physical model. The modal interpretability may inspire us to develop new theories of physical models for WiFi sensing, and oppositely, the existing model (\textit{e.g.}, Fresnel Zone) may enable us to propose new learning methods based on the physical models. It is hoped that two directions of methods can be unified theoretically and practically. \section{Conclusion}\label{sec:conclusion} Deep learning methods have been proven to be effective for challenging applications in WiFi sensing, yet these models exhibit different characteristics on WiFi sensing tasks and a comprehensive benchmark is highly demanded. To this end, this work reviews the recent progress on deep learning for WiFi human sensing, and benchmarks prevailing deep neural networks and deep learning strategies on WiFi CSI data across different platforms. We summarize the conclusions drawn from the experimental observations, which provide valuable experiences for model design in practical WiFi sensing applications. Last but not least, the grand challenges and future directions are proposed to imagine the research issues emerging from future large-scale WiFi sensing scenarios.